Our Core Philosophy
Without true understanding, there is no hope of solving any problem satisfactorily. It is necessary to understand not only the problem domain but, first and foremost, our client's ideas, goals, and wishes. Sometimes the clients themselves are the best domain experts; sometimes external expertise is needed. Before we proceed, enough of an initial "critical mass" of knowledge must be present to analyze the problem.
Methodology
- We characterize the problem and identify its class: for example, whether it is well-posed or ill-posed, discrete or continuous, and what its dimensions and variables are. We determine which domain, or combination of domains, it belongs to: e.g., physics, chemistry, biology, economics, finance, sociology, or psychology. We assess whether its nature is deterministic or stochastic, whether sufficient data are available, and what their quality is.
- We are guided by our own expertise, experience, and intuition in finding the right initial conditions for the research process.
- We search the academic literature for suitable approaches to attack the problem. If it has already been solved at least partially satisfactorily (which is usually the case), we look for any available open-source or closed-source solvers. We can then help clients apply standard solutions or adapt them to their needs; there is no need to reinvent the wheel.
- We love innovative problems where no standard solution exists and which require new proprietary solutions. Even for quite standard problems, special solvers are often needed because of, e.g., their scale, the need for low latency, or the sensitive nature of the data itself.
- Frequent communication with the client is a must. Usually, a satisfactory solution requires many iterations to reach production readiness. We work in well-defined stages with clear deliverables, and at any moment the client is free to move on. If there is no conflict of interest, the project-specific IP belongs to the client. If we are unable to comply, we will say so, and specific terms can be negotiated.
Examples
- Very often, the problem is to find a mapping \(f(x, \alpha) = y\) that maps inputs \(x\) into outputs \(y\), where \(\alpha\) represents the parameters of the function \(f\), and \(x\) and \(y\) are observed as discrete data points. If a sufficient number of data pairs \((x, y)\) is available, the best model \(f\) to fit the data might be a neural network. This is a standard supervised learning problem in ML. Different neural network models yield different accuracies, however, and training is often as much an art as it is a science. Such models are not explanatory, precisely because neural networks are such general approximators: hardly any insight can be obtained about the qualitative properties of the problem or its structure. They are known as Black-Box models (see the first sketch after this list).
- Often, reliable data simply do not exist. Think of nuclear power plants: luckily, there are only a handful of critical disasters to study! Here, humans have to think hard and model the processes from first principles. The resulting models are White-Box; examples include ordinary and partial differential equations (ODEs, PDEs) and systems of them. They allow us to model, with sufficient accuracy, the characteristic variables of the system, such as temperature, pressure, material properties, and/or radiation. While the predictions or global interactions might sometimes be surprising, our understanding of the processes at the local level is more or less "perfect" (see the second sketch below).
- The rest lies in between: Grey-Box models. Our understanding is neither perfect nor entirely Black-Box. We can, for example, decide on the structure of the model and train its parameters from data (see the third sketch below). In our view, this is where the biggest scientific advances of the coming years will take place.
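To make the three model classes concrete, here are minimal sketches. First, a Black-Box model: a small neural network fitted to data pairs \((x, y)\). This is a hypothetical illustration assuming Python with numpy and scikit-learn; the synthetic data and the network architecture are arbitrary choices, not recommendations for any particular problem.

```python
# Black-Box sketch: fit a mapping f(x, alpha) = y with a small neural network.
# The data and architecture here are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic data pairs (x, y); the "true" mapping is unknown to the model.
x = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(x).ravel() + 0.1 * rng.normal(size=500)

# alpha = the network's weights; they are learned, not interpreted.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(x, y)

# The model predicts well but offers little insight into the problem's structure.
print(model.predict([[1.0]]))  # should be close to sin(1.0), roughly 0.84
```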
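Second, a White-Box model: a process modelled from first principles as an ODE and solved numerically. Again a minimal sketch, assuming Python with numpy and scipy; the toy cooling law and its constants are illustrative, not calibrated to any real system.

```python
# White-Box sketch: a first-principles model as an ODE, solved numerically.
import numpy as np
from scipy.integrate import solve_ivp

K = 0.05        # heat-loss coefficient [1/s] (assumed value)
T_ENV = 300.0   # ambient temperature [K] (assumed value)

def cooling(t, T):
    # Newton's law of cooling: every term has a physical meaning.
    return -K * (T - T_ENV)

sol = solve_ivp(cooling, t_span=(0.0, 120.0), y0=[450.0], dense_output=True)

# Because the structure is known, we can check it against the closed form:
t = 60.0
analytic = T_ENV + (450.0 - T_ENV) * np.exp(-K * t)
print(sol.sol(t)[0], analytic)  # the two agree to solver tolerance
```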
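Third, a Grey-Box model: we fix the model's structure (the same cooling law) but train its parameters from noisy measurements. A sketch under the same assumptions; the "measurements" below are synthetic stand-ins for real observations.

```python
# Grey-Box sketch: structure chosen by the modeller, parameters fitted to data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def cooling_law(t, k, t_env, t0=450.0):
    # Structure is fixed; k and t_env are the trainable parameters.
    return t_env + (t0 - t_env) * np.exp(-k * t)

# Synthetic noisy measurements with "true" k = 0.05, t_env = 300.
t_data = np.linspace(0.0, 120.0, 40)
y_data = cooling_law(t_data, 0.05, 300.0) + 2.0 * rng.normal(size=t_data.size)

# curve_fit estimates only the parameters named in p0; t0 keeps its default.
(k_hat, t_env_hat), _ = curve_fit(cooling_law, t_data, y_data, p0=[0.1, 290.0])
print(k_hat, t_env_hat)  # recovered parameters, close to 0.05 and 300
```

Unlike the Black-Box case, the fitted parameters here carry physical meaning, which is exactly what makes the Grey-Box middle ground so promising.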