Politicians and the general public are worried about the power of algorithms. A little basic knowledge of how they work can't hurt.
In autumn 2019, the 16 experts appointed by the German federal government submitted their first report, with drastic demands: among other things, the commission recommended a risk-adapted regulatory system for the use of algorithmic systems, an evaluation of programs according to their "criticality" and "potential for harm", and even an outright ban on the most harmful programs.
Is this a sensible weighing of the risks of technical development, or fear of the omnipotence of the machine? A fear that feeds more on pop-culture imagery and exaggerated marketing promises than on concrete knowledge? This focus section aims to bring objectivity to the discussion: the articles in c't 8/2020 use recommendation algorithms, Google's PageRank procedure and routing systems to show what makes algorithms so important and where their risks and side effects lie.
The context makes the difference
At their core, algorithms in IT are nothing more than sequences of computing instructions. What makes them so powerful, and occasionally problematic, is not their execution but the context in which they are designed and used. To make an abstract problem computable at all, it must first be mapped onto a mathematical model; the algorithm in question then solves this mathematically abstracted problem.
The abstract solution can then be applied to all kinds of concrete problems. A sorting algorithm, for example, doesn't care whether it sorts numerical values, products by popularity or images by subject. Seen this way, algorithms really are as powerful, precise, incorruptible and objective as many people believe.
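This indifference to the concrete problem can be sketched in a few lines: the same sorting routine handles completely different data, because only the comparison key changes (the data and field names below are illustrative, not from the article).

```python
# One abstract sorting algorithm, three concrete problems:
numbers = [42, 7, 19]
products = [
    {"name": "A", "popularity": 3},
    {"name": "B", "popularity": 9},
]
words = ["pear", "apple", "cherry"]

print(sorted(numbers))                                   # numeric order
print(sorted(products, key=lambda p: -p["popularity"]))  # most popular first
print(sorted(words))                                     # alphabetical order
```

The algorithm itself never changes; only the mapping from the real-world notion ("popularity") to a comparable value does.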
A critical point, however, is how exactly fuzzy, subjective quantities such as popularity, beauty or suitability for a job are translated into numerical values, the so-called objectification. A mathematical model can only capture a small slice of reality, and which variables are represented mathematically, and exactly how, can strongly influence the result of a calculation. Feed in nonsensical data and you get nonsensical results: "garbage in, garbage out", as computer scientists say.
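How much the choice of objectification matters can be shown with a toy example (the products and numbers are invented): two equally plausible ways of turning "popularity" into a number crown different winners on the same raw data.

```python
# Hypothetical catalog: same raw data, two objectifications of "popularity".
products = [
    {"name": "A", "sales": 100, "rating": 3.0},
    {"name": "B", "sales": 40,  "rating": 4.8},
]

# Objectification 1: the most popular product is the one that sells most.
by_sales = max(products, key=lambda p: p["sales"])

# Objectification 2: the most popular product is the one users rate best.
by_rating = max(products, key=lambda p: p["rating"])

print(by_sales["name"], by_rating["name"])  # A B
```

Neither model is "wrong"; each just captures a different slice of reality, and that modeling decision, not the algorithm, determines the outcome.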
Moreover, the data that would be needed for an exact calculation is often simply not available. An autonomous car, for example, must plan its next actions from extremely incomplete sensor data, and must also expect the situation to keep changing while it plans.
Computer science's answer to this is simplifications, known as heuristics, which often, but not always, lead to the goal: much as people would, they compare the situation with previous experience, try to estimate the consequences of a wrong decision, make assumptions about future developments, and, when in doubt, let chance decide by flipping a coin.
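The coin-flip at the end is not a figure of speech; randomized tie-breaking is a standard trick. A minimal sketch (the options and costs are invented for illustration): pick the cheapest next step, and if several steps are equally cheap, choose among them at random instead of committing to a fixed, possibly biased order.

```python
import random

def pick_next(options, cost):
    """Greedy choice with a random tie-break among equally good options."""
    best = min(cost(o) for o in options)
    candidates = [o for o in options if cost(o) == best]
    return random.choice(candidates)  # the "coin flip" in case of doubt

# Two options are tied at cost 1; either may be returned, never "left".
choice = pick_next(
    ["left", "right", "straight"],
    cost=lambda o: {"left": 2, "right": 1, "straight": 1}[o],
)
print(choice)
```

The result is not always optimal, but it is cheap to compute, and the randomness avoids systematically favoring one of several equally plausible choices.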
This finding is anything but reassuring, because it means that the evaluation and regulation of algorithms depend on their context. The assumptions under which an algorithm is used must be scrutinized just as closely as its data basis and its heuristics. There is no way around this discussion; the article "Ethics of Algorithms" reflects its current state.
This article comes from c't 8/2020.