Under the leadership of Prof. Michael Weyrich of the IAS, eight working groups from three departments at the University of Stuttgart as well as the Research Institute for Automotive Engineering and Powertrain Systems Stuttgart are collaborating in the Software-defined Car project (SofDCar).
As Weyrich explains, while SofDCar also has its sights set on the automotive industry, its focus is different: “The corporate and product scene that we want to network,” he says, “is dominated by a handful of major manufacturers and suppliers, who have the power to influence systems in a very significant way. But on the product side, there are currently many millions of vehicles all over the world using the roads under a plethora of technical, legal, and ethical framework conditions.”
Researchers in the SofDCar project are approaching electrical, electronic, and software architectures in a completely novel way by putting the software at the forefront of the system, an approach that involves two distinct levels. The first step is to get control of the over 100 control units and functions commonly found in existing vehicles.
“However,” says Weyrich, “the bigger picture is different. The novel thing about our approach is that we think of every individual vehicle as a node in a networked vehicle and system topology.” And, as project coordinator Matthias Weiss explains: “Another of our goals is to enable the digital sustainability of existing and future generations of vehicles, as well as using data effectively, in addition to which we are also looking at innovative use cases throughout a vehicle’s lifecycle.”
All elements of networked vehicles continuously transmit and receive information, whether within the vehicle itself, between different vehicles, or between the vehicle and the traffic infrastructure, such as stoplights or parking facilities. The big question is how to implement these connections via a software architecture, and this is where SofDCar’s digital twin comes into play: it can map the information pool of an entire fleet and, more importantly, manage the so-called “data loop”, i.e., the connection between the circulating vehicles and the manufacturers.
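As a rough illustration of this “data loop”, the sketch below shows a fleet-level twin that ingests vehicle state upstream and determines downstream which vehicles still need a software update. All class, field, and version names are hypothetical; the actual SofDCar architecture is, of course, far more elaborate.

```python
from dataclasses import dataclass, field


@dataclass
class VehicleState:
    """Snapshot of one vehicle's telemetry (hypothetical fields)."""
    vehicle_id: str
    software_version: str
    odometer_km: float


@dataclass
class FleetDigitalTwin:
    """Minimal sketch of a fleet-wide digital twin closing the 'data loop':
    vehicles report their state upstream, the twin identifies which
    vehicles should receive updates downstream."""
    states: dict = field(default_factory=dict)

    def ingest(self, state: VehicleState) -> None:
        # Upstream half of the loop: vehicle -> manufacturer.
        self.states[state.vehicle_id] = state

    def pending_updates(self, latest_version: str) -> list:
        # Downstream half: manufacturer -> vehicles still on old software.
        return [vid for vid, s in self.states.items()
                if s.software_version != latest_version]


twin = FleetDigitalTwin()
twin.ingest(VehicleState("car-1", "1.0", 12000.0))
twin.ingest(VehicleState("car-2", "1.1", 500.0))
print(twin.pending_updates("1.1"))  # ['car-1']
```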
This feedback loop is currently static, but in the future the respective measurements will become dynamically adaptable and continuous across the entire vehicle fleet. As Weiss points out: “This data can be used for development purposes throughout the entire life cycle of a given vehicle with a view to permanently honing the algorithms and, in turn, the vehicles themselves.”
This data exchange will also lead to completely new vehicle functions: a vehicle could, for example, receive warnings about traffic bottlenecks in the immediate vicinity from another vehicle that is currently on the road in question – in real time rather than as a time-delayed radio announcement. This kind of micro function already exists, but the big vision is fully autonomous driving. Weyrich is convinced that “it will take a while to achieve this, but we’re putting the foundations in place.”
Yet this presents significant challenges, due to the sheer volume of elements that need to be interconnected. The security issues associated with the “software-defined” concept are even more serious, and this applies to both projects.
“The problems begin with simple data theft, i.e., the risk that software could be stolen or copied and reprogrammed to the detriment of the owner,” as Alexander Verl explains. The safety issues involved in autonomous driving are even more critical, given that an error in the software could easily cause fatalities. “This,” as Weyrich adds, “means that the processes and infrastructure for releasing and distributing the necessary software and data need to be appropriately safeguarded.”
Fault finding as a key challenge
Of course, this first requires identifying the bugs, a specialty of Dr. Andrey Morozov, a Junior Professor at the IAS who is working on both projects. His focus in the SofDCar project is on anomaly detection. “Our task,” he says, “is to check the data to verify that everything is okay.”
This is no easy task in complex cyber-physical systems, where it is difficult to identify the exact cause of a malfunction. That, as Morozov explains, is why troubleshooting is carried out at different levels. Faults at the component level, for example, manifest themselves in the form of sensor, control system, or network errors.
More complex problems, which result from component interactions, can be detected at the vehicle level – for example, when the vehicle accelerates but the sensors indicate that its speed is decreasing. Any unusual behavior on the part of the driver could also indicate that something is wrong. And, at the vehicle fleet level, the focus is on traffic anomalies.
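The speed plausibility check mentioned above can be sketched as a deliberately simplified rule. A real system would fuse many signals over time; the function below is purely illustrative, and its parameter names are invented.

```python
def implausible_speed(accel_cmd: float, speed_trace: list[float]) -> bool:
    """Flag the contradiction described above: the vehicle is commanded
    to accelerate, yet the measured speed is falling over the trace."""
    if len(speed_trace) < 2:
        return False  # not enough samples to judge a trend
    speed_falling = speed_trace[-1] < speed_trace[0]
    return accel_cmd > 0 and speed_falling


print(implausible_speed(0.8, [50.0, 49.2, 48.1]))  # True: accelerating, yet slowing
print(implausible_speed(0.8, [50.0, 51.0, 52.4]))  # False: behavior is consistent
```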
“The hardest thing in this context,” as Morozov explains, “is to recognize which indicators are relevant at any given moment within the infinitely vast amount of available data. It is critical to dynamically manage what we are paying attention to depending on the context. If, for example, we are charging an electric car in the garage, we need to focus on the battery controller, but when driving in the city at rush hour, we need to focus more on our surroundings.”
Morozov and his team are using artificial intelligence and deep learning to enable the vehicle to autonomously identify the plethora of potential anomalies in the system as a whole. Back in 2020, the research team developed the so-called “KrakenBox”, a device that can be programmed with the aid of a neural network to autonomously detect faults in industrial cyber-physical systems with no human intervention.
Morozov emphasizes that neural networks are particularly well suited to dealing with these issues because, as he explains, “they are good at remembering the origins of a given signal and predicting its future development. By comparing this forecast with what actually happens, you can then assess whether something might go wrong in the near future.”
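The forecast-and-compare principle Morozov describes can be illustrated with a trivial predictor standing in for the neural network – linear extrapolation here, whereas the KrakenBox uses a learned model. The threshold value is invented for illustration.

```python
def forecast_next(history: list[float]) -> float:
    """Stand-in for the neural predictor: linear extrapolation from the
    last two samples (a real system would use a trained model)."""
    return 2 * history[-1] - history[-2]


def is_anomalous(history: list[float], observed: float, tol: float = 1.0) -> bool:
    """Compare the forecast with what actually happens; a large residual
    suggests that something might be going wrong."""
    return abs(forecast_next(history) - observed) > tol


print(is_anomalous([10.0, 11.0, 12.0], 13.2))  # False: close to the forecast of 13.0
print(is_anomalous([10.0, 11.0, 12.0], 18.0))  # True: residual of 5.0 exceeds tol
```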
So, whereas Morozov focuses on risk mitigation in the SofDCar project, his contribution to the SDM4FZI project is all about risk analysis, which has traditionally been a one-off process before a system goes into operation.
But in the case of software-defined manufacturing (SDM), any software update could have a potentially drastic impact on the process and create new risk scenarios. Because new hazards are continuously emerging, the risk analysis itself needs to be automated so that it can be repeated before every software update. Researchers use risk assessment models to describe how likely a disruption is to occur and what damage it might cause. The problem, as Morozov explains, is that “the number of potential risk scenarios rises exponentially in any complex system.”
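Both points – risk as likelihood times damage, and the exponential growth of scenarios – can be made concrete in a few lines. The component names and numbers below are invented for illustration.

```python
from itertools import product


def expected_risk(probability: float, damage: float) -> float:
    """Classic risk measure: likelihood of a disruption times the damage it causes."""
    return probability * damage


# If each updatable component can be 'ok' or 'faulty', the number of joint
# scenarios doubles with every component added (hypothetical component names).
components = ["plc", "robot_arm", "conveyor", "vision_system"]
scenarios = list(product(["ok", "faulty"], repeat=len(components)))
print(len(scenarios))               # 16 scenarios for just four components
print(expected_risk(0.25, 50_000))  # 12500.0
```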
Legal and ethical issues
In addition to these technical hurdles, software-defined systems are also subject to tricky legal and ethical issues. For example, as Weyrich explains, in this “delicate information scenario,” the built-in sensor systems needed for automated and autonomous driving facilitate the collection of a wide range of data about the vehicle, its occupants, and its surroundings, such as video recordings of what is happening inside and outside the vehicle.
Various countries and even continents take very different views of what is desirable, what is still permissible, and what is prohibited, and in some cases the respective standards even contradict one another. Weyrich is aware that “there is an enormous amount of social tension relating to this field, which still receives little attention,” and resolving these issues goes beyond the scope of the project.
But the IAS director emphasizes: “This is something that we are continuously discussing in relation, for example, to the European Commission’s legislative framework, as well as in numerous other initiatives. This involves some difficult questions, but we are wide open to the relevant discussions.”
Source: University of Stuttgart