Slaven is our Senior C++ Architect and embedded expert from Zagreb, Croatia. He often speaks at conferences and IT meetups. You may have seen him in one of our videos, struggling to explain his job role at Spyrosoft to his parents. He is also a passionate basketball player and dog owner. And, as he admits, he’s just terrible at playing the guitar. The dog is the only one that’s not complaining.

But now, let’s meet Slaven from a more technical point of view.

Slaven, what did your path to embedded look like?

It all started back in college, with a project where I implemented inverse kinematics on a PIC microcontroller. I think that was one of my first insights into the topic. It was interesting to see that you can speed up an algorithm roughly 30-fold just by avoiding the modulo operation. After working for some time as a teaching assistant on embedded programming courses, I started working on an automotive hardware-in-the-loop simulation system. I really liked this topic and studied a model for executing simulations on embedded systems. This research led to my PhD thesis. It’s nice to say that I continue working in this area at Spyrosoft.
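The modulo trick Slaven mentions deserves a quick illustration. On small microcontrollers, integer division and modulo are often emulated in software, so avoiding them can pay off dramatically. One common variant, shown here as a generic sketch in Java for readability (not the actual project code, which targeted a PIC), replaces modulo with a bitwise AND when the divisor is a power of two:

```java
public class ModuloTrick {
    // For power-of-two sizes, (x % size) equals (x & (size - 1)) for non-negative x.
    public static int wrapMod(int index, int size) {
        return index % size;       // may compile to a slow software division on small MCUs
    }

    public static int wrapAnd(int index, int size) {
        return index & (size - 1); // a single AND instruction; size must be a power of two
    }

    public static void main(String[] args) {
        System.out.println(wrapMod(300, 256)); // 44
        System.out.println(wrapAnd(300, 256)); // 44
    }
}
```

On hardware without a divide instruction, tricks like this (or restructuring the algorithm to avoid division entirely) are how order-of-magnitude speedups become possible.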

Slaven Gumac on BITCon, Spyrosoft

Did you face any challenges in your recent projects?

Most projects have one challenge or another. I would like to mention one project where we performed code analysis for a large embedded system. This project was particularly interesting because we started to look at the theoretical limits of what can be solved. We even got to mention the halting problem and its generalisation, Rice’s theorem.

This meant that we had a problem that could not be fully solved, and some compromises had to be made.

We decided to simplify code analysis for the user as much as possible. We represented the code as a graph with nodes and edges, which turned the code analysis problem into a path-search problem: if there was a path between two variables, one could potentially influence the other. Because this approach reports false positive influences, the user had to check the reported paths. We accepted this compromise due to the nature of the problem.
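The path-search idea can be sketched as a plain breadth-first search over a directed graph of variables. This is a minimal illustration under my own assumptions, not the project’s actual representation, and the variable names are invented:

```java
import java.util.*;

public class InfluenceGraph {
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // BFS: true if any directed path exists from 'source' to 'target',
    // i.e. 'source' potentially influences 'target'.
    public boolean influences(String source, String target) {
        Deque<String> queue = new ArrayDeque<>(List.of(source));
        Set<String> visited = new HashSet<>(List.of(source));
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (node.equals(target)) return true;
            for (String next : edges.getOrDefault(node, List.of())) {
                if (visited.add(next)) queue.add(next);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        InfluenceGraph g = new InfluenceGraph();
        g.addEdge("sensorRaw", "filtered");
        g.addEdge("filtered", "output");
        g.addEdge("debugFlag", "logLevel");
        System.out.println(g.influences("sensorRaw", "output")); // true
        System.out.println(g.influences("debugFlag", "output")); // false
    }
}
```

Note how reachability over-approximates influence: a path means a *possible* influence, which is exactly why the user still had to review the results.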

But there was a twist, right? Users wanted you to filter some of the paths found. Tell us more about that.

The client needed to analyse the influence between a little less than 100,000 pairs of variables. Our approach excluded many of these pairs from further analysis because no path was found between them.

Unfortunately, another problem became apparent when users tried to analyse the pairs for which paths were found. We had ensured that our analysis contained no false negative reports; however, when we reported a path, the user had to double-check it to eliminate false positives. The code we analysed resulted in a huge, dense graph with many possible paths between the analysed variables. In some cases, we could easily find 100 paths between two variables. This was tedious for the user, and we had to refine our solution further. We decided to improve the existing solution, as it provided a nice visualisation for the end user. We added labels to the edges of the graph, and then we found an implementation of Yen’s algorithm that allowed filtering the paths based on these labels. A big problem at this point was that we were nearing the deadline for delivering results, and due to the nature of the problem, we still didn’t know whether we would solve it.
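A toy version of the label-based filtering can be sketched as a depth-first enumeration of simple paths that only follows edges whose labels are allowed. The real solution used Yen’s k-shortest-paths algorithm from JGraphT; this simplified sketch, with invented labels and names, just shows the filtering idea:

```java
import java.util.*;

public class LabelledPathFilter {
    // Edge with a label used for filtering (e.g. "dataFlow" vs "controlOnly").
    record Edge(String to, String label) {}

    private final Map<String, List<Edge>> graph = new HashMap<>();

    public void addEdge(String from, String to, String label) {
        graph.computeIfAbsent(from, k -> new ArrayList<>()).add(new Edge(to, label));
    }

    // Enumerate all simple paths from source to target whose edges all
    // carry an allowed label, depth-first with backtracking.
    public List<List<String>> paths(String source, String target, Set<String> allowedLabels) {
        List<List<String>> result = new ArrayList<>();
        dfs(source, target, allowedLabels, new LinkedHashSet<>(List.of(source)), result);
        return result;
    }

    private void dfs(String node, String target, Set<String> allowed,
                     LinkedHashSet<String> path, List<List<String>> result) {
        if (node.equals(target)) {
            result.add(new ArrayList<>(path));
            return;
        }
        for (Edge e : graph.getOrDefault(node, List.of())) {
            if (allowed.contains(e.label) && !path.contains(e.to)) {
                path.add(e.to);
                dfs(e.to, target, allowed, path, result);
                path.remove(e.to); // backtrack
            }
        }
    }

    public static void main(String[] args) {
        LabelledPathFilter f = new LabelledPathFilter();
        f.addEdge("sensor", "filter", "dataFlow");
        f.addEdge("filter", "actuator", "dataFlow");
        f.addEdge("sensor", "actuator", "controlOnly");
        System.out.println(f.paths("sensor", "actuator", Set.of("dataFlow")));
        // [[sensor, filter, actuator]]
    }
}
```

Filtering out, say, control-only edges is what cuts a set of 100 candidate paths down to the few the user actually needs to review.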

Deadlines always add extra spice to every project, don’t they? Was Yen’s algorithm the problem solver?

We were aware that Yen’s algorithm with false positive filtering could potentially run in factorial time with respect to the number of nodes in the graph. Simply put, this meant that some code analyses could take a very long time, and we had to stop them after a reasonable amount of time. We divided our approach into several steps. First, we tried to verify the running time of the code analysis experimentally on a small subset of pairs. We estimated that sequential execution with a 1-hour timeout per pair would take close to 100 days to complete all required analyses. The experiment also showed that a portion of the runs would complete in less than an hour. We were prepared for this scenario and communicated it to the customer.
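Stopping an analysis after a fixed time budget can be sketched with a `Future` and a timed `get`. This is the standard JDK pattern, not the project’s actual code, and the task bodies here are placeholders:

```java
import java.util.concurrent.*;

public class TimeoutRunner {
    // Run one analysis task with a hard timeout; return a status string.
    public static String runWithTimeout(Callable<String> analysis, long timeoutSeconds) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(analysis);
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);           // interrupt the runaway analysis
            return "TIMED_OUT";
        } catch (InterruptedException | ExecutionException e) {
            return "FAILED";
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        String fast = runWithTimeout(() -> "paths found", 2);
        String slow = runWithTimeout(() -> { Thread.sleep(10_000); return "done"; }, 1);
        System.out.println(fast); // paths found
        System.out.println(slow); // TIMED_OUT
    }
}
```

Budgeting each pair this way is what makes the total runtime predictable even when a few analyses would otherwise run for days.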

We asked the customer whether this portion was enough for the deadline, and the answer was yes. We also knew that sequential execution would miss the deadline, so we implemented a REST service that could handle long-running tasks and distribute them across the cores of a very powerful 96-CPU machine. We chose a solution that allowed easy reuse of the sequential code: since that implementation was in Java and used Yen’s algorithm from the JGraphT library, we built the service with Spring Boot. The service worked as expected, and the proportion of paths found met the customer’s needs.
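The core of distributing independent pair analyses across many cores can be sketched with a fixed thread pool. The actual project wrapped this in a Spring Boot REST service, which is omitted here, and `analysePair` is a made-up placeholder standing in for the real path analysis:

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelAnalysis {
    // Distribute independent pair analyses over a fixed pool of worker threads.
    public static Map<String, Boolean> analyseAll(List<String> pairs, int workers)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        Map<String, Future<Boolean>> futures = new LinkedHashMap<>();
        for (String pair : pairs) {
            futures.put(pair, pool.submit(() -> analysePair(pair)));
        }
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (var e : futures.entrySet()) {
            results.put(e.getKey(), e.getValue().get()); // collect in submission order
        }
        pool.shutdown();
        return results;
    }

    // Placeholder for the real path analysis; here we pretend that pairs
    // whose variable names share a first letter influence each other.
    public static boolean analysePair(String pair) {
        String[] vars = pair.split("->");
        return vars[0].charAt(0) == vars[1].charAt(0);
    }

    public static void main(String[] args) throws Exception {
        var results = analyseAll(List.of("alpha->apex", "alpha->beta"), 4);
        System.out.println(results); // {alpha->apex=true, alpha->beta=false}
    }
}
```

Because the pair analyses are independent, this kind of embarrassingly parallel split is what turns an estimated 100 days of sequential work into something a 96-core machine can finish before the deadline.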

Well, it’s great that you found the solution! What would be a key takeaway for you from this project, a most valuable lesson?

We planned small incremental steps to analyse the results. We were fortunate that our assumptions were correct and that we were able to communicate most of the expectations to the customer. Due to the nature of the problem, we had to verify the results between steps. As programmers, we sometimes tend to tackle large chunks of work and overlook the importance of these intermediate steps. A valuable benefit of our approach was the communication with the customer. The problem was challenging, and the customer knew failure was a real possibility; even so, we maintained their confidence by demonstrating progress. Demonstrating progress also ensured that a working product was available throughout the project, which helped us avoid the long debugging sessions that would otherwise have been needed right before the first productive use of the application.

Do you have any other advice for colleagues who could find themselves in similar situations?

It’s interesting that Scrum advocates exactly this kind of incremental development, so my last comment feels a bit like reinventing the wheel. However, when faced with such interesting problems, it’s easy to get carried away. When it comes to parallelisation, it can be particularly tempting to work on task distribution in the cloud and invest a lot of time in that topic alone, partly at the expense of implementing the tasks to be distributed. For this particular project, the challenge of the computing task itself perhaps kept our focus on it; when the task isn’t that challenging, it’s easy to lose that focus. Try to focus on presenting results: solve the prerequisites first, and if you succeed, do a follow-up. That’s what we did on the project mentioned.

Slaven, thank you for the input and knowledge you’ve shared, and for the experience you’ve gained while working at Spyrosoft. Hopefully, some of our readers will find it handy. And that was the purpose of this interview in the first place.

If you want to read more about our Croatia office where you can meet Slaven, click here.

About the author

Marija Jurina

Employer Branding Specialist