Using Polyspace Products in Continuous Integration and DevOps Workflows
Are you afraid of finding critical coding bugs late in the software development cycle? Would you like to have evidence that your safety code doesn’t have any critical runtime errors such as divide-by-zero, out-of-bounds array access, and overflow errors? Do you need to comply with safety and security standards or guidelines like ISO 26262, IEC 61508, MISRA, SEI CERT-C, or ISO/IEC TS 17961? Would you like to integrate static code analysis into continuous integration (CI) and DevOps workflows?
In this video, we will demonstrate how to use Polyspace® products to integrate sophisticated static code analysis with development processes such as CI and DevOps. This helps developers avoid bugs before submitting code and meet software quality objectives.
Published: 1 Jul 2020
Yeah, let's start with the DevOps workflows. What does DevOps mean? It's development operations. So that really is something around continuous integration: tools like Jenkins come into play, versioning systems like Git, maybe dashboard tools or code review tools. This kind of stuff comes onto the plate when you speak about DevOps.
But basically, the idea behind it is to have a consistent and reliable approach to verifying your software, because you don't really want the situation where developer A and developer B apply the same analysis but get different results. That's usually what happens if you have different sandboxes and so on. So that's what we'd like to avoid with the continuous integration approach, and Polyspace fits very well into that. But, yeah, let's jump directly into the topic: using static code analysis in CI.
We talked in the first section about Polyspace Bug Finder and Code Prover, mostly as desktop tools. So you have seen how to use the graphical front end, how to use the tool interactively on your desktop. That's completely fine, and I think if you need more information about that, just get in contact with us.
What I'd like to speak more about now are the other products around Polyspace, because we also have Polyspace Bug Finder Server and Polyspace Code Prover Server. And we also have products which give you access to the results in your web browser; these are called Polyspace Bug Finder Access and Polyspace Code Prover Access. So you see there is a lot more than just the normal interactive desktop, where you execute the engine on your desktop. That's what I mean by desktop. Of course, you can also review results in a browser on your desktop, just to avoid confusion.
OK, let's speak a little bit about the desktop. What we have seen in the first section is that you interactively invoke the local installation of Polyspace Bug Finder and Code Prover on your desktop. Your code maybe lives in a source code repository, like SVN or Git; some people also use CVS or Microsoft Team Foundation Server, or whatever kind of system. It really doesn't matter here, because you have a local copy of the files on your desktop.
And this is what you apply Polyspace to. The problem is that each and every developer might have a different set of settings, a different scope, different files to analyze. So it's not really suitable for getting a quality view of the whole project, or maybe of the whole department.
So with the desktop, you can run Polyspace in Eclipse, in the desktop application, but also on the command line. The command line is our first entry point when it comes to continuous integration, because this flow, or let's say the work you started in the desktop application, can be reused in continuous integration as well. Going one slide back, you have this build engineer here, the slightly grayed-out person. That's usually where all this stuff gets so complicated. Sometimes these people get gray hair; I don't know, just kidding.
Basically, the question is how to transfer this work, or these settings, from the desktop to CI. And this is why I start here with the desktop tools again: not to bore you, but really to build the bridge between the two different worlds.
Let's go one step further. How do you set up a project on the desktop? On the desktop, you have the graphical front end, and if you click on New Project, you can choose Create project from build command. This is very important, because with a project from build command you don't need to manually gather all the information about sources, include folders, and pre-processor defines; the tool will extract all of that information from the build for you.
So what Polyspace does is execute the build and extract all this information from the pipeline, let's say, in the background. All you need to specify here is your build script. Or you can just say make, or whatever command you use to execute your build.
And if you finally click Run, Polyspace will create a graphical project for you, because this is the workflow for the graphical front end, and it creates your .psprj project. This is the graphical front-end project you can use, and it contains all the information about sources, defines, and so on. So if you click Finish, you have this and can work with it in a graphical way.
But actually, that's not what we want. We want to go to CI, where we don't have any graphical front end. So we need another approach to get this kind of configuration, because we really don't want to do it manually; it can easily become outdated if you add a new file and so on. What we need is to do this from the command line.
And here we have a little tool called Polyspace Configure. This tool ships with the binaries, on the desktop as well as on the server, so you can use it to do exactly the same job, just from the command line. Here's one example on the right side: a polyspace-configure call with a few options, including -allow-build-error. I won't explain all the details here. The important one is -output-options-file; this is the options file we create here.
So we have two outputs. There is -output-project; this is for the graphical front end, if you wish. And then we have -output-options-file, and this is for the non-graphical projects. This file contains all the information about our build: which files are included, which pre-processor defines are set, all this stuff we use. And this can be plenty of information; it's a nightmare to gather all of it by hand. You want to get rid of that, and Polyspace Configure is one way to do it. OK?
Just to summarize: in a Polyspace project from the graphical front end, you usually have the project files and you have a configuration. The configuration includes, for instance, which MISRA rule set you want to check, which checkers you want to invoke, maybe some settings about multitasking, other stuff about reporting. This project configuration rarely changes. Usually you have a build engineer who sets it up once, saying: OK, this is my project, it's safety-critical, I need to apply these and those checkers. That's it. So you can provide this information as an options file for the server as well.
And all the frequently changing stuff will come out of Polyspace Configure: all the source files, include folders, pre-processor defines. So you have a division of labor here between the human and the machine, let's say the executable. It's very important to understand that not all of the options we pass to the engine come from a human, and not all of them come automatically from the machine, from the Polyspace Configure approach.
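To make this split concrete, here is what a hand-maintained global options file might look like. This is a hypothetical sketch: the option names below (-checkers, -misra3, -code-metrics) appear in the Bug Finder documentation, but the exact set you need depends on your project and release, so verify each one there.

```
-checkers all
-misra3 mandatory-required
-code-metrics
```

The build options file with sources, include folders, and defines is then generated separately by Polyspace Configure, so the rarely-changing and frequently-changing parts never get mixed.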
All of this is very well documented, and I will also show you later an easy way to get these kinds of options and turn the configuration you made in the graphical front end into a continuous integration configuration, without much effort. Because when we look at the big workflow, we see we have a pre-submit and a post-submit workflow. We say submit because we draw the line at the source code repository.
So as soon as you submit code into the repository, there's a process behind it which checks out this code, or this project, automatically runs the analysis via the command line interface, and provides results to the developers. That's what we call post-submit, because the repository is the border in between. It's not a hard border, more of a virtual one. But there's something here which divides the workflow into the pre-submit part, the interactive usage, and the post-submit part. So, yeah, exactly.
Let's focus on the latter part now, because that's what we're trying to understand when we speak about continuous integration. You can see a Jenkins here, but we support nearly any kind of continuous integration system. It's not just Jenkins.
We provide a command line interface, and this can be easily integrated into any kind of system. It can be Microsoft Team Foundation Server, it can be Jenkins, and there are plenty of other tools; don't worry about that. To start with, we have a plug-in for Jenkins. But the command line interface can be used completely freely, and I'd like to show you how.
So the post-submit workflow actually starts with the repository, which contains the code of the developers. Then Jenkins kicks in: it applies some scripting, checks out the code, runs the analysis, and, let's say, publishes the analysis results to a Polyspace results dashboard, which we call Polyspace Access. This is another server here. And then the team can access it. That's basically the whole workflow, and my idea is to show you in the next few minutes how exactly that works.
So let's focus first on the Polyspace server part. We will speak later about Bug Finder and Code Prover Access, that is, how to review the results in a web interface. But let's focus now on how to automate the verification on a continuous integration server like Jenkins.
The Polyspace server products are completely command-line based. You can deploy them on each and every machine, whether your needs are Linux-based or Windows-based, and you can connect them with whatever continuous integration tool you have, be it Jenkins or Bamboo.
And the server products also give you capabilities to connect to dashboards like Polyspace Access, or to generate reports, which you may need to further process the results, for instance if you want to attach further business logic. This can be important if you want to decide on further steps. Some people know the concept of gating, or gated commits; this is one example of such further usage of the results.
I spoke about the Polyspace Jenkins plugin. In Jenkins, you can install it from the global plugin repository; you can find it directly if you search for Polyspace among the plugins. Then you can install it, and it helps you easily handle the installation folders, the metrics server, access rights, and things like that.
But it does not enforce a certain workflow on you, and I think that's quite important. You are completely free in how you want to build your workflow with the tools. We have some recommendations for you, but if you have slightly different ideas, no problem. You can skip steps. For instance, here in the list we have a pass/fail criterion and sending emails; if you don't want to use them, no problem. If you want to add another step, no problem. If you want to gather your source files not with Polyspace Configure but maybe fetch them from CMake or something like that, you can do it. We don't have a predefined way of doing that.
We use a command line interface and shell scripts, or bash scripts, and then you can implement it yourself. But we have plenty of suggestions and examples for you on how to do that. I think that's a good way, because the world is changing so fast, and the requirements and the tool landscape change so fast, that we cannot hard-wire it here.
OK, I spoke about automation. What does that mean? When we have a Jenkins in place, we want to do some steps. We want to check out the project sources, because we always want to start clean, not with some pre-compiled stuff or leftovers in your sandbox; we always want to check out clean. We want to get the build information cleanly, which means always checking what will really be compiled, with Polyspace Configure, for instance. We want to run the analysis. And the last step is making the results available. That's the four-step approach I'd like to show you now.
Again, you can have maybe another step in between; that's up to you. If you get these slides, by the way, you'll find documentation links here. These usually point you to the documentation, where you can read exactly how to do it and which commands and arguments to use, because otherwise my slides would just be full of text. That's not really convenient.
OK, and since I talk so much, let's start with a short video about using Polyspace with Jenkins. This is the Jenkins front end. What you can see are the different jobs, and if you click on a job, you can configure it. That's basically the way to configure it.
We have the source code management here to check out the source code repository from Git. We have the Polyspace settings, where we can say: use this server, this Polyspace web interface. And we have the shell commands here. This is what I mentioned up front: we can script how to use Polyspace in this shell, and that gives us all the flexibility we need.
So don't worry if you cannot read it that fast; it was just a teaser. We'll go into the details now of how these shell scripts or bash scripts should really look and what kind of options you have. Again, if you have questions, don't hesitate to put them in the chat, so after the session we can spend some more time answering all of them. That should definitely be possible. OK.
The first step, as I mentioned, is checking out the project sources. That's not a big deal. Usually all of you will have some kind of versioning system in place, I guess. If not, you can also copy something from your network drives, or whatever you want to do. There are plenty of possibilities, and it doesn't have much to do with Polyspace; it's more about how to use continuous integration, or about the approach of continuous integration itself.
But what's very important for Polyspace: you need, within your sources, a way to launch a build. Let's say we have a makefile in place which can be executed on this Jenkins machine. That is one requirement we have, because otherwise we cannot extract the build information.
And the other thing I recommend: provide a separate folder for the tool with some supporting artifacts. What are supporting artifacts? I will go into more detail later, but we need to define some general options. I call them global options; these are the options I mentioned that the project engineer or the build engineer defines manually: which MISRA rule set you want to apply, which checkers you want to enable. This kind of information needs to be written down, and ideally inside the repository. Because if you have it checked into the repository, you cannot lose it, and all changes will be tracked. So that's really something I recommend from the bottom of my heart, to keep this consistent.
Of course, you can also put it into the script itself, but in terms of consistency, the repository is really the better way. That's checking out the project sources.
The project configuration can be created from the desktop environment. If you started with the desktop tool and you have a working set of settings in place, you might wonder how to move it to continuous integration, because maybe you don't have all the arguments in mind. They are in the documentation, for sure. But going one slide back, with options like -checkers all, -code-metrics, -cert-cpp, there's a certain syntax behind them, and you want an easy way to get the configuration from desktop to server.
And this is possible. Again, you have your project configuration, and as soon as you run an analysis, as experienced Polyspace users know, you get a Results folder. In this Results folder, you have a .settings folder, and this .settings folder has a launching script which invokes an options_command.txt and a source_command.txt. These files contain all the settings we made in the graphical front end. So this can be very helpful to extract some of the settings and bring them into the global options file you want to use in Jenkins later on.
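As a sketch of that extraction step, assuming the folder and file names mentioned above (Results/.settings/options_command.txt; they may differ per release), a build engineer could seed the checked-in global options file like this. The fabricated options file below only stands in for what a real desktop run would generate:

```shell
#!/bin/sh
# For illustration only: fabricate the folder a desktop analysis would
# leave behind (a real Bug Finder run creates this itself).
mkdir -p Results/.settings
printf -- '-checkers all\n-misra3 mandatory-required\n' \
  > Results/.settings/options_command.txt

# Seed the checked-in global options file from the desktop configuration.
# Afterwards, prune any machine-generated parts (sources, -I, -D) by hand;
# those belong in the file generated by polyspace-configure instead.
mkdir -p tools/polyspace
cp Results/.settings/options_command.txt tools/polyspace/global_options.txt
cat tools/polyspace/global_options.txt
```

Checking the resulting tools/polyspace/global_options.txt into the repository keeps the configuration versioned, as recommended above.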
So that's not a task for each and every developer. It's more a task for the build engineer, the single person responsible for setting up the job in Jenkins. Not every developer will do that; you will have one experienced person working with it. This is the way to extract it.
By the way, if you don't find a certain setting, you can hover with the mouse over the setting in the graphical front end and you get a Help button. There you also find the command-line syntax in the documentation. If you don't find it there, you can create a support ticket with MathWorks or call us, so you get help eventually. But this is a very convenient step.
OK, the second step, getting the build information: we already spoke about using Polyspace Configure, and it's really convenient. Of course, you can also script it manually; I have some customers doing that and creating this information from other sources, maybe from the build system itself. Another approach would be to extract it directly from CMake output files; that would also work. But Polyspace Configure is our preferred way, because it really executes the build. It doesn't parse the makefiles; it really executes make and parses the standard output.
Polyspace Configure is launched with the shipped binary, polyspace-configure, here in the Windows version; we also have it for Unix. Then you specify the output options file; this options file will contain all the settings. That's the way to invoke it.
And finally, you just need to provide a command, for instance make. Or you can launch a script of your own: you say call my_script.bat or whatever. This is also well documented at the documentation link here. If you need more information, we are available for that as well.
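Put together, a minimal Polyspace Configure step in a CI shell script could look like the following sketch. The -allow-build-error and -output-options-file flag names follow the talk; check polyspace-configure's help for your release. make -B is an assumption here, used so the full build is observed even if build artifacts already exist:

```shell
#!/bin/sh
# Step 2 sketch: extract build information (sources, -I, -D) into an
# options file that the analysis engine consumes in the next step.
OPTS_FILE="build_options.txt"
CMD="polyspace-configure -allow-build-error -output-options-file $OPTS_FILE make -B"

echo "Running: $CMD"
# Only invoke the tool where it is actually installed (the CI agent).
if command -v polyspace-configure >/dev/null 2>&1; then
  $CMD
fi
```

Echoing the command before running it keeps the CI log readable, which pays off when you later debug a failed stage.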
OK, the third step is running the analysis. You can run the analysis with the Polyspace Bug Finder or Code Prover engine. They are two different executables, so you choose either polyspace-bug-finder-server or polyspace-code-prover-server.
And then you need to provide the options files. We spoke about two different options files: the one that was manually created with the global options, and the other from Polyspace Configure. You pass them on the command line of the Polyspace Bug Finder Server. That means you provide -options-file here with the build options file, which changes frequently, and here with the global options.
And then you have a results directory, and this results directory will finally contain all the artifacts, all the results, which can be uploaded to Polyspace Access in the last, fourth step. So to recap, you have three inputs: the global options file, the build options file, and, if you wish, some extra options, maybe for reporting or something like that. And then you get this results folder. It's pretty easy to launch the engine; the engine executes in your sandbox and produces the results.
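A sketch of that third step, with both options files passed via -options-file and a results directory via -results-dir. The paths are placeholders, and the flag names should be verified against the Bug Finder Server documentation for your release:

```shell
#!/bin/sh
# Step 3 sketch: run the Bug Finder engine with the generated build
# options plus the hand-maintained global options.
BUILD_OPTS="build_options.txt"                   # from polyspace-configure
GLOBAL_OPTS="tools/polyspace/global_options.txt" # checked into the repo
RESULTS_DIR="results"

CMD="polyspace-bug-finder-server -options-file $BUILD_OPTS -options-file $GLOBAL_OPTS -results-dir $RESULTS_DIR"
echo "Running: $CMD"
if command -v polyspace-bug-finder-server >/dev/null 2>&1; then
  $CMD
fi
```

Swapping in polyspace-code-prover-server with the same two options files would give you the Code Prover variant of this step.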
The last step is making the results available. If you have the results just on your continuous integration server, you can of course download them and open them in the Polyspace desktop product; that's possible, but it's not very convenient, and everyone in development needs them.
And there's another drawback, I would say. If you want to annotate findings, it's very difficult to do this in a single result file, and it's even more difficult, once you distribute this result file to several people, to get all the information back into one single location and merge it. So it's a better idea to have one central point where you upload the results. Then everyone can work with this central point of information, review the results, and make annotations in one single artifact.
And this is Polyspace Access. Of course, you can create reports, HTML, DOC, PDF, and so on, for the authorities and for the archives. But Polyspace Access itself is the best way to work with your results in larger teams.
So on the command line, you take the results folder from the last step, say -upload, and give the location of the results folder.
You can work with parent directories. In Polyspace Access, as we will see later, you can structure your projects into folders. So here you can provide the folder you want to put it into, then you just give your project a name and provide some credentials for uploading. It would be too much to show all of it on the command line or on the slide, but you will find what you can, or rather have to, provide at the documentation link.
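As a sketch of the upload step: the host name and parent-folder value below are placeholders, and credential handling is deliberately left out; the exact flag set (-host, -upload, -parent-project, -login, and how to pass a password) should be taken from the polyspace-access documentation for your release:

```shell
#!/bin/sh
# Step 4 sketch: publish the results folder to the Polyspace Access
# dashboard so the whole team can review and annotate the findings.
ACCESS_HOST="polyspace-access.example.com"   # placeholder host
RESULTS_DIR="results"
PARENT="myTeam/myApp"                        # folder structure in Access

CMD="polyspace-access -host $ACCESS_HOST -upload $RESULTS_DIR -parent-project $PARENT -login jenkins"
echo "Running: $CMD"
if command -v polyspace-access >/dev/null 2>&1; then
  $CMD   # credentials are supplied as documented for your release
fi
```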
So that's pretty easy as well: you have a results folder, your parent folder, maybe some extra options, and you use the polyspace-access executable, which is also available on your server. All four steps I have shown here are executed on the Polyspace server and can be orchestrated by Jenkins, Bamboo, or maybe your own scripts; that's up to you. That's what we call the continuous integration approach: launching all the different steps in a row and making results available consistently. And how that finally looks, we can see in Polyspace Access.
So in this demo, we see that if you launch the analysis in the Jenkins environment, it runs through here. You can see the run log; it's more or less a bunch of command-line output. Then you have this little link: the results are uploaded to the Polyspace dashboard. You can log in with your LDAP account, and then you see the graphical front end where you can review the different results that have been uploaded.
You can also upload different stages of results, which enables you to see what's new: what are the new findings since the last run? You can see quality trends, which issues have already been reviewed and which have not, and you can also assign findings to a certain person. So that's a great way to really deal with the results in one central dashboard.
What we can see here on the left side is the results list; again, this is Polyspace Access. You can click on the different results. On the right side, you can see the code, and you can see the event trace back to the root cause. What you can also see is some information about the rationale behind the check.
You can do the review here, selecting a certain status; we will do this later on. And you can also assign tickets: if a certain developer should take care of a finding, you can assign it to that person. That allows you a lot more than the desktop tool alone when you have larger teams. The desktop tool is great if you have a one-person team, or maybe two people; that's OK. But as soon as you have 10, 20, or 100 people in your team or department, Polyspace Access is the better solution to bring all this information together.
So far we have seen the Polyspace job with this scripting. Maybe some of you are familiar with the concept called Jenkins Pipeline. I think that's a very common way to automate things more easily, and you can include Polyspace in such a continuous delivery pipeline as well. The pipeline gives you some more capabilities. Of course, it's an alternative to the normal freestyle Jenkins project, but you can visualize the individual steps and see exactly what happens in which step. What we saw before was one large dump of log, which is not really helpful if you want to debug, or if you want to see results from different stages, or at least if you want to make decisions based on the stages: let's say, if this stage fails, then go to another stage, or do this or that.
Another thing is that it's reusable. We will see that the Jenkins Pipeline is a script that you can put into your repository as well. That means you can version this information too, which is really helpful later on for having a consistent workflow. Because otherwise, maybe your Jenkins or your server crashes and this information gets lost, and that, I would say from a project perspective, is very traumatic.
But if you have it in your repository, and hopefully your repository is backed up in different locations, then it cannot be lost. You set up a new Jenkins server and you still have your information available. You can also trace back what was done some weeks ago, or who changed something in the Jenkins Pipeline. This is pretty convenient. I really love Jenkins Pipelines, and I'd like to show you how to integrate Polyspace into a Jenkins Pipeline; it really is a pleasure for scripting.
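For orientation, a declarative Jenkinsfile wiring the four steps together might look roughly like this. This is a hedged sketch, not the official plugin syntax: the agent label, the PS_ACCESS_HOST variable, credentials handling, and the exact Polyspace flags are assumptions you would adapt from the Polyspace and Jenkins documentation:

```groovy
pipeline {
  agent { label 'polyspace' }   // an agent with Polyspace Server installed
  stages {
    stage('Checkout') {
      steps { checkout scm }    // sources plus the checked-in global options
    }
    stage('Configure') {
      steps {
        // Extract build information into the frequently-changing options file
        sh 'polyspace-configure -allow-build-error -output-options-file build_options.txt make -B'
      }
    }
    stage('Analyze') {
      steps {
        sh '''polyspace-bug-finder-server \
                -options-file build_options.txt \
                -options-file tools/polyspace/global_options.txt \
                -results-dir results'''
      }
    }
    stage('Upload') {
      steps {
        // PS_ACCESS_HOST set in Jenkins; credentials per the documentation
        sh 'polyspace-access -host $PS_ACCESS_HOST -upload results'
      }
    }
  }
}
```

Because this file lives in the repository next to the code, every change to the pipeline is versioned and reviewable, which is exactly the benefit described above.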
So, yeah, let's start with this simple demo. You have a different view here; this is the Blue Ocean view. Then you come to this Jenkins Pipeline, and you see it's like a chain with different steps. We start with a prepare step; in our case, that sets some environment variables, nothing fancy.
Then we have the checkout step, where we check out the Git repository containing our sources and so on. The next step is the Polyspace Configure step, where we start with a make clean. I'm not sure that's really necessary, because we start from a clean checkout, but anyhow. Then we invoke Polyspace Configure, and Polyspace Configure does exactly what I described before: it creates the options file we want to use in the next step.
What you can see here is actually the dump of the compiler. You see the compiler also gives us some information about defects; hopefully the same will be found by Polyspace as well. And it is, yes. Now we have the next step, the analysis step, which uses our options file and runs the analysis, as you can see here in the log as well.
So you can see it's much more convenient here, because the different steps are much better separated and the logs can be reviewed much more easily. And here we have, for instance, the step for uploading results. It's not the final step here, but it was the last step in my earlier slides: uploading results to Polyspace Access. You can see there are some arguments like -login and so on; you can specify the address of the server, the port, and so on.
And then we have the filter-results step. We download something back from Polyspace Access and derive some further workflows from it, maybe checking what the new defects are and sending an email to a colleague or, let's say, the project owner. What you do in these steps is up to you. But it's quite interesting that it's not a one-way street: you upload results to Polyspace Access, but you can also download results.
And the benefit here is that you can download filtered results. It's not just fetching the whole project; you can really ask for specific information. For instance: what's assigned to a certain user? What's new? This kind of information can be really helpful for deriving further steps, and it will be especially helpful when we talk about software quality objectives later on.
OK, so the next step is maybe extending the scripts for your development workflow. Of course, you can launch your analysis and download filtered results, as I said. Here you can see one piece of such a Jenkins Pipeline. We have all of this in the documentation, how to do it specifically, and we have plenty of experience in our application engineering team. So if you want to establish your own scripting, we can give you a head start. It works quite well, and it's a very convenient step.
Another very important topic when it comes to Jenkins, though the same concept exists in other systems as well, is distributed builds. Let's start with this picture. Imagine you have a Jenkins master, but you don't want to launch all of the steps on this machine. Maybe you don't want to install Polyspace on this Jenkins master; you want the Jenkins master to instrument certain clients, certain agents.
Maybe one agent is on Linux, one agent is on Windows, another uses some Docker setup. You can do this quite easily with the Jenkins master/agent concept, and you just host Polyspace on a certain agent. This can be a completely different machine, dedicated only to Polyspace. And you can also have plenty of agents for Polyspace: one for Linux, one for Windows, one for this project, one for another.
And the Jenkins master can take care of a lot of different things, for instance ensuring that only one job runs on an installation at the same time, this kind of stuff. It really manages the Jenkins agents you have in place, like the conductor of an orchestra. Each Jenkins agent will then check out a license from the license server. So it's a very convenient way for companies to establish Polyspace in their whole infrastructure.
So again, this concept exists in other systems as well. Here's one example with Jenkins, because I started with Jenkins, and for me it's convenient: it's free, and I can demo it quite easily. With other systems, I'd need a license or something like that; it's more complicated. But we can support you with other systems as well.
You can find this in Jenkins under Manage Nodes. I also made some videos to show you that, but because of the time, we cannot go into that detail right now.