The creation of apps for Orbuculum is really only just getting underway. Any number of applications can attach to it simultaneously to deliver multiple views of, and insights into, the operation of the target, and only a few of those have been created so far.
The main orbuculum program collects data from the SWO pin on the CPU, processes it locally, and re-issues it via TCP/IP port 3443. The format of these data is exactly the same as that which arrives from the SWO pin, which also happens to be the format that the Segger JLink probe issues. By default the JLink uses port 2332…believe it or not, the choice of port 3443 for Orbuculum was made for very specific reasons which did not include consideration of the JLink port number, so that was quite a coincidence! Applications designed to use the TCP/IP flow from a JLink device can now also be used with a USB serial port or Black Magic Probe. Conversely, with a simple change to the source port number, orbuculum post-processing apps can hook directly to a JLink device, or to orbuculum itself; everyone's a winner.
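Because the feed is just a TCP socket, any tool that can read one can tap into it. As a trivial sketch, netcat can capture the raw stream to a file (assuming orbuculum is running locally on its default port):

```shell
# Capture the raw SWO stream that orbuculum re-issues on port 3443.
# Assumes orbuculum is already running on this machine; stop with Ctrl-C.
nc localhost 3443 > raw.swo
```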
The orbuculum suite currently includes a couple of simple example applications that use the infrastructure…creating new ones is trivial based on the code you can find in these examples.
The simplest of these existing apps is orbdump, which dumps data directly to a file. That’s useful when you just want to take a sample period for later processing…perhaps pushing it into something like Sigrok for processing in conjunction with other data. A command line something like this will dump 3 seconds of data directly into the file output.swo;
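A sketch of such an invocation, assuming -l takes the sample length in milliseconds and -o the output filename (check orbdump -h for the exact options in your version):

```shell
# Grab roughly 3 seconds of SWO data straight into output.swo.
# -l (length in ms) and -o (output file) are assumed option names.
orbdump -l 3000 -o output.swo
```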
We’ve already mentioned orbtop. That tool creates unix-style top output, but it features one little Easter Egg: there’s an option, -o <filename>, which dumps the processed sample data to a file, and an example shell script in Support/orbplot_top uses these data to produce pie charts of the distribution of the CPU load.
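A hedged sketch of that workflow; the -e (elf image) option name and the sample filename here are assumptions, so check orbtop -h before relying on them:

```shell
# Run orbtop against the firmware image, dumping the processed samples
# to a file that the Support/orbplot_top script can then chart.
orbtop -e firmware.elf -o samples.top
```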
Frequently an application needs to merge multiple data sources as a precursor to using them in other apps. If you’ve got orbuculum producing several fifos with independent data in each, there are unix tools that can do that, something like;
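One way to do it, assuming two fifos whose names stand in for whatever channel names you configured via orbuculum's options:

```shell
# Drain two of orbuculum's fifos into a single file in parallel.
# The fifo names are placeholders for your own channel configuration.
cat swo/chan1 >> merged.log &
cat swo/chan2 >> merged.log &
```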
The problem with this is that you can never be completely sure of the order in which the data are merged in the output file, so a dedicated tool, orbcat, is provided. It hooks to the TCP/IP port of orbuculum and takes the same output format specifiers (but without the fifo names), dumping the resulting flow either to stdout or to a file for use by other tools, like this;
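A minimal sketch; the channel numbers and printf-style formats are illustrative rather than taken from any particular target:

```shell
# Subscribe to orbuculum's TCP feed and interleave two ITM channels
# onto stdout; channel 0 as characters, channel 1 as decorated ints.
orbcat -c 0,"%c" -c 1,"[%d]\n"
```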
Since each value for each channel arrives discretely, it is possible to be certain that each one is completely written before the next: whatever order they’re written in on the target is the order they will be received in on the host (watch out for target OS issues here though!). This resolves the problem of inconsistent intermingling. Indeed, it’s possible to go further and use the enforced sequencing to advantage on the host. For example, we can write two characters and an int into a csv file on the host with an orbcat line like the following;
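A sketch of such a line, assuming the two characters arrive on channels 1 and 2 and the int on channel 3 (adjust the channel numbers to match your target code):

```shell
# Two characters and an int per csv row; the trailing \n on the last
# channel ends each line. Channel numbers are assumptions.
orbcat -c 1,"%c, " -c 2,"%c, " -c 3,"%d\n" > output.csv
```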
which would result in lines that look something like;
g, w, -453
Always bear in mind that there is no (real) limit to the number of simultaneous apps that can use the dataflow from the orbuculum TCP/IP port, nor on the re-use of data for multiple dumps; perhaps there’s a reason for creating two csv files, with the data above in a different order, for example.
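For instance, two orbcat instances can consume the same flow simultaneously, each writing the same channels in a different column order (channel numbers as in the earlier sketch, and just as much an assumption):

```shell
# Two independent consumers of the same TCP feed, producing two csv
# files with the columns arranged differently.
orbcat -c 1,"%c, " -c 2,"%c, " -c 3,"%d\n" > chars_first.csv &
orbcat -c 3,"%d, " -c 1,"%c, " -c 2,"%c\n" > int_first.csv &
```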
Orbuculum is only just at the start of its lifecycle. It can collect and distribute SWO data, but it’s the apps that make use of these data that make it powerful, and there are plenty more of those to be created for many different purposes.
For now, the most interesting app that comes with the suite is orbstat, and that will be the subject of the next post.