Intermodule communications bus - I2C, CAN, and some thoughts / questions about MIDI cables

Continuing the discussion from Progress:

I think this would be a wise choice to avoid general flakiness and the nagging question, whenever the system experiences some intermittent hiccup, of “maybe it’s because we’re exceeding the I2C limits”.

I think this brings up an interesting point, and I’m wondering if you’ve already considered it, perhaps even written about it and I missed it. The ‘virtual’ patch cables that you have planned: will they support MIDI as well? I think it would be a very powerful additional capability and would fit in very well with the Riban Modular idea. It would require either another button-indicator for a MIDI out-in, or some way of indicating that you wanted to make a MIDI connection.

I am making good progress with CAN bus implementation. I think it’s a good plan too.

I had not considered routing MIDI in this way but am aware that some mechanism for configuring MIDI is required. Initially it would allow all MIDI input to be merged. I’m not convinced that such detailed configuration of MIDI is necessarily required, as the core functionality relates to control signals, but we could consider this if it later proves beneficial.

It’s a neat idea to have another signal indication for MIDI but it would require 2 more colours for MIDI source & destination.


I’m considering using SPI instead of I2C between the RPi & Brain STM32. This will allow higher transfer rates, avoiding a bottleneck. The STM32 will talk to the panels via CAN bus, which can run at 1Mbps. I2C is limited to 400Kbps whereas SPI can run at 60Mbps.

There may be benefit in using 2 CAN bus interfaces in the Brain although I think the STM32 has only 1 CAN interface.

re - SPI - wow, sounds very promising.

re - dual CAN bus - I’m not sure if you meant the specific STM32 you’re planning on or STM32 processors in general. If you’re still open on the processor, here’s a thread from ST’s forum where they identified five processors with dual CAN bus interfaces. I haven’t looked to see how suitable they are otherwise:

The post refers to eval boards, I’m guessing that the actual processors have two CAN ports as well.

This uses the STM32F107VCT, which is part of the “Connectivity Line” of devices. It has two independent CAN buses presented on hardware pins. Most STM32s have one fully implemented CAN bus plus another that can be interfaced with the main one, e.g. connecting two CAN buses to the same pins and decoding them separately within the STM32, which does not offer the benefit I seek (increased bandwidth on the wire).

I will keep in mind that the F107VCT supports a second CAN bus. It may be a contender for use in the Brain but not in the panels, which do not require this. It is about double the cost of the F103 (which itself is a bit more expensive than the F072 that I used in the prototype).

Thanks for pointing me at this device. It is good to know what is available in case it is needed. I hope it won’t be required and that the bandwidth of a single CAN bus is sufficient for the required data throughput. I should know more after the next batch of boards arrives, which I expect next month. They will most likely be based on the STM32F103 - unless something changes my mind before I order them.


Hi Riban,

I am Benoit from the Zynthian group (the “RTP-MIDI” man :wink:)

I am following your project with interest. You may be interested to know that I am working on a project to transport UMP (Universal MIDI Packet, aka MIDI 2.0) on a CAN bus, and one application is modular synthesizer automation.

I have written a draft specification and made a few prototypes. If you are interested, I can send you the specification draft (my goal being to make it public sooner or later).

Right now, I am using an RP2040 chip with the CAN2040 library on it, as RP2040s have been much easier to find than ST chips for the last 2 or 3 years. The total cost of the CAN interface for such a setup is around 3 euros (RP2040 = 1.2 euros each, flash is 0.6 euros and the CAN physical interface chip MCP2562 is 1.15 euros…)
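
For anyone curious what bringing up CAN on an RP2040 with can2040 looks like, here is a minimal sketch following the shape of the can2040 documentation. The PIO block, pin numbers, system clock and bitrate are placeholder assumptions, not Benoit’s actual wiring:

```c
#include <stdint.h>
#include "can2040.h"
#include "hardware/irq.h"   // pico-sdk IRQ helpers

static struct can2040 cbus;

// Called by can2040 on frame reception, transmit completion and bus errors
static void can2040_cb(struct can2040 *cd, uint32_t notify, struct can2040_msg *msg)
{
    if (notify & CAN2040_NOTIFY_RX) {
        // msg->id, msg->dlc and msg->data[] hold the received frame
    }
}

// The PIO interrupt must be routed to can2040's handler
static void PIOx_IRQHandler(void)
{
    can2040_pio_irq_handler(&cbus);
}

void canbus_setup(void)
{
    uint32_t pio_num = 0;               // PIO block used to bit-bang CAN
    uint32_t sys_clock = 125000000;     // default RP2040 system clock
    uint32_t bitrate = 1000000;         // 1 Mbps, as discussed in this thread
    uint32_t gpio_rx = 4, gpio_tx = 5;  // placeholder pins to the MCP2562

    can2040_setup(&cbus, pio_num);
    can2040_callback_config(&cbus, can2040_cb);

    irq_set_exclusive_handler(PIO0_IRQ_0, PIOx_IRQHandler);
    irq_set_priority(PIO0_IRQ_0, 1);
    irq_set_enabled(PIO0_IRQ_0, true);

    can2040_start(&cbus, sys_clock, bitrate, gpio_rx, gpio_tx);
}
```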

Hi Benoit! Welcome to our (yet another) community.

I was very tempted by the RP2040 for many reasons but it does not have the quantity of I/O I require so the STM32 remains a good option. I’ve not (yet) had supply issues.

I’m interested in what you are doing. I wonder how transparent the transport will be to the higher level protocol?

I’m using TJA1051T transceiver driven directly from the CAN port on the STM32. This does require a higher precision clock than I had initially implemented so the revision 3 boards have a crystal oscillator.

The protocol is custom as there are no suitable standards in this field.

Hi Riban,

I suppose you mean transparency of the CAN Transport vs. UMP / MIDI2.0

UMP messages can be 32, 64, 96 or 128 bits only.

For UMP messages of 32 bits and 64 bits, I simply transport them in a CAN data frame. Note that MIDI 1.0 messages (except SysEx ones) are transported transparently by UMP in the form of a 32-bit message (Message Type 2 in the UMP specification), so the CAN transport layer is totally transparent for them.
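
As an illustration of how compact that is, here is a minimal sketch (the helper name is made up) that packs a MIDI 1.0 Note On into a 32-bit UMP Message Type 2 word; the resulting four bytes fit in a single CAN data frame:

```c
#include <stdint.h>

// Pack a MIDI 1.0 Note On into a 32-bit UMP word (Message Type 2,
// "MIDI 1.0 Channel Voice Messages").
static uint32_t ump_mt2_note_on(uint8_t group, uint8_t channel,
                                uint8_t note, uint8_t velocity)
{
    return ((uint32_t)0x2 << 28) |                       // message type 2
           ((uint32_t)(group & 0x0F) << 24) |            // UMP group
           ((uint32_t)(0x90 | (channel & 0x0F)) << 16) | // MIDI 1.0 status
           ((uint32_t)(note & 0x7F) << 8) |              // data 1
           ((uint32_t)(velocity & 0x7F));                // data 2
}
```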

For UMP messages of 96 bits and 128 bits, I use a fragmentation mechanism, so the UMP message is transported in two CAN messages (8 bytes + 4 bytes for the 96-bit messages, and 8 bytes + 8 bytes for the 128-bit messages). Fragment identification is done using a dedicated CAN identifier.
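
A rough sketch of the fragmentation step for a 128-bit packet, assuming nothing beyond the 8 + 8 byte split described above. The structure and names are illustrative, byte order is left as host order, and how the fragment index is signalled in the CAN identifier is defined by the draft specification, not here:

```c
#include <stdint.h>
#include <string.h>

struct can_payload {
    uint8_t len;
    uint8_t data[8];
};

// Split a 128-bit UMP packet (four 32-bit words) into two 8-byte CAN payloads.
static void ump128_to_can(const uint32_t ump[4], struct can_payload frag[2])
{
    for (int i = 0; i < 2; ++i) {
        frag[i].len = 8;
        memcpy(frag[i].data,     &ump[i * 2],     4);
        memcpy(frag[i].data + 4, &ump[i * 2 + 1], 4);
    }
}
```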

For the special case of SysEx messages, UMP defines a sequence of 128-bit messages (in MIDI 2.0, there is no longer a concept of a “byte stream of unknown length”; the SysEx stream is replaced by a sequence of 128-bit packets).

So the transport protocol I created is fully transparent both for MIDI 1.0 and MIDI 2.0 messages.

Last thing: the transport protocol accepts up to 16 modules per CAN bus.

Benoit

Does this mean a maximum of 16 nodes on the bus? Why this limit?

Have you defined IDs for various message types? Is this combined with a mechanism for targeting specific nodes or is it all broadcast (everyone hears everything)?

Do you prioritise messages? CAN prioritises based on the ID, with lower values winning any contention.

Are you using standard or extended CAN? For riban modular I have used extended IDs during panel detection, then fall back to standard for most communication to minimise the packet length and hence reduce latency / increase bandwidth.

What data rate do you plan? Have you done any tests of latency and jitter?

You offered to share your protocol and open it publicly at some point. Are you engaging with a wider community? There may be others interested in such an implementation who may have input. (Is there anyone in the MIDI Association who may have an interest?) It would be good to see it standardised.

Sorry - lots of questions but all stuff that needs considering and which I am eager to understand.

To be completely exact: the bus I am developing has one controller (address 0) and up to 15 slave modules. So yes, it is a maximum of 16 nodes per bus.
The first reason is that all modules must have a dedicated address, and I use either pins on the backplane or a 16-position switch to set the address.
The cost of a controller is almost nothing as it uses only a Pico in the simplest version (with a Pico network hat, the cost is 15 euros including the CAN interface), so adding a second bus if necessary is not really an issue.

Besides that, I use this bus in a system in my studio, and I found that 16 modules is the maximum to keep the latency as small as possible, even with a 1Mbps CAN bus.

Yes. The ID is made of the following (one possible bit packing is sketched after the list):

  • the node address
  • the message type
  • a sequence counter for fragmented UMP packets
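
The sketch below shows one way those three fields could be packed into an 11-bit standard identifier. The field widths (4-bit node address, 4-bit message type, 3-bit sequence counter) are assumptions for illustration, not the layout from the draft specification. Putting the address in the most significant bits means a lower node number wins CAN arbitration, which fits giving the controller (node 0) the highest priority:

```c
#include <stdint.h>

// Assemble an 11-bit standard CAN identifier from illustrative field widths.
static uint16_t make_can_id(uint8_t node, uint8_t msg_type, uint8_t seq)
{
    return ((uint16_t)(node & 0x0F) << 7) |     // bits 10..7: node address
           ((uint16_t)(msg_type & 0x0F) << 3) | // bits 6..3:  message type
           (uint16_t)(seq & 0x07);              // bits 2..0:  sequence counter
}
```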

On CAN, everything is broadcast, as all nodes see all messages, so nothing prevents a node from having a dialogue with another one. However, the normal strategy for my bus is:

  • node 0 can send UMP packets to all nodes (broadcast) or to one node (using address 1 to 15)
  • nodes 1 to 15 normally send only to node 0 (which routes the messages to DAW or anything else over RTP-MIDI, USB, etc…)

Yes, messages from Node 0 have higher priority as they are normally used to drive the modules from the DAW, while messages coming from nodes 1 to 15 are typically the results of human actions on front panels, so they can be delayed slightly if necessary while Node 0 sends a message.

Only standard frames, as I want to keep the bandwidth used as small as possible (so identifiers are only 11 bits).

1 Mbps. This is the maximum for most CAN chips (CAN FD is still not very widely used, and its 5Mbps can be an issue if you don’t use high quality cabling). With 1 Mbps, it still works perfectly with HE10 connectors and flat cables between modules.
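
To put 1 Mbps in perspective, here is a back-of-envelope calculation of the worst-case time on the wire for one standard frame carrying 8 data bytes. The figures (roughly 111 bits per frame including interframe space, plus up to about 24 stuff bits) are textbook CAN numbers, not measurements on this bus:

```c
#include <stdio.h>

int main(void)
{
    const double bitrate = 1e6;          // 1 Mbps
    const double frame_bits = 111 + 24;  // standard 8-byte frame + worst-case stuffing

    printf("worst-case frame time: %.0f us\n", frame_bits / bitrate * 1e6); // ~135 us
    printf("frames per second:     %.0f\n", bitrate / frame_bits);          // ~7400
    return 0;
}
```

Roughly 7,000 full frames per second shared between 16 modules is in line with the latency figures mentioned below.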

Yes, I even use the system daily in my studio :wink:
The biggest part of the latency does not come from the CAN bus itself; it comes from the link to the DAW. On the CAN bus itself, for “serious” use (playing a sequence with a lot of continuous Control Change messages), latency is around 2 ms maximum, and jitter is around 1 ms worst case.
But as I said, I found that USB from the DAW can be a true nightmare… That’s why I prefer to use RTP-MIDI, to be honest (the latency is always kept under 3 ms worst case with RTP-MIDI and Network UMP).

Yes, definitely. And this includes proposing the protocol to the MIDI Association. The problem is that the MMA is still very busy with discussions about UMP Networking, as we want to release that protocol specification in the coming months, and there are no real resources right now in the working groups for the other transports.
In any case, whatever the decision of the MMA, nothing prevents publishing the protocol specification as a “project standard” that later becomes an adopted “global standard”.


Have you considered dynamically assigning addresses based on each node’s microcontroller UID? This is how riban modular builds its address table. Each node identifies itself on the CAN bus with its type and UID, then the Brain (the controller, node zero in your case) assigns an operating ID and stores a map between the operating ID, UID and functional unit.
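
A rough sketch of that enumeration idea, with made-up structures and names rather than the actual riban modular code. The controller keeps a table keyed by UID and hands out the next free operating ID when a node announces itself:

```c
#include <stdint.h>
#include <string.h>

#define MAX_NODES 63   // illustrative; riban modular has 6 address bits, 0 is the Brain

struct node_entry {
    uint8_t  uid[12];     // 96-bit microcontroller UID
    uint16_t panel_type;
    uint8_t  op_id;       // assigned operating ID, 1..MAX_NODES
    uint8_t  in_use;
};

static struct node_entry table[MAX_NODES];

// Called when an announcement frame (panel type + UID) arrives on the bus.
// Returns the operating ID to send back to the node, or -1 if the table is full.
static int assign_address(uint16_t panel_type, const uint8_t uid[12])
{
    for (int i = 0; i < MAX_NODES; ++i)       // already known? reuse its ID
        if (table[i].in_use && !memcmp(table[i].uid, uid, 12))
            return table[i].op_id;

    for (int i = 0; i < MAX_NODES; ++i) {     // otherwise take the first free slot
        if (!table[i].in_use) {
            memcpy(table[i].uid, uid, 12);
            table[i].panel_type = panel_type;
            table[i].op_id = (uint8_t)(i + 1);
            table[i].in_use = 1;
            return table[i].op_id;
        }
    }
    return -1;
}
```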

I haven’t thought this through for your use case so am not sure if it will work. It is good for riban modular where the functional units are created dynamically within the brain each run-time.

I think the main challenge would be to indicate the mapping to the user but that might be a part of UMP. (I am not yet fully familiar with MIDI 2.0.)

I mention this because physical switch / jumper configuration can be expensive (cost, space, etc.), requires user intervention and can lead to conflicts / contention. It can also limit the quantity of nodes. The dynamic address allocation in riban modular allows up to 64 nodes, although this number depends on the quantity of bits available in the protocol for the address. (6 in riban modular.)

Most microcontrollers have some form of UID that can be used for this purpose, effectively acting as a MAC address. The STM32 has a 96-bit UID. The RP2040 has a 64-bit UID (serial number).
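
For reference, reading the 96-bit UID on an STM32F103 is just three word reads from a fixed address (0x1FFFF7E8 on the F1 series per the reference manual; other families place it elsewhere, and the pico-sdk exposes the RP2040 equivalent via pico_get_unique_board_id()). A minimal sketch:

```c
#include <stdint.h>

#define STM32F1_UID_BASE 0x1FFFF7E8UL  // unique device ID register, F1 series

// Copy the 96-bit unique ID into three 32-bit words.
static void read_stm32_uid(uint32_t uid[3])
{
    const volatile uint32_t *reg = (const volatile uint32_t *)STM32F1_UID_BASE;
    uid[0] = reg[0];
    uid[1] = reg[1];
    uid[2] = reg[2];
}
```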

Hi Riban,

To answer simply: I don’t like “automatic” things like address allocation at all. I admit it is always cool for a commercial product to say “you just connect it and tadaaaa…” but I have had many experiences with setups that were supposed to configure themselves automatically, and which fail for one reason or another.
And in such cases, finding the problem can take a looooooot of time :grimacing:

Jumpers cost almost nothing, and 25 years ago all PC cards were configured like that and it worked nicely (setting four jumpers on a board is something that a musician is able to do :rofl:).
If you remember well, “plug&play”, which was supposed to solve the “jumper brainstorming”, turned out to be “plug&pray” in some cases :wink:

And about the maximum number of modules: I have decided to limit the maximum number of nodes to 16 for two reasons:

  • this value matches exactly the concept of group which appeared in UMP
  • I made some tests with heavy UMP traffic, and with a 1Mbps CAN bus, more than 16 modules can introduce serious latency

The revision 3 boards have arrived (with some issues which I will explain in the progress thread soon) and I have CAN working on them. The CAN interface seems okay but I have not yet loaded it heavily. I am still using I2C between the RPi 4 and a Bluepill STM32F103 board that is acting as the Brain’s CAN interface. I see regular exceptions in my Python test code that scans the bus for changes. This is likely an issue with the I2C implementation on the Bluepill. As mentioned above, I plan to switch to a faster (than I2C) interface for this.

I will revisit @BEB’s suggestion to use an RP2040 for the CAN interface in the Brain. There is benefit in keeping the same architecture throughout, i.e. using the STM32F103 for both panels and Brain: we have a single development environment where issues are generally resolved for all development, and libraries are common so tend to be more interoperable. But if there is tighter integration between the RPi4 & RP2040 then this may provide advantages. Of course there are controls and indications on the Brain and I wouldn’t want to run out of pins / need port expanders, nor would I want to re-write the interface code. I will take another look at the RP2040.

Since writing that paragraph I have taken a quick look at the CAN2040 project and am not convinced it provides sufficient benefit. It is a software layer that adds CAN to the RP2040. As I previously mentioned, I do like the idea of using the RP2040 but it does not provide the interfaces I need, including native CAN. The STM32 provides a hardware implementation of CAN which offloads a lot of the processing, allowing for more multitasking. On balance I believe the STM32 remains the best option:

  • It is a common architecture across riban modular devices.
  • It has hardware implementation of CAN, I2C & SPI.
  • It has a lot of I/O.
  • I have most of the interfaces working satisfactorily.

I am not sure I answered @tunagenes’ initial question in this topic.

Yes! I2C is designed for internal (to an enclosure), short (ideally on the same PCB) interfaces with a limited transfer rate (400Kbps). It is not robust (although there are ways to improve this) so it is not ideal for hostile environments (like a stage / venue). CAN resolves many of these issues. It is designed for very small data payloads, but that works for this purpose (control and monitoring of sensors), and it is robust by design (balanced transport, message priority, sender detection of data loss and some recovery mechanisms). The challenge here is interfacing with the RPi, which does not support CAN natively. It is a shame that I have to bounce the data across an extra interface. It adds a little latency but, more importantly, there is another layer of error detection / correction. I could use a turnkey solution like the MCP2515 SPI/CAN interface but I would prefer to minimise the quantity of chips. I need something on the Brain to interface with switches, potentiometers, LEDs, etc. so it makes sense to use the same STM32 that does this job on the panels and add the SPI/CAN interface within its code. It becomes a firmware issue rather than a hardware issue, which I hope is simpler to resolve.

In summary - I did choose to move to CAN for inter-module comms. I reduced the bus cable to 6 wires (2xGnd, 2x5V, CAN-H, CAN-L) and it seems to work. I can control the LEDs and measure switch and pot values reliably, and the errors I am seeing seem to relate to the I2C interface, which I intend to replace.

The next step in this topic will be to replace the I2C link with SPI which I hope will increase bandwidth and reduce errors. I also need to load up the bus more heavily and to write the C/C++ host code for the interface. (Currently testing with Python code.)


I found issues with implementing SPI on the STM32 (no slave mode implemented) and with the CAN driver on the RPi (up to 25% CPU), so I have implemented a simple USART connection between the RPi & STM32 Brain controller. This runs at 1Mbps (same as CAN), encodes data with a simplified COBS implementation and has a simple, one-byte checksum for error detection. Most messages are passed between USART & CAN without modification, providing the RPi with CAN access to the panels, but a first byte of 0xFF indicates a Pi/controller message, e.g. a request for the list of detected panels. Implementation is done and works well for a single panel. I will test how it scales. Testing is with a Python script but the interface will be embedded in the host C/C++ application.
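
For anyone unfamiliar with this style of framing, here is a generic sketch: standard COBS encoding with a one-byte additive checksum appended before encoding and a zero byte as the frame delimiter. The riban implementation is a simplified COBS variant, so treat this as an illustration of the idea rather than the actual wire format:

```c
#include <stdint.h>
#include <stddef.h>

// COBS-encode 'in' plus a one-byte additive checksum into 'out', terminated
// by a 0x00 delimiter. The output buffer needs a few bytes of headroom over
// 'len' (COBS code bytes, checksum and delimiter). Returns the frame length.
static size_t cobs_frame(const uint8_t *in, size_t len, uint8_t *out)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; ++i)
        sum += in[i];

    size_t out_idx = 1;      // next free output position (after first code byte)
    size_t code_idx = 0;     // where the current run's code byte will be written
    uint8_t code = 1;

    for (size_t i = 0; i <= len; ++i) {       // payload bytes, then the checksum
        uint8_t byte = (i < len) ? in[i] : sum;
        if (byte == 0) {
            out[code_idx] = code;             // close the current run
            code_idx = out_idx++;
            code = 1;
        } else {
            out[out_idx++] = byte;
            if (++code == 0xFF) {             // maximum run length reached
                out[code_idx] = code;
                code_idx = out_idx++;
                code = 1;
            }
        }
    }
    out[code_idx] = code;                     // close the final run
    out[out_idx++] = 0x00;                    // frame delimiter
    return out_idx;
}
```

The receiver strips the delimiter, COBS-decodes, and then verifies that the last byte matches the sum of the preceding bytes before forwarding the frame to the CAN side.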

I struggled to get the USART port on the RPi4 working in DietPi. After some fiddling with dtoverlays it seems to work, but I am not yet convinced that it isn’t throttling the CPU. The docs are a bit ambiguous, as are comments in the configuration files, and I seem to have found a bug in the DietPi configuration tool that made one of the USARTs hidden. I will revisit this and document the process. This should be simple: I just want a simple bidirectional serial connection. (I spent a little while debugging why one direction wasn’t working before realising I hadn’t restored the jumper wire! :blush:)

I still need to benchmark the RPi using this serial link.

I2C is now completely removed from the code but remains a hardware option on the panels for other projects to use if desired.
