I honestly didn’t expect, when I started at the end of May 2023, that it would take me this many weeks to get my SMB 3 client port to RISC OS to roughly 20% complete. I’ve been committing to a branch of my GitHub repo (linked), so if you’re interested, you can see how far I’ve progressed.
The big question is how long it’ll take to complete the remaining 80% of the work. My original hope of finishing the entire project in 6 to 8 weeks hasn’t panned out, so the effective hourly rate of the bounty program has become much less attractive than I’d originally hoped. Those estimates were based on the completely unrealistic plan of writing my own implementation of the SMB 2 and 3 protocols and integrating it into the current RISC OS SMB client.
Once I realized that Apple had open-sourced an SMB client that I could port, and that their code should run efficiently on RISC OS, I had a feasible plan, though it required a whole lot of work designing adaptation layers, and those layers have taken me the past several weeks to design and code. The tricky parts were bridging two slightly different APIs for mbufs (the BSD UNIX memory buffers already used by the RISC OS TCP/IP stack, paired with a custom implementation of the memory manager) and writing a high-performance asynchronous TCP socket interface.
The most disappointing aspect of the RISC OS TCP/IP stack, to me, is the poor performance of many clients, including the bundled Web browser, NetSurf: I can only get about 1.5 MB/s download speed on my local LAN. Not good! I finally got around to running Wireshark to see what was actually going on, and the culprit appears to be related to scheduling and polling, not the stack itself.
The RISC OS Web browser can’t read the stream of incoming data fast enough for the TCP/IP stack to send ACKs back, so the Web server pauses with “TCP window full” for almost exactly 1/100th of a second (the RISC OS tick rate). Then the client gets scheduled, reads whatever’s in the TCP receive buffer (less than 64KB), and goes back to sleep for another clock tick, so the download pauses for another 0.01 seconds. Wash, rinse, repeat.
Thankfully, this dynamic shouldn’t be an issue for my project, because my code catches the Internet_Event that the RISC OS stack can send (if you request it) to notify you when there’s data available to read (or room to write, if the TCP send buffer has filled up), or when the remote end closes the connection. But I also have to use the RTSupport routines to create realtime threads that handle those events at a higher priority than the RISC OS foreground app. I can see now why the URL_Fetcher module is so popular.