Type a username from the list of connected users to send them a "Wazzup!" (Full Scratcher accounts only, sorry - this uses cloud variables.) Quit with the red stop sign when you're done.

This is a test of sending data from one process to another for use in games with a small number of players, for example sending checkers moves. Unlike existing cloud communications systems, it supports multiple 2-player scenarios... i.e. it is a point-to-point system, not a broadcast system. Note that it is *NOT* a chat system. This app sends pre-determined chat-like messages just to make it fun to use, because it needs a lot more multi-user testing before it can be used in games. I'd like a lot of people to try it out, and sending "Wazzup"s is more fun than sending debug messages... (So please don't waste everyone's time by reporting this as an illegal chat system, because it isn't.) If unexpected things happen, please describe them as best you can in the comments below, and save the debug log - I'll ask for a copy if I need it.
I used the encoding/decoding blocks from SJRCS_011 https://scratch.mit.edu/projects/10175296/

Here's the protocol. It's complicated because it attempts to be relatively robust in the face of failures, within the limitations of Scratch's cloud system, which doesn't really have a guaranteed robust failure mode.

The basic idea is that if “Sender” wants to send a message to “Recipient”, they set a variable called ‘sender and timestamp’ to the sender's name plus a timestamp. (see footnote on timestamps) Before setting it, they check whether it already has someone else's name in it; if so, they back off and wait until it is clear, without even trying to write to it. If, however, the variable was clear when they checked, they go ahead and set it. There's a chance someone else saw that it was clear at the same time, and that other person may have set it as well. So there's a step after setting it where you read back the value that was written, to see if you were the sender who managed to write to it successfully. Since cloud behaviour is a bit unpredictable in this area, there's a delay before reading back, to wait until everyone's changes have propagated, and also to be fairly confident that everyone else can see that the variable is now set, so that no-one else will still try to write to it. This is a write lock (sketched in code below).

Once the write lock is successfully claimed, the rest of the inter-process message is filled out (message, message type, message seqno) and finally ‘recipient’ is set to the recipient's name. When a recipient sees their name in that variable, they know a complete message has been prepared and they should fetch and decode it as soon as possible. After receiving the message, the recipient sets the ‘recipient’ variable to 0, telling the sender that they have received the message. This is a handshake. The sender still has the sender lock and may, if they want, send more messages (e.g. to other players in a multi-player game), but if so, it must be done immediately with pre-computed data - they can't go away and do an indefinite amount of computation while they hold the sender flag open. Finally the sender resets the ‘sender and timestamp’ variable, which frees up the lock and allows other people to use the communications channel.

There are also various timeouts involved in the above process, in case a sender holds the flag for too long, or a process is closed by a user (or just goes away due to networking issues).

While all this is going on, every recipient keeps a table of current active users updated. There will also be a message type by which every active user can be asked to automatically report their presence, so that the list of users can be forcibly resynchronised at regular but not too frequent intervals (but that's not yet implemented).

This scheme avoids the need for a single process to be designated the master process - all processes are equal and have the same chance of grabbing a lock. Maybe later some sort of round robin (like the revolving token in a Cambridge Ring) might be added to ensure that everyone gets a turn, but that can be layered on top of the current mechanism; we would need to be extra careful not to break the ring altogether when any one participant drops out... which is why there is nothing like that in this early draft :-)

There is also some provision (not yet implemented) for a ‘heartbeat’ message type: a null packet which is sent if nothing has been seen for some particular length of time.
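To make the lock-and-handshake dance easier to follow, here's a rough sketch in Python rather than Scratch blocks. The cloud.get/cloud.set interface stands in for Scratch's cloud variables, and the names and delay/timeout values (SETTLE_DELAY, ACK_TIMEOUT, etc.) are invented for illustration - the real project's numbers and variable handling may differ.

    import time

    CLEAR = "0"          # the protocol uses 0 to mean "free" / "received"
    SETTLE_DELAY = 2.0   # illustrative: time for cloud writes to propagate
    ACK_TIMEOUT = 10.0   # illustrative: how long to wait for the handshake

    def try_claim_lock(cloud, my_name):
        """Attempt to claim the write lock ('sender and timestamp')."""
        if cloud.get("sender and timestamp") != CLEAR:
            return False                        # someone holds it: back off
        stamp = f"{my_name} {time.time()}"      # sender's name plus a timestamp
        cloud.set("sender and timestamp", stamp)
        time.sleep(SETTLE_DELAY)                # let everyone's writes propagate
        # Read back: if two instances wrote at once, only one value survives,
        # so whoever sees their own stamp is the one who got the lock.
        return cloud.get("sender and timestamp") == stamp

    def send_message(cloud, my_name, recipient, msg_type, body, seqno):
        """Fill out the message fields, setting 'recipient' last."""
        if not try_claim_lock(cloud, my_name):
            return False
        cloud.set("message", body)
        cloud.set("message type", msg_type)
        cloud.set("message seqno", str(seqno))
        cloud.set("recipient", recipient)       # signals: message is complete
        # Handshake: the recipient clears 'recipient' once it has the message.
        deadline = time.time() + ACK_TIMEOUT
        while cloud.get("recipient") != CLEAR:
            if time.time() > deadline:
                break                           # unresponsive recipient
            time.sleep(0.2)
        cloud.set("sender and timestamp", CLEAR)  # release the lock
        return True

Note that setting ‘recipient’ last is what makes the packet effectively atomic from the receiver's point of view: nothing is acted on until every field is already in place.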
This will let clients know that the cloud system is still alive, and will help in detecting when users have dropped off (by closing the browser window etc.) and need to be removed from the active users list. Clean stops (red stop sign) are intercepted and the user is cleanly removed from the user list. If a message is sent to a user who is unresponsive, they're removed from the user list too. By the way, the user list mechanism is implemented a bit crudely at the moment, as it is only used for display and is not a critical component of the communication protocol.

TO DO: designate the people who follow the last sender as the next heartbeat pingers. The first person takes over heartbeat duties 3 secs after no messages, the second steps in if nothing has been seen for 6 secs, and so on - this will be more reliable in the face of people dropping out. Also, the last thing the protocol needs to make it 100% robust is a CRC on the packet, which the receiver checks and includes in their acknowledgement. This will allow the sender to retry and force confirmation that a packet was received, which will allow a guaranteed transport mechanism (cf. the way TCP builds reliable streams on top of IP datagrams). A sketch of that check-and-retry idea follows below.
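Here's a minimal sketch of that CRC/acknowledgement loop, again in Python rather than Scratch. The ‘ack crc’ variable name, the cloud interface, and the send_once callback are all assumptions made up for illustration; the CRC is carried as a decimal string because Scratch cloud variables can only hold numbers.

    import time
    import zlib

    def packet_crc(fields):
        """CRC-32 over the packet fields, as a decimal string
        (cloud variables can only hold numbers)."""
        return str(zlib.crc32(" ".join(fields).encode("utf-8")))

    def wait_for_ack(cloud, timeout=5.0):
        """Poll the hypothetical 'ack crc' variable until the receiver
        writes its acknowledgement, or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            ack = cloud.get("ack crc")
            if ack != "0":
                return ack
            time.sleep(0.2)
        return None

    def send_reliably(cloud, send_once, fields, max_retries=3):
        """Send a packet, then wait for the receiver to echo the CRC
        back in its acknowledgement; retry if the echo is missing or wrong."""
        expected = packet_crc(fields)
        for _ in range(max_retries):
            send_once(fields, expected)           # transmit packet + CRC
            if wait_for_ack(cloud) == expected:   # receiver echoed our CRC
                return True                       # confirmed delivery
        return False                              # still unconfirmed: give up

If the acknowledged CRC matches what was sent, the sender knows the packet arrived intact; if not, it retries, which is what turns the raw cloud-variable channel into a guaranteed transport.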