Node.js cluster module appears to break socket.io handshake
I have the following simple WebSocket server built around the Socket.IO library:
var PROCESSES = 1,
    cluster = require('cluster'),
    i;

if (cluster.isMaster) {
    // Master: fork the configured number of worker processes.
    for (i = 0; i < PROCESSES; i++) {
        console.log('Forking worker', i);
        cluster.fork();
    }
} else {
    // Worker: run a Socket.IO echo server on port 8080.
    (function () {
        var server = require('http').Server(),
            io = require('socket.io')(server);

        io.on('connection', function (socket) {
            socket.on('message', function (message) {
                socket.emit('message', message + ' too!');
            });
        });

        server.listen(8080);
    })();
}
When started, it creates a single server process that listens for WebSocket connections and echoes each message back to the client:
$ iocat --socketio ws://localhost:8080
> i am hungry
i am hungry too!
> i like you
i like you too!
>
Now, when I change the PROCESSES variable to a number larger than 1, the client can no longer connect.
var PROCESSES = 2,
...
...results in...
$ iocat --socketio ws://localhost:8080
> client.on error
$ iocat -v --socketio ws://localhost:8080
> SIOClient> SIOClient: url-> ws://localhost:8080
SIOClient> onError { [Error: xhr poll error] description: 400 }
client.on error
My gut feeling is that the cluster module, when given more than one worker process, is switching the client from one process to another mid-handshake. But I would have thought that the entire connection, from the client initiating the handshake to the socket closing at the very end, happened over one persistent, keep-alive connection.
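To test that hunch, one thing I could try is changing the cluster's scheduling policy so that the operating system, rather than the master's round-robin, hands out incoming connections (a sketch only; I'm assuming the default round-robin policy is what splits a client's requests across workers):

var cluster = require('cluster');

// Hypothetical experiment, not a fix: SCHED_NONE lets the operating
// system distribute new connections instead of the master process
// round-robining them. It has to be set before the first fork().
cluster.schedulingPolicy = cluster.SCHED_NONE;

Even then, nothing guarantees that two HTTP polling requests from the same client land on the same worker, so this would only make the failure less likely, not impossible.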
So what exactly is going on here? And how could it be worked around? I'm familiar with the idea of using a Redis store to share state between server processes on different machines, but that feels like too much infrastructure for my use case (collecting a stream of events from the client and replying with an acknowledgement).
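The lightest workaround I can think of is forcing the client to use the websocket transport from the start, so the whole session rides a single TCP connection instead of several polling requests. A sketch using socket.io-client rather than iocat (transports is a standard socket.io-client option):

var socket = require('socket.io-client')('http://localhost:8080', {
    // Skip the HTTP long-polling handshake entirely; a single upgraded
    // connection cannot be split across workers mid-session.
    transports: ['websocket']
});

socket.on('connect', function () {
    socket.emit('message', 'i am hungry');
});

socket.on('message', function (message) {
    console.log(message); // hopefully: "i am hungry too!"
});

Whether that is acceptable depends on whether I can give up the polling fallback for older clients.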