Hi,
Below is a little ML program that prints out everything sent to port 8080. On Linux it behaves much as expected, but on Mac OS X, as soon as the Socket.accept call is made, the process uses 100% of the CPU.
The code:
(*
  Reproducible on Mac OS X 10.6.5 with Poly/ML 5.4; not reproducible on Linux :)
  After calling main() the CPU usage becomes 100%. After connecting to this
  server socket with "telnet localhost 8080" the CPU usage is still 100%.
*)

fun mkServerSocket () =
    let
        val server = INetSock.TCP.socket ()
        (* Set SO_REUSEADDR before binding so the port can be rebound
           straight after a restart. *)
        val _ = Socket.Ctl.setREUSEADDR (server, true)
        val _ = Socket.bind (server, INetSock.any 8080)
        val saddr = INetSock.fromAddr (Socket.Ctl.getSockName server)
        val _ = Socket.listen (server, 128)
    in
        (saddr, server)
    end;

fun readLoop active_socket =
    let
        (* Block until up to 80 bytes arrive, then print them. *)
        val s = Byte.bytesToString (Socket.recvVec (active_socket, 80))
        val _ = PolyML.print s
    in
        (PolyML.print "about to read...\n"; readLoop active_socket)
    end;

fun main () =
    let
        val (saddr, server_socket) = mkServerSocket ()
        val (active_socket, active_socket_addr) = Socket.accept server_socket
    in
        readLoop active_socket
    end;
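A side note: readLoop as written never terminates. Once the client disconnects, Socket.recvVec returns an empty vector on every subsequent call, so the loop would spin at 100% CPU even with the runtime problem fixed. A minimal terminating variant, where readLoop' is just an illustrative name and not part of the original code:

fun readLoop' active_socket =
    let
        val v = Socket.recvVec (active_socket, 80)
    in
        if Word8Vector.length v = 0
        then Socket.close active_socket  (* empty vector: peer closed the connection *)
        else (PolyML.print (Byte.bytesToString v);
              PolyML.print "about to read...\n";
              readLoop' active_socket)
    end;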
Hi Lucas, I've committed a fix to SVN head. The problem was that Mac OS X requires the time-out argument to the "select" system call to be properly split into seconds and microseconds, with the usec field below a million, whereas Linux seems to be more tolerant of a usec field larger than a million. I seem to remember making this change in a previous commit to the I/O module, but it looks like I forgot to make the same change in the networking code.
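Concretely, a 2.5 second timeout has to reach select(2) as 2 seconds plus 500,000 microseconds; a single usec value of 2,500,000 is out of range on Mac OS X, so select presumably fails immediately rather than waiting, and the retry loop spins. The same internal select path can be exercised from ML with Socket.select; a minimal sketch against the fixed runtime, where waitForConnection is just an illustrative name using the server socket from mkServerSocket above:

fun waitForConnection server_socket =
    let
        val desc = Socket.sockDesc server_socket
        (* Wait up to 2.5 s for an incoming connection instead of
           blocking indefinitely in accept. *)
        val {rds, ...} =
            Socket.select {rds = [desc], wrs = [], exs = [],
                           timeout = SOME (Time.fromMilliseconds 2500)}
    in
        case rds of
            [] => NONE  (* timed out *)
          | _  => SOME (Socket.accept server_socket)
    end;

With the fix this should wait quietly for up to 2.5 seconds; without it the call would return immediately and burn CPU.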
If this looks all right I'll make the change in the 5.4 fixes branch.
Regards, David
Hi David, tried, tested and all works beautifully, thanks for the fast fix!
cheers, lucas