Note that what I am thinking about is not something for the common case; I am specifically thinking of applications that have to drink from the fire hose, with GigE interfaces to saturate, and so on. When I was promoting fewer threads, I was doing so because that was the way to get the highest throughput, not necessarily because it was easier on the developer.
I agree with you that a shared-everything model isn't something the average application programmer can deal with. The worst part is that they often think they can deal with it, and then... whoa.
The mechanisms for shared-nothing concurrency and the various means of IPC (sockets/pipes, shared memory) have existed in Unix since, well, forever. But where are the users? Meanwhile, we gave them nuclear chainsaws (shared-everything threads), and they were all over it. What do you make of that?
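For anyone who hasn't seen it spelled out, here is a minimal sketch of the kind of shared-nothing setup the comment refers to: a parent and a forked child that share no memory and communicate only over a pipe. This is just an illustration of the classic Unix pattern, not anyone's production code, and error handling is mostly omitted.

```c
/* Shared-nothing concurrency over a Unix pipe: parent and child
 * exchange messages, never memory. Illustrative sketch only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: the "worker" */
        close(fds[0]);                     /* child only writes */
        const char *msg = "result from worker\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }

    /* parent: the "coordinator" */
    close(fds[1]);                         /* parent only reads */
    char buf[128];
    ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent got: %s", buf);
    }
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Nothing exotic: the isolation is free, the data races are impossible by construction, and yet this style never caught on with application programmers the way shared-everything threads did.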