﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>BlogJava-the journey is the reward...-文章分类-java techs</title><link>http://www.blogjava.net/adapterofcoms/category/43828.html</link><description /><language>zh-cn</language><lastBuildDate>Sun, 28 Mar 2010 02:59:29 GMT</lastBuildDate><pubDate>Sun, 28 Mar 2010 02:59:29 GMT</pubDate><ttl>60</ttl><item><title>在运行时,你能修改final field的值吗?</title><link>http://www.blogjava.net/adapterofcoms/articles/315748.html</link><dc:creator>adapterofcoms</dc:creator><author>adapterofcoms</author><pubDate>Thu, 18 Mar 2010 01:36:00 GMT</pubDate><guid>http://www.blogjava.net/adapterofcoms/articles/315748.html</guid><wfw:comment>http://www.blogjava.net/adapterofcoms/comments/315748.html</wfw:comment><comments>http://www.blogjava.net/adapterofcoms/articles/315748.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/adapterofcoms/comments/commentRss/315748.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/adapterofcoms/services/trackbacks/315748.html</trackback:ping><description><![CDATA[<p>&nbsp;</p>
<p>Take [final&nbsp;int x=911] and [static final int y=912] as examples, on jdk1.6.0_16 (the version is given this precisely because a JDK bug shows up below).</p>
<p>Sample class:</p>
<p>class Test {&nbsp;<br />
&nbsp;private final&nbsp; int x=911;//modifiers:final-&gt;18,non-final-&gt;2<br />
&nbsp;static final private&nbsp; int y=912;//modifiers:final-&gt;26,non-final-&gt;10&nbsp;<br />
&nbsp;public int getX(){<br />
&nbsp;&nbsp;return x;<br />
&nbsp;}&nbsp;&nbsp;<br />
&nbsp;public static int getY(){<br />
&nbsp;&nbsp;return y;<br />
&nbsp;}&nbsp;<br />
}&nbsp;</p>
<p>&nbsp;In Java a final field means a constant: assigned once, never changed. The compiler optimizes final fields as follows:</p>
<p>e.g:</p>
<p>Test t=new Test();</p>
<p>Wherever the program refers to t.x, the compiler substitutes the literal 911; the return x in getX() is likewise compiled into return 911;</p>
<p>So even if you change the value of x at runtime it gets you nowhere: the compiler folded these references in at compile time.</p>
<p>The exception is Test.class.getDeclaredField("x").getInt(t), which reads the real field value;</p>
<p>&nbsp;</p>
<p>So how do you change the value of the final field x at runtime?</p>
<p>For private final&nbsp; int x=911 the Field.modifiers value is 18, while for private int x=911 it is 2.</p>
<p>So if we change Field[Test.class.getDeclaredField("x")].modifiers from 18 [final] to 2 [non-final], we can then modify the value of x.</p>
<p>&nbsp;Test tObj=new Test();&nbsp;&nbsp;<br />
&nbsp;Field f_x=Test.class.getDeclaredField("x");&nbsp;&nbsp;<br />
&nbsp;&nbsp;<br />
&nbsp;&nbsp;//change modifiers 18-&gt;2<br />
&nbsp;&nbsp;Field f_f_x=f_x.getClass().getDeclaredField("modifiers");<br />
&nbsp;&nbsp;f_f_x.setAccessible(true);&nbsp;&nbsp;<br />
&nbsp;&nbsp;<strong><font color="#ff0000">f_f_x.setInt(f_x, 2/*non-final*/);<br />
</font></strong>&nbsp;&nbsp;<br />
&nbsp;&nbsp;f_x.setAccessible(true);<br />
&nbsp;&nbsp;f_x.setInt(tObj, 110);//change the value of x to 110.&nbsp;&nbsp;<br />
&nbsp;&nbsp;System.out.println("statically compiled x: "+tObj.getX()+" ------ value changed at runtime to 110: "+f_x.getInt(tObj));<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;f_x.setInt(tObj, 111);//and you can keep changing the value of x.&nbsp;&nbsp;<br />
&nbsp;&nbsp;System.out.println(f_x.getInt(tObj));</p>
<p>But trying to restore the original modifiers afterwards with f_f_x.setInt(f_x, 18/*final*/); has no effect, because a Field initializes its FieldAccessor reference only once.</p>
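<p>Putting the steps above together, here is a self-contained sketch (my own assembly, not code from this post) that reuses the Test class defined at the top. It assumes a jdk1.6-era runtime; on much newer JDKs the reflective modifiers trick is blocked, so treat it purely as an illustration:</p>
<p>import java.lang.reflect.Field;<br />
<br />
public class FinalFieldDemo {<br />
&nbsp;&nbsp;public static void main(String[] args) throws Exception {<br />
&nbsp;&nbsp;&nbsp;&nbsp;Test tObj = new Test();<br />
&nbsp;&nbsp;&nbsp;&nbsp;Field f_x = Test.class.getDeclaredField("x");<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;//drop the final bit before the FieldAccessor is created: modifiers 18 -&gt; 2<br />
&nbsp;&nbsp;&nbsp;&nbsp;Field f_f_x = Field.class.getDeclaredField("modifiers");<br />
&nbsp;&nbsp;&nbsp;&nbsp;f_f_x.setAccessible(true);<br />
&nbsp;&nbsp;&nbsp;&nbsp;f_f_x.setInt(f_x, 2/*non-final*/);<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;f_x.setAccessible(true);<br />
&nbsp;&nbsp;&nbsp;&nbsp;f_x.setInt(tObj, 110);<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;//getX() still returns the inlined constant 911; reflection sees the new value 110<br />
&nbsp;&nbsp;&nbsp;&nbsp;System.out.println(tObj.getX() + " vs " + f_x.getInt(tObj));<br />
&nbsp;&nbsp;}<br />
}</p>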
<p>&nbsp;</p>
<p>While doing the above I also found a JDK bug. If you change the red line above to the following:</p>
<p>f_f_x.setInt(f_x, <font color="#ff0000"><strong>10</strong></font>/*this value is the <strong>static</strong> non-final modifiers, but x is <strong>non-static</strong>, so f_x ends up with a static FieldAccessor*/); then you get "A fatal error has been detected by the Java Runtime Environment" and the corresponding err log file is produced. Clearly the JVM does not handle this case. I have submitted it to the sun bug report site.&nbsp;</p>
<p><span style="font-size: large"><strong>Sun notified me on 2010-03-26 that they have accepted the bug, bug id : 6938467. It may take one or two days before it shows up on the external site.</strong></span></p>
<p><strong><span style="font-size: large"><a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6938467" target="_blank">http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6938467</a></span></strong>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<img src ="http://www.blogjava.net/adapterofcoms/aggbug/315748.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/adapterofcoms/" target="_blank">adapterofcoms</a> 2010-03-18 09:36 <a href="http://www.blogjava.net/adapterofcoms/articles/315748.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>MINA,xSocket同样的性能缺陷及陷阱,Grizzly better</title><link>http://www.blogjava.net/adapterofcoms/articles/314560.html</link><dc:creator>adapterofcoms</dc:creator><author>adapterofcoms</author><pubDate>Fri, 05 Mar 2010 01:37:00 GMT</pubDate><guid>http://www.blogjava.net/adapterofcoms/articles/314560.html</guid><wfw:comment>http://www.blogjava.net/adapterofcoms/comments/314560.html</wfw:comment><comments>http://www.blogjava.net/adapterofcoms/articles/314560.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/adapterofcoms/comments/commentRss/314560.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/adapterofcoms/services/trackbacks/314560.html</trackback:ping><description><![CDATA[<p>MINA,Grizzly[grizzly-nio-framework],xSocket都是基于 java nio的 server framework.<br />
The <strong>performance flaw in focus</strong> here is this: when SelectionKey.OP_READ becomes ready on a channel, <strong>1.</strong> does the select thread read the data first and then dispatch it to the application's handler, or <strong>2.</strong> does it dispatch right away and let a handler thread do both the reading and the handling?<br />
mina and xsocket do <strong>1. </strong>grizzly-nio-framework does <strong>2.<br />
</strong>Reading the bytes in a channel buffer is fast, but zoom out: once the connected channels reach the tens of thousands, or more, this delayed response becomes ever more noticeable.<br />
MINA:<br />
for all selectedKeys <br />
{<br />
&nbsp;&nbsp;&nbsp; <strong>read data</strong> <strong>then</strong> fireMessageReceived.<br />
} <br />
xSocket:<br />
for all selectedKeys <br />
{<br />
&nbsp;&nbsp;&nbsp; <strong>read data ,append it&nbsp;to readQueue</strong>&nbsp;<strong>then</strong> performOnData.<br />
} <br />
Note that mina does not use a threadpool to dispatch when calling fireMessageReceived, so the application has to dispatch again inside handler.messageReceived. xsocket's performOnData, by contrast, dispatches to a threadpool [WorkerPool] by default; WorkerPool does fix the problem of pool threads never being filled up to the maximum [the same approach as tomcat6], but <a href="http://adapterofcoms.javaeye.com/blog/599027" target="_blank" mce_href="http://adapterofcoms.javaeye.com/blog/599027">its scheduling mechanism is still inflexible</a>.<br />
Grizzly:<br />
for all selectedKeys <br />
{<br />
&nbsp;&nbsp; <strong>[</strong>NIOContext---filterChain.execute---&gt;our filter.execute<strong>]&lt;------run In DefaultThreadPool<br />
</strong>}<br />
grizzly's DefaultThreadPool pretty much rewrites the java util concurrent threadpool and uses its own LinkedTransferQueue, but it likewise <a href="http://adapterofcoms.javaeye.com/blog/599027" target="_blank" mce_href="http://adapterofcoms.javaeye.com/blog/599027">lacks a flexible mechanism for scheduling the pool's threads</a>.&nbsp;</p>
<p><strong>Below is a source-level walk through MINA, xSocket and Grizzly in turn:<br />
</strong>Apache MINA&nbsp;(using the mina-2.0.0-M6 source):<br />
&nbsp;&nbsp;&nbsp; The most common mina nio tcp usage looks like this:<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; NioSocketAcceptor acceptor = new NioSocketAcceptor(/*NioProcessorPool's size*/);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //chain.addLast("codec", new ProtocolCodecFilter(<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //new TextLineCodecFactory()));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ......<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; // Bind<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acceptor.setHandler(/*our IoHandler*/);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acceptor.bind(new InetSocketAddress(port));<br />
------------------------------------------------------------------------------------<br />
&nbsp;&nbsp;&nbsp; Start with NioSocketAcceptor (extends AbstractPollingIoAcceptor):<br />
bind(SocketAddress)---&gt;bindInternal---&gt;startupAcceptor: starts AbstractPollingIoAcceptor.Acceptor.run on a thread from the executor[<strong>Executor</strong>], registers [interestOps:SelectionKey.OP_ACCEPT], then wakes up the selector.<br />
As soon as a connection comes in, a NioSocketSession -- corresponding to the channel -- is built, and session.getProcessor().add(session) adds the current channel to a NioProcessor's selector [interestOps:SelectionKey.OP_READ]; from then on, requests arriving on that connection are handled by that NioProcessor.</p>
<p>A few points worth spelling out:<br />
1. One NioSocketAcceptor maps to several NioProcessors; NioSocketAcceptor uses a SimpleIo<strong>ProcessorPool </strong>with DEFAULT_SIZE = Runtime.getRuntime().availableProcessors() + 1. This size can of course be set when you new the NioSocketAcceptor.<br />
2. One NioSocketAcceptor maps to one java nio selector [OP_ACCEPT], and each NioProcessor also maps to one java nio selector [OP_READ].<br />
3. One NioSocketAcceptor maps to one internal AbstractPollingIoAcceptor.Acceptor---thread.<br />
4. One NioProcessor likewise maps to one internal AbstractPollingIoProcessor.Processor---thread.<br />
5. If you do not supply an <strong>Executor</strong> (thread pool) when you new the NioSocketAcceptor, Executors.newCachedThreadPool() is used by default.<br />
This <strong>Executor</strong> is shared by the NioSocketAcceptor and the NioProcessors, i.e. the Acceptor---thread (one) and the Processor---threads (several) above all come out of this Executor.<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Once a connection (java nio channel--NioSession) has been added to <strong>ProcessorPool</strong>[i]--NioProcessor, control passes to AbstractPollingIoProcessor.Processor.run,<br />
which runs on one thread of the <strong>Executor</strong> above; the current NioProcessor handles the requests [interestOps:SelectionKey.OP_READ] of every connection registered on its selector.</p>
<p>The main flow of AbstractPollingIoProcessor.Processor.run:<br />
for (;;) {&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ......<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; int selected = selector(final SELECT_TIMEOUT = 1000L);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .......<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (selected &gt; 0) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; process();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ......<br />
}</p>
<p><strong>process</strong>()--&gt;<strong>for</strong> <strong>all</strong> session-channel:OP_READ --&gt;<strong>read(session)</strong>: this <strong>read</strong> is the AbstractPollingIoProcessor.<strong>private void read(T session)</strong> method.<br />
read(session)'s main flow is: read channel data into buf; if readBytes&gt;0 then IoFilterChain.fireMessageReceived(buf)/*<strong>our IoHandler.messageReceived gets invoked inside this</strong>*/;<br />
&nbsp;&nbsp;&nbsp; At this point mina's Nio request-handling flow is clear.<br />
&nbsp;&nbsp;&nbsp; mina's threading model is now clear too, and so is <strong>the performance problem</strong>: inside AbstractPollingIoProcessor.Processor.run--&gt;process--&gt;read(per session), process goes <strong>over all selected channels, reading the data and then firing fireMessageReceived into our IoHandler.messageReceived one after another</strong>, <strong>not concurrently</strong>, so later requests are obviously <strong>handled with delay.<br />
</strong>Suppose NioProcessorPool's size=2 and 200 clients connect <strong>at the same time</strong>, with 100 connections registered on each NioProcessor. Each NioProcessor handles its 100 requests <strong>strictly in sequence</strong>, so for the 100th request to be handled it has to wait until the 99 before it are done.<br />
&nbsp;&nbsp;&nbsp; Some have proposed an improvement: dispatch again, with a thread pool, inside our own IoHandler.messageReceived; that is certainly a good idea.<br />
&nbsp;&nbsp;&nbsp; <strong>But the requests are still delayed</strong>, because of the time spent on <strong>read data</strong>: before the 100th request's data can be read, the 99 before it must all have been read, and <strong>even enlarging the ProcessorPool does not solve this</strong>.<br />
&nbsp;&nbsp;&nbsp; On top of that, mina's <strong>trap (to use the trendy word)</strong> also shows up, inside <strong>read(session)</strong>. Before describing it, note that when our client sends a message body to the server it is not necessarily sent whole in one go; it may arrive in several pieces, <strong>especially when the client is busy or the message body is long</strong>. In that case mina calls our IoHandler.messageReceived several times, so the message body gets split into several chunks and the data we handle on each call to IoHandler.messageReceived is incomplete, which <strong>leads to lost or invalid data</strong>.<br />
Here is the source of read(session):<br />
private void read(T session) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; IoSessionConfig config = session.getConfig();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <strong>IoBuffer buf = IoBuffer.allocate(config.getReadBufferSize());</strong></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; final boolean <strong>hasFragmentation</strong> =<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; session.getTransportMetadata().hasFragmentation();</p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; int <strong>readBytes</strong> = 0;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; int <strong>ret</strong>;</p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (hasFragmentation/*<strong><font color="#ff0000">hasFragmentation is always true; perhaps the mina developers were aware of the fragmentation of transmitted data, but the handling below is nowhere near enough: as soon as the client pauses between sends, ret may be 0, the while loop exits, and the incomplete readBytes gets fired</font></strong>*/) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; while ((<font color="#ff0000">ret</font> = read(session, <strong>buf</strong>)) &gt; 0) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <strong><font color="#ff0000">readBytes</font></strong> += <font color="#ff0000">ret</font>;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (!buf.hasRemaining()) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; break;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } else {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ret = read(session, buf);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (ret &gt; 0) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; readBytes = ret;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } finally {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <strong>buf.flip();<br />
</strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }</p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <strong>if (<font color="#ff0000">readBytes</font> &gt; 0) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; IoFilterChain filterChain = session.getFilterChain(); <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <font color="#ff0000">filterChain</font>.<font color="#ff0000">fireMessageReceived</font>(<font color="#000000">buf</font>);<br />
</strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; buf = null;</p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (hasFragmentation) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (readBytes &lt;&lt; 1 &lt; config.getReadBufferSize()) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; session.decreaseReadBufferSize();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } else if (readBytes == config.getReadBufferSize()) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; session.increaseReadBufferSize();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (ret &lt; 0) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; scheduleRemove(session);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (Throwable e) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (e instanceof IOException) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; scheduleRemove(session);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; IoFilterChain filterChain = session.getFilterChain(); <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; filterChain.fireExceptionCaught(e);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp; }<br />
You can test this trap yourself: send one complete message in several pieces and see whether your IoHandler.messageReceived gets called more than once.<br />
<strong>Keeping our application's message bodies whole is also simple: create an accumulation point, set it on the current IoSession, and once the message body is complete, dispatch it and remove it from the current session</strong> (a sketch follows below).<br />
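A minimal sketch of that accumulation idea follows (my own code, not mina's or the post's). It assumes mina 2.0's IoHandlerAdapter/IoBuffer API, no ProtocolCodecFilter installed, and a hypothetical protocol of a 4-byte length prefix followed by the body:<br />
import org.apache.mina.core.buffer.IoBuffer;<br />
import org.apache.mina.core.service.IoHandlerAdapter;<br />
import org.apache.mina.core.session.IoSession;<br />
<br />
public class AccumulatingHandler extends IoHandlerAdapter {<br />
&nbsp;&nbsp;private static final String ACC = "acc.buffer";//hypothetical session attribute key<br />
<br />
&nbsp;&nbsp;@Override<br />
&nbsp;&nbsp;public void messageReceived(IoSession session, Object message) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;IoBuffer in = (IoBuffer) message;//a raw buffer, since no ProtocolCodecFilter is used<br />
&nbsp;&nbsp;&nbsp;&nbsp;IoBuffer acc = (IoBuffer) session.getAttribute(ACC);<br />
&nbsp;&nbsp;&nbsp;&nbsp;if (acc == null) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;acc = IoBuffer.allocate(in.remaining()).setAutoExpand(true);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;session.setAttribute(ACC, acc);<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;acc.put(in);//append this fragment to the per-session buffer<br />
&nbsp;&nbsp;&nbsp;&nbsp;if (isComplete(acc)) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;acc.flip();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;session.removeAttribute(ACC);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;dispatch(session, acc);//the complete message body: hand it to a worker pool<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;}<br />
<br />
&nbsp;&nbsp;//complete once the 4-byte length prefix and the body it announces have both arrived<br />
&nbsp;&nbsp;private boolean isComplete(IoBuffer acc) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;if (acc.position() &lt; 4) return false;<br />
&nbsp;&nbsp;&nbsp;&nbsp;return acc.position() &gt;= 4 + acc.getInt(0);<br />
&nbsp;&nbsp;}<br />
<br />
&nbsp;&nbsp;private void dispatch(IoSession session, IoBuffer completeMessage) { /*application-specific*/ }<br />
}<br />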
--------------------------------------------------------------------------------------------------&nbsp;<br />
<strong>Next, using the xSocket v2_8_8 source:<br />
</strong>tcp usage e.g:<br />
IServer srv = new Server(8090, new EchoHandler());<br />
srv.start() or run(); <br />
-----------------------------------------------------------------------<br />
class EchoHandler implements IDataHandler {&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp; public boolean <strong>onData</strong>(INonBlockingConnection nbc) <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; throws IOException, <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; BufferUnderflowException, <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MaxReadSizeExceededException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; String data = nbc.readStringByDelimiter("\r\n");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; nbc.write(data + "\r\n");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return true;<br />
&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;}<br />
------------------------------------------------------------------------<br />
Note 1. Server:Acceptor:IDataHandler ------1:1:1<br />
Server.run--&gt;IoAcceptor.accept() blocks on the port; once a channel arrives it takes an IoSocketDispatcher from the IoSocketDispatcherPool, builds an IoSocketHandler plus a NonBlockingConnection, and calls Server.LifeCycleHandler.onConnectionAccepted(ioHandler)&nbsp; to initialize the IoSocketHandler.<strong>Note: IoSocketDispatcherPool.size defaults to 2, i.e. there are only 2 threads doing select and correspondingly 2 IoSocketDispatchers, playing the same role as MINA's NioProcessors.<br />
</strong>Note 2. IoSocketDispatcher[java nio Selector]:IoSocketHandler:NonBlockingConnection------1:1:1<br />
In IoSocketDispatcher[one Selector each].run---&gt;IoSocketDispatcher.handleReadWriteKeys:<br />
<strong>for all selectedKeys</strong> <br />
{<br />
&nbsp;&nbsp;&nbsp; <strong>IoSocketHandler.onReadableEvent</strong>/onWriteableEvent.<br />
}&nbsp;<br />
<strong>IoSocketHandler.onReadableEvent</strong> proceeds as follows:<br />
1.readSocket();<br />
2.NonBlockingConnection.IoHandlerCallback.onData <br />
NonBlockingConnection.onData---&gt;appendDataToReadBuffer: <strong>readQueue append data<br />
</strong>3.NonBlockingConnection.IoHandlerCallback.onPostData<br />
NonBlockingConnection.onPostData---&gt;HandlerAdapter.<strong>onData[our dataHandler]</strong> <strong>performOnData in WorkerPool</strong>[threadpool].&nbsp;</p>
<p><strong>Because the channel's data is read into the readQueue, the application's dataHandler.onData gets called again and again until the data in the readQueue is consumed, so a trap much like mina's still exists. The fix is similar too, since here you have the NonBlockingConnection to hang state on (see the sketch below).</strong><br />
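A sketch of the same idea for xSocket (again my own, with a hypothetical 4-byte length prefix protocol; it assumes INonBlockingConnection offers available(), readInt(), readBytesByLength() and the attachment accessors, and the same imports as the EchoHandler example above):<br />
class LengthPrefixedHandler implements IDataHandler {&nbsp;&nbsp; <br />
&nbsp;&nbsp;public boolean onData(INonBlockingConnection nbc) throws IOException,<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;BufferUnderflowException, MaxReadSizeExceededException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;Integer pending = (Integer) nbc.getAttachment();//body length already parsed, if any<br />
&nbsp;&nbsp;&nbsp;&nbsp;if (pending == null) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if (nbc.available() &lt; 4) return true;//length prefix not complete yet<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;pending = Integer.valueOf(nbc.readInt());<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;nbc.setAttachment(pending);<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;if (nbc.available() &lt; pending.intValue()) return true;//body not complete yet<br />
&nbsp;&nbsp;&nbsp;&nbsp;byte[] body = nbc.readBytesByLength(pending.intValue());<br />
&nbsp;&nbsp;&nbsp;&nbsp;nbc.setAttachment(null);<br />
&nbsp;&nbsp;&nbsp;&nbsp;dispatch(nbc, body);//one complete message: hand it off, e.g. to a worker pool<br />
&nbsp;&nbsp;&nbsp;&nbsp;return true;<br />
&nbsp;&nbsp;}<br />
&nbsp;&nbsp;private void dispatch(INonBlockingConnection nbc, byte[] body) { /*application-specific*/ }<br />
}<br />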
----------------------------------------------------------------------------------------------<br />
<strong>Finally, using the grizzly-nio-framework v1.9.18 source:<br />
</strong>tcp usage e.g:<br />
Controller sel = new Controller();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; sel.setProtocolChainInstanceHandler(new DefaultProtocolChainInstanceHandler(){<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; public ProtocolChain poll() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ProtocolChain protocolChain = protocolChains.poll();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (protocolChain == null){<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; protocolChain = new DefaultProtocolChain();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //protocolChain.addFilter(<strong>our app's filter/*application processing starts at the filter, analogous to mina.ioHandler and xSocket.dataHandler*/</strong>);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //protocolChain.addFilter(new ReadFilter());<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return protocolChain;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; });<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;//if you do not add your own SelectorHandler, the Controller defaults to TCPSelectorHandler on port 18888<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; sel.addSelectorHandler(<strong>our app's selectorHandler on special port</strong>);&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;sel.start();<br />
------------------------------------------------------------------------------------------------------------<br />
&nbsp;Note 1. Controller:ProtocolChain:Filter------1:1:n, Controller:SelectorHandler------1:n,<br />
SelectorHandler[one Selector each]:SelectorHandlerRunner------1:1,<br />
Controller.start()---&gt;for each SelectorHandler, start its SelectorHandlerRunner.<br />
SelectorHandlerRunner.run()---&gt;selectorHandler.select()&nbsp; then <strong>handleSelectedKeys</strong>:<br />
<strong>for all selectedKeys</strong> <br />
{<br />
&nbsp;&nbsp; <strong>NIOContext.execute:dispatching to threadpool for ProtocolChain.execute---&gt;our filter.execute</strong>.<br />
}&nbsp;</p>
<p>You will notice there is <strong>no read data from channel</strong> step here, because that is left to your filter, so the mina/xsocket trap naturally does not arise: the dispatch happens earlier. <strong>But note that SelectorHandler:Selector:SelectorHandlerRunner:Thread[SelectorHandlerRunner.run] is 1:1:1:1, i.e. only one thread does doSelect then handleSelectedKeys</strong>.</p>
<p>&nbsp;&nbsp;&nbsp; By comparison, although grizzly wins on <strong>concurrent performance</strong>, it falls behind mina and xsocket in <strong>ease of use</strong>. For example, the role of mina's IoSession or xSocket's INonBlockingConnection (the object representing the current connection or session) is played in grizzly by NIOContext, but NIOContext provides no session/connection lifecycle events and no ordinary read/write operations; you have to extend SelectorHandler and ProtocolFilter yourself. Seen from another angle, that also shows grizzly's extensibility and flexibility are a cut above.</p>
<p>&nbsp;</p>
<img src ="http://www.blogjava.net/adapterofcoms/aggbug/314560.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/adapterofcoms/" target="_blank">adapterofcoms</a> 2010-03-05 09:37 <a href="http://www.blogjava.net/adapterofcoms/articles/314560.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Java线程池的瑕疵,For java util concurrent threadpool Since jdk1.5</title><link>http://www.blogjava.net/adapterofcoms/articles/313482.html</link><dc:creator>adapterofcoms</dc:creator><author>adapterofcoms</author><pubDate>Sat, 20 Feb 2010 12:15:00 GMT</pubDate><guid>http://www.blogjava.net/adapterofcoms/articles/313482.html</guid><wfw:comment>http://www.blogjava.net/adapterofcoms/comments/313482.html</wfw:comment><comments>http://www.blogjava.net/adapterofcoms/articles/313482.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/adapterofcoms/comments/commentRss/313482.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/adapterofcoms/services/trackbacks/313482.html</trackback:ping><description><![CDATA[<p>&nbsp;&nbsp;&nbsp; java.util.concurrent的作者是Doug Lea : 世界上对Java影响力最大的个人,在jdk1.5之前大家一定熟悉他的backport-util-concurrent.jar."这个鼻梁挂着眼镜，留着德王威廉二世的胡子，脸上永远挂着谦逊腼腆笑容，服务于纽约州立大学Oswego分校计算器科学系的老大爷。",他可是并发编程的大师级人物哦!<br />
&nbsp;&nbsp;&nbsp; Since jdk1.5, the thread pool model under java.util.concurrent is queue-based: there is only one threadpool, but several queues, such as LinkedBlockingQueue, SynchronousQueue and ScheduledThreadPoolExecutor.DelayedWorkQueue; see java.util.concurrent.Executors.<span style="color: #ff0000"><strong>Note: the issues below concern LinkedBlockingQueue</strong></span>, and the referenced src is jdk1.6.<br />
&nbsp;&nbsp;&nbsp; The threadpool tracks the number of threads in the pool through these 3 properties:<br />
corePoolSize (similar to a minimumPoolSize), poolSize (the current number of threads in the pool), maximumPoolSize (the maximum number of threads).<br />
They mean: each time a thread is created or terminated, poolSize++/--; at its busiest the threadpool may not create more threads than maximumPoolSize;<br />
when idle, poolSize should fall back to corePoolSize; and of course, if the threadpool has never handled a single request since it was created, poolSize is simply 0.<br />
&nbsp;&nbsp;&nbsp; With those two paragraphs as background, here is the problem I want to raise:<br />
Let's look at java.util.concurrent.ThreadPoolExecutor's execute method:<br />
public void execute(Runnable command) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (command == null)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; throw new NullPointerException();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (poolSize &gt;= corePoolSize || !addIfUnderCorePoolSize(command)) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (runState == RUNNING &amp;&amp; workQueue.offer(command)) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (runState != RUNNING || poolSize == 0)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ensureQueuedTaskHandled(command);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; else if (!addIfUnderMaximumPoolSize(command))<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; reject(command); // is shutdown or saturated<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
}<br />
Its main meaning: if the current poolSize&lt;corePoolSize, add threads until poolSize==corePoolSize.<br />
Once poolSize has reached corePoolSize, the command (task) is put to the workQueue; if the workQueue is a LinkedBlockingQueue,<br />
then only after the commands offered to the workQueue reach workQueue.capacity will the threadpool go on adding threads, up to maximumPoolSize.<br />
<strong>1.*****If LinkedBlockingQueue.capacity is set to Integer.MAX_VALUE, the threads in the pool will almost never reach maximumPoolSize.*****</strong><br />
So if you use Executors.newFixedThreadPool, maximumPoolSize and corePoolSize are equal and LinkedBlockingQueue.capacity==Integer.MAX_VALUE; and likewise if you write new ThreadPoolExecutor(corePoolSize,maximumPoolSize,keepAliveTime,timeUnit,new LinkedBlockingQueue&lt;Runnable&gt;(/*Integer.MAX_VALUE*/)),<br />
either usage makes maximumPoolSize meaningless, that is, the number of threads in the pool will never exceed corePoolSize (a small demonstration follows).<br />
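A small sketch (mine, not from this post) that makes point 1 observable: with an unbounded LinkedBlockingQueue the pool never grows past corePoolSize, even though maximumPoolSize is 10:<br />
import java.util.concurrent.LinkedBlockingQueue;<br />
import java.util.concurrent.ThreadPoolExecutor;<br />
import java.util.concurrent.TimeUnit;<br />
<br />
public class UnboundedQueueDemo {<br />
&nbsp;&nbsp;public static void main(String[] args) throws InterruptedException {<br />
&nbsp;&nbsp;&nbsp;&nbsp;ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 10, 60, TimeUnit.SECONDS,<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;new LinkedBlockingQueue&lt;Runnable&gt;()/*capacity == Integer.MAX_VALUE*/);<br />
&nbsp;&nbsp;&nbsp;&nbsp;for (int i = 0; i &lt; 100; i++) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;pool.execute(new Runnable() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;public void run() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;try { Thread.sleep(200); } catch (InterruptedException ignored) { }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;});<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;Thread.sleep(500);<br />
&nbsp;&nbsp;&nbsp;&nbsp;//prints poolSize=2 (== corePoolSize), never 10, with most of the 100 tasks still queued<br />
&nbsp;&nbsp;&nbsp;&nbsp;System.out.println("poolSize=" + pool.getPoolSize() + " queued=" + pool.getQueue().size());<br />
&nbsp;&nbsp;&nbsp;&nbsp;pool.shutdown();<br />
&nbsp;&nbsp;}<br />
}<br />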
<strong>This presumably frustrated the tomcat6 developers as well; they had to override LinkedBlockingQueue</strong>. Taking tomcat-6.0.20-src as the example:<br />
org.apache.tomcat.util.net.NioEndpoint.TaskQueue extends LinkedBlockingQueue&lt;Runnable&gt; and overrides the offer method:&nbsp;<br />
&nbsp;public void setParent(ThreadPoolExecutor tp, NioEndpoint ep) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; parent = tp;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; this.endpoint = ep;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; public boolean offer(Runnable o) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //we can't do any checks<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (parent==null) return super.offer(o);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //we are maxed out on threads, simply queue the object<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (parent.getPoolSize() == parent.getMaximumPoolSize()) return super.offer(o);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //we have idle threads, just add it to the queue<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //this is an approximation, so it could use some tuning<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (endpoint.activeSocketProcessors.get()&lt;(parent.getPoolSize())) return super.offer(o);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <strong style="color: #ff0000">//if we have less threads than maximum force creation of a new thread<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (parent.getPoolSize()&lt;parent.getMaximumPoolSize()) return false;<br />
</strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; //if we reached here, we need to add it to the queue<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return super.offer(o);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }&nbsp; </p>
<p>org.apache.tomcat.util.net.NioEndpoint.start()--&gt;<br />
&nbsp;&nbsp;&nbsp;TaskQueue taskqueue = new TaskQueue();/***queue.capacity==Integer.MAX_VALUE***/<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60,TimeUnit.SECONDS,taskqueue, tf);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;taskqueue.setParent( (ThreadPoolExecutor) executor, this);<br />
<strong>2.*****If LinkedBlockingQueue.capacity is set to a sensible value far smaller than Integer.MAX_VALUE, then only after the tasks put to the queue reach that capacity will the pool add threads beyond corePoolSize (but not beyond maximumPoolSize). Isn't that rather late to start adding threads?*****</strong><br />
And then reject(command) may well follow, and what value to give LinkedBlockingQueue.capacity becomes another headache.<br />
So ThreadPoolExecutor+LinkedBlockingQueue means: first grow the thread count to corePoolSize, and only once the queue has filled to its capacity keep adding threads on top of corePoolSize, up to maximumPoolSize.<br />
&nbsp;&nbsp;&nbsp; But why can't we have this instead: set LinkedBlockingQueue.capacity to Integer.MAX_VALUE so tasks are accepted as far as possible, and at the same time, when busy, fill the pool's threads up to maximumPoolSize so those tasks are processed as quickly as possible? Even with LinkedBlockingQueue.capacity set to a sensible value far smaller than Integer.MAX_VALUE, there is no reason to wait until the task count reaches the queue's capacity before adding threads past corePoolSize toward maximumPoolSize.<br />
&nbsp;&nbsp;&nbsp; <strong>So the weakness of the ThreadPoolExecutor+LinkedBlockingQueue combination in java util concurrent becomes clear</strong>: if we want the pool to take on as many tasks as possible, we set LinkedBlockingQueue.capacity to Integer.MAX_VALUE, but then the thread count can never be filled up to maximumPoolSize, so the pool's full processing capacity is never used. If we set LinkedBlockingQueue.capacity to a small value, the thread count can be filled to maximumPoolSize, but when all the pool's threads are busy the pool will reject incoming tasks because the queue is full.<br />
&nbsp;&nbsp;&nbsp; If we set LinkedBlockingQueue.capacity to a large value short of Integer.MAX_VALUE, then the pool only starts to grow past corePoolSize when the task queue is already full; adding threads only at that point means incoming tasks see a certain delay, i.e. they are not handled promptly.<br />
&nbsp;&nbsp;&nbsp; <strong><span style="color: #ff0000"><strong>In other words, ThreadPoolExecutor lacks a responsive thread-scheduling mechanism: it does not adjust the thread count dynamically according to how the current tasks are going, busy or idle, or to the scale of the pending tasks in the queue, and its processing efficiency suffers for it.<br />
</strong></span></strong>So how do we judge that the pool is busy?&nbsp; <br />
busy[1]: poolSize==corePoolSize and the number of threads currently busy executing tasks (currentBusyWorkers) equals poolSize [regardless of whether the tasks put to the queue have reached queue.capacity].<br />
busy[2].1: poolSize==corePoolSize and the number of tasks put to the queue has reached queue.capacity [queue.capacity applies when the task queue has a bounded limit].<br />
busy[2].2: since the pool's basic goal is to process a large volume of tasks as quickly as possible, we need not wait until the tasks put to the queue reach queue.capacity to call the pool busy; it is enough that the number of tasks currently in the queue (task_counter) stands in a certain ratio to poolSize or maximumPoolSize, for example task_counter&gt;=(NumberOfProcessors+1) times poolSize or maximumPoolSize, and then the queue.capacity limit can be dropped altogether.<br />
In both the busy[1] and busy[2] situations above the pool should add threads, up to maximumPoolSize, so the submitted tasks get handled as fast as possible (a sketch of the busy[1] idea follows).</p>
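<p>A sketch of the busy[1] idea (my own, modeled on the tomcat TaskQueue above, not the pool offered for sale below): make offer() report failure as soon as every existing worker is occupied, so that ThreadPoolExecutor's execute() creates another thread instead of queueing. getActiveCount() stands in for currentBusyWorkers, and a RejectedExecutionHandler would still be needed for the race where the pool is already at maximumPoolSize:</p>
<p>import java.util.concurrent.LinkedBlockingQueue;<br />
import java.util.concurrent.ThreadPoolExecutor;<br />
<br />
class EagerTaskQueue extends LinkedBlockingQueue&lt;Runnable&gt; {<br />
&nbsp;&nbsp;private ThreadPoolExecutor parent;//set right after the executor is constructed<br />
<br />
&nbsp;&nbsp;void setParent(ThreadPoolExecutor parent) { this.parent = parent; }<br />
<br />
&nbsp;&nbsp;@Override<br />
&nbsp;&nbsp;public boolean offer(Runnable task) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;if (parent != null<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&amp;&amp; parent.getPoolSize() &lt; parent.getMaximumPoolSize()<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&amp;&amp; parent.getActiveCount() &gt;= parent.getPoolSize()) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return false;//all current workers are busy: refuse, so execute() adds a thread<br />
&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;return super.offer(task);<br />
&nbsp;&nbsp;}<br />
}</p>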
<p>That covers the flaws of ThreadPoolExecutor+LinkedBlockingQueue when busy; what about when idle?<br />
If corePoolSize&lt;poolSize&lt;maximumPoolSize, then after threads have waited keepAliveTime the pool should come back down to corePoolSize. Here it really does become a bug, <strong>and a hard one to spot</strong>: poolSize does come down, but quite possibly it overshoots below corePoolSize, perhaps even down to 0.<br />
ThreadPoolExecutor.Worker.run()--&gt;ThreadPoolExecutor.getTask():<br />
Runnable getTask() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; for (;;) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; int state = runState;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (state &gt; SHUTDOWN)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return null;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Runnable r;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (state == SHUTDOWN)&nbsp; // Help drain queue<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; r = workQueue.poll();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; else if (poolSize &gt; corePoolSize || allowCoreThreadTimeOut)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /*<span style="color: #ff0000">queue is empty: after the timeout here, poll returns null, and the subsequent call to workerCanExit() returns true</span>.*/<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; else<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; r = workQueue.take();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (r != null)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return r;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (workerCanExit()) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (runState &gt;= SHUTDOWN) // Wake up others<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; interruptIdleWorkers();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return null;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; // Else retry<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } catch (InterruptedException ie) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; // On interruption, re-check runState<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
}//end getTask.<br />
private boolean workerCanExit() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; final ReentrantLock mainLock = this.mainLock;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; mainLock.lock();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; boolean canExit;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; try {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; canExit = runState &gt;= STOP ||<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000">workQueue.isEmpty() </span>||<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; (allowCoreThreadTimeOut &amp;&amp;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; poolSize &gt; Math.max(1, corePoolSize));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } finally {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; mainLock.unlock();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return canExit;<br />
}//end workerCanExit.</p>
<p>After workerCanExit() returns true, poolSize is still greater than corePoolSize; the value of poolSize has not changed yet,<br />
and ThreadPoolExecutor.Worker.run() will now finish--&gt;ThreadPoolExecutor.Worker.workerDone--&gt;only at this point is poolSize decremented. Unfortunately that is too late: <strong style="color: #ff0000">with multiple threads, poolSize can end up below corePoolSize instead of equal to it!<br />
</strong>For example: if poolSize (6) is greater than corePoolSize (5), then it is not necessarily a single thread that times out at that moment but several, each of which may exit run, so the poolSize-- decrements overshoot corePoolSize.<br />
&nbsp;&nbsp;&nbsp; A word on java.util.concurrent.ThreadPoolExecutor's allowCoreThreadTimeOut method, @since 1.6 public void allowCoreThreadTimeOut(boolean value);<br />
It means that when idle, threads wait keepAliveTime and, after timing out, poolSize may drop to 0. [I would actually prefer it to drop to a minimumPoolSize; especially in a server environment we want the pool to keep a certain number of threads around to promptly handle the "scattered, intermittent, bursty, not particularly heavy" requests.] Of course you can treat corePoolSize as that minimumPoolSize and simply not call this method.<br />
&nbsp;&nbsp;&nbsp; To address the flaws above in the java util concurrent thread pool, I have revised the java util concurrent pool model, optimizing task handling especially in the "busy" (busy[1], busy[2]) situations so that the pool handles as many tasks as possible as quickly as possible.<br />
The source of this more efficient thread pool is offered for sale below:<br />
<strong>threadpool, java version</strong>: <br />
http://item.taobao.com/auction/item_detail-0db2-9078a9045826f273dcea80aa490f1a8b.jhtml<br />
<strong>threadpool, c [not c++] version, on windows NT</strong>: <br />
http://item.taobao.com/auction/item_detail-0db2-28e37cb6776a1bc526ef5a27aa411e71.jhtml</p>
<img src ="http://www.blogjava.net/adapterofcoms/aggbug/313482.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/adapterofcoms/" target="_blank">adapterofcoms</a> 2010-02-20 20:15 <a href="http://www.blogjava.net/adapterofcoms/articles/313482.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>DWR在和spring集成时的bug,SpringCreator.getType???</title><link>http://www.blogjava.net/adapterofcoms/articles/312495.html</link><dc:creator>adapterofcoms</dc:creator><author>adapterofcoms</author><pubDate>Wed, 10 Feb 2010 04:04:00 GMT</pubDate><guid>http://www.blogjava.net/adapterofcoms/articles/312495.html</guid><wfw:comment>http://www.blogjava.net/adapterofcoms/comments/312495.html</wfw:comment><comments>http://www.blogjava.net/adapterofcoms/articles/312495.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/adapterofcoms/comments/commentRss/312495.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/adapterofcoms/services/trackbacks/312495.html</trackback:ping><description><![CDATA[DWR在和spring集成时,在dwr.xml中将设置creator="spring",告诉dwr将使用dwr的org.directwebremoting.spring.SpringCreator来创建对象实例,但是SpringCreator.getType地处理是不适当的,让我们来看看它的源码[dwr-3.0.0.116]: <br />
<br />
public Class&lt;?&gt; getType() <br />
{ <br />
if (clazz == null) <br />
{ <br />
try <br />
{ <br />
<font color="red">clazz = getInstance().getClass();</font> <br />
} <br />
catch (InstantiationException ex) <br />
{ <br />
log.error("Failed to instansiate object to detect type.", ex); <br />
return Object.class; <br />
} <br />
} <br />
<br />
return clazz; <br />
} <br />
<br />
我们再来看看它的getInstance,最终由spring来创建实例. <br />
<br />
public Object getInstance() throws InstantiationException <br />
{ <br />
try <br />
{ <br />
if (overrideFactory != null) <br />
{ <br />
return overrideFactory.getBean(beanName); <br />
} <br />
<br />
if (factory == null) <br />
{ <br />
factory = getBeanFactory(); <br />
} <br />
<br />
if (factory == null) <br />
{ <br />
log.error("DWR can't find a spring config. See following info logs for solutions"); <br />
log.info("- Option 1. In dwr.xml, &lt;create creator='spring' ...&gt; add <param name="location1" value="beans.xml" /><br />
log.info("- Option 2. Use a spring org.springframework.web.context.ContextLoaderListener."); <br />
log.info("- Option 3. Call SpringCreator.setOverrideBeanFactory() from your web-app"); <br />
throw new InstantiationException("DWR can't find a spring config. See the logs for solutions"); <br />
} <br />
<br />
return factory.getBean(beanName); <br />
} <br />
catch (InstantiationException ex) <br />
{ <br />
throw ex; <br />
} <br />
catch (Exception ex) <br />
{ <br />
throw new InstantiationException("Illegal Access to default constructor on " + clazz.getName() + " due to: " + ex); <br />
} <br />
} <br />
<br />
<strong>getInstance returns an instance created by spring, so SpringCreator.getType is clearly doing more work than it needs to: it first creates an instance and then obtains the object's type from the instance's getClass, while spring's beanFactory.getType offers the same capability without creating an instance first. <br />
<br />
Perhaps the fellow who wrote this code did not know about spring's beanFactory.getType method!</strong> <br />
<br />
My corrected version of SpringCreator.getType is as follows: <br />
<br />
public Class&lt;?&gt; getType() <br />
{ <br />
if (clazz == null) <br />
{ <br />
try <br />
{ <br />
<font color="red">if(overrideFactory != null){ <br />
clazz=overrideFactory.getType(beanName); <br />
}else { <br />
if(factory == null) <br />
factory = getBeanFactory(); <br />
clazz=factory.getType(beanName); <br />
} </font><br />
} <br />
catch (Exception ex) <br />
{ <br />
log.error("Failed to detect type.", ex); <br />
return Object.class; <br />
} <br />
} <br />
<br />
return clazz; <br />
} <br />
<br />
If you run into <font color="red">Error loading class for creator </font>......, then go modify SpringCreator! <br />
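For context, a typical dwr.xml fragment for the spring creator looks roughly like this (the bean name and script name are placeholders): <br />
&lt;dwr&gt;<br />
&nbsp;&nbsp;&lt;allow&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&lt;create creator="spring" javascript="AccountService"&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;param name="beanName" value="accountService"/&gt;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&lt;/create&gt;<br />
&nbsp;&nbsp;&lt;/allow&gt;<br />
&lt;/dwr&gt;<br />
The Spring context itself is then supplied via a ContextLoaderListener, a location param, or SpringCreator.setOverrideBeanFactory(), exactly as the log messages in getInstance above suggest. <br />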
<img src ="http://www.blogjava.net/adapterofcoms/aggbug/312495.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/adapterofcoms/" target="_blank">adapterofcoms</a> 2010-02-10 12:04 <a href="http://www.blogjava.net/adapterofcoms/articles/312495.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>浏览器[IE,Firefox]不支持comet技术-AJAX不能支持服务端推消息</title><link>http://www.blogjava.net/adapterofcoms/articles/311551.html</link><dc:creator>adapterofcoms</dc:creator><author>adapterofcoms</author><pubDate>Mon, 01 Feb 2010 12:43:00 GMT</pubDate><guid>http://www.blogjava.net/adapterofcoms/articles/311551.html</guid><wfw:comment>http://www.blogjava.net/adapterofcoms/comments/311551.html</wfw:comment><comments>http://www.blogjava.net/adapterofcoms/articles/311551.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/adapterofcoms/comments/commentRss/311551.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/adapterofcoms/services/trackbacks/311551.html</trackback:ping><description><![CDATA[<p>comet技术:服务端向客户端主动推消息的技术,但侧重基于http的协议,如果是socket则不存在这个问题.</p>
<p>Starting with tomcat6, the org.apache.catalina.CometProcessor interface was added to support the Comet technique.<br />
Modify conf/server.xml:&nbsp; </p>
<p>&lt;Connector port="8080" protocol="HTTP/1.1" ...&gt; : change the protocol to "org.apache.coyote.http11.Http11NioProtocol"<br />
java: see the CometServlet example on tomcat.apache.org.<br />
import javax.servlet.http.HttpServlet;<br />
import org.apache.catalina.CometEvent;<br />
import org.apache.catalina.CometProcessor;</p>
<p>CometServlet extends HttpServlet implements CometProcessor</p>
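<p>A minimal sketch of such a servlet (mine, based on the Tomcat 6 Comet API, not the full example from tomcat.apache.org; it additionally needs java.io.IOException, java.io.PrintWriter and javax.servlet.ServletException imports):</p>
<p>public class CometServlet extends HttpServlet implements CometProcessor {<br />
&nbsp;public void event(CometEvent event) throws IOException, ServletException {<br />
&nbsp;&nbsp;if (event.getEventType() == CometEvent.EventType.BEGIN) {<br />
&nbsp;&nbsp;&nbsp;//connection established: keep it open and remember the response somewhere,<br />
&nbsp;&nbsp;&nbsp;//so another thread can push messages through it later<br />
&nbsp;&nbsp;&nbsp;event.setTimeout(30 * 1000);<br />
&nbsp;&nbsp;&nbsp;PrintWriter out = event.getHttpServletResponse().getWriter();<br />
&nbsp;&nbsp;&nbsp;out.println("connected");<br />
&nbsp;&nbsp;&nbsp;out.flush();//push the first chunk to the client without closing the response<br />
&nbsp;&nbsp;} else if (event.getEventType() == CometEvent.EventType.READ) {<br />
&nbsp;&nbsp;&nbsp;//the client sent data: drain event.getHttpServletRequest().getInputStream() here<br />
&nbsp;&nbsp;} else {//END or ERROR<br />
&nbsp;&nbsp;&nbsp;event.close();//release the connection<br />
&nbsp;&nbsp;}<br />
&nbsp;}<br />
}</p>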
<p>javascript:</p>
<p>var xmlReq;//declared outside installComet so that handler() below can see it too<br />
function installComet(){&nbsp;&nbsp; <br />
&nbsp;xmlReq = window.ActiveXObject ? new ActiveXObject("Microsoft.XMLHTTP") : new XMLHttpRequest();<br />
&nbsp;xmlReq.onreadystatechange = handler;<br />
&nbsp;xmlReq.open("GET", "/yourapp/comet",true);<br />
&nbsp;xmlReq.send(null);<br />
}<br />
function handler(){<br />
&nbsp;try{<br />
&nbsp; if(xmlReq.readyState){&nbsp;&nbsp; <br />
&nbsp;&nbsp; if(xmlReq.readyState&gt;=3){&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp; alert(xmlReq.responseText);<br />
&nbsp;&nbsp; }<br />
&nbsp; }<br />
&nbsp;}catch(e){&nbsp;&nbsp; <br />
&nbsp; alert(xmlReq.readyState+":e-&gt;:"+e.message);<br />
&nbsp;}&nbsp; <br />
}</p>
<p>&nbsp;&nbsp;&nbsp; In every version of IE, handler is called back only once, no matter how many messages the server sends on this connection; readyState is then 3,<br />
and touching responseText raises a javascript error: the data necessary to complete this operation is not yet available.</p>
<p>&nbsp;&nbsp;&nbsp; In Firefox, handler is called multiple times, but responseText keeps the earlier messages instead of clearing them, so its contents accumulate as the server's messages arrive.</p>
<p>&nbsp;&nbsp;&nbsp; So far, browsers can only support Comet on the client side through plugins, which is why the ubiquitous flash player and ActionScript are the first choice.<br />
ActionScript establishes the long-lived connection over a socket.</p>
<p>&nbsp;&nbsp;&nbsp; So those AJAX frameworks cannot truly support comet; they can only poll, with setTimeout/setInterval,<br />
and dwr's ReverseAjax does exactly that, using setTimeout to poll the server; see the source of dwr's engine.js.</p>
<img src ="http://www.blogjava.net/adapterofcoms/aggbug/311551.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/adapterofcoms/" target="_blank">adapterofcoms</a> 2010-02-01 20:43 <a href="http://www.blogjava.net/adapterofcoms/articles/311551.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item></channel></rss>