guide-rpc-framework
Commit 58df5467
Authored on June 16, 2020 by shuang.kou

[refactor] remove threadpool in NettyServerHandler

Parent: 405faedb
Showing 2 changed files with 35 additions and 29 deletions:

rpc-framework-common/src/main/java/github/javaguide/utils/concurrent/threadpool/ThreadPoolFactoryUtils.java  (+19 −1)
rpc-framework-simple/src/main/java/github/javaguide/remoting/transport/netty/server/NettyServerHandler.java  (+16 −28)
rpc-framework-common/src/main/java/github/javaguide/utils/concurrent/threadpool/ThreadPoolFactoryUtils.java

@@ -7,6 +7,8 @@ import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
 import java.util.concurrent.Semaphore;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -64,7 +66,7 @@ public final class ThreadPoolFactoryUtils {
             log.info("shut down thread pool [{}] [{}]", entry.getKey(), executorService.isTerminated());
             try {
                 executorService.awaitTermination(10, TimeUnit.SECONDS);
-            } catch (InterruptedException ie) {
+            } catch (InterruptedException e) {
                 log.error("Thread pool never terminated");
                 executorService.shutdownNow();
             }
@@ -98,4 +100,20 @@ public final class ThreadPoolFactoryUtils {
         return Executors.defaultThreadFactory();
     }
 
+    /**
+     * Prints the status of the thread pool.
+     *
+     * @param threadPool the thread pool object
+     */
+    public static void printThreadPoolStatus(ThreadPoolExecutor threadPool) {
+        ScheduledExecutorService scheduledExecutorService = new ScheduledThreadPoolExecutor(1, createThreadFactory("print-thread-pool-status", false));
+        scheduledExecutorService.scheduleAtFixedRate(() -> {
+            log.info("============ThreadPool Status=============");
+            log.info("ThreadPool Size: [{}]", threadPool.getPoolSize());
+            log.info("Active Threads: [{}]", threadPool.getActiveCount());
+            log.info("Number of Tasks : [{}]", threadPool.getCompletedTaskCount());
+            log.info("Number of Tasks in Queue: {}", threadPool.getQueue().size());
+            log.info("===========================================");
+        }, 0, 1, TimeUnit.SECONDS);
+    }
 }
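The new `printThreadPoolStatus` helper polls a `ThreadPoolExecutor`'s counters on a schedule and logs them. The same counters can be checked directly; below is a minimal standalone sketch using only `java.util.concurrent`, with a string summary in place of the project's SLF4J logging. `ThreadPoolStatusDemo` and its `status` helper are illustrative names, not part of the repository.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolStatusDemo {
    // Builds the same kind of summary that printThreadPoolStatus logs,
    // returned as a string so it can be inspected directly.
    static String status(ThreadPoolExecutor pool) {
        return String.format("size=%d active=%d completed=%d queued=%d",
                pool.getPoolSize(), pool.getActiveCount(),
                pool.getCompletedTaskCount(), pool.getQueue().size());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        pool.execute(() -> { });          // one trivial task
        pool.shutdown();                   // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // After termination all workers have exited, so the pool is idle.
        System.out.println(status(pool));
    }
}
```
Note that `getCompletedTaskCount` is an approximation while tasks are still running; it is only stable (as above) once the pool has terminated.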
rpc-framework-simple/src/main/java/github/javaguide/remoting/transport/netty/server/NettyServerHandler.java

@@ -4,8 +4,6 @@ import github.javaguide.factory.SingletonFactory;
 import github.javaguide.handler.RpcRequestHandler;
 import github.javaguide.remoting.dto.RpcRequest;
 import github.javaguide.remoting.dto.RpcResponse;
-import github.javaguide.utils.concurrent.threadpool.CustomThreadPoolConfig;
-import github.javaguide.utils.concurrent.threadpool.ThreadPoolFactoryUtils;
 import io.netty.channel.ChannelFutureListener;
 import io.netty.channel.ChannelHandlerContext;
 import io.netty.channel.ChannelInboundHandlerAdapter;
@@ -13,8 +11,6 @@ import io.netty.channel.SimpleChannelInboundHandler;
 import io.netty.util.ReferenceCountUtil;
 import lombok.extern.slf4j.Slf4j;
 
-import java.util.concurrent.ExecutorService;
-
 /**
  * Custom server-side ChannelHandler that processes the data sent by the client.
  * <p>
@@ -27,39 +23,31 @@ import java.util.concurrent.ExecutorService;
 @Slf4j
 public class NettyServerHandler extends ChannelInboundHandlerAdapter {
 
-    private static final String THREAD_NAME_PREFIX = "netty-server-handler-rpc-pool";
     private final RpcRequestHandler rpcRequestHandler;
-    private final ExecutorService threadPool;
 
     public NettyServerHandler() {
        this.rpcRequestHandler = SingletonFactory.getInstance(RpcRequestHandler.class);
-        CustomThreadPoolConfig customThreadPoolConfig = new CustomThreadPoolConfig();
-        customThreadPoolConfig.setCorePoolSize(6);
-        this.threadPool = ThreadPoolFactoryUtils.createCustomThreadPoolIfAbsent(THREAD_NAME_PREFIX, customThreadPoolConfig);
     }
 
     @Override
     public void channelRead(ChannelHandlerContext ctx, Object msg) {
-        threadPool.execute(() -> {
-            try {
-                log.info("server receive msg: [{}] ", msg);
-                RpcRequest rpcRequest = (RpcRequest) msg;
-                // Invoke the target method (the method the client wants to execute) and get the result
-                Object result = rpcRequestHandler.handle(rpcRequest);
-                log.info(String.format("server get result: %s", result.toString()));
-                if (ctx.channel().isActive() && ctx.channel().isWritable()) {
-                    // Return the execution result to the client
-                    RpcResponse<Object> rpcResponse = RpcResponse.success(result, rpcRequest.getRequestId());
-                    ctx.writeAndFlush(rpcResponse).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
-                } else {
-                    log.error("not writable now, message dropped");
-                }
-            } finally {
-                // Make sure the ByteBuf is released, otherwise there may be a memory leak
-                ReferenceCountUtil.release(msg);
-            }
-        });
+        try {
+            log.info("server receive msg: [{}] ", msg);
+            RpcRequest rpcRequest = (RpcRequest) msg;
+            // Invoke the target method (the method the client wants to execute) and get the result
+            Object result = rpcRequestHandler.handle(rpcRequest);
+            log.info(String.format("server get result: %s", result.toString()));
+            if (ctx.channel().isActive() && ctx.channel().isWritable()) {
+                // Return the execution result to the client
+                RpcResponse<Object> rpcResponse = RpcResponse.success(result, rpcRequest.getRequestId());
+                ctx.writeAndFlush(rpcResponse).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
+            } else {
+                log.error("not writable now, message dropped");
+            }
+        } finally {
+            // Make sure the ByteBuf is released, otherwise there may be a memory leak
+            ReferenceCountUtil.release(msg);
+        }
     }
 
     @Override
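The behavioral point of this change is that the request is now handled directly on the calling (Netty event-loop) thread instead of being submitted to a dedicated thread pool, while `ReferenceCountUtil.release(msg)` in the `finally` block still guarantees the message is released even when handling throws. A minimal plain-Java sketch of that release-in-finally pattern follows; `RefCounted` is a hypothetical stand-in for Netty's reference-counted `ByteBuf`, not a class from the project or from Netty.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReleaseInFinallyDemo {
    // Hypothetical stand-in for a reference-counted message (e.g. Netty's ByteBuf).
    static class RefCounted {
        final AtomicInteger refCnt = new AtomicInteger(1);
        void release() { refCnt.decrementAndGet(); }
    }

    // Mirrors the post-commit channelRead: handle on the calling thread and
    // release the message in finally, so it is freed even if handling fails.
    static void handle(RefCounted msg, Runnable handler) {
        try {
            handler.run();
        } finally {
            msg.release();
        }
    }

    public static void main(String[] args) {
        RefCounted ok = new RefCounted();
        handle(ok, () -> { });
        System.out.println("after success refCnt=" + ok.refCnt.get());

        RefCounted failing = new RefCounted();
        try {
            handle(failing, () -> { throw new IllegalStateException("handler failed"); });
        } catch (IllegalStateException expected) {
            // the exception propagates, but the message was still released
        }
        System.out.println("after failure refCnt=" + failing.refCnt.get());
    }
}
```
In the pre-commit version the same `try/finally` ran inside a task submitted to `threadPool`, so the release happened on a pool thread; the commit keeps the guarantee but moves it back onto the event-loop thread.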