Closed
Opened August 20, 2020 by saxon_zh (Guest)

Paddle FL MPC running on Multiple Machines

Created by: SaviorD7

Hello everyone!

I am trying to run Paddle FL MPC on multiple machines. (I used the uci_housing_demo example, but adapted it to my data.) So I have two Linux (Ubuntu 18.04) machines running in Google Cloud.

  1. I need to prepare data on each machine according to point 1 in the guide: "Data owner encrypts data. Concrete operations are consistent with “Prepare Data” in “Running on Single Machine”." What does this mean? For example, I have one half of the data on the first machine and the other half on the second machine. Do I need to train on these parts on each machine and then send the results to one place?

So in my mind it seems like this: the first machine (the main machine) has its part of the data, trains the NN, and keeps the results; the second machine has its part of the data, trains the NN, and sends its results to the first machine. So the first machine ends up holding the results from both trainings.
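For context, my data preparation follows the demo's process_data.py. Roughly, it looks like the sketch below, assuming the aby3 utilities from paddle_fl.mpc.data_utils as used in the uci_housing demo (module paths and function names may differ across PaddleFL versions, and my_dataset is a hypothetical stand-in for my own loader):

```python
import numpy as np
from paddle_fl.mpc.data_utils import aby3  # path as in the uci_housing demo; may vary by version

def encrypted_features():
    for feat, _ in my_dataset():  # my_dataset: hypothetical loader for my own data
        # make_shares splits one plaintext array into three secret shares
        yield aby3.make_shares(np.array(feat))

def encrypted_labels():
    for _, label in my_dataset():
        yield aby3.make_shares(np.array(label))

# each call writes three files -- <prefix>.part0, .part1, .part2 -- one per computation party
aby3.save_aby3_shares(encrypted_features, "/tmp/house_feature")
aby3.save_aby3_shares(encrypted_labels, "/tmp/house_label")
```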

Or maybe I don't understand it right? Because point 2 says: "According to the suffix of file name, distribute encrypted data files to /tmp directories of all 3 computation parties. For example, send house_feature.part0 and house_label.part0 to /tmp of party 0 with scp command." So there are 3 computation parties (okay, maybe it just uses 3 machines), and the data is divided up and sent to party 0 (which is the server or main machine, I think).
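Concretely, I read the distribution step as something like this (user and host names are placeholders for my two Google Cloud machines plus a third party):

```bash
# each party receives only the share files matching its own index
scp /tmp/house_feature.part0 /tmp/house_label.part0 user@party0-host:/tmp/
scp /tmp/house_feature.part1 /tmp/house_label.part1 user@party1-host:/tmp/
scp /tmp/house_feature.part2 /tmp/house_label.part2 user@party2-host:/tmp/
```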

However, it seems that the machine with part 0 will receive all the data from parties 1 and 2 and just train on it by itself :/ So it is not going to be federated learning.

Can you please explain this more clearly? :~

Thanks 👍

Issue: paddlepaddle/PaddleFL#103