    fix several sparse table issues (#20686) · 48669aa8
    Committed by xujiaqi01
    * each program no longer needs to define embedding layers for every slot (previously all of them, without exception, were required); make trainer_param a repeated field in ps.proto.
    * add find_distributed_lookup_table_grads instead of hard-coding GRAD
    * support stop gradient on embeddings; before this fix, push sparse raised an error.
    * fix fill sparse: skip slots that have no embedding. Before this fix, each slot's embedding in a sparse table had to be used in every training program.
    * fix pull sparse: skip slots that have no embedding.
    * fix collecting feasign label info: skip slots that have no embedding.
    * support multiple sparse tables in one or more training programs: each program can pull/push only its own related sparse tables instead of all sparse tables.
    * test=develop
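
    The fixes above amount to filtering sparse-table operations by what each training program actually declares. A minimal Python sketch of that filtering idea, under assumed names (`split_slots_by_embedding`, `tables_for_program` are illustrative helpers, not PaddlePaddle's real API):

    ```python
    def split_slots_by_embedding(slots, embedded_slots):
        """Partition slots into those with an embedding layer and those without.

        Slots without an embedding are skipped during fill/pull/push sparse
        and when collecting feasign label info, instead of causing errors.
        """
        used = [s for s in slots if s in embedded_slots]
        skipped = [s for s in slots if s not in embedded_slots]
        return used, skipped

    def tables_for_program(program_tables, all_tables):
        """Each program pulls/pushes only the sparse tables it references."""
        return [t for t in all_tables if t in program_tables]

    # Example: one program declares embeddings for only two of three slots.
    used, skipped = split_slots_by_embedding(
        ["click", "user_id", "ad_id"], embedded_slots={"click", "ad_id"})
    # used == ["click", "ad_id"], skipped == ["user_id"]

    # Example: one program references only table 1 out of tables 0, 1, 2.
    mine = tables_for_program({1}, [0, 1, 2])
    # mine == [1]
    ```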