You can find the simulation PD gains here. On the real robot we used a fraction of those: P = [40.0, 40.0, 97.0], D = [0.26, 0.23, 0.16]. It is important to note that these parameters are tuned for our robot and the reactive_planners code; I don't expect these gains to work for an arbitrary algorithm.
Thank you very much. I will give it a shot later.
BTW, when I tune the PD parameters, the joints may jerk or oscillate violently with bad parameters, and this quite often causes an encoder error. Have you encountered this problem?
It is important to note the difference between the PD gains in MPC and RL; they are not the same thing. In MPC, the PD gains track a desired joint trajectory (with a feedforward torque), while in RL the D term is pure damping (there is no desired velocity) and there is no feedforward term. To understand this better, check the subsection "low level joint control" on page 9 of this paper: https://arxiv.org/pdf/2406.01152. The gains Elham mentioned are for MPC; for RL with PD actions on ODRI robots we always used kp=2 and kd=0.1.
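A minimal sketch of the two torque laws, just to make the distinction concrete. The function names are illustrative (not from reactive_planners or any RL framework), and the gain values are simply the ones quoted in this thread:

```python
import numpy as np

# Gains quoted in this thread; illustrative only, tuned for a specific robot/codebase.
KP_MPC = np.array([40.0, 40.0, 97.0])
KD_MPC = np.array([0.26, 0.23, 0.16])
KP_RL, KD_RL = 2.0, 0.1


def mpc_joint_torque(q, dq, q_des, dq_des, tau_ff):
    """MPC-style low-level control: PD tracks a desired trajectory plus a feedforward torque."""
    return KP_MPC * (q_des - q) + KD_MPC * (dq_des - dq) + tau_ff


def rl_joint_torque(q, dq, q_des):
    """RL-style PD action: P tracks the policy's target position, D is pure damping
    (no desired velocity, no feedforward term)."""
    return KP_RL * (q_des - q) - KD_RL * dq
```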