
What coordinate frames are used by Neural-SLAM and Habitat? #63

Open
GuoPingPan opened this issue Dec 20, 2022 · 1 comment
GuoPingPan commented Dec 20, 2022

I am confused about the coordinate frames used by Neural-SLAM and Habitat.
Can you tell me which coordinate frame the agent uses in Neural-SLAM, and how it differs from Habitat's?
I would also like to know which coordinate frame your real robot used when collecting the noise data; this would help me better understand the transformations in your work.

Thanks a lot!

@GuoPingPan
Author

I understand that the code below converts the Habitat world frame (front = -z, left = -x, up = y) into a world frame [x, y, z] with front = x, left = y, and heading o:

```python
def get_sim_location(self):
    agent_state = super().habitat_env.sim.get_agent_state(0)

    x = -agent_state.position[2]
    y = -agent_state.position[0]
    axis = quaternion.as_euler_angles(agent_state.rotation)[0]
    if (axis % (2 * np.pi)) < 0.1 or (axis % (2 * np.pi)) > 2 * np.pi - 0.1:
        o = quaternion.as_euler_angles(agent_state.rotation)[1]
    else:
        o = 2 * np.pi - quaternion.as_euler_angles(agent_state.rotation)[1]
    if o > np.pi:
        o -= 2 * np.pi
    return x, y, o
```
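If I read it right, the position part is a pure relabeling of Habitat's axes (Habitat convention: +x right, +y up, -z forward). A minimal sketch of just that remap, with illustrative values of my own (not from the repo):

```python
import numpy as np

# Habitat positions are [x_right, y_up, z]; Neural-SLAM's world frame
# takes forward as +x and left as +y. Values here are made up.
habitat_position = np.array([1.0, 0.5, -2.0])

x = -habitat_position[2]   # 2 m forward (Habitat's -z direction)
y = -habitat_position[0]   # agent is 1 m to the right, so y = -1
print(x, y)                # 2.0 -1.0
```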

The call `dx, dy, do = pu.get_rel_pose_change(curr_sim_pose, self.last_sim_location)` then gives the relative pose change in the agent's ego frame.
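For reference, here is my own reconstruction of what `get_rel_pose_change` presumably does (the function name is from the repo, but this body is my sketch, not the repo's code): it expresses the step from the old pose to the new one in the old pose's ego frame.

```python
import numpy as np

def get_rel_pose_change(pos2, pos1):
    """Express the move from pos1 to pos2 in pos1's ego frame.
    Poses are (x, y, o) with o in radians."""
    x1, y1, o1 = pos1
    x2, y2, o2 = pos2
    theta = np.arctan2(y2 - y1, x2 - x1) - o1   # bearing of the step relative to the old heading
    dist = np.hypot(x2 - x1, y2 - y1)
    dx = dist * np.cos(theta)                   # forward component
    dy = dist * np.sin(theta)                   # leftward component
    do = o2 - o1
    return dx, dy, do

# A 1 m step along +y while facing +y (o = pi/2) is a pure forward move:
dx, dy, do = get_rel_pose_change((0.0, 1.0, np.pi / 2), (0.0, 0.0, np.pi / 2))
print(f"{dx:.1f} {dy:.1f} {do:.1f}")            # 1.0 0.0 0.0
```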

But I am quite confused about the function `get_new_pose`:

```python
def get_new_pose(pose, rel_pose_change):
    x, y, o = pose
    dx, dy, do = rel_pose_change

    global_dx = dx * np.sin(np.deg2rad(o)) + dy * np.cos(np.deg2rad(o))
    global_dy = dx * np.cos(np.deg2rad(o)) - dy * np.sin(np.deg2rad(o))
    x += global_dy
    y += global_dx
    o += np.rad2deg(do)
    if o > 180.:
        o -= 360.

    return x, y, o
```

Why `x += global_dy` and `y += global_dx`? I know (x, y) is represented in the full-map frame.
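To make the question concrete, here is a small sanity check of the quoted function at two simple headings (the function body is copied from the snippet above so this runs standalone; note the pose heading is in degrees while `do` is in radians):

```python
import numpy as np

def get_new_pose(pose, rel_pose_change):
    # Copied verbatim from the snippet above so this check runs standalone.
    x, y, o = pose
    dx, dy, do = rel_pose_change
    global_dx = dx * np.sin(np.deg2rad(o)) + dy * np.cos(np.deg2rad(o))
    global_dy = dx * np.cos(np.deg2rad(o)) - dy * np.sin(np.deg2rad(o))
    x += global_dy
    y += global_dx
    o += np.rad2deg(do)
    if o > 180.:
        o -= 360.
    return x, y, o

# Heading 0 deg, ego step of 1 m forward: x grows, y stays put.
x1, y1, o1 = get_new_pose((0., 0., 0.), (1., 0., 0.))
print(f"{x1:.1f} {y1:.1f} {o1:.1f}")    # 1.0 0.0 0.0

# Heading 90 deg, same forward step: now y grows instead.
x2, y2, o2 = get_new_pose((0., 0., 90.), (1., 0., 0.))
print(f"{x2:.1f} {y2:.1f} {o2:.1f}")    # 0.0 1.0 90.0
```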

What exactly are the axis directions of all the frames you are using?
What is the relation between the full-map frame, `full_map = torch.zeros(num_scenes, 4, full_w, full_h).float().to(device)` (you use `w` as the vertical dimension and `h` as the horizontal one, whereas `h` is usually vertical and `w` horizontal), and the agent frame (x, y, o)?
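To show what I currently assume about the map indexing, here is a sketch of how a metric pose might map to a cell of `full_map`; the 5 cm resolution, the map size, and the row/column order are all my guesses, not taken from the repo:

```python
import numpy as np

# Hypothetical pose -> map-cell indexing; map_resolution (cm per cell)
# and the (row, col) convention are my assumptions.
map_resolution = 5                    # cm per grid cell
full_w = full_h = 480                 # a 24 m x 24 m map at 5 cm/cell
full_map = np.zeros((4, full_w, full_h))

x, y, o = 6.0, 3.0, 0.0               # agent pose: meters, heading in degrees
r = int(y * 100.0 / map_resolution)   # row index from y
c = int(x * 100.0 / map_resolution)   # column index from x
full_map[2, r, c] = 1.0               # e.g. mark the agent's current cell
print((r, c))                         # (60, 120)
```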

I have been confused and stuck on this for a few days; please help me understand it.
@devendrachaplot
