PyOpenNI ROS Node (Publisher)

PyOpenNI is a set of Python bindings for OpenNI, a computer vision library that supports body and hand gesture recognition and tracking. However, to make this information available in ROS, a ROS node must publish the data captured from the sensor. This tutorial walks through the ROS OpenNI publisher node.

Node Initialization

import rospy
from std_msgs.msg import String
from my_package.msg import Skeleton  # custom message; replace 'my_package' with your package name
import openni as opi  # PyOpenNI

rospy.init_node('OpenNI', anonymous=True)
rgb_pub = rospy.Publisher('rgb', String, queue_size=10)
depth_pub = rospy.Publisher('depth', String, queue_size=10)
gesture_pub = rospy.Publisher('gesture', String, queue_size=10)
skeleton_pub = rospy.Publisher('skeleton', Skeleton, queue_size=10)
rate = rospy.Rate(30) # 30 Hz

First, initialize a node called “OpenNI.” This registers the node with the ROS Master (roscore). Then, instantiate ROS publishers for the RGB image, depth image, gesture data, and skeleton data, on the topics rgb, depth, gesture, and skeleton respectively. The queue_size of 10 bounds the publisher's outgoing message queue: if messages are produced faster than they can be sent, the oldest ones are dropped. Finally, create a rospy.Rate object at 30 Hz, which we will use to throttle the main loop so the OpenNI node publishes at that rate.
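Note that Skeleton is not a standard ROS message, so it must be defined as a custom message in your package. Based on the fields used later in this tutorial (id and data), its Skeleton.msg definition would look roughly like this (the exact package layout is an assumption):

int32 id
string data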

OpenNI

ctx = opi.Context()
ctx.init()

# Each generator must be created on the context before it can produce data.
depth_generator = opi.DepthGenerator()
depth_generator.create(ctx)
image_generator = opi.ImageGenerator()
image_generator.create(ctx)
hands_generator = opi.HandsGenerator()
hands_generator.create(ctx)
gesture_generator = opi.GestureGenerator()
gesture_generator.create(ctx)
user = opi.UserGenerator()
user.create(ctx)
skel_cap = user.skeleton_cap       # skeleton capability, used below
pose_cap = user.pose_detection_cap # pose-detection capability, used below

First, the OpenNI Context is initialized; it manages the data stream coming from the Kinect. Then each generator (a data-processing stream for depth, RGB, hands, gestures, or users) is created on that context. The skeleton and pose-detection capabilities of the user generator, skel_cap and pose_cap, are what the callback registrations below rely on.
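Optionally, a generator's output can be configured before it starts. This short sketch, modeled on PyOpenNI's depth.py example, sets the depth stream to the VGA resolution preset at 30 fps:

# Optional configuration (assumed defaults are fine otherwise).
depth_generator.set_resolution_preset(opi.RES_VGA)
depth_generator.fps = 30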

# The callback functions referenced here are defined below.
gesture_generator.add_gesture('Wave')  # gesture to listen for (example)
gesture_generator.register_gesture_cb(gesture_detected, gesture_progress)
hands_generator.register_hand_cb(create, update, destroy)
user.register_user_cb(new_user, lost_user)
pose_cap.register_pose_detected_cb(pose_detected)
skel_cap.register_c_complete_cb(calibration_complete)
skel_cap.set_profile(opi.SKEL_PROFILE_ALL)

Next, we register the callback methods on each generator. Notice that skeleton tracking requires multiple callbacks: one pair reacts to users appearing and disappearing, one fires when the calibration pose is detected, and one fires when calibration completes and joint tracking can begin. Minimal implementations are sketched below.
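The callback functions themselves are not shown in the snippet above. The following minimal implementations are modeled on PyOpenNI's gestures.py and skeleton.py examples; the 'Psi' calibration pose and the log messages are illustrative choices, not part of the original node:

def gesture_detected(src, gesture, id, end_point):
    gesture_pub.publish("Detected gesture: " + gesture)

def gesture_progress(src, gesture, point, progress):
    pass  # fires repeatedly while a gesture is in progress

def create(src, id, pos, time):
    rospy.loginfo("Hand %d: tracking started", id)

def update(src, id, pos, time):
    pass  # new position of an already-tracked hand

def destroy(src, id, time):
    rospy.loginfo("Hand %d: tracking lost", id)

def new_user(src, id):
    pose_cap.start_detection('Psi', id)  # wait for the calibration pose

def lost_user(src, id):
    rospy.loginfo("User %d lost", id)

def pose_detected(src, pose, id):
    pose_cap.stop_detection(id)
    skel_cap.request_calibration(id, True)

def calibration_complete(src, id, status):
    if status == opi.CALIBRATION_STATUS_OK:
        skel_cap.start_tracking(id)
    else:
        new_user(user, id)  # calibration failed; retry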

Publish

def capture_rgb():
    # Raw BGR bytes are published as a string (see note below).
    rgb_pub.publish(image_generator.get_raw_image_map_bgr())

def capture_depth():
    # 8-bit raw depth map, likewise published as a string.
    depth_pub.publish(depth_generator.get_raw_depth_map_8())

# Joints to query; 'name_joints' was left undefined in the original node.
# A minimal example list:
name_joints = ['SKEL_HEAD', 'SKEL_NECK', 'SKEL_TORSO',
               'SKEL_LEFT_HAND', 'SKEL_RIGHT_HAND']

def get_joints():
    for id in user.users:
        if skel_cap.is_tracking(id) and skel_cap.is_calibrated(id):
            joints = [skel_cap.get_joint_position(id, j)
                      for j in map(lambda a: getattr(opi, a), name_joints)]

            # Convert real-world coordinates to projective (pixel) coordinates.
            newpos_skeleton = depth_generator.to_projective([j.point for j in joints])
            if newpos_skeleton:
                skeleton_msg = Skeleton()
                skeleton_msg.id = id
                skeleton_msg.data = str(newpos_skeleton)
                skeleton_pub.publish(skeleton_msg)

# Start all generators, then loop: wait for new data, publish it, sleep.
ctx.start_generating_all()
while not rospy.is_shutdown():
    ctx.wait_any_update_all()
    capture_rgb()
    capture_depth()
    get_joints()
    rate.sleep()

Finally, we use a while loop to retrieve and publish data continuously. On each iteration, wait_any_update_all blocks until at least one generator has a new frame; then capture_rgb, capture_depth, and get_joints publish their data through their respective publishers. Notice that in all three methods the payload is serialized as a string, since the raw OpenNI buffers do not map directly onto standard ROS message types. The rate.sleep call ensures that the loop runs no faster than the predefined 30 Hz. Once the node is running, you can verify the output with rostopic list and rostopic echo skeleton.

Be sure to check out my next tutorial, which will examine the subscriber side of the ROS OpenNI node as well as how to integrate the OpenNI node into your project.

Helpful Links

https://github.com/jmendeth/PyOpenNI/blob/master/examples/depth.py

https://github.com/jmendeth/PyOpenNI/blob/master/examples/gestures.py

https://github.com/jmendeth/PyOpenNI/blob/master/examples/skeleton.py

These three links show examples of PyOpenNI depth, gestures, and skeleton callbacks.

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29

This tutorial explains how to write both a publisher and a subscriber in ROS using Python.