The objectives of this project are: first, to code an application called NAOvatar in Android Studio to control NAO and to view what NAO sees through a camera attached to NAO's head; second, to create a series of behaviours in Choregraphe that NAO can carry out; and lastly, to connect the series of behaviours created in Choregraphe, which is connected to a NAO robot, to the application so that the user is able to control NAO directly.
4.1 ANDROID STUDIO – NAOvatar APPLICATION
Android Studio is an Integrated Development Environment (IDE) for Android development that developers can use to create their own applications. Android Studio contains a text editor, debugging tools and tools for building and running applications.
4.1.1 OVERVIEW OF NAOvatar APPLICATION
Figure 5: Overview of NAOvatar
The NAOvatar application consists of two main activities:
1. Splash Screen
2. Main Page
· Consists of the live video stream from the IP camera and buttons to control NAO.
4.1.2 SPLASH SCREEN
A Splash screen is a page that appears for a short time before the main page with the main contents of the application appears. In order to add a Splash screen to the application, a new activity must be created. Thereafter, copy the image to be displayed as the Splash screen into the res/drawable folder.
Figure 6: Image used for Splash screen of NAOvatar
Add the following code in the activity_splash.xml to set up the image as the splash screen.
Figure 7: Code to set up image as splash screen
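A minimal sketch of what such a layout might look like, assuming the splash image has been saved as splash.png in res/drawable (the file and view names here are assumptions, not taken from the original project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- activity_splash.xml: fills the whole screen with the splash image -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scaleType="centerCrop"
        android:src="@drawable/splash" />
</RelativeLayout>
```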
To remove the action bar at the top of the Splash screen, change the parent theme in the styles.xml file from DarkActionBar to NoActionBar.
Figure 8: Code to remove action bar
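In styles.xml this amounts to swapping the parent theme; a sketch, assuming the default AppTheme that Android Studio generates:

```xml
<!-- styles.xml: the NoActionBar parent hides the action bar app-wide -->
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="colorPrimary">@color/colorPrimary</item>
    <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
    <item name="colorAccent">@color/colorAccent</item>
</style>
```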
Add the following code in the splash.java file.
Figure 9: Code to add to splash.java file
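A common pattern for such a splash activity, sketched below, is to post a delayed Runnable that launches MainActivity; the 3000 ms delay and the class names are assumptions, not taken from the original project:

```java
// splash.java (sketch): shows activity_splash.xml briefly, then opens MainActivity.
// The 3000 ms delay is an assumption.
public class splash extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_splash);

        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                startActivity(new Intent(splash.this, MainActivity.class));
                finish(); // remove the splash screen from the back stack
            }
        }, 3000);
    }
}
```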
In the AndroidManifest.xml file, move the intent filter into the Splash activity to launch the Splash activity first before the Main Activity when the application is opened.
Figure 10: Moving intent filter from Main Activity to Splash activity
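After the move, the relevant part of the manifest might look like the sketch below (the activity names are assumptions):

```xml
<!-- AndroidManifest.xml: the LAUNCHER intent filter now sits in the Splash activity,
     so it starts before MainActivity when the app is opened -->
<activity android:name=".splash">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
<activity android:name=".MainActivity" />
```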
4.1.3 APPLICATION ORIENTATION
For an optimal live view from the camera attached to NAO and optimal usage of the application, the entire application has to be oriented in landscape layout. In the AndroidManifest.xml file, add the code below to both the Main Activity and the Splash activity.
Figure 11: Code to change application orientation
Figure 12: Adding code to change application orientation into AndroidManifest.xml file
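The attribute in question is android:screenOrientation; applied to both activities, the manifest entries might look like this (activity names assumed, intent filter omitted for brevity):

```xml
<!-- AndroidManifest.xml: force landscape orientation for both activities -->
<activity
    android:name=".splash"
    android:screenOrientation="landscape" />
<activity
    android:name=".MainActivity"
    android:screenOrientation="landscape" />
```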
4.1.4 MAIN PAGE
This application makes use of an IP camera, which can be directly accessed over a network connection (WiFi). In this example, a D-Link DCS-936L IP camera will be used to create the application. This particular IP camera supports video streaming to a UDP socket through Real-Time Streaming Protocol (RTSP) and Real-Time Transport Protocol (RTP). RTSP manages the streaming session [10] while RTP breaks the video that is to be streamed into packets that can be streamed between devices [11].
To allow the application to use the video streaming feature, the following code is to be added in the AndroidManifest.xml file:
Figure 13: Adding code to allow video streaming into AndroidManifest.xml file
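Streaming over WiFi requires the application to open network sockets, so the manifest needs the INTERNET permission:

```xml
<!-- AndroidManifest.xml: allow the app to open network connections -->
<uses-permission android:name="android.permission.INTERNET" />
```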
Next, in order to create the user interface (UI) element that displays the video streamed from the IP camera, in the Main Activity’s layout file, a SurfaceView element is to be added:
Figure 14: Adding surface view element to Main Activity’s layout file
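A sketch of such a SurfaceView element, with the id being an assumption:

```xml
<!-- activity_main.xml: render target onto which the MediaPlayer draws the video -->
<SurfaceView
    android:id="@+id/surfaceView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```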
The remaining implementation will be done in the MainActivity.java file, which displays the layout created previously. To create the Main Activity’s UI and configure it to full screen, the code below is to be added. Change the RTSP_URL string according to the camera’s local IP address, and the username and password according to the configuration of the IP camera in use.
Figure 15: Code to create Main Activity’s UI and configure UI to full screen
In this activity, the SurfaceHolder object is given as a callback, which sends a signal to the activity that the rendering surface is ready to use. Thereafter, the MediaPlayer object has to be set up for RTSP communication and RTP video streaming to work. In the event that the SurfaceView object is destroyed, for example due to the application crashing, the media player can be disposed of via the release method.
Figure 16: Code for SurfaceHolder object and MediaPlayer object
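The lifecycle described above can be sketched as follows; the RTSP_URL value, the view id, and the getRtspHeaders() helper name are assumptions rather than the project's actual code:

```java
// MainActivity.java (sketch): wire the SurfaceHolder callback to a MediaPlayer
// streaming from the IP camera. RTSP_URL is a camera-dependent placeholder.
public class MainActivity extends AppCompatActivity implements SurfaceHolder.Callback {
    private static final String RTSP_URL = "rtsp://192.168.1.100/stream"; // placeholder
    private MediaPlayer mediaPlayer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surfaceView);
        surfaceView.getHolder().addCallback(this); // notify us when the surface is ready
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        mediaPlayer = new MediaPlayer();
        mediaPlayer.setDisplay(holder);
        try {
            // getRtspHeaders() supplies the Basic Authorization header required
            // by this camera (see the helper method in Figure 17)
            mediaPlayer.setDataSource(this, Uri.parse(RTSP_URL), getRtspHeaders());
            mediaPlayer.prepareAsync(); // prepare in the background; onPrepared fires later
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        if (mediaPlayer != null) {
            mediaPlayer.release(); // dispose of the player when the surface goes away
            mediaPlayer = null;
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }
}
```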
For this particular model of IP camera, all RTSP requests are required to carry an “Authorization” header with a Basic auth value. The code in Figure 16 makes use of a helper method to obtain the headers required to communicate with the RTSP server, namely the Authorization header. Figure 17 shows the implementation of that helper method:
Figure 17: Code to implement helper method
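Such a helper boils down to Base64-encoding “username:password” into a Basic auth header. A standalone sketch is shown below; the method name and credentials are assumptions, and on Android below API 26 android.util.Base64 would be used in place of java.util.Base64:

```java
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

public class RtspAuth {
    // Builds the headers map passed to MediaPlayer.setDataSource(context, uri, headers).
    // The camera expects "Authorization: Basic base64(username:password)".
    public static Map<String, String> getRtspHeaders(String username, String password) {
        String credentials = username + ":" + password;
        String encoded = Base64.getEncoder().encodeToString(credentials.getBytes());
        Map<String, String> headers = new HashMap<>();
        headers.put("Authorization", "Basic " + encoded);
        return headers;
    }
}
```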
Lastly, an onPrepared listener implementation is required for the MediaPlayer object; it is invoked some time after the call to prepareAsync shown earlier. This is all that is required to start the media playback.
Figure 18: Code to start media playback
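The listener itself can be as small as the sketch below (assuming MainActivity also implements MediaPlayer.OnPreparedListener):

```java
// MainActivity.java (sketch): start playback once the stream is prepared.
@Override
public void onPrepared(MediaPlayer mp) {
    mp.start(); // begins rendering the RTSP stream onto the SurfaceView
}
```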
Having implemented the UI for the video stream, buttons need to be added to control NAO. Download the image file that is to be displayed as the button into the project folder directory app/src/main/res/drawable/. In the design editor of the activity_main.xml file, drag and drop a button to the desired location. Add the following code in the text editor of the activity_main.xml file for each button, in order to change the button image to the desired image.
Figure 19: Code to add to change button image
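A sketch of one such button entry; the id and drawable name are assumptions:

```xml
<!-- activity_main.xml: a control button drawn with a custom image -->
<Button
    android:id="@+id/buttonForward"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:background="@drawable/arrow_up" />
```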
Once all the buttons have been added, the design editor in activity_main.xml will appear as shown.
Figure 20: Design editor in activity_main.xml file when all buttons have been added.
Change the name of each button to a unique name for easier code referencing when adding functionality to each button in the next step.
Figure 21: Changing names of buttons
To trigger a specific movement from NAO with the buttons created previously, code has to be added to access the Choregraphe behaviours, so that the action can be carried out by the actual NAO robot to which Choregraphe is connected. This is done by adding functionality to the buttons. In the MainActivity.java file, add OnClickListener to the implements clause, so that a callback is invoked when a button is clicked. Press Alt + Enter and select OnClickListener to prompt the implementation of its methods.
Figure 22: Steps to implement methods for OnClickListener
In order to trigger the actions implemented in Choregraphe from the buttons created in Android Studio, the following code has to be added, with the activity_name part changed accordingly.
Figure 23: Code to call Choregraphe from Android Studio
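The dispatch in onClick might look like the sketch below; startBehaviour is a hypothetical helper standing in for whatever mechanism in Figure 23 actually triggers the Choregraphe behaviour, and the button ids are assumptions:

```java
// MainActivity.java (sketch): route button clicks to the corresponding behaviour.
// startBehaviour() is a hypothetical placeholder, not a real NAOqi call.
@Override
public void onClick(View v) {
    switch (v.getId()) {
        case R.id.buttonForward:
            startBehaviour("Move To");       // one click = one 0.1 m step
            break;
        case R.id.buttonLift:
            startBehaviour("Apply Posture"); // raise NAO's hands
            break;
        case R.id.buttonOpen:
            startBehaviour("Hands");         // open hands
            break;
        case R.id.buttonClose:
            startBehaviour("Hands(1)");      // close hands to grasp
            break;
    }
}
```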
4.2 CHOREGRAPHE
Choregraphe is a multi-platform desktop application that allows users to create animations, behaviours and dialogues, test them on a simulated robot or directly on a real one, monitor and control the robot, and even enrich Choregraphe behaviours with their own Python code [1]. Like Android Studio, Choregraphe is an Integrated Development Environment that makes creating applications much simpler for users.
4.2.1 SERIES OF BEHAVIOURS
The series of behaviours required for this application to work is shown in Figure 24.
Figure 24: Series of behaviours using Choregraphe
As shown in Figure 24, on the main page, pressing any of the buttons on the right that control NAO’s legs sends a signal to start the sequence, whose first step is “Move To”. “Move To” is used because it moves the robot to a configured position, here x = 0.1 m, so that one click of the button is equivalent to NAO taking one step. The second step, “Apply Posture”, is customised to raise NAO’s hands when the button labelled “L” (which stands for Lift) on the left of Figure 24 is pressed; when the button is released, this action stops. The third step, “Hands”, is initiated when the button labelled “O” (which stands for Open) on the left is pressed, and specifies that NAO open his hands. The fourth step, “Hands(1)”, goes into motion when the button labelled “C” (which stands for Close) is pressed, and specifies that NAO close his hands to grasp the object. After that, the user can use the buttons on the right to move NAO towards the destination where the object is to be released, by sending a signal to “Move To(1)”. Finally, when the user presses the button labelled “O” again, the last step, “Hands(2)”, specifies that NAO open his hands and release the object.
Figure 25: Resultant NAOvatar application
Upon opening the application, the splash screen first appears before the Main Page is displayed. On the Main Page, the live video stream from the IP camera attached to NAO is displayed together with the buttons that allow users to control NAO. The function of each button can be found in Figure 24. The IP camera and the mobile device on which the application is being used must be connected to the same WiFi network for the application to work.
6. CONCLUSION AND RECOMMENDATIONS FOR FUTURE WORK
In conclusion, NAO is an extremely intelligent humanoid robot with potential beyond education and research purposes. NAO can be integrated into our everyday lives through Home Automation, which is especially relevant now that people are looking for smart solutions for their homes. Through NAOvatar, users can obtain NAO’s first-person point of view via the live video stream displayed in the application and smoothly operate NAO to carry out household chores for them.
There is much room for future work, as the NAOvatar application built in this project can only control NAO to hold, grasp and move items, which could aid in the particular household chore of tidying up users’ homes. In the future, more functions, such as holding a broom and swaying from side to side to carry out the household chore of sweeping the floor, can certainly be added to the application. However, due to time constraints, this project has been limited to implementing only several of the many possible functions.
I am especially indebted to Dr Qing Song, Associate Professor from the School of Electrical and Electronic Engineering, who has been supportive of my goals for this research and who has worked tirelessly and actively to provide me the academic time to pursue those goals. I am grateful to those with whom I have had the pleasure of working during the course of this project. The student mentors have taught me a great deal about scientific research and opened up a whole new world of EEE to me.
Last but not least, the support of my family members in the pursuit of this project has been vital during the course of this project. I would like to thank my parents, whose love and guidance is with me in whatever I pursue.