NuwaUnity : Nuwa Robot SDK for Unity (Unity/Android version)
Contents: Install, Scenes, Motion, TTS, Touch, LED, Motor control, Motor angle range table, Movement control, Local ASR, Cloud Speech To Text and Local ASR mix operation, Speech2Text, FaceTrack, Control Face, Q & A
### Feature list:
Motion
Touch
Motor
Control Face
LED
TTS
In-app Local Command
Speech2Text
Recognize
Face_track
Face_Recognize
Movement
Connection
Unity version: 2018.4.0 or later. Android minimum version: 6.0.
After decompressing the zip file, you will see NuwaUnity_Core.UnityPackage and NuwaUnity_Sample.UnityPackage.
Drag these packages into Unity3D and import the files.
If you get an error such as namespace "Nuwa" could not be found, you may have imported only NuwaUnity_Sample; import NuwaUnity_Core.UnityPackage to solve the problem.
All demo scenes are in Assets/NuwaUnity/Scene.
Drag all scenes from [Assets/NuwaUnity/Scene] into Scenes in Build, and set [NuwaUnity/Scene/Demo_Title] as the first scene.
Unity Scene : Demo_Motion_Play
A Nuwa motion file is Nuwa's proprietary robot motion control format, composed of "MP4 (Face)", "Motor control", "Timeline control", "LED control", etc. You can simply play a motion and the robot will perform a series of pre-defined actions. PS: Some motions only include "body movements" without "MP4 (Face)".
motionPlay(final String motion, final boolean auto_fadein)
motionPlay(final String name, final boolean auto_fadein, final String path)
motionPrepare(final String name)
motionPlay( )
motionStop(final boolean auto_fadeout)
getMotionList( )
motionSeek(final float time)
motionPause( )
motionResume( )
motionCurrentPosition( )
motionTotalDuration( )
// use the default NUWA motion asset path
Nuwa.motionPlay("001_J1_Good");
// give a specific motion asset path (internal use only)
Nuwa.motionPlay("001_J1_Good", false, "/sdcard/download/001/");
// NOTICE: you must call motionStop(true) if your auto_fadein is true.
// will trigger the onStopOfMotionPlay(String motion) callback
Nuwa.motionStop();
// pause, resume
Nuwa.motionPause();
Nuwa.motionResume();
Unity Scene : Demo_tts
The robot can speak a sentence from a given string.
Nuwa.startTTS("Nice to meet you");
// you can cancel speaking at any time
Nuwa.stopTTS();
// or speak in a specific language (see TTS Capability below for market differences)
Nuwa.startTTS(TTS_sample, "en_us");
// receive the onTTSComplete(bool isError) callback
Nuwa.onTTSComplete += (bool isError) => {
};
TTS Capability (only supported on Kebbi Air)
* Taiwan Market : Locale.CHINESE / Locale.ENGLISH
* Chinese Market : Locale.CHINESE / Locale.ENGLISH
* Japan Market : Locale.JAPANESE / Locale.CHINESE / Locale.ENGLISH
* Worldwide Market : Locale.ENGLISH
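As a sketch, the market table above can be encoded in a lookup so an app can check a locale before calling startTTS. TtsCapability, and the market/locale strings used as keys, are illustrative names for this example, not part of the SDK:

```csharp
using System;
using System.Collections.Generic;

public static class TtsCapability
{
    // Supported TTS locales per market, transcribed from the table above (Kebbi Air only).
    static readonly Dictionary<string, string[]> supported = new Dictionary<string, string[]>
    {
        { "Taiwan Market",    new[] { "CHINESE", "ENGLISH" } },
        { "Chinese Market",   new[] { "CHINESE", "ENGLISH" } },
        { "Japan Market",     new[] { "JAPANESE", "CHINESE", "ENGLISH" } },
        { "Worldwide Market", new[] { "ENGLISH" } },
    };

    // Returns true when the given locale is listed for the given market.
    public static bool IsSupported(string market, string locale)
    {
        return supported.TryGetValue(market, out var locales)
            && Array.IndexOf(locales, locale) >= 0;
    }
}
```

For example, `TtsCapability.IsSupported("Japan Market", "JAPANESE")` is true, while `TtsCapability.IsSupported("Worldwide Market", "CHINESE")` is false.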
Unity Scene : Demo_Touch
The robot provides touch events; you can subscribe to them when needed. You can receive TouchEvent callbacks from touches on the head, chest, right hand, left hand, left face, and right face.
private void Start()
{
Nuwa.onTouchBegan += OnTouchBegin;
Nuwa.onTouchEnd += OnTouchEnd;
Nuwa.onTap += OnTap;
Nuwa.onLongPress += OnLongPress;
}
// type: head: 1, chest: 2, right hand: 3, left hand: 4, left face: 5, right face: 6.
public void OnTouchBegin(Nuwa.TouchEventType type){}
public void OnTouchEnd(Nuwa.TouchEventType type){}
public void OnTap(Nuwa.TouchEventType type){}
public void OnLongPress(Nuwa.TouchEventType type){}
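The type-code mapping in the comment above can be kept in one helper so log output stays readable. TouchPart is an illustrative name for this sketch; in practice you would likely switch directly on the SDK's own Nuwa.TouchEventType enum:

```csharp
public static class TouchPart
{
    // Mapping from the documented touch type codes to body-part names:
    // head: 1, chest: 2, right hand: 3, left hand: 4, left face: 5, right face: 6.
    public static string Name(int type)
    {
        switch (type)
        {
            case 1: return "head";
            case 2: return "chest";
            case 3: return "right hand";
            case 4: return "left hand";
            case 5: return "left face";
            case 6: return "right face";
            default: return "unknown";
        }
    }
}
```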
Unity Scene : Demo_LED
There are 4 LED parts on the robot (head, chest, right hand, left hand), and the API can control each of them. Each LED part has two modes: "Breath mode" and "Light on mode". Before using an LED you need to turn it on via the API, and turn it off when it is no longer needed. The LEDs are controlled by the system by default. If your app wants different behavior, system control can be disabled with disableSystemLED(). If your app needs to control the robot's LEDs, call disableSystemLED() once, and call enableSystemLED() when the app enters the onPause state.
NOTICE: Kebbi Air does not support the Face and Chest LED breath mode.
/*
id:1 = Face LED
id:2 = Chest LED
id:3 = Left hand LED
id:4 = Right hand LED
onOff: 0 or 1
brightness, Color-R, Color-G, Color-B: 0 ~ 255
interval: 0 ~ 15
ratio: 0 ~ 15
*/
// disable the system LED control
Nuwa.disableSystemLED();
// then use these to control the LEDs from the app
Nuwa.enableLed(Nuwa.LEDPosition, bool);
Nuwa.setLedColor(Nuwa.LEDPosition, Color); // set the LED color of the given part
// turn on LED
Nuwa.enableLed(1, 1);
Nuwa.enableLed(2, 1);
Nuwa.enableLed(3, 1);
Nuwa.enableLed(4, 1);
// Set LED color
mRobot.setLedColor(1, 255, 255, 255, 255);
mRobot.setLedColor(2, 255, 255, 0, 0);
mRobot.setLedColor(3, 255, 166, 255, 5);
mRobot.setLedColor(4, 255, 66, 66, 66);
// Switch to "Breath mode"
mRobot.enableLedBreath(1, 2, 9);
// turn off LED
mRobot.enableLed(1, 0);
mRobot.enableLed(2, 0);
mRobot.enableLed(3, 0);
mRobot.enableLed(4, 0);
Unity Scene : Demo_Motor
The robot has 10 motors; using the API, you can control each of them.
ctlMotor(final int motorid, final int speed, final float setPositionInDegree, final float setSpeedInDegreePerSec)
motorid: 1 ~ 10 (see the angle range table below)
speed: always 0
setPositionInDegree: target position in degrees
setSpeedInDegreePerSec: speed in degrees per second (range: 0 ~ 200)
| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|----|----|----|----|----|----|----|----|---|----|
|Max| 20 | 40 | 5 | 70 | 100| 0 | 5 | 70 |100| 0 |
|Min|-20 |-40 | -85|-200| -3 |-80 |-85 |-200|-3 |-80 |
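Before calling the motor APIs it can help to clamp a requested angle into the per-motor range above, since out-of-range targets are invalid. A minimal sketch that simply transcribes the table (MotorRange is an illustrative helper, not an SDK class):

```csharp
using System;

public static class MotorRange
{
    // Max/Min angles for motor IDs 1..10, transcribed from the table above.
    static readonly int[] Max = {  20,  40,   5,   70, 100,   0,   5,   70, 100,   0 };
    static readonly int[] Min = { -20, -40, -85, -200,  -3, -80, -85, -200,  -3, -80 };

    // Clamps a target angle to the valid range of the given motor.
    public static float Clamp(int motorId, float degree)
    {
        if (motorId < 1 || motorId > 10)
            throw new ArgumentOutOfRangeException(nameof(motorId));
        return Math.Max(Min[motorId - 1], Math.Min(Max[motorId - 1], degree));
    }
}
```

For example, `MotorRange.Clamp(1, 50f)` returns 20, since motor 1's maximum is 20 degrees.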
Example:
// get the neck_y motor's current degree
Nuwa.getMotorPresentPossitionInDegree(Nuwa.NuwaMotorType.neck_y);
// set a motor's degree
Nuwa.setMotorPositionInDegree((int)type, (int)motorRotateDegree, (int)motorSpeed);
Unity Scene : Demo_movement
Control the robot to move forward, move backward, turn, and stop. The SDK offers low-level control as well as the advanced, acceleration-based control shown below.
Advanced control:
// go forward
Nuwa.forwardInAccelerationEx();
// go back
Nuwa.backInAccelerationEx();
// stop
Nuwa.stopInAccelerationEx();
Unity Scene : Demo_LocalCommand
The ASR engine uses SimpleGrammarData to describe the keyword list, so the app needs to create a grammar (command table) first.
Example:
string mi_Name; //grammar's title; set any value you like
string[] values; //strings to recognize
/*
mi_Name = "robot";
values = new string[2] {"hello","how are you"};
*/
Nuwa.prepareGrammarToRobot(mi_Name, values);
Nuwa.onGrammarState += OnGrammarState; // grammar setup finished
Nuwa.onLocalCommandComplete += TrueFunction; //get complete event
Nuwa.onLocalCommandException += FalseFunction; //get exception event
After receiving the OnGrammarState event callback, start LocalCommand:
void OnGrammarState(bool isError, string info)
{
Debug.Log(string.Format("OnGrammarState isError = {0} , info = {1}", isError, info));
//start LocalCommand
Nuwa.startLocalCommand();
}
You can get JSON from the onLocalCommandComplete event; it should look like this:
{
"result": "測試",
"x-trace-id": "ef73bd1252544f30a818b0f68a6a72c7",
"engine": "IFly local command",
"type": 1,
"class": "com.nuwarobotics.lib.voice.ifly.engine.IFlyLocalAsrEngine",
"version": 1,
"extra": {
"content": "String"
},
"content": "{\n \"sn\":1,\n \"ls\":true,\n \"bg\":0,\n \"ed\":0,\n \"ws\":[{\n \"bg\":0,\n \"cw\":[{\n \"w\":\"測試\",\n \"gm\":0,\n \"sc\":67,\n \"id\":100001\n }],\n \"slot\":\"<NuwaQAQ>\"\n }],\n \"sc\":68\n}"
}
If the returned JSON is empty, the requested input was not recognized.
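If you only need the top-level "result" string from the callback JSON, a dependency-free string scan is enough. A minimal sketch (AsrResult and ExtractResult are illustrative names; it does not handle escaped quotes inside the value, and in Unity you could instead declare a [Serializable] class and use JsonUtility.FromJson):

```csharp
public static class AsrResult
{
    // Extracts the value of the top-level "result" field from the callback JSON.
    // Returns null when the field is absent (i.e. the speech was not recognized).
    public static string ExtractResult(string json)
    {
        const string key = "\"result\"";
        int k = json.IndexOf(key);
        if (k < 0) return null;
        int colon = json.IndexOf(':', k + key.Length);
        if (colon < 0) return null;
        int open = json.IndexOf('"', colon + 1);
        if (open < 0) return null;
        int close = json.IndexOf('"', open + 1);
        if (close < 0) return null;
        return json.Substring(open + 1, close - open - 1);
    }
}
```

For the sample payload above this would return "測試"; for an empty payload it returns null.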
Unity Scene : Demo_MixUnderstand
In ASR mix mode, the engine receives results from both the local and the cloud recognizer, but returns only one of them. PS: both the cloud ASR and local ASR results are JSON-formatted strings.
createGrammar(final String grammar_id, final String grammar_string)
startMixUnderstand( )
stopListen( )
Example:
string mi_Name; //grammar's title; set any value you like
string[] values; //strings to recognize
/*
mi_Name = "robot";
values = new string[2] {"hello","how are you"};
*/
Nuwa.prepareGrammarToRobot(mi_Name, values);
Nuwa.onGrammarState += OnGrammarState; // grammar setup finished
Nuwa.onMixUnderstandComplete += MixUnderstandFunction; //get MixUnderstand result
Nuwa.onLocalCommandComplete += TrueFunction; //get LocalCommand complete event
Nuwa.onLocalCommandException += FalseFunction; //get LocalCommand exception event
After receiving the OnGrammarState event callback, start LocalCommand:
void OnGrammarState(bool isError, string info)
{
Nuwa.startLocalCommand();
}
You can get JSON from the onLocalCommandComplete event; the LocalCommand JSON should look like this:
{
"result": "測試",
"x-trace-id": "2ed6670726b47c5a3ac006aafa9216d",
"engine": "Google Cloud",
"type": 1,
"class": "com.nuwarobotics.lib.voice.hybrid.engine.NuwaTWMixEngine",
"version": 1,
"extra": {
"content": "String"
},
"content": "{\"sn\":1,\"ls\":true,\"bg\":0,\"ed\":0,\"ws\":[{\"bg\":0,\"slot\":\"<MiboMixunderstand>\",\"cw\":[{\"id\":10001,\"w\":\"測試\",\"sc\":96,\"gm\":0}]}],\"sc\":94}"
}
and the MixUnderstand JSON should look like this:
{
"result": "123",
"x-trace-id": "e5e37c47dc4649b08cf71015c8d1b69e",
"engine": "Google Cloud",
"type": 2,
"class": "com.nuwarobotics.lib.voice.hybrid.engine.NuwaTWMixEngine",
"version": 1
}
If the returned JSON is empty, the requested input was not recognized.
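The "empty JSON means not recognized" rule above can be wrapped in a tiny guard before any parsing. AsrDispatch and IsRecognized are illustrative names, and the check for a "result" field is an assumption based on the sample payloads above:

```csharp
public static class AsrDispatch
{
    // Per the note above, an empty payload means the input was not recognized.
    // A recognized payload carries a top-level "result" field.
    public static bool IsRecognized(string json)
    {
        return !string.IsNullOrWhiteSpace(json) && json.Contains("\"result\"");
    }
}
```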
Unity Scene : Demo_Speech2Text
Nuwa.onSpeech2TextComplete += SpeechCallback;
Nuwa.setListenParameter(Nuwa.ListenType.RECOGNIZE, "language", "en_us");
Nuwa.setListenParameter(Nuwa.ListenType.RECOGNIZE, "accent", null);
Nuwa.startSpeech2Text(false); // no wake word needed, so pass false
The returned string is plain text and can be used directly.
This feature also requires an internet connection. If the robot's system time does not match the time zone of its current location, the returned string will very likely be empty.
Unity Scene : Demo_FaceTrack
//1. Register Get HumanFaceTrack Event
Nuwa.onTrack += GetTrackData;
//2. send the request and wait about 2 seconds for the result
Nuwa.startRecognition(Nuwa.NuwaRecognition.FACE);
//3. receive callback and set value
void GetTrackData(Nuwa.TrackData[] data)
{
float _x = float.Parse(data[0].x); //only get first person's face
float _y = float.Parse(data[0].y);
float _w = float.Parse(data[0].width);
float _h = float.Parse(data[0].height);
FaceOriginPos = new Vector2(_x, _y); // set face pos
FaceOriginSize = new Vector2(_w, _h); // set face size
FaceCenterPos = FaceOriginPos + (FaceOriginSize / 2f); // set face center
}
// Received data should look like this:
// {"height":175,"width":175,"x":250,"y":93}
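Given the sample payload above, the face center can be normalized to [0,1] screen coordinates, which is convenient for driving the neck motors toward the face. A sketch assuming the 1024x600 screen size mentioned in the Control Face section (Vec2 and FaceMath are illustrative; in Unity you would use Vector2):

```csharp
public struct Vec2 { public float x, y; }

public static class FaceMath
{
    // Normalizes the center of a face rectangle (x, y, width, height in pixels)
    // to [0,1] coordinates on an assumed 1024x600 screen.
    public static Vec2 NormalizedCenter(float x, float y, float w, float h,
                                        float screenW = 1024f, float screenH = 600f)
    {
        return new Vec2 { x = (x + w / 2f) / screenW, y = (y + h / 2f) / screenH };
    }
}
```

For the sample rectangle {x:250, y:93, width:175, height:175}, the normalized center is roughly (0.33, 0.30).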
Unity Scene : Demo_ControlFace
// use
Nuwa.ShowFace();
Nuwa.HideFace();
Nuwa.MouthOn();
Nuwa.MouthOff();
string motionId; //Kebbi's motion id
Nuwa.PlayMotion(motionId);
int x, y; //face window's start position (x, y)
int w, h; //face window's width and height; Kebbi's normal full-face size is 1024x600
Nuwa.ChangeFace(x, y, w, h);
If you want to exit the app after using ShowFace(), you must set the face's screen size back to (1024, 600) and call HideFace().
Q & A
Q1: Why doesn't it work when the app calls "mRobot.motionPlay("001_J1_Good", true/false)"?
Ans: Make sure the motion name exists on the robot; see the Kebbi motion list:
[Kebbi Motion list] https://dss.nuwarobotics.com/documents/listMotionFile
Q2: Why can't the app control the LED?
Ans: Make sure the app has called disableSystemLED() first (see the LED section above).
Q3: Unity throws a [xxx_gameObject can only be called from the main thread] exception.
Ans: SDK callbacks may arrive on a background thread; dispatch Unity API calls back to the main thread. See:
https://forum.unity.com/threads/unity-android-main-ui-thread.174553/