Marvin the Respeaker


I have been playing with the Seeedstudio Respeaker Kickstarter device.

It is designed to enable you to build your own Amazon Echo or Echo Dot devices and is similar to the circuit board from an Echo Dot. The software and instructions are on GitHub.

At the moment the instructions on using it are a bit sketchy and the software a little buggy, but it is a very nice device, and there is a lot of useful supporting software.

It runs OpenWrt Linux and supports Wifi (but not Bluetooth). It has an audio jack to connect to any speaker, or you can solder a speaker to it.

It has an Arduino Leonardo device that drives 12 RGB LEDs and 8 capacitive touch sensors.

And it has expansion connectors for a Grove sensor adapter and for an optional microphone array.

It supports an SD card that can expand the Linux storage and can be used to store music.

All of the significant software and examples are in Python.

It comes with examples to access the Amazon Alexa service, the Mycroft open source equivalent, and the Microsoft Bing speech recognition and text to speech APIs. You can also access the Google text to speech API.

It also runs the Mopidy music player, and has a web front end to Mopidy and a few other functions.

It runs pocketsphinx for local, offline speech recognition or to recognise keywords such as “Alexa”.

I built a speech-controlled music player and Alexa equivalent based on the examples.

I control Mopidy using the python-mpd2 client software.

I called my device Marvin, after Marvin the Paranoid Android and Marvin Minsky, and to make it answer to its name I added “Marvin” to the keywords that pocketsphinx recognises.
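The keyword spotting works along these lines (a minimal sketch using the pocketsphinx Python package; the threshold is illustrative, and the Respeaker examples wire this up slightly differently):

# Minimal keyword-spotting sketch using the pocketsphinx Python package.
from pocketsphinx import LiveSpeech

speech = LiveSpeech(
    lm=False,              # no language model: keyphrase mode only
    keyphrase='marvin',    # the wake word to listen for
    kws_threshold=1e-20,   # lower = more sensitive, more false positives
)

for phrase in speech:
    print('Keyword detected')
    # hand off to command recognition here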

I can set up playlists and play them by using a spoken phrase containing the playlist name (“Marvin, play Bob Dylan”). The playlists can be tracks on the SD card or Internet radio stations. I also support spoken commands like “pause”, “play”, “stop”, “next” and “previous”, and I got it to speak Wikipedia entries like Alexa does. It uses text to speech to tell you what it is doing.
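Dispatching the recognised phrases onto player actions is then a small mapping onto python-mpd2 calls. This sketch captures the idea (the host, port and phrase parsing are simplified, and the details of my program differ):

# Sketch of dispatching recognised phrases to Mopidy via python-mpd2.
from mpd import MPDClient

client = MPDClient()
client.connect('localhost', 6600)   # Mopidy's MPD frontend

COMMANDS = {
    'pause': client.pause,
    'play': client.play,
    'stop': client.stop,
    'next': client.next,
    'previous': client.previous,
}

def handle_phrase(phrase):
    words = phrase.lower().split()
    if len(words) > 1 and words[0] == 'play':
        # "Marvin, play Bob Dylan" -> load and play the named playlist
        client.clear()
        client.load(' '.join(words[1:]))
        client.play()
    elif words and words[0] in COMMANDS:
        COMMANDS[words[0]]()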

My program is in Python.

It all works, but unfortunately not very well.

The microphone is not up to the standard of the one in the Amazon Dot. You would probably have to buy the expensive microphone array to get decent performance.

I am also not sure that pocketsphinx is up to the job of recognising keywords. It does not work as well as the newer online speech recognition services like Alexa and the Bing Speech API.

It is also difficult to get all the different software to access the microphone and speakers without errors.

So, I think it is a very good try at an open source copy of an Amazon Dot, but both the hardware and the software need improvement. It is extremely hard to be cost-competitive with an Amazon Dot.

Posted in Electronics, gadgets, Uncategorized

Social robot (1)


I am working on a social robot to wander around the house, find people, and annoy them.

Some of its features are:

  • Autonomous wandering, looking for humans
  • Face recognition and face tracking
  • Recognising people by name
  • Following people by tracking their faces
  • Speech synthesis
  • Speech recognition
  • Home automation
  • Google Now integration

It is a bit like social robots such as Buddy, Zenbo and Aido, but cheaper and smaller.

I am using this base, which is available from various suppliers, and comes with two motors.

The version I bought came with this motor shield and this ESP8266 module. There are lots of ESP8266 boards on eBay, but you have to be careful to get one that fits the motor shield. It is now a bit old.

The base is powered by two 18650 3.7v rechargeable Lithium-Ion batteries. Battery packs for these are available cheaply on eBay and elsewhere.


I reprogrammed the firmware in the ESP8266 using the Arduino IDE. My version connects to my Wifi access point and uses MQTT messages to drive the robot, get sensor values, etc.
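For example, the base can be driven from anywhere on the network with a few MQTT publishes. A sketch (the broker address, topic name and payload format here are made up for illustration; the real ones are whatever the firmware subscribes to):

# Illustrative test driver: publish MQTT messages to the robot base.
import paho.mqtt.publish as publish

BROKER = '192.168.1.10'   # hypothetical broker address

def drive(left, right):
    # Set the two motor speeds (-100..100); the topic and payload
    # format are assumptions for this sketch
    publish.single('robot/drive', '%d,%d' % (left, right), hostname=BROKER)

drive(80, 80)    # forward
drive(80, -80)   # spin
drive(0, 0)      # stop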

I added an HC-SR04 ping sensor to stop the robot bumping into things. As the ESP8266 has 3.3v logic, I used an HC-SR04 that worked with 3.3v. They are slightly rarer and more expensive than the 5v only ones.

I have used a couple of different camera modules, including one based on the OpenMV. The one in the picture is using an Android phone, on a pan-and-tilt phone holder. Using an Android phone rather than the OpenMV allows me to provide a lot more features.

I am currently just using the tilt function with a single HS-422 servo. The HC-SR04 ultrasonic sensor and the servo are driven by the ESP8266 motor shield.

The Android app that I have written uses JavaCV for face detection and recognition, Google speech recognition and synthesis, and MQTT messaging to talk to the base.

I am using JavaCV rather than the OpenCV java interface, as JavaCV seems to be simpler and more complete for this application. (I tried both and a combination of them).

On the phone display, I either show a robot face or a camera preview from the front camera. The app recognises faces and sends MQTT commands to the base to keep the face in the middle of the screen. If the face is too small, the robot moves forward; if it is too big, it moves back. If it is on the left of the screen, the robot turns right; if on the right, it turns left. If it is high on the screen, the camera pans up; if low, it pans down. In this way, the robot follows and tracks the person.
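The tracking logic boils down to a few threshold comparisons. Roughly, in Python (the thresholds and command names are illustrative; the real implementation is in the Android app, which sends the commands to the base over MQTT):

# Sketch of the face-following logic described above.
def track_face(x, y, w, frame_w, frame_h, send):
    """x, y: face centre; w: face width; send: callable issuing a command."""
    if w < 0.15 * frame_w:
        send('forward')      # face too small: move closer
    elif w > 0.30 * frame_w:
        send('back')         # face too big: back off
    if x < 0.4 * frame_w:
        send('right')        # face on the left: turn right
    elif x > 0.6 * frame_w:
        send('left')         # face on the right: turn left
    if y < 0.4 * frame_h:
        send('up')           # face high in the frame: tilt the camera up
    elif y > 0.6 * frame_h:
        send('down')         # face low: tilt the camera down

# Example: just print the commands
track_face(100, 120, 40, 640, 480, print)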

When the robot first recognises a face, it asks the person for their name using Google speech recognition. It then goes into a short training session. When it has enough data, it attempts to predict who a new person is. This works reasonably well, but is a bit dependent on lighting. Google speech recognition is a little slow.
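The app does the training and prediction with JavaCV; the same idea in Python, using OpenCV's contrib face module, looks roughly like this (purely illustrative, with placeholder training data):

# Illustrative Python/OpenCV version of the train-then-predict cycle;
# the actual app uses JavaCV on Android.
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

# faces: equally-sized greyscale face crops collected during the training
# session; labels: one integer id per person (placeholders here)
faces = [np.zeros((100, 100), dtype=np.uint8)]
labels = np.array([0])
recognizer.train(faces, labels)

# Later, guess who a new face crop is
label, confidence = recognizer.predict(np.zeros((100, 100), dtype=np.uint8))
print(label, confidence)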

If you touch the phone screen, you get a small pop-up menu of options. One option is to switch between face and camera mode. The others are different varieties of speech commands:

  • Phone commands
  • Robot commands
  • Home automation commands

The phone commands are the Google Now ones that you get by touching the microphone on Android phones. These include opening apps, asking questions, setting reminders, playing music, etc.

Robot commands are ones I have implemented; they include driving the robot and managing the face recognition data. For example, you can list the recognised people, and delete or rename them. The data is kept in an external directory in the phone's memory.

Home automation commands are sent to my home automation system via MQTT. So I can switch the lights, television, heating etc. on and off.

I might add Alexa commands to this.

I think my social robot can do most of the things that the commercial versions can do, but not quite as smoothly as the videos for those products suggest.

I could quite easily add extra functions like taking pictures or videos and uploading them to Facebook or YouTube.

It doesn’t have the storytelling, game playing, education or recipe-following capabilities of the commercial social robots, but these could be added by integrating with other apps.

The Aido has an optional video projector, which is a nice feature, but expensive.

I need to add a cliff sensor to stop the robot falling down stairs. A few more sonar sensors would help too.

It would be good to add navigation capability but that would need Lidar or a 3D camera, or perhaps a Google Tango phone.

Continuous listening for a trigger word is also possible. It currently only listens when it asks for the name of an unrecognised person, or when you touch the screen and select a speech command type.

Some animation of the face, like blinking, eye movement and moving the lips when talking, would be good.

Other things I could add include motion tracking, emotion detection, object recognition, and telepresence.

I will describe the ESP8266 and Android software in separate posts.

Posted in robotics


Amazon Echo in the UK

I got an Amazon Echo the day it came out in the UK, and replaced the Raspberry Pi version in my kitchen with it.

I also have an Amazon Dot on order for when it comes out in the UK in a few weeks’ time. I will use it as a bedside radio.

The Amazon Echo and Alexa work very well.

I upgraded my Spotify account from Unlimited to Premium to work with Alexa. I have several reasons for wanting to do that, so it was about time.

Spotify works brilliantly with Alexa. The only thing it doesn’t do is let me specify other devices to play Spotify on. (I can do that with Spotify Connect with my Premium account.)

Alexa now knows my UK location, which makes a lot of things, like weather reports, work better.

Some things have stopped working: IFTTT doesn’t seem to work with UK accounts. It used to work when my account was effectively a US one. Fitbit integration also seems to have stopped working. I am sure these things will be fixed.

The Echo worked straightaway with my Wemo Insight devices, and with my Hive thermostat. Controlling my heating with Alexa is nice.

Google calendar integration was also straightforward.

I installed ha-bridge on a Raspberry Pi, which was very easy, and it has a very nice web interface. It emulates Philips Hue switches, and allows me to control my LightwaveRF devices, my EDF IAMs, and my media devices (Virgin TiVo, TV, AV receiver) via my existing home automation software and my IR blaster.

So I now have nearly 30 devices controlled by Alexa.

Device groups didn’t work too well for me as my existing software didn’t cope with the devices being switched simultaneously, so I set them up as virtual devices in ha-bridge instead.

I could do with much better control of my TV, so I might develop an Alexa Skill for that.

At the moment I am getting ha-bridge to talk to node-RED over HTTP, and then using MQTT to talk to my home automation software.
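In outline, the relay does something like this hypothetical Python stand-in for the node-RED flow (the endpoint, parameter names, topic and broker address are all made up for illustration):

# Hypothetical Python stand-in for the node-RED flow: accept the HTTP
# request and republish it as an MQTT command.
from flask import Flask, request
import paho.mqtt.publish as publish

app = Flask(__name__)
BROKER = 'localhost'   # assumed broker location

@app.route('/device')
def device():
    name = request.args.get('name', '')
    state = request.args.get('state', '')
    # Republish as an MQTT command for the home automation software
    publish.single('house/command', '%s %s' % (name, state), hostname=BROKER)
    return 'OK'

if __name__ == '__main__':
    app.run(port=8080)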

I plan to have an Amazon Dot in each bedroom and an Amazon Echo on each floor, so I can talk to Alexa from anywhere in the house.

I am not too sure about security. I don’t want people shouting through the letter box “Alexa, open the front door”. Luckily I haven’t fitted an IoT front door lock yet, as the ones that would work with my house are too expensive. “Alexa, disarm the alarm” also doesn’t work yet, as my alarm system is hard to integrate with.

Buying Amazon products with Alexa doesn’t work yet in the UK, but it probably soon will.

The Alexa phone app now works in the UK. The Alexa web site already worked. Both are useful. I am trying out using the phone app at the shops to look at my shopping list and delete items as they are bought. Alexa now recognises “Add Marmite to Shopping List”, which the US version didn’t.

I could do with an Alexa skill to interrogate my home automation system, e.g. to ask for the temperature in a room, for a breakdown of electricity usage, or which plants need watering.

It would also be useful if Alexa could speak notifications from my home automation system, but I have a Raspberry Pi doing that at the moment.

Posted in Amazon Alexa, Home automation, Raspberry PI

My first robot wars robot


My grandson Elliot wanted me to build him a Robot Wars-style robot. I thought I would try one with a flipper, as chainsaws and flame throwers seemed a bit dangerous for a child.

I have no skills in metalwork or mechanics, so it was quite a challenge. I decided to base it on an existing design, as I am not skilled enough to design my own metal chassis. I chose the one that is the top hit when you google “robot flipper“.

That design is not a Robot Wars robot but an autonomous Sumo robot; it was close enough, though. I copied the chassis and flipper from that robot and used the pneumatics design, but completely changed the electronics and the mechanics.

I use these motors and wheels, and a Raspberry Pi Zero with a ZeroBorg and an UltraBorg to drive the motors, servos and ultrasonic sensors. I use a Wifi dongle to communicate with it.

The robot looks very battered due to the number of battles it has been in. It is nothing to do with my poor metalworking skills.

The robot would not cope well in a real battle. I did not add much reinforcement, so it is not very strong. The motors have too little torque to push anything much, it is too high off the ground to flip anything, and the pneumatics are leaky. And it is too small. But it does drive around, detect obstacles and flip things over. I would quite like a full-size one to drive.

It is programmed in Python.
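The drive-and-flip loop looks roughly like this, assuming PiBorg's ZeroBorg and UltraBorg Python libraries (the power levels, trigger distance and servo positions are illustrative):

# Sketch of the drive-and-flip loop using the PiBorg libraries.
import time
import ZeroBorg
import UltraBorg

ZB = ZeroBorg.ZeroBorg()    # motor controller
ZB.Init()
UB = UltraBorg.UltraBorg()  # servo and ultrasonic controller
UB.Init()

while True:
    distance = UB.GetDistance1()     # distance in mm (0 = no reading)
    if 0 < distance < 150:
        ZB.MotorsOff()               # obstacle ahead: stop...
        UB.SetServoPosition1(1.0)    # ...and trigger the flipper
        time.sleep(1)
        UB.SetServoPosition1(-1.0)   # retract
    else:
        ZB.SetMotor1(0.5)            # drive forward
        ZB.SetMotor2(0.5)
    time.sleep(0.1)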

Here are some of the insides:

(photo of the robot’s insides)

Posted in Raspberry PI, robotics

Pebble voice control

I bought a Pebble Time Steel a few weeks ago when the price dropped, and have just started looking at creating my own apps and watch faces for it.

The CloudPebble site makes it very easy to develop apps for the Pebble.

The part of the app that runs on the watch is written in C, and the part that runs on the phone is in Javascript. The app is seamlessly installed to both, and the debugging features are good.

So, to make voice control of my home automation system work, I modified a simple voice transcription app and made it send the command to node-RED and then show the command and response on the watch.

I already had a node-RED HTTP flow that executes my house control commands and returns the reply.

Whether the app is practical is debatable. From having a watch face displayed, the sequence of actions is:

  1. Press the Select button to open the app list
  2. Scroll down to the voice control app
  3. Press Select to open the app
  4. Press Select to listen
  5. Speak the command
  6. Press Select to stop listening and review the voice transcription
  7. Press Select to execute it, if it was OK, or Back (to step 5) if not
  8. Look at the reply on the watch
  9. Press the Back button to go back to the watch face

The voice transcription seems pretty good, so I don’t often have to repeat steps 5 to 7.

Here is the C program that runs on the watch:

#include <pebble.h>

static Window *s_main_window;
static TextLayer *s_output_layer;
static DictationSession *s_dictation_session;
static char s_last_text[256];
static char s_command[256];
static char s_reply[64];

/******************************* Dictation API ********************************/

static void dictation_session_callback(DictationSession *session, DictationSessionStatus status,
                                       char *transcription, void *context) {
  if(status == DictationSessionStatusSuccess) {
    // Save the transcription and send it to the phone as a COMMAND message
    strncpy(s_command, transcription, sizeof(s_command));
    DictionaryIterator *dictionaryIterator = NULL;
    app_message_outbox_begin(&dictionaryIterator);
    dict_write_cstring(dictionaryIterator, MESSAGE_KEY_COMMAND, transcription);
    dict_write_end(dictionaryIterator);
    app_message_outbox_send();
  } else {
    // Display the reason for any error
    static char s_failed_buff[128];
    snprintf(s_failed_buff, sizeof(s_failed_buff), "Transcription failed.\n\nError ID:\n%d", (int)status);
    text_layer_set_text(s_output_layer, s_failed_buff);
  }
}

/************************************ Messaging *************************************/

static void inbox_received_callback(DictionaryIterator *iterator, void *context) {
  APP_LOG(APP_LOG_LEVEL_INFO, "Message received");
  // Read the REPLY tuple sent back by the phone
  Tuple *temp_tuple = dict_find(iterator, MESSAGE_KEY_REPLY);
  strncpy(s_reply, temp_tuple->value->cstring, sizeof(s_reply));
  APP_LOG(APP_LOG_LEVEL_INFO, "Reply: %s", s_reply);
  // Display the dictated command and the reply
  snprintf(s_last_text, sizeof(s_last_text), "Command:\n%s\nReply: %s", s_command, s_reply);
  text_layer_set_text(s_output_layer, s_last_text);
}

static void inbox_dropped_callback(AppMessageResult reason, void *context) {
  APP_LOG(APP_LOG_LEVEL_ERROR, "Message dropped");
}

static void outbox_failed_callback(DictionaryIterator *iterator, AppMessageResult reason, void *context) {
  APP_LOG(APP_LOG_LEVEL_ERROR, "Outbox send failed");
}

static void outbox_sent_callback(DictionaryIterator *iterator, void *context) {
  APP_LOG(APP_LOG_LEVEL_INFO, "Outbox send success");
}

/************************************ App *************************************/

static void select_click_handler(ClickRecognizerRef recognizer, void *context) {
  // Start voice dictation UI
  dictation_session_start(s_dictation_session);
}

static void click_config_provider(void *context) {
  window_single_click_subscribe(BUTTON_ID_SELECT, select_click_handler);
}

static void window_load(Window *window) {
  Layer *window_layer = window_get_root_layer(window);
  GRect bounds = layer_get_bounds(window_layer);

  s_output_layer = text_layer_create(GRect(bounds.origin.x, (bounds.size.h - 24) / 2, bounds.size.w, bounds.size.h));
  text_layer_set_text(s_output_layer, "Press Select to speak");
  text_layer_set_text_alignment(s_output_layer, GTextAlignmentCenter);
  layer_add_child(window_layer, text_layer_get_layer(s_output_layer));
}

static void window_unload(Window *window) {
  text_layer_destroy(s_output_layer);
}

static void init() {
  s_main_window = window_create();
  window_set_click_config_provider(s_main_window, click_config_provider);
  window_set_window_handlers(s_main_window, (WindowHandlers) {
    .load = window_load,
    .unload = window_unload,
  });
  // Register callbacks
  app_message_register_inbox_received(inbox_received_callback);
  app_message_register_inbox_dropped(inbox_dropped_callback);
  app_message_register_outbox_failed(outbox_failed_callback);
  app_message_register_outbox_sent(outbox_sent_callback);
  // Open AppMessage
  const int inbox_size = 128;
  const int outbox_size = 128;
  app_message_open(inbox_size, outbox_size);
  window_stack_push(s_main_window, true);

  // Create new dictation session
  s_dictation_session = dictation_session_create(sizeof(s_last_text), dictation_session_callback, NULL);
}

static void deinit() {
  // Free the last session data
  dictation_session_destroy(s_dictation_session);
  window_destroy(s_main_window);
}

int main() {
  init();
  app_event_loop();
  deinit();
}

And here is the JavaScript code that runs on the phone:

var xhrRequest = function (url, type, callback) {
  var xhr = new XMLHttpRequest();
  xhr.onload = function () {
    callback(this.responseText);
  };
  xhr.open(type, url);
  xhr.send();
};

// Listen for when an AppMessage is received
Pebble.addEventListener('appmessage',
  function(e) {
    // Get the dictionary from the message
    var dict = e.payload;

    console.log('Got message: ' + JSON.stringify(dict));
    // Build the node-RED request URL (the server address is left blank
    // here, as in the original post)
    var url = '' + encodeURIComponent(dict.COMMAND);
    xhrRequest(url, 'GET',
      function(response) {
        console.log('Response: ' + response);
        // Assemble dictionary using our keys
        var dictionary = {
          'REPLY': response
        };

        // Send to Pebble
        Pebble.sendAppMessage(dictionary,
          function(e) {
            console.log('Response sent');
          },
          function(e) {
            console.log('Error sending response');
          }
        );
      }
    );
  }
);

Posted in Home automation

Alexa on the Raspberry Pi


UPDATE June 16th 2016: I was wrong about Alexa not being able to access my UK Amazon account. It appears that my UK and USA accounts are linked, so when I said “Read my Kindle”, Alexa started reading me my current Kindle book. It did it at about one sentence every few minutes, with no obvious way to stop it, so it was not that useful. I still don’t think it can access Amazon Prime music.

Also, although the Alexa app is not available in the UK, you can go to the Alexa web site and control things from there. In particular, it shows me my history of interactions with Alexa.

A couple of things that the Alexa web site showed me I could do were shopping lists and to-do lists. They are quite fun, but you have to go to the web site to delete things from them.

It also reminded me that I could get a voice remote for my Amazon Fire stick, so I have ordered one of those. Perhaps at some time I can use it for voice control of my home automation.

I still could not get any smart home devices to work with my Alexa setup. When I tried “Discover devices” on the Alexa web site, it did not discover my Wemo devices, although they are supported. I suspect an Amazon Echo would find them. I wonder if this could be added to the Raspberry Pi application; it needs to do a UPnP search over Wifi. It would be possible for either the Raspberry Pi or my Amazon Fire stick to do this.
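The search itself is not much code. A sketch of the SSDP M-SEARCH it would need to send (the Wemo search target is my assumption):

# Sketch of an SSDP (UPnP) discovery search for Wemo devices.
import socket

MSEARCH = '\r\n'.join([
    'M-SEARCH * HTTP/1.1',
    'HOST: 239.255.255.250:1900',
    'MAN: "ssdp:discover"',
    'MX: 2',
    'ST: urn:Belkin:device:controllee:1',   # assumed Wemo search target
    '', ''])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(MSEARCH.encode(), ('239.255.255.250', 1900))
try:
    while True:
        data, addr = sock.recvfrom(1024)
        print('Found device at', addr[0])
except socket.timeout:
    pass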

SECOND UPDATE: The Alexa app is more useful than I thought it would be, now that I have looked at what it can do on the web site. It will now read and update my Google Calendar, and I have added several skills so it tells me about Beer, Cricket and a few other things. It will play a lot of radio stations via TuneIn, which does not need an account.

Unfortunately you can only set US addresses for devices (even Amazon Fire TVs or sticks). This means I can’t set default locations for things like weather. However, the traffic update does allow UK addresses. Amazon are going to have to do a lot of work on this to make it truly international.

I thought I would try the instructions on GitHub for Alexa on the Raspberry Pi.

My kitchen Raspberry Pi, which is a Pi 3 with a Touch Display, a camera, a microphone and a speaker, seemed a good choice. (I have Raspberry Pis in most of my rooms.)

It took several hours to set up.

Here it is telling me a joke:

To use Alexa with your own device, you have to set up a developer account using a USA Amazon account, and do a lot of configuration of your own custom device and security profile on the developer site. This results in a device type, and an OAuth2 client ID and secret, which you then use to configure the Raspberry Pi application.

The Raspberry Pi application is odd. It uses a node.js server and a Java client. The node.js server seems only to be used for the OAuth2 authentication.

You have to install node.js, a recent version of the Oracle Java JDK, Maven, VLC, and a few other things. You need self-signed certificates to access the applications. It is all very involved, and the instructions are not very good. It is not at all clear why VLC is installed, particularly as it is configured and then the configuration is discarded.

The main problem with the instructions is that they are for a very specific old version of Raspbian, and are misleading for the latest Jessie release of Raspbian.

The resulting application is a bit difficult to use and very fragile. It does not have much useful error reporting.

It looks like you need to re-authenticate the application every time you reboot the Raspberry Pi, and authentication is a non-trivial process.

This video explains some of the difficulties of the instructions and the application. The author of the video was setting the application up on a Pi Zero, which has its own issues:

Is the application useful for someone in the UK, who can’t yet officially buy an Amazon Echo? Well, not really.

It’s OK for asking about the weather (which defaults to Seattle, if you are not specific), telling jokes, and asking some general knowledge questions. But it is currently pretty useless for playing music and doing home automation.

There are several issues for UK users:

  • It is not linked to your UK Amazon account, so it can’t read your Kindle books, or play your Amazon music.
  • The Alexa application that configures it for home automation, music etc. is not available in the UK.
  • It seems to use iHeartRadio for internet radio and that is not available in the UK.

When the Amazon Echo is eventually available in the UK and other countries, some of these issues should be fixed. It might then be worth developing a more robust application that is easier to configure and use.


Posted in Home automation, Raspberry PI

Controlling my house with texts


I backed the Seeedstudio RePhone Kickstarter, and got the Create kit. I thought I would use it as a house SMS server, so that I can control my house with text messages when I am out. I can already control it via various means including MQTT messages, if I have an Internet connection. But sometimes you do not have an Internet connection.

As the RePhone does not have Wifi, I needed to use Bluetooth to send the commands to my home automation system. This needs a Bluetooth SPP server, and a Raspberry Pi 3 seems like the perfect server for that, as it has built-in Bluetooth.

I am not sure it is a very good use of the RePhone as it does not use its capability to control electronics, but the RePhone is a relatively cheap way to get this capability.

I programmed the RePhone using the Arduino IDE. This involved bringing my old Vista laptop back to life, as the drivers for the RePhone are only available for Windows, and do not work on Windows 10. The good set of examples provided made it easy to program it to process texts, talk to a Bluetooth server and write to the OLED screen.

I programmed the Bluetooth SPP server in Java, and again there are good examples for that. Getting BlueCove working on the Raspberry Pi 3 was a little tricky, but not too hard.

The Bluetooth SPP server relays the text message commands to my HouseControl server running on a Raspberry Pi 2, and sends the replies back to the RePhone which currently just displays them on the screen.

So now I can do things like turn the lights on or the heating up, or talk to people in the house via text to speech.

I do not currently text the replies back, but that would be easy to add.

Here is the RePhone Arduino code:

#include <LBT.h>
#include <LBTClient.h>
#include <LGSM.h>
#include <LCheckSIM.h>
#include <LDisplay.h>

// Change to the name of your server
#define SPP_SVR "pi3"

LBTDeviceInfo info;
char *spaces = "                    ";

void setup()
{
  Serial.begin(115200);
  Serial.println("House Client started");
  // Set up the LCD screen
  Lcd.screen_set(0xffff00); // Yellow background
  // Set up SMS: wait for the GSM subsystem to be ready
  while (!LSMS.ready())
    delay(1000);
  Serial.println("SMS is ready");
  // Set up Bluetooth
  bool found = false;
  bool success = LBTClient.begin();
  if( !success )
  {
    Serial.println("Cannot start Bluetooth");
    Lcd.draw_font(10, 0, "Cannot start Bluetooth", 0xffff00, 0);
    return;
  }
  Serial.println("Bluetooth client started");
  // Look for the Bluetooth devices
  int num = LBTClient.scan(30);
  Serial.printf("Found %d devices\n", num);
  for (int i = 0; i < num; i++)
  {
    memset(&info, 0, sizeof(info));
    // See if it is the required server
    if (!LBTClient.getDeviceInfo(i, &info)) continue;
    Serial.printf("Found address: %02x:%02x:%02x:%02x:%02x:%02x name: %s\n",
        info.address.nap[1], info.address.nap[0], info.address.uap,
        info.address.lap[2], info.address.lap[1], info.address.lap[0],
        info.name);
    if (0 == strcmp(info.name, SPP_SVR))
    {
      found = true;
      Serial.println("Server found");
      break;
    }
  }
  if( !found )
  {
    Serial.println("Server not found");
    Lcd.draw_font(10, 0, "Server not found", 0xffff00, 0);
    return;
  }
  Serial.println("Trying to connect");
  // Try to connect
  bool conn_result = LBTClient.connect(info.address);
  Serial.printf("Connect result: %d\n", conn_result);
  if( !conn_result )
  {
    Serial.println("Connect failed");
    Lcd.draw_font(10, 0, "Connect failed", 0xffff00, 0);
  }
  else
  {
    Serial.println("Connected to SPP Server");
    Lcd.draw_font(10, 0, "Connected", 0xffff00, 0);
  }
}

void loop()
{
  // Wait for an SMS message
  if( LSMS.available() )
  {
    char cmd[50];
    char reply[32];
    // Get the command from the text message
    LSMS.remoteContent(cmd, 50);
    Lcd.draw_font(10, 20, spaces, 0xffff00, 0);
    Lcd.draw_font(10, 20, cmd, 0xffff00, 0);
    Lcd.draw_font(10, 40, spaces, 0xffff00, 0);
    // Send the command to the SPP server, terminated by newline
    LBTClient.write(cmd, strlen(cmd));
    LBTClient.write((char *) "\n", 1);

    // Read the reply from the SPP server (leave room for the terminator)
    int len = LBTClient.readBytes(reply, sizeof(reply) - 1);
    reply[len] = 0;
    Serial.printf("Reply: %s\n", reply);
    Lcd.draw_font(10, 40, reply, 0xffff00, 0);
  }
}

And here is the Raspberry Pi 3 Java code:

package net.geekgrandad.apps;

import java.io.*;
import java.net.*;

import javax.bluetooth.*;
import javax.microedition.io.*;

/**
 * Class that implements an SPP server which accepts a single line of
 * message from an SPP client and sends a single line of response to the client.
 */
public class SPPServer {

    // Start the server
    private void startServer() throws IOException {
        // Create a UUID for SPP
        UUID uuid = new UUID("1101", true);
        // Create the service url
        String connectionString = "btspp://localhost:" + uuid + ";name=Sample SPP Server";
        // Open the server url
        StreamConnectionNotifier streamConnNotifier =
                (StreamConnectionNotifier) Connector.open(connectionString);

        // Wait for client connection
        System.out.println("\nServer Started. Waiting for clients to connect...");
        StreamConnection connection = streamConnNotifier.acceptAndOpen();

        RemoteDevice dev = RemoteDevice.getRemoteDevice(connection);
        System.out.println("Remote device address: " + dev.getBluetoothAddress());
        System.out.println("Remote device name: " + dev.getFriendlyName(true));

        // Read strings from the SPP client
        InputStream inStream = connection.openInputStream();
        BufferedReader bReader = new BufferedReader(new InputStreamReader(inStream));
        OutputStream outStream = connection.openOutputStream();
        PrintWriter pWriter = new PrintWriter(new OutputStreamWriter(outStream));

        boolean connected = true;
        while (connected) {
            String lineRead = bReader.readLine();
            Socket sock = null;
            String host = ""; // HouseControl server address (left blank here)
            String ret = "No reply";
            try {
                // Relay the command to the HouseControl server and read the reply
                sock = new Socket(host, 50000);
                PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
                BufferedReader in = new BufferedReader(new InputStreamReader(sock.getInputStream()));
                out.println(lineRead);
                ret = in.readLine();
                System.out.println("Reply is " + ret);
                sock.close();
                sock = null;
            } catch (UnknownHostException e1) {
                System.err.println("Unknown host");
                connected = false;
            } catch (IOException e1) {
                connected = false;
                try {
                    if (sock != null) sock.close();
                } catch (IOException e2) {
                }
            }
            // Send the response to the SPP client
            pWriter.println(ret);
            pWriter.flush();
        }
        pWriter.close();
        bReader.close();
        connection.close();
    }

    public static void main(String[] args) throws IOException {
        // Display the local device address and name
        LocalDevice localDevice = LocalDevice.getLocalDevice();
        System.out.println("Address: " + localDevice.getBluetoothAddress());
        System.out.println("Name: " + localDevice.getFriendlyName());

        SPPServer sampleSPPServer = new SPPServer();
        sampleSPPServer.startServer();
    }
}
Posted in Arduino, Home automation, Raspberry PI