Saturday, 27 June 2009

AIR 2009 day 2

The first talk this morning was by Guido Bugmann, who talked about creating stimulus-response mappings. The benefit of these is that you can pre-plan what to do in a specific situation and map that situation straight onto an action; the point being that planning what to do is the slowest part of making robots go, so if some responses are pre-planned then the robot is much (much) quicker to react. One of the inspirations for this was tennis players, because they can decide on a course of action, like which way to run to get to the ball, in an incredibly short space of time, ~200ms, which is about the quickest a message can propagate through the brain (from seeing the ball being tossed to telling the muscles what to do). One of the things he finds interesting is that it is very, very easy to program the brain. The example he used was to get us to raise our left hand if we saw a letter A on screen and our right hand if we saw a B. That's a pretty straightforward task, right? But in the process of telling us what to do he'd made us program our brains to do a simple task which we haven't evolved or been taught; we can just do it, without practice. Psychologists haven't really investigated this phenomenon: they tell subjects to do something and then look at what the subjects did, but there hasn't really been any investigation of how long it takes a subject to be "programmed" to do a task.
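
To make the idea concrete, here's a toy sketch of my own (nothing to do with Bugmann's actual system): the slow "planning" work is done up front when the stimulus-to-action table is built, and at run time the robot's reaction is just a fast lookup.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // The slow "planning" happens up front, when the table is built.
    std::map<char, std::string> responses;
    responses['A'] = "raise left hand";   // pre-planned responses, like the
    responses['B'] = "raise right hand";  // audience exercise in the talk

    // At run time the reaction is just a lookup, so it's fast.
    char stimulus = 'B';                  // imagine this came from a sensor
    std::cout << "Stimulus " << stimulus << " -> "
              << responses[stimulus] << std::endl;
    return 0;
}
```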

The next talk was by Dave Barnes, about the challenges of putting robots into space. Basically it's incredibly difficult! Everything ideally has zero mass, volume and power consumption, and it has to be resistant to radiation, the cold and the heat. They spend an enormous amount of time calibrating the robots and every other part of them; that seems to be most of what they do in Aberystwyth, actually. The guys here built the robot arm for the Beagle 2 mission, and a lot of work went into it. They were also responsible for the colour calibration target, which is basically a square of colour patches that the robot can photograph in order to check the colour correction on the lenses (the Mars atmosphere distorts colour in a different way to Earth's atmosphere, apparently). It's nothing more than a square of colours with no mechanics or moving parts, but it needed three teams of scientists to work on it: to stop the colours fading, to stop dust sticking to it, to get exactly the right pigments, and so on. A lot of work! This was easily the most interesting talk of the week. It gave a real insight into the issues of space robotics: basically it's nigh impossible and absolutely everything is working against you. Good stuff.
Another talk was about Artificial Immune Systems. In no other field of research I've encountered have people been so adamant that you should weigh up the pros and cons of their algorithms and that they won't solve everything. I think this is partly because Jon and Mark are very honest about it... and mostly because they "haven't found their niche yet", but that's just my opinion and is probably nothing like reality.

After the talks were all over we were taken to Aberystwyth's Mars Lab, where they test the Mars rovers! They have a scale model of the next Mars rover which is half size in every dimension (so the real one is 8x bigger by volume), which I photographed here (click to embiggen):

Mars Rover, at Aberystwyth Uni

Pretty neat, huh? The white blobs are markers for tracking the robot's movements across the lab. There's a picture of me with this robot, but it's far too blurry to put online. The white stuff it's stood on is Mars sand D, which is basically sand with the same consistency and properties as some of the sand on Mars (there are lots of different types of sand on Mars). This rover's job is to look for sedimentary rocks and analyse samples to see if Mars once supported life. The real rover also has a drill on the side (although I think this model does not) which drills down into the Martian soil to take samples from below the surface; because Mars has little or no magnetic field the surface isn't protected from the sun's harmful radiation, so any evidence of life is more likely to survive underground.

Wednesday, 24 June 2009

AIR 2009 day 1

For this week (23rd - 30th June) I will be at the Autonomous Intelligent Robots Summer Camp. This is at the University of Wales, Aberystwyth. The website has lots of details about the specifics, including a timetable of stuff that's going on.

This morning was a talk by Jeremy Wyatt about reinforcement learning, which is where you use rewards and punishments to make a robot learn about its environment, how to react to it, and how to predict the outcomes of its actions. I didn't know very much about it before; I'd read something about game theory in draughts in one of John Holland's books which sounded very similar. Basically a robot can observe the state of the world, and it has a policy which controls how it acts when given a particular world state. The robot gets given rewards based on what you want it to do; if you want it to push a box you can reward it whenever it pushes something. The robot has to modify its policy in order to maximise the amount of reward it gets across ALL world states. There are associated issues: it could come up with a behaviour you don't want in order to get lots of rewards (like the box-pushing robot trying to push a wall), so you need to be careful when designing the reward system. Also, if rewards are too infrequent then the robot can never really learn, because it is never told when it's doing well. He briefly mentioned something called inverse reinforcement learning, which works the other way round: instead of hand-designing the rewards you infer a reward function from examples of the behaviour you want, and then use that learned reward system to teach the robot; iterating this should give you both a good reward system and a good robot!
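
The talk stayed fairly high level, so as a concrete illustration here's a minimal tabular Q-learning update. This is my own sketch rather than anything shown in the talk, and the states, actions and numbers are all made up.

```cpp
#include <iostream>

// Tabular Q-learning sketch. Q[s][a] estimates how much reward the robot
// expects if it takes action a in state s; each experience nudges the
// estimate towards (immediate reward + discounted best value of next state).
constexpr int kStates = 5;   // made-up state space, e.g. distance-to-box buckets
constexpr int kActions = 2;  // made-up actions, e.g. push / turn

double Q[kStates][kActions] = {};

int bestAction(int s) {
    return Q[s][0] >= Q[s][1] ? 0 : 1;
}

void update(int s, int a, double reward, int sNext,
            double alpha = 0.1, double gamma = 0.9) {
    double bestNext = Q[sNext][bestAction(sNext)];
    Q[s][a] += alpha * (reward + gamma * bestNext - Q[s][a]);
}

int main() {
    // One fake experience: in state 2 the robot pushed (action 0),
    // got a reward of 1 and ended up in state 3.
    update(2, 0, 1.0, 3);
    std::cout << "Q[2][0] = " << Q[2][0] << std::endl;  // prints 0.1
    return 0;
}
```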

This was followed by some lectures by members of Aberystwyth's robotics department who were talking about the things that are going on in their lab.
The first of these was by Mark Lee, who talked about developmental robotics, which is all about teaching robots to do a task in stages, much like how babies and children learn to do things gradually. They have a robot arm which has developed hand-eye coordination. They started by developing an eye and teaching it to centre on given coordinates, so that it can move quickly towards a point instead of circling around it or searching or whatever. They then taught the arm how its motor movements and positions map to positions on a table (looking down), using the eye to observe the arm's actual position. Then they taught the robot how to move to coordinates or objects on the table, which is effectively hand-eye coordination, since the arm knows roughly what position it needs to be in to reach a place and can make minor corrections given what it observes with the eye. Good stuff! It makes sense because they don't try to run before they can walk; often problems are encountered because people try to do too much at once, and it's better to increase the difficulty of a task incrementally. Humans develop hand-eye coordination in a similar manner, and we've been doing that for a very long time, so the approach must be good!
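
As a toy illustration of the staged idea (entirely my own sketch, not the Aberystwyth system, with made-up kinematics): first learn a coarse map from motor positions to table positions by "babbling", then use the eye to report the remaining error when reaching for a target.

```cpp
#include <cmath>
#include <iostream>
#include <map>
#include <utility>

// Pretend kinematics: motor setting (a, b) puts the hand at (2a, 3b).
std::pair<double, double> handPosition(double a, double b) {
    return {2.0 * a, 3.0 * b};
}

int main() {
    // Stage 1: motor babbling builds a coarse motor -> table-position memory.
    std::map<std::pair<int, int>, std::pair<double, double>> memory;
    for (int a = 0; a <= 10; ++a)
        for (int b = 0; b <= 10; ++b)
            memory[{a, b}] = handPosition(a, b);

    // Stage 2: to reach a target, recall the closest babbled motor setting...
    std::pair<double, double> target = {7.3, 16.1};
    std::pair<int, int> best{0, 0};
    double bestDist = 1e9;
    for (const auto& entry : memory) {
        double dx = entry.second.first - target.first;
        double dy = entry.second.second - target.second;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d < bestDist) { bestDist = d; best = entry.first; }
    }

    // ...then let the "eye" report the remaining error so the arm can correct.
    auto reached = handPosition(best.first, best.second);
    std::cout << "Recalled motors (" << best.first << ", " << best.second
              << ") reach (" << reached.first << ", " << reached.second
              << "); eye sees error (" << target.first - reached.first
              << ", " << target.second - reached.second << ")" << std::endl;
    return 0;
}
```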

Another talk was on using a 360-degree panoramic camera to navigate to a certain point. This assumes you have a stored picture of where you want to be; you then find the difference between the camera's current image and the image of the goal location. The direction in which the difference is least is the way you want to go, and as you near the goal the difference gets smaller and smaller, so it allows you to home in. The good thing about this is that it works better in a visually rich environment (i.e. where there are lots of colours and blocks and stuff) because it's basically landmark navigation. This is cool because a lot of robot vision work involves putting the robots in an arena where there's not much to look at, so interesting things are easy to pick out; this work is much more applicable to real life because the real world isn't painted white!
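
Here's a toy sketch of the homing idea as I understood it (my own code, nothing from the talk): flatten each panoramic image into a vector of pixel values, and pick the candidate direction whose image differs least from the stored goal snapshot.

```cpp
#include <iostream>
#include <vector>

// Sum of squared pixel differences between two flattened panoramic images.
double imageDifference(const std::vector<double>& a,
                       const std::vector<double>& b) {
    double sum = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

int main() {
    // Tiny made-up "images": the snapshot taken at the goal location, and
    // the views the robot would see after trying each candidate direction.
    std::vector<double> goal = {0.9, 0.1, 0.5, 0.7};
    std::vector<std::vector<double>> candidates = {
        {0.2, 0.8, 0.4, 0.1},   // direction 0
        {0.8, 0.2, 0.5, 0.6},   // direction 1 (most like the goal view)
        {0.5, 0.5, 0.5, 0.5}};  // direction 2

    size_t best = 0;
    double bestDiff = imageDifference(candidates[0], goal);
    for (size_t i = 1; i < candidates.size(); ++i) {
        double diff = imageDifference(candidates[i], goal);
        if (diff < bestDiff) { bestDiff = diff; best = i; }
    }
    std::cout << "Head in direction " << best
              << " (difference " << bestDiff << ")" << std::endl;
    return 0;
}
```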

The final presentation was about the robot scientist they have in Aberystwyth. This has an understanding of yeast biology and uses it to perform and analyse tests on yeast. It's basically a way of automating science so that you can do a lot of tests in a repeatable manner. They use it for identifying what certain yeast genes are responsible for, and it's been quite successful in that regard! It can apparently run ~1000 experiments a day and make over 200,000 observations. Good stuff! We went to see it actually, and it looks very cool.

More fun stuff will happen tomorrow I hope, now I'm off for dinner with everyone.

Thursday, 18 June 2009

Bristol Robotics Lab Visit 11/06/09 part two.

The first thing I did when I got to the Robotics Lab was to meet up with Wenguo and their intern Jean-Charles. These guys have been putting Linux onto the 50 or so e-pucks they have there, Wenguo being the guy who designed the Linux extension (and who, incidentally, is incredibly knowledgeable about pretty much everything), and Jean-Charles being the guy tasked with doing all the repetitive installation work (so he's an expert on how it should all be put together). I spent a good hour or so chatting about how the Linux boards work and what I need in order to install them. Mostly this meant having things patiently explained to me, over and over again, until I understood.

After the session the three of us went for lunch and met up with my supervisor in Bristol, Prof. Alan Winfield. If you have a passing interest in swarm robots (or robots in general) then you may have seen this video before:

Alan is the guy in the video!
The project they're talking about in the video is called SYMBRION, which aims to create swarming robots that can join together to achieve some goal. In the video they show an example of this end goal with the robots forming an X and walking (1). Wenguo, Alan, and one of my supervisors in York, Jon Timmis, are important people on this project!

Anyway, after lunch I was given a small tour of the lab by Wenguo and Jean-Charles; pictures and details about this will be included when I get the photos off my camera. Unfortunately I didn't get to see EcoBot this time, which is a shame because it's really cool. EcoBot is a robot which has a digestive system, and not a virtual or simulated digestive system but a REAL one. It consumes dead insects (2) and converts their bodies into electricity to power its movement. There's a video of EcoBot moving around; as you may guess, the little black blobs it leaves behind are poop! Robots like this are really interesting because they make you wonder just what qualities make something alive. This robot moves, eats and excretes, and it's sensitive to external stimuli (light); it only really lacks reproduction and growth. Does that make it alive?


After visiting BRL I now have a list of extra stuff we need to buy to get the Linux boards working on the e-pucks. I'm hoping that it won't be too difficult. Jean-Charles has written a very thorough guide to installing them, and since the boards haven't been officially released yet I'll probably be one of the first people to work from the manual. Hopefully I'll be able to give Jean-Charles some useful feedback on it; having written a manual recently I know how difficult it can be to get constructive feedback on that kind of thing. Since coming back from Bristol our technicians in York have had a lot of questions about building the boards and all the parts required. Hopefully this will help Wenguo to put together a really thorough package describing the PCB and components.



(1) Sadly this hasn't actually been achieved yet and that video is stop-motion animation. The tiny robots just can't make a strong enough connection for that kind of thing yet.
(2) Don't worry, animal rights activists! They collect them from around the lab, from windowsills and dusty corners; they don't kill them specially.

Tuesday, 16 June 2009

Evolving an Artificial Homeostatic System

New paper added to my reading list.

Moioli, R.C., Vargas, P.A., Von Zuben, F.J. and Husbands, P. (2008) “Evolving an Artificial Homeostatic System”. In: Lecture Notes in Artificial Intelligence, vol. 5249, pp. 278–288.
LINK

In this paper they evolve a robotic controller which uses neural networks to control behaviours (something called NSGasNet, which they never really explain) and an artificial endocrine system to decide which behaviour should be followed at any one time. An artificial endocrine system uses "hormones" to keep track of the robot's internal conditions (battery level, etc.): as these get worse, more fake hormone is produced (a number gets bigger) until the level is high enough that some kind of action is required. In this experiment they monitored battery level; as the battery level decreased the hormone increased, and when there was enough hormone the robot would switch from wandering around to going to get more power.
In all it's a thorough(ish) experiment. The way they developed the artificial endocrine system seemed to be sound, and their results show the robot is robust to changes in battery discharge rate, so an evolved artificial endocrine system sounds like a good thing. That said, the paper doesn't provide enough information to replicate the results, and their measure of when the robot performs well is worryingly qualitative; we effectively have to take their word that the robots performed well (for whatever value of "well" they used).
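
To illustrate the hormone idea, here's my own toy sketch of the general mechanism; it is not the paper's evolved NSGasNet controller, and all the numbers are made up.

```cpp
#include <iostream>

int main() {
    double hormone = 0.0;
    const double threshold = 1.0;    // switch behaviour above this level
    const double releaseRate = 0.5;  // how quickly hormone builds up

    for (int step = 0; step <= 10; ++step) {
        double battery = 1.0 - 0.1 * step;         // battery draining away
        hormone += releaseRate * (1.0 - battery);  // lower battery -> more hormone
        const char* behaviour =
            (hormone > threshold) ? "go and recharge" : "wander about";
        std::cout << "battery " << battery << "  hormone " << hormone
                  << "  -> " << behaviour << std::endl;
    }
    return 0;
}
```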

Monday, 15 June 2009

Bristol Robotics Lab Visit 11/06/09 part one.

Two weeks ago, on the 1st of June, the University of York received a shipment of ten e-pucks. The e-puck is a small and versatile little robot which is packed with sensors: it's got 8 IR sensors, microphones, a speaker, accelerometers, a camera, Bluetooth and probably more stuff I haven't listed, all controlled by a PIC; there's a Wikipedia article here. Anyway, it is my job at the moment to get these robots up and running so that they can be used for swarming experiments by students at this uni, probably masters students and some PhD people like me. What I need to do is set them up so that you can write a Player/Stage simulation and the code for your simulation can also be used on the real robots. Now, Player/Stage drivers for the e-puck have already been written and are freely available here, so surely my job is done?

Well, no. For one thing, the existing Player drivers require a small piece of code to be loaded onto the e-puck; your Player/Stage code then runs on the computer and commands are sent to the robot via Bluetooth. So far my experience with Bluetooth and e-pucks has not been good, mostly down to a lack of drivers; generally I find the connection between the computer and an e-puck to be rather fragile. So mostly what I'm doing at the moment is installing Linux onto the robots. Yes, that's right, I'm installing Linux onto robots that are controlled by a PIC.
It turns out that most of the hard work has already been done in this regard. My colleague Wenguo Liu at the Bristol Robotics Lab has designed an extension for the e-puck which is basically a tiny computer running Linux. It's my job to take these boards and install them onto our e-pucks. The Linux board technology is pretty new at the moment, so much so that they are waiting for a paper on the boards to be published before they make the designs and information open source.

This is why, on the 11th of June 2009 I visited the Bristol Robotics Lab...

Wednesday, 10 June 2009

A Player/Stage Manual

Player/Stage is an open source robot simulator which has become pretty much the industry standard (at least in academia). It's a really powerful program that allows you to design a model of your robot and then write controlling code to make it do things in your simulation.
The really good thing about it is that the code you write to control your robot in simulation can also be used to control a REAL robot. Think about that for a moment... you can write a simulation, and the same code that you simulate can then be put onto the robot. That's a wonderful thing!
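
To give a flavour of what that code looks like, here's a minimal sketch using the C++ client library (libplayerc++). The server address, port and device index are assumptions; they come from your own configuration file, whether that points at a Stage world or a real robot.

```cpp
#include <libplayerc++/playerc++.h>
#include <iostream>

int main() {
    using namespace PlayerCc;

    // Assumes a Player server is listening on localhost:6665 with a
    // position2d device at index 0; adjust to match your .cfg file.
    PlayerClient    robot("localhost", 6665);
    Position2dProxy pp(&robot, 0);

    for (int i = 0; i < 100; ++i) {
        robot.Read();                  // fetch the latest data from the server
        std::cout << "x = " << pp.GetXPos()
                  << "  y = " << pp.GetYPos() << std::endl;
        pp.SetSpeed(0.1, 0.0);         // drive forwards at 0.1 m/s, no turning
    }
    return 0;
}
```

The same program talks to the simulator or to the real robot; only the configuration file the server loads changes, which is exactly the appeal described above.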

The downside to Player/Stage is that it's a complete pig to learn to use. The online documentation has a lot of information, but very little of it goes into enough depth for you to actually implement things yourself. This is understandable, of course: it's an open source project, and the people who contribute to it generally do so by adding functionality or ironing out bugs. Trawling through pages of documentation is no fun, and no-one would willingly give up their spare time to do it. This leaves Player/Stage learners like me with something of a problem when it comes to actually learning how to get things done.

Well, I've been learning on and off for the past few months and, as I went, I wrote a manual to make the process easier for others in the future. At the moment the first draft is complete and I have someone learning to make simulations from it. So far the feedback is good, but I expect I'll need to add more as I learn: things I've missed out, things I got wrong, etc. After another redrafting I'm going to try to submit it to the Player/Stage project so that as many people as possible can benefit from my work!

Watch this space...

EDIT: as of 13th July 2009 this manual has been finished and published! Finally a Player/Stage manual

Tuesday, 9 June 2009

Introduction

So what is this blog?

I'm a PhD student at the University of York, studying swarm robotics. This blog is where I'll put information about things I've done with robots that other people might want to repeat or use. Apart from that I'll also blog about things I've discovered which I thought were interesting, journal papers I've read, and maybe even other stuff I've done.