Social robots are kind of the hip thing to build in the robotics industry today. Home assistants such as Jibo are intended to become the interactive hub of our houses.
But social robots are not really robots as most people conceive of them. They don't perform work inside the home, and in some cases they don't even move. That is because they are intended to interact with us: to be a friend and information resource, with a bit of personality.
When social robots are viewed not as robots but as interactive machines, their application and potential show through. It recalls Steve Jobs' insistence that the Mac appear to have a face so that it would be a friendly computer. Social robots take that idea to the extreme.
In a mobile world, social robots may bring back the waning desktop computer. They are a computer that has eliminated the keyboard and mouse and works with you as seamlessly as a person would (or at least that is the goal). You speak to them, and they give you visual and physical feedback. Social robots create a level of experience beyond the one-sided conversation with a keyboard.
At first glance, social robots do not appear that impressive. They are basically a smartphone on a stand. Their inability to act physically on the real world makes them seem like less than robots. But when you move past the body and look at the brain/computing side, they are a quantum leap in UX design.
Social robots are not meant to transform robotics, but they can and will transform computing. At least until AR comes far enough along.
This project is one of the most homegrown that we have had at Slant.
LittleArm was a project created by our founder, Gabe Bentz, at his home, just for fun. But when he started showing it around, everyone liked it. With all the interest it was getting, Slant decided to sponsor the project.
After a period of development, the arm is ready for a full release, so we have launched a Kickstarter to get the ball rolling.
The LittleArm is a great STEM tool. It is fully open source and 3D printed so that students can study and modify every part of it for their projects. Grippers are interchangeable and the breadboard allows custom circuits and additions such as sensors.
We have already had interest from many STEM teachers in our own home town, and we are sure the arm will find a place in classrooms.
We are currently developing software for the LittleArm. At the moment it is still rough, but very functional. In the current Python app, users can set the angle of each servo. Once the arm is in a position, the user can record that position as a waypoint. When all the waypoints are defined, the user can simply ask the arm to play back the sequence and watch it go. Not quite a true collaborative robot, but we are pretty close. (Later we may make a training arm for it.)
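The record-and-playback idea above can be sketched in a few lines of Python. This is only an illustration, not the actual LittleArm app: the `send_angles` callback is a hypothetical hook standing in for however the app pushes angles to the arm (for instance over serial to the Arduino), so the waypoint logic can be shown without any hardware attached.

```python
import time


class WaypointSequencer:
    """Records servo-angle waypoints and replays them in order.

    `send_angles` is a hypothetical callback that delivers one set of
    angles to the arm; kept abstract here so the logic is testable
    without hardware.
    """

    def __init__(self, send_angles, num_servos=4):
        self.send_angles = send_angles
        self.num_servos = num_servos
        self.current = [90] * num_servos  # start at a neutral pose
        self.waypoints = []

    def set_angle(self, servo, angle):
        # Clamp to a typical hobby-servo range of 0-180 degrees.
        self.current[servo] = max(0, min(180, angle))
        self.send_angles(self.current)

    def record_waypoint(self):
        # Snapshot the current pose as the next waypoint.
        self.waypoints.append(list(self.current))

    def play(self, delay=0.5):
        # Step through every recorded waypoint in order.
        for wp in self.waypoints:
            self.send_angles(wp)
            time.sleep(delay)
```

In use, the app would call `set_angle` as the user drags a slider, `record_waypoint` when they like the pose, and `play` to run the whole sequence.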
The LittleArm is a great kit for STEM. It is a very low cost and highly flexible option for teachers to introduce students to STEM topics.
Please support us on Kickstarter so we can keep this project going.
When you think of a robot, what image do you have in your head? Probably a Terminator or a BB-8. What do these robots have in common? They are both multipurpose machines. They can fire guns as well as follow and protect you. They can pick up a wrench or interact with you.
But if you look at robots today, there is really not a single multipurpose robot in existence. Many are limited by their mechanics. A robot like the Roomba will never do anything more than vacuum the floor, because it has no hands to do anything else. More complex robots like Asimo, which could be multipurpose, are limited by software. Getting a beer is hard. So hard that many in the industry actively avoid it.
Nearly every robot around today is designed to do one thing, and maybe a variation thereof. Baxter does pick and place. Jibo is an interface. Roomba vacuums floors. BigDog carries gear behind soldiers. Anytime you see one of these robots doing anything more than that, it is either a research-lab demo or a canned puppet show.
But all of this is only due to software. The difficulty in making robots multitools is almost entirely in the software. Roomba could be a security guard if you just added a webcam, but it is not smart enough to navigate a house with intent.
Now let me draw a comparison to another very basic device that, thanks to software, can now read facial expressions and even move in some cases: the smartphone. The smartphone is just a computer with a few sensors and a vibration motor. It was designed to surf the web and make phone calls; that is pretty much it. Since those jobs overlapped, it was essentially a single-function device.
But with software the smartphone now does so much more. It entertains, interacts, and even moves. Things that were not part of its design.
This was made possible because the smartphone is inherently modifiable beyond its core functions. Its screen allows completely different interface layouts. Its sensors provide enough information to operate in a 3D world. And its portability allows it to be taken into different environments.
There is not a robot that has done what the smartphone did. There is not a robot that has core functionality but can become so much more. The closest stab at it so far is Pepper, and only because she has an app store.
Pepper is mobile and has a screen on her chest. Though she has arms, they are essentially for expression. This basically puts Pepper in the class of "smartphone on a stand." She is a newer, more interactive form of computing, but not much more, because she can't interact with the physical world in any legitimate way.
The smartphone was so successful because it was vanilla. You could eat it plain, but other people could add chocolate syrup and sprinkles. (Sorry for the analogy within the analogy.) There is no vanilla robot. Such a machine would have to have the core capabilities of being mobile and able to interact with the world physically, as well as an expandable interface method, like speech.
When such a robot comes into existence, software developers will be able to run with it and build applications that use the robot's core capabilities to do more.
Right now the industry wants to build a custom machine for each specific purpose. What it should be doing is creating a generic machine and writing custom software for each specific purpose. A vanilla robot that you can add sprinkles to.
Jerry has an entirely 3D printed gripper. We did this because we knew that for all the applications of Jerry a single gripper would not be viable. There would have to be many options which could be easily interchanged and modified.
Initially we designed the grippers to use a linear actuator and cable system similar to the rest of the arm. But that system proved unreliable, so we went with a high-strength servo.
The new design does not have as high a grip force as the original grippers, but that is not a big loss: many of the tasks Jerry performs rely on hooking or holding, so the actual gripping motion does not have to be very strong.
We are going to continue to develop the gripper for our applications, but it is available for purchase or download as is right now.
A nice timeline of our fictional robotic friends and nemeses.
ABB had this great infographic on industrial robots.
Robotics is a hugely multidisciplinary field. You can have an engineer working beside an animator, or a computer scientist sitting next to a biologist. The creation of artificial creatures requires so many skills that it is almost impossible to believe a single person could contain them all.
But that is simplistic thinking. A mechanical engineer does not know how every machine ever built works. Likewise, a roboticist does not have to understand how every kind of robot works. In the future, robotics will shift from having control engineers and mechanical designers to having degrees focused on walking robots and bio-robots. Already these specialties are emerging, but only in research, not as actual classes or degrees.
In either case, what is the bare minimum that constitutes a roboticist? If we were to create a course of study (as many universities have begun to do), what would it include? Here is our list.
At Slant we always focus on hardware first. We do this because hardware is the single largest constraint when building a robot. Hardware has cost associated with it, and it defines how useful the robot can be. Software can compensate and be readily changed. You build a structure on a solid foundation, and hardware is the foundation of robotics. If there is no hardware, you do not have a robot; you have a computer.
With all that said, computer science is vastly important. A robot is a thinking machine; if it can't think, it is not a robot. But "thinking" is a broad term. Creating a robot that "thinks" like an ant does not take a large amount of computer science experience, and yet that ant robot could be very useful. Robotics, again, is the application of a body more than the application of a brain.
Additionally, if you plan 10-20 years out, robotic AI capabilities will have evolved to the point where there is just seed software that you plant in any robot and it grows into the body, no coding required. Instead of needing a CS degree, you will build a robot shell and then insert "infant" software that controls the robot and learns to use the body it has been given.
And since computer intelligence is getting to the point where it can evolve on its own, there is less need for us to put effort there. Hardware cannot evolve. We have to do the design and build ourselves.
The biology requirement probably has a lot of engineers up in arms. But just as you must study history to understand current events, avoiding past mistakes and seeing what worked well, so biology is the precedent in robotics. Nature has created machines of incredible complexity using mechanisms we don't yet fully understand, but we can apply a few of its tricks.
Biology also spans the first two requirements: brain and brawn. Nature provides intelligence of varying degrees to use as models, as well as a wealth of bodies to mimic. Roboticists should be required to study an area of biology, ranging from psychology to genetics. These disciplines all add to the toolkit needed to create artificial creatures.
Today a roboticist begins as either a computer programmer or an engineer. But this results in people applying a very narrow view to a broad design problem. Robotics is too broad to be a programmer with no mechanical expertise, or an engineer with no knowledge of natural mechanisms. Certainly specialties will arise. But they will soon cease to be in the area of controls or computer science; instead they will become broad enough to encompass subsets of robots themselves. Even when that happens, the roboticist will still need three basic knowledge bases to perform their duties.
Whether creating nano robots or space rovers, a roboticist needs a decent command of mechanical design, computer science, and biology. If you want to be a roboticist, start in one of those areas. We highly recommend mechanical design. Computers will be programming themselves soon enough, but the invention of bodies for those computers will still be on us for a while.