Seems there is a race on.
A race to get a humanoid, (eventually) completely autonomous and fully independent ROBOT into your home. Promises are being made that 2026 will see the launch of at least some of these robots in a relatively primitive but functional form, perhaps largely tele-operated initially. You may be able to pay outright for one or pay a monthly fee; some have already touted prices, and one has even launched pre-orders.
Laws of Robotics
So why am I blogging about this? It’s largely because I’m a big fan of Asimov, having read the entire Robot series long ago. What I do clearly remember, and what Apple TV+’s “Foundation” series has reminded me of, are the original Three Laws of Robotics and how we got to the newer Zeroth Law. The three original laws are listed below:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
There is a very good reason for these laws: these fictional robots, and I’m certain the real ones coming to us soon, will be (or already are) stronger than a regular human. They will be built from metals and plastics, and will probably weigh as much as or more than a similarly sized human. They will not feel pain and will have hands capable of crushing yours to a pulp within microseconds. The force they will be able to deliver, in pounds per square inch, will be enough to overcome most humans physically, easily breaking any of your bones if they so choose.
Race to the Bottom, No Controls
To me it’s clear that all these companies, desperate to be the first or the best at getting a robot into your home or workplace, are not all that bothered about the potential behaviours or physical motions that may result in actual physical harm to a human, or even to your pets.
It’s the approach of the autonomous vehicle industry, where some human fatalities are treated as mere bumps in the literal road on the way to full self-driving capability. However, this ‘rise of the robots’ is, in my opinion, the more dangerous. The more cameras and sensors you place on a robot, the more expensive it becomes, both in the physical price and in the processing & analysis of all that extra incoming data. Will the robot be able to see behind it, or below it before placing its feet down? How will it track the movement of humans in its environment without error or blind spots?
Ask yourself: what controls will be put in place by all these different and heavily competing organisations to ensure the safety of humans and pets around these robots? Will they rely purely upon software and sensors, which are prone to bugs, errors & failures? What happens when a firmware update causes your robot to spin crazily with its arms out, knocking you out? I’d love to see the T&Cs you’ll have to agree to when you sign up for one.
The companies racing to humanoid-style robots so far include:
- 1x’s NEO Home Robot (order now, get it in 2026!) – 1x.tech/neo
- Tesla’s Optimus robot – tesla.com/en_eu/AI
- Figure AI’s Figure 02 – figure.ai
- Dyna Robotics – weirdly doesn’t have a www, but has x.com/DynaRobotics
- Boston Dynamics – come on, you’ve seen this one already bostondynamics.com/
- Agility Robotics’ Digit – agilityrobotics.com
- Apptronik’s Apollo – apptronik.com
- Unitree’s H1 – unitree.com
- UBTECH’s Walker S2 – ubtrobot.com
- PAL Robotics’ TALOS – pal-robotics.com
- Engineered Arts’ Ameca – engineeredarts.co.uk
- Hanson Robotics’ Sophia – amazingly, she has Saudi citizenship! hansonrobotics.com
- Physical Intelligence’s HiroBot – physicalintelligence.company/research/hirobot
- Toyota (yes, them!) have the T-HR3 – tri.global
- see References for a more complete list
Of the above, only 1x have announced that they will sell you one, and you can pre-order now: $20K outright or $500/month. Do not expect these companies to work together on a set of rules & standards for robotics in the home; each will decide its own competing path.
AI/Software companies creating the A(G)I for these robots:
Robots malfunction, don’t they?
I could write & write, but I will keep it brief. In the fictional world of Asimov’s robots, when one of them faced a conundrum of galactic proportions, it stopped thinking in terms of individual humans and began thinking in terms of groups of humans… you could say it became concerned with the ‘greater good’. This robot was R. Giskard, who had accidentally developed limited telepathic powers (a result of his creator’s daughter tinkering with his positronic brain), which led him to formulate the Zeroth Law:
- A robot may not injure humanity or, through inaction, allow humanity to come to harm.
His internal grappling with this new law led to his own positronic demise, yet he passed the law and his enhanced mental capability on to R. Daneel Olivaw, who continued to influence humankind for many thousands of years afterwards (for Foundation fans, this is who Demerzel originally was).
So what now with the robot to whom you gave free access to your home? Any robot in the home will be a set of complex mechanical and electrical machinery combined with A(G)I – so the possible problems are not purely physical: the robot’s AI could decide you need to be eliminated. Find this hard to believe? Here’s an example of a new primary robotics rule from the September 2010 ‘Principles of Robotics’ meeting held in the UK:
Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
There you go, there is the ‘EXCEPT’ principle. If we think you’re a bad person, or decide later that you are a bad person based on our viewpoint? Our robot will kill you. This proposed rule was created by the top robotics individuals in the UK, yet they still put in a huge exclusion clause… and an ambiguous one at that.
There is already plenty of precedent: tele-operated ‘robotic’ UAVs/drones have targeted and killed thousands of innocent civilians across the Middle East, South Asia and parts of Africa. Entire wedding parties have been obliterated based on incorrect HUMINT; often everyone killed was completely innocent, while at other times a single ‘valid’ target was taken out despite the many additional murders of innocents.
Not one single UAV or drone operator, or anyone in the command chain, has been charged with or found guilty of these clear cases of extrajudicial killing, i.e. MURDER. It’s just ‘oooops, we’ll try not to do it again’.
So you’re now going to trust that a company churning out these robots in order to enrich itself will have your best interests at heart? I wish you the best of luck.
So what DO we do?
We need an equivalent of Asimov’s laws; they need to be agreed globally and hardcoded into every single autonomous robot, without exception.
Why do I say this? If the war-fighting companies (why do we sanitise these orgs by calling them ‘defence’ companies?!) decide to create a domestic version of their battlefield robot, the only differentiator between a ‘strangle/suffocate’ order and an ‘unload the dishwasher’ order will be software code.
Frank Pasquale, in his great book “New Laws of Robotics” (2020), comes up with some interesting concepts for new laws; he clearly sees a future where robots complement humans in both the workplace and the domestic space. However, I don’t think he goes far enough.
A robot must never injure a human being or, through inaction, allow a human being to come to harm
Let’s take the First Law: it’s too ambiguous, imho, and allows someone else to define what ‘harm’ or ‘injure’ means. Is it only talking about the physical aspect? What about mental/emotional harm? We also expect the robot itself to evaluate when it might do harm, i.e. it’s marking its own homework!
Here’s my first re-draft of the First Law of Robotics for 2025:
A robot shall not cause or permit harm to any human being, physically, psychologically, or by neglect of safety, and shall actively defer to independently verifiable safety constraints beyond its own reasoning. In the event of uncertainty, malfunction, or conflicting directives, it must default to immediate safe deactivation or isolation.
This means that in creating humanoid robots we must architect them from the ground up to enforce the concept of safety, and we add a hard and fast rule of failing back to immediate deactivation to prevent harm. There’s a lot more to this, and better experts than me should be working on it.
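To make that ‘default to immediate safe deactivation’ idea concrete, here is a minimal sketch of the kind of interlock I mean. Everything here is hypothetical (the class, names and thresholds are mine, not any real robot’s firmware): an independent safety channel can veto the robot’s own reasoning, a silent or malfunctioning channel counts as unsafe, and the stop state latches until a human resets it.

```python
from enum import Enum
from typing import Optional


class State(Enum):
    ACTIVE = "active"
    SAFE_STOP = "safe_stop"  # actuators de-energised, robot isolated


class SafetyInterlock:
    """Hypothetical fail-safe: uncertainty, malfunction or disagreement
    between channels always defaults to SAFE_STOP (never the reverse)."""

    def __init__(self, max_missed_heartbeats: int = 3):
        self.state = State.ACTIVE
        self.missed = 0
        self.max_missed = max_missed_heartbeats

    def tick(self, planner_says_safe: bool,
             monitor_says_safe: Optional[bool]) -> State:
        """Called every control cycle.

        planner_says_safe: the robot's own reasoning (marking its own homework).
        monitor_says_safe: an independently verifiable safety channel;
                           None means the channel failed to report.
        """
        if self.state is State.SAFE_STOP:
            return self.state  # latched: requires external human reset
        if monitor_says_safe is None:
            self.missed += 1
            if self.missed >= self.max_missed:
                self.state = State.SAFE_STOP  # malfunction -> deactivate
            return self.state
        self.missed = 0
        # The independent channel can veto the planner, never vice versa.
        if not (planner_says_safe and monitor_says_safe):
            self.state = State.SAFE_STOP
        return self.state
```

The design choice mirrors the re-drafted law: the robot’s own reasoning can never override the external constraint, and once stopped it stays stopped until someone outside the system intervenes.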
The bottom line is that we shouldn’t let organisations whose overriding principles are profit & market share lead the charge on this. We know governments are slow to pivot to new technologies, but we need immediate action from the highest levels, agreed upon globally with next to no ambiguity.
Else humans will be harmed, including children, and for some companies such harm will be merely the price of doing business. Yet we call ourselves civilised.
References
- https://rossdawson.com/futurist/companies-creating-future/top-companies-rise-humanoid-robots/
- https://webarchive.nationalarchives.gov.uk/ukgwa/20210701125353/https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
- https://www.hup.harvard.edu/books/9780674975224
