Many years ago, visionary author Isaac Asimov laid incredible groundwork for the age to come. In I, Robot, he articulated three governing principles (laws) for his positronic robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
More than 40 years later, Asimov realized that technology must also support a greater goal, and he added a fourth (sometimes called the zeroth) rule:
4. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws are a foundation for applying technology well beyond Asimov’s books. Our team tackles the challenges of the new frontier and turns opportunity into reality — all while honoring Asimov’s fourth rule.