Intel Collaborative Research Institutes (ICRI)
Intel Collaborative Research Institutes are large, longer-term collaborations that jointly explore a new research space. The goal is to produce leading scientific results, ecosystem visibility, mindshare, and business impact for Intel and its ecosystem partners.
Each institute fosters collaboration between partner research organizations to advance the state of the art. Together with Intel Labs researchers on site, the participants design, prototype, and publish new and innovative research, and validate these ideas in real-world scenarios. The focus is on practical innovations that have the potential to enhance the security and resilience of real-world autonomous systems. To give partners the freedom to explore, we value success in many forms. Each institute has an on-site Intel team that supports and guides the research to ensure the institute's success.
About CARS:
Collaborative Autonomous & Resilient Systems (CARS) is the study of the security, privacy, and safety of autonomous systems that collaborate with each other. Examples include drones, self-driving vehicles, and collaborative systems in industrial automation.
CARS introduce a new paradigm to computing that differs from conventional systems in one very important way: they must learn, adapt, and evolve with minimal or no supervision. A fundamental question, therefore, is what rules and principles should guide the evolution of CARS. In natural life forms, this is achieved via natural selection, a random trial-and-error process that, over time, ensures that only the fittest survive. That approach, however, may not be acceptable for man-made CARS, so alternative approaches to guiding their evolution are necessary.
The key research goal is to ensure that CARS uphold the “Do No Harm” principle.
This raises related security questions in multiple research areas:
CARS Stability: A key requirement for CARS is stability. Like any well-designed control system, CARS must not spiral into undesirable states (e.g., chain reactions that drive the CARS into harmful states). The goal is to regulate autonomous behavior so that it stays within acceptable bounds, or to detect and mitigate behavior that falls outside those bounds.
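One common mechanism for this kind of regulation is a runtime monitor that checks the system's state against explicit bounds on every control step. The following Python sketch is a minimal, hypothetical illustration; the state variables, bounds, and mitigation action are all assumptions made for the example, not part of any actual CARS design.

# Minimal sketch of a runtime bounds monitor (hypothetical example).
# The state variables, bounds, and mitigation are illustrative assumptions.

BOUNDS = {
    "speed_mps": (0.0, 20.0),              # assumed safe speed envelope
    "separation_m": (5.0, float("inf")),   # assumed minimum distance to peers
}

def check_bounds(state: dict) -> list[str]:
    """Return the names of state variables that violate their bounds."""
    violations = []
    for name, (low, high) in BOUNDS.items():
        if not (low <= state[name] <= high):
            violations.append(name)
    return violations

def mitigate(state: dict, violations: list[str]) -> None:
    # Placeholder mitigation: fall back to a conservative safe mode.
    print(f"Out of bounds: {violations} -> entering safe mode")

def monitor_step(state: dict) -> None:
    """Detect out-of-bounds behavior and trigger a mitigation."""
    violations = check_bounds(state)
    if violations:
        mitigate(state, violations)

monitor_step({"speed_mps": 23.4, "separation_m": 12.0})

In a real system, the mitigation would be a verified fail-safe action (e.g., braking or hovering in place) rather than a log message.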
CARS Compliance: How should a CARS behave, and how do we ensure that it will do so? The first challenge is how to formally specify the behavior of a CARS, and how to ensure that the specification is consistent (i.e., not contradictory) and meets the expectations of regulators and users. The second challenge is how to show that the constructed CARS will behave and remain in compliance with that specification. What are the appropriate bounds of autonomy?
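One concrete way to make both challenges tractable is to express the specification as machine-checkable rules, check the rule set itself for contradictions, and then check observed behavior against it. The Python sketch below is a minimal, hypothetical illustration; the interval-constraint rule format and the example rules are assumptions, not an actual CARS specification language.

# Minimal sketch: a spec as interval constraints, checked for consistency
# and then checked against an observed behavior trace (hypothetical example).

# Each rule constrains one quantity to an interval: name -> (low, high).
SPEC = {
    "altitude_m": (10.0, 120.0),   # assumed regulatory altitude band
    "speed_mps": (0.0, 15.0),
}

def spec_is_consistent(spec: dict) -> bool:
    """A rule set is contradictory if any interval is empty (low > high)."""
    return all(low <= high for low, high in spec.values())

def trace_complies(spec: dict, trace: list[dict]) -> bool:
    """Check every observed state in the trace against every rule."""
    return all(
        low <= state[name] <= high
        for state in trace
        for name, (low, high) in spec.items()
    )

trace = [{"altitude_m": 50.0, "speed_mps": 12.0},
         {"altitude_m": 130.0, "speed_mps": 9.0}]   # second state violates

assert spec_is_consistent(SPEC)
print(trace_complies(SPEC, trace))  # False: altitude exceeds 120 m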
CARS Accountability: CARS don’t exist in a vacuum; they become part of our everyday social infrastructure. A CARS must therefore account for the human factors it affects; in particular, it must be accountable to some human entity. How do we build CARS that support ethics, legal liability, and audit? How much responsibility should the CARS undertake, and how much should rest with the human in the loop? How can this be specified, validated, and enforced? A CARS, by definition, can make decisions about its own operation, so there is a real need for a human entity to reconstruct and interpret this decision pathway to determine what happened and why.
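A common building block for such accountability is an append-only audit log that records, for every autonomous decision, the inputs seen, the action chosen, and the rationale, so that a human can later replay the decision pathway. The Python sketch below is a minimal, hypothetical illustration; the record fields and the example decision are assumptions.

# Minimal sketch of an append-only decision audit log (hypothetical example).
# Field names and the example decision are illustrative assumptions.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass(frozen=True)
class DecisionRecord:
    actor: str         # which CARS component decided
    inputs: dict       # sensor/context inputs the decision saw
    action: str        # what the system chose to do
    rationale: str     # human-readable reason for the choice
    timestamp: float = field(default_factory=time.time)

audit_log: list[DecisionRecord] = []

def record_decision(rec: DecisionRecord) -> None:
    """Append a record; a real system would use tamper-evident storage."""
    audit_log.append(rec)

record_decision(DecisionRecord(
    actor="planner",
    inputs={"obstacle_ahead": True, "speed_mps": 8.0},
    action="emergency_stop",
    rationale="obstacle detected within braking distance",
))

# Later, a human auditor can replay "what happened and why":
for rec in audit_log:
    print(json.dumps(asdict(rec), indent=2))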
CARS Risk Management: How do we quantify the risks of CARS and their impact? How do we decide whether those risks are acceptable?
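As a simple illustration, one standard way to quantify risk is as expected loss: the sum, over identified hazards, of likelihood times impact, compared against an acceptance threshold. The Python sketch below is a minimal, hypothetical example; the hazards, probabilities, impacts, and threshold are invented for illustration and carry no real-world meaning.

# Minimal sketch of expected-loss risk scoring (hypothetical example).
# Hazards, probabilities, impacts, and the threshold are assumptions.

# Each hazard: (probability per mission, impact cost in arbitrary units).
HAZARDS = {
    "collision_with_peer": (0.001, 500_000),
    "loss_of_link": (0.02, 10_000),
    "geofence_breach": (0.005, 50_000),
}

ACCEPTABLE_RISK = 1_500  # assumed acceptance threshold, in impact units

def expected_loss(hazards: dict) -> float:
    """Risk as expected loss: sum of probability * impact over all hazards."""
    return sum(p * impact for p, impact in hazards.values())

risk = expected_loss(HAZARDS)
print(f"expected loss = {risk:.0f}, acceptable = {risk <= ACCEPTABLE_RISK}")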