A blog about embedded computing systems, cyber-physical systems, and Internet of Things. We concentrate on technical topics but also discuss their implications.
Thursday, December 26, 2013
Embedded Systems Channel on YouTube
I have started to record videos for my book Computers as Components. You can find these videos on my new YouTube channel. Click on this link or search for "Marilyn Wolf embedded" to get there. The videos make use of the PowerPoint slides you can find on the book's Web site, but they are arranged into short, 5-10 minute chunks that each focus on one or two topics. I have put together topic playlists to help organize the videos. I will need a few months to fill out the videos for the entire book, so stay tuned!
Thursday, December 19, 2013
Credit Card Swiper Attack
News sources, including CNN's story here, are reporting that a vast amount of credit card data has been stolen from Target's customers. It appears that the card swipers were hacked to grab data from the customers' cards.
Monday, December 16, 2013
Big Signals
Big Data is a popular buzzword in computer science and with good reason. The analysis of large data sets is both a difficult problem and one with a wide range of applications; the selection of ads for blogs based on their content and the user's activity is just one example.
But Big Data systems such as Google Cloud are designed for traditional database applications. They aren't built to handle time-oriented data. Cyber-physical systems, in contrast, use time as a fundamental concept; time-series data and signals are two terms for the data they produce.
Of course, entire fields---signal processing chief among them---have sprung up to develop the mathematics of analyzing signals. But the design of large computing systems that can efficiently handle time-series data has lagged behind.
That's where Big Signals comes in. We need cloud computing systems that are designed to manage signals and time-series data. We process signals differently than we process, say, sales transactions. Cloud systems that operate on signal-oriented data will want to process small windows of signals in real time to identify important events; they will also want to analyze historical data at multiple time scales in order to identify larger trends.
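To make the real-time half of that concrete, here is a minimal sketch of window-based event detection over a signal stream. The window length, the threshold rule, and the detect_events name are illustrative assumptions, not features of any existing cloud platform.

```python
# A sliding window flags samples that deviate sharply from recent history.
from collections import deque
from statistics import mean, stdev

def detect_events(samples, window=50, k=3.0):
    """Flag samples more than k standard deviations away from the
    mean of a sliding window of recent history."""
    history = deque(maxlen=window)
    events = []
    for t, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) > k * sigma:
                events.append((t, x))  # an "important event" at this sample
        history.append(x)
    return events
```

The same loop structure works whether the samples arrive from a live sensor feed or are replayed from storage, which is exactly the flexibility a signal-oriented cloud system needs.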
Here are a few examples of how to use Big Signals in the cloud. Farmers may use historical weather data to figure out how to plant, water, and feed their crops. Medical teams may use cloud-based systems both to monitor a patient's current state and to run longer-term analyses for diagnosis. Energy systems may use historical load data to manage energy generation; they can also use historical weather data to predict the generation capacity available from wind and solar.
Existing cloud computing systems are a good start, but we need to understand how data schemas, access scheduling, and other aspects of system design must change to handle the challenges of Big Signals.
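As one hedged illustration of the schema question, a signal-oriented store might make the timestamp a first-class field and precompute rollups at coarser time scales. The Sample and rollup names below are hypothetical, not drawn from any existing system.

```python
# Sketch: raw time-stamped samples plus multi-scale aggregation.
from dataclasses import dataclass

@dataclass
class Sample:
    sensor_id: str
    timestamp_ns: int   # time is a first-class field, not an afterthought
    value: float

def rollup(samples, bucket_ns):
    """Aggregate raw samples into coarser buckets (per-minute, per-day, ...)
    so that long historical queries never have to scan the raw stream."""
    buckets = {}
    for s in samples:
        key = (s.sensor_id, s.timestamp_ns // bucket_ns)
        buckets.setdefault(key, []).append(s.value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}
```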
Sunday, December 15, 2013
CPS in the Clouds
Cloud computing for cyber-physical systems is in vogue; see, for example, this NSF-sponsored workshop on the topic. The idea behind cloud CPS is much the same as for information technology---move some important operations to remote server farms. This idea leverages both efficient servers and ubiquitous Internet.
However, the technical challenges behind cloud CPS are different from, and arguably harder than, those of database-oriented operations. Control systems have deadlines. If your control system is in the clouds, then each control sample has to complete a full round trip: through the Internet to the cloud server, computation in the cloud, then back to the physical plant. The basic physics of communication means that we won't be able to put very high-rate, low-latency control loops in the cloud. Bandwidth isn't the only constraint---latency is bounded from below by the speed of light.

But many control loops are slow enough to be put in the clouds. Many hierarchical control systems combine some very fast control loops with supervisory control that runs much more slowly. My favorite example is traffic control. The decisions required to time the lights could be performed in the cloud; rather than have each city buy and maintain its own traffic flow system, all cities could share a more sophisticated control system located in the cloud.
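To see why supervisory loops fit and fast loops don't, here is a back-of-the-envelope sketch. The distances, the 5 ms compute allowance, and the fiber slowdown factor are illustrative assumptions, not measurements of any real deployment.

```python
# Can a control loop tolerate a cloud round trip? A lower-bound estimate.
C = 3.0e8            # speed of light in vacuum, m/s
FIBER_FACTOR = 1.5   # signals in fiber travel roughly 1.5x slower

def min_round_trip_s(distance_m, cloud_compute_s=0.005):
    """Lower bound on plant -> cloud -> plant time."""
    propagation = 2 * distance_m * FIBER_FACTOR / C
    return propagation + cloud_compute_s

def fits_in_cloud(loop_rate_hz, distance_m):
    """A loop fits if a full round trip completes within one sample period."""
    return min_round_trip_s(distance_m) < 1.0 / loop_rate_hz

print(fits_in_cloud(1000, 1_000_000))  # 1 kHz loop, 1000 km away: False
print(fits_in_cloud(0.1, 1_000_000))   # traffic-light timing loop: True
```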
People often assume that cost savings is the primary motivation for putting cyber-physical systems in the cloud. In fact, reliability is an even greater motivation. Designing a highly available server farm is a challenging task. A typical large server farm consumes as much electricity as a city of 50,000 people, but it crams all that energy into a space the size of a large conference room. All the heat generated by those computers makes for a very challenging heat transfer problem. Not only does the heat cost money to remove with cooling systems, it's also a major source of failures as components overheat.
If you run a safety-critical, high-reliability cyber-physical system, you should seriously think about putting your SCADA (supervisory control) system in the cloud, preferably in a cloud run by someone who does it full time. The challenge of running a highly reliable server system is big enough that it shouldn't be left to amateurs.
Saturday, December 7, 2013
Medical Cyber-Physical Systems
We have had medical electronic devices for several decades, and they have made a huge difference in medical care. A nurse once explained to me how he used to set up a drip for a patient. It required a lot of manual tweaking of the tubes and the drip rate. And this was in the 1970s, not so long ago. Continuous monitoring instruments have also made a huge difference in patient care.
We are in the midst of a new round of medical device innovation. This time, the innovation emphasizes systems. Networked devices have existed for a while, but largely with proprietary interfaces. The push to digitize and integrate medical records into a unified system is leading manufacturers toward an increasingly cyber-physical approach to medical device design.
Several proposals have been developed for the integration of medical devices, including MD PnP (Medical Device Plug-and-Play), the Medical Device Coordination Framework, and the University of Pennsylvania's Medical Application Platform (MAP) architecture. Several common themes emerge from these efforts: frameworks that support closed-loop design; safe system operation even when individual devices fail; and quality-of-service (QoS) guarantees for real-time data.
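As a hedged sketch of the last theme, a coordinator might treat a device reading as invalid once its freshness deadline passes, so the system degrades safely when a device stalls. The Reading class and its fields are hypothetical; this is not the MD PnP or MAP API.

```python
# Sketch: enforce a freshness (QoS) contract on device readings.
import time
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    value: float
    timestamp_s: float
    max_age_s: float   # QoS contract: how stale this reading may be

def safe_value(reading, now_s=None):
    """Return the value only while it meets its freshness guarantee;
    otherwise signal the failure rather than acting on stale data."""
    now_s = time.monotonic() if now_s is None else now_s
    if now_s - reading.timestamp_s > reading.max_age_s:
        return None    # caller must fall back to a safe state
    return reading.value
```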
Patient records have become much more integrated and accessible to a broader range of medical personnel. A decade ago, a lot of hospitals moved records on paper from one part of the hospital to another---none of their data was digitized. As doctors learn to use these integrated systems, we can expect that they will find new applications that require new capabilities.
Monday, December 2, 2013
Thoughts on Embedded Computer Vision
The embedded computer vision space seems to be heating up. The OpenCV library has been used to develop computer vision applications on workstations and is increasingly used on embedded platforms as well. The new OpenVX standard from the Khronos Group provides a high-level, platform-independent API for accelerating computer vision functions.
What markets need computer vision? Digital cameras use quite a few vision functions---for example, face detection drives focus and exposure compensation. Surveillance systems use computer vision to alert operators to important events. Cars use cameras to analyze both the scene outside the car (pedestrian detection, lane following, and so on) and the scene inside the car (driver monitoring, for example). Gesture control is now commonplace in video gaming systems and is poised to move into other types of products as well.
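For a feel of the first example, here is a minimal face-detection sketch using OpenCV's Haar cascade classifier; a camera would feed the detected regions to its autofocus and exposure logic. The photo.jpg path is a placeholder, and this assumes the opencv-python package.

```python
import cv2

def detect_faces(image_path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Returns (x, y, w, h) rectangles around detected faces.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in detect_faces("photo.jpg"):
    print(f"face at ({x}, {y}), size {w}x{h}")
```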
Sunday, December 1, 2013
Big, Embedded Software
I just found this interesting article in Aviation Week on the software for the 787. The problem they describe seems to center on requirements: the avionics software sends too many alerts. Because this plane is much more heavily instrumented than its predecessors, it has a lot of data at its disposal. I suspect that the designers didn't think to write requirements specifically about the sensitivity of the alert system: how many alerts per hour, and so on. The traditional focus in the design of such systems is on ensuring that a particular alert is generated.

But as we move to more heavily instrumented systems across the board, we need to think more systematically about what we do with all that sensor data. We are instrumenting not just airplanes but buildings, factories, and roadways. The analysis tools for all these systems must balance two tasks: making sure that important alerts are delivered quickly to the appropriate authority, and ensuring that events are accurately tagged with the appropriate level of alert. A well-known phenomenon of alert-based systems is that too many false alarms cause human operators to disregard all alerts.

We won't get people completely out of the loop for quite some time. Even if we do manage to build totally autonomous systems, they will face the same problem of properly discriminating between important and unimportant events.
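One way to state such a requirement is as an alert budget: critical alerts always go through, while low-severity alerts are rate-limited. The severity labels and the ten-per-hour budget below are illustrative assumptions, not avionics practice.

```python
# Sketch: rate-limit minor alerts so operators aren't trained to ignore them.
from collections import deque

class AlertGate:
    def __init__(self, max_minor_per_hour=10):
        self.budget = max_minor_per_hour
        self.recent = deque()  # delivery times of recent minor alerts

    def should_deliver(self, severity, now_s):
        if severity == "critical":
            return True        # never suppress the alerts that matter
        while self.recent and now_s - self.recent[0] > 3600:
            self.recent.popleft()
        if len(self.recent) < self.budget:
            self.recent.append(now_s)
            return True
        return False           # log it, but don't interrupt the operator
```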