Dr. Hadjicostis' Areas of Research
1) Fault-Tolerant Combinational/Sequential Systems and Networks via Error Control Coding
A principal focus of Dr. Hadjicostis' research is the area of fault-tolerant dynamic systems and networks. A major challenge in providing fault tolerance to such systems/networks is that a single undetectable fault (in a component, link or node) at any time step can have repercussions on their future overall functionality, even if the cause of the fault later disappears and the system returns to its normal operation. The reason is that the internal state of the system may become corrupted in a way that affects the system's future state evolution and behavior. As the complexity of modern systems and networks increases, the development of techniques for detecting, handling and correcting faults in such dynamic environments, perhaps by taking advantage of their structural dynamics and/or interconnection topology, becomes imperative. This is particularly useful in distributed systems and networks, where the malfunctioning of certain components/links needs to be detected and identified by tracking its effects at different points in time and at different components of the system.
Within this line of work, Dr. Hadjicostis focuses, in particular, on the development of a set of criteria that can be used as guidelines for the construction of practical fault-tolerant dynamic systems and networks, as well as on the theoretical aspects pertaining to the fundamental limits and tradeoffs involved. One of the goals of Dr. Hadjicostis' research is to relax the common assumption that checking mechanisms be fault-free and to construct reliable dynamic systems (networks) out of unreliable components. These unreliable components (gates, processing elements, links or nodes) could be produced using novel/emerging technologies (e.g., quantum systems) or current technologies (e.g., silicon-based technology) with relaxed constraints.
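One classical way to make such error-control-based checking concrete is to protect the state update of a linear discrete-time system with a checksum (parity) computed through a redundant path, in the spirit of algorithm-based fault tolerance. The sketch below is a minimal illustration of this general idea under assumed parameters (the system matrix and checksum vector are arbitrary choices), not a reproduction of Dr. Hadjicostis' specific constructions:

```python
import numpy as np

# Linear discrete-time system x(k+1) = A x(k), protected by a parity check:
# a checker independently computes c^T (A x(k)) and compares it with
# c^T x(k+1) produced by the (possibly faulty) main computation.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
c = np.array([1.0, 1.0])      # checksum (parity) vector, chosen arbitrarily
cA = c @ A                    # precomputed redundant row: c^T A

def step_with_check(x, fault=None):
    """Advance the state one step; return (x_next, fault_detected)."""
    x_next = A @ x
    if fault is not None:          # inject an additive fault into the new state
        x_next = x_next + fault
    syndrome = c @ x_next - cA @ x  # zero iff the parity check passes
    return x_next, bool(abs(syndrome) > 1e-9)

x = np.array([3.0, 4.0])
_, detected_ok = step_with_check(x)                               # fault-free
_, detected_bad = step_with_check(x, fault=np.array([0.0, 1.0]))  # corrupted
```

A single additive fault outside the null space of the checksum row produces a nonzero syndrome and is detected; richer codes (more parity rows) allow identification and correction rather than mere detection, which is where the tradeoffs mentioned above arise.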
2) Fault-Tolerant Distributed Algorithms for Consensus and Coordination
This line of work addresses the increasingly important problem of robust/reliable coordination and control of geographically distributed, complex, real-time systems over shared cyber-infrastructures. In their most general form, these cyber-infrastructures (which include both wireless and wired broadband networks) can be viewed as the backbone for information exchange between various interacting system components (nodes), such as sensors, actuators, and computer-based controllers, which are typically required to enforce coordination and regulation tasks in a distributed manner by utilizing existing, possibly heterogeneous, communication links (edges). When attempting to perform distributed consensus and coordination in these emerging systems, a major challenge is the possibility that one or more communication links may fail (e.g., because of increased noise or congestion in the communication channel) and/or one or more components may not behave according to protocol (e.g., because of a transient/permanent fault or due to a malicious attack). The fact that the system may involve numerous components and links implies that the likelihood of a fault increases; at the same time, however, the fact that systems of this type typically do not rely on a single component (i.e., there is no single point of failure) gives hope for sophisticated designs of distributed schemes that can detect/identify the fault, and eventually work around it, so that the distributed system seamlessly continues to operate as expected. It turns out that the underlying graph that describes the communication links (edges) between system components (nodes) plays a significant role not only in the efficiency of the distributed control/coordination algorithms but also in their robustness.
3) Monitoring, Diagnosis and Control in Complex Digital Systems and Heterogeneous Networks
Another focus of Dr. Hadjicostis' research is the development of (possibly distributed) monitoring and testing strategies for complex networks and systems. Effective schemes need to detect and identify a large class of abnormal conditions (or failures), while avoiding excessive hardware overheads and costs. Due to the complexity and heterogeneity of modern systems and networks, it is imperative that monitoring schemes be able to overcome incomplete or erroneous information supplied by imperfect components. This research has important consequences for the real-time analysis and control of communication, transportation, and other critical networks; it also has important ramifications for digital system testing and verification.
4) Discrete Event Systems for System Automation
This part of Dr. Hadjicostis' work focuses on the study of discrete event systems (DES), i.e., systems that are characterized by discrete and qualitative changes of (possibly symbolic) state values caused by the occurrences of discrete events (as opposed to changes at the ticks of a clock, as in discrete-time systems, or continuous changes, as in continuous-time systems). In particular, Dr. Hadjicostis has focused on the analysis, monitoring (e.g., fault diagnosis), verification of various properties of interest (e.g., diagnosability, opacity), and supervisory control of such systems. Dr. Hadjicostis is also interested in algebraic techniques, which are rather powerful in providing insights and constructions for many practical problems that arise in DES applications.
5) Other interests: Coding, Graph Theory, Embedded Architectures and Algorithms for Signal Processing
Partial List of Research Projects
Ongoing: Coordination for Control of Distributed Systems
The proposed research develops theory and techniques for communication for control of distributed systems. Such systems emerge through the increasing use of cyber-infrastructures as backbones for information exchange between sensors, computer-based controllers and actuators, in order to enforce coordination and regulation tasks. Examples include Qatar gas and oil distribution networks, electricity distribution in the Gulf region, and traffic networks. The exchange of information between sensors, controllers and actuators in such systems often occurs over digital communication channels, subject to power and rate constraints, that may introduce noise, delays, or other disturbances. Reliable and timely exchange of information necessitates the integration of techniques from various disciplines, ranging from information and communication theory to control of stochastic dynamical systems and cooperative decision-making. The main emphasis of the proposed research is on communication issues for real-time control of distributed systems, in order to achieve a common performance objective via distributed decision-making. The proposed project is multidisciplinary and utilizes tools from Information Theory (e.g., capacity and compression of information), Control Theory (e.g., filtering theory, and stochastic optimal control via limited-rate feedback), and Multi-Agent Team Theory (e.g., mathematical models for cooperative decision-making under different information structures).
Joint project with Profs. P. R. Kumar and C. Georghiades of Texas A&M University (College Station, US), J. Boutros of Texas A&M University (Qatar), and C. D. Charalambous of the University of Cyprus (Cyprus).
Publications related to this project can be found in Publications.
Ongoing: Increasing Crop Biomass by Uncovering the Circadian Clock Network using Dynamical Models
The circadian clock is an internal timing system that allows plants to predict daily and seasonal changes in light and temperature, and thus to adapt photosynthesis, growth, and development to external conditions. The core oscillator is well understood in the model plant Arabidopsis; however, relatively little is known about the dynamic effects of the clock on the agronomic behaviour of crop plants. This project aims to model the circadian clock of the crop barley and its effects on the transcriptome, metabolome and phenotypic performance. To this end, the project will adapt tools from the fields of Control Systems and Machine Learning to learn how species in complex networks regulate each other and how these regulations vary in response to genetic or environmental changes. In particular, the project will utilize the Nu gap metric, which has wide utility in understanding changes in network connections due to a variety of biological perturbations, ranging from the effects of drugs on human cells to the impact of environmental changes on crop performance. The Nu gap identifies differentially expressed systems obtained in different environments or genotypes, as opposed to the standard and simpler tools that focus on differentially expressed genes. First, the project aims to develop and improve biological modelling and Nu gap analysis as a practical tool for biology, using simulated datasets of increasing complexity. Then, the Nu gap model will be applied in order to obtain linear and nonlinear causal dynamic relationships using experimental time-series datasets from the model plant Arabidopsis. Finally, Nu gap analyses will be used to define the barley core oscillator and its effects on the transcriptome, sugar metabolism and agronomic performance using barley clock mutants. The resulting models will allow predicting plant performance in response to genetic perturbations of the clock. Understanding the circadian clock of the model crop barley and its effects on important agronomic traits may have great impact on the precision breeding of barley and related crops.
Joint project with Profs. J. Goncalves of the University of Luxembourg (Luxembourg), M. von Korff of Heinrich Heine University (Dusseldorf, Germany), and L. Ljung of Linkoping University (Linkoping, Sweden), as well as Drs. M. Spiller of Syngenta Seeds GmbH (Bad Salzuflen, Germany) and Clemens Ostrowicz of the University of Luxembourg (Luxembourg).
Publications related to this project can be found in Publications.
Ongoing: Distributed Control Strategies with Application to Robust Fractional Order Controllers for Isotope Separation
Emerging networked control systems are vulnerable to faults caused by random disturbances due to physical or environmental conditions (e.g., due to uncertainties introduced by the underlying communication infrastructure). At the same time, however, the distributed nature of such systems and the existence of diverse interconnectivity between multiple sensors, computational elements, and actuators can potentially (under proper design) be exploited to provide robustness and tolerance to abnormalities introduced by faults (caused by operational, environmental, or other abnormalities). The proposed research addresses the problem of fault-tolerant monitoring and control of distributed systems, with application to the highly important field of distillation and, in particular, isotopic separation by distillation. The importance of the project stems, on the one hand, from its applied, practical nature (focused on a field of international interest, such as distillation and isotope separation) and, on the other hand, from frontier research related to the development of distributed, fault-tolerant robust control strategies, with potential applicability to a multitude of emerging embedded systems (ranging from smart grids and traffic networks to automotive control and sensor networks). From this point of view, this bilateral cooperation project (Cyprus-Romania) will address key control challenges of pressing relevance.
Joint project with Prof. Eva-Henrietta Dulf of the Department of Automation of the Technical University of Cluj-Napoca.
Publications related to this project can be found in Publications.
Ongoing: Compositional Techniques for Analysis of Safety Critical Interconnected Systems
The proliferation of digital systems and networking technologies over the last few years has revolutionized many aspects of the scientific and commercial world, and has greatly affected daily life functions. Emerging “cyber-infrastructures” and “cyber-physical systems” are obtained by composing elementary modules (subsystems) according to some established rules or protocols (interfaces), and include both purely discrete systems (such as electronic banking and governance applications, grid computing environments, digital systems on a chip, etc.) as well as systems that involve both discrete and continuous aspects (such as traffic networks of various sorts, embedded electronic devices, real-time systems, power grids, etc.). In general, the analysis of the resulting monolithic system becomes intractable very quickly (its complexity typically increases exponentially with the number of constituent modules) and can be further exacerbated by the interaction of components with both discrete and continuous dynamics. Realizing the potentially simplifying role of the underlying modularity, several researchers have started investigating the modular analysis (e.g., verification or testing) and operation of such systems. What is largely absent from these discussions, however, is a system-theoretic framework for addressing important concerns about safety, reliability/dependability, and security, a deficiency that is inhibiting the true proliferation of these technologies in emerging applications.
The proposed project will focus on modular discrete event systems (DES), such as interconnected finite state machines or Petri nets (composed of several modules that interact via shared transitions or places), and will investigate the role of (possibly physical) coupling and its use in guiding the design of appropriate interfaces and control laws so as to achieve safety, reliability/dependability, and security objectives. The immediate goal of the project will be to gain insight about systems with purely discrete dynamics, such as finite automata and Petri nets, and to understand how their analysis and the verification of various properties of interest can benefit from the underlying modularity of such systems. The long term hope is that the project will also provide insights regarding the extension of these methodologies to hybrid systems (that involve both discrete and continuous dynamics).
Joint project with Prof. Alessandro Giua and Prof. Carla Seatzu of the Department of Electrical and Electronic Engineering (DIEE) of the University of Cagliari.
Publications related to this project can be found in Publications.
Potential Thesis Projects for PhD Students:
If you are a current or prospective graduate student at UCY, interested in the work pursued in our group, you can get a fairly good idea of my research interests by looking at my website and, in particular, my publications. If you find a topic/publication that interests you, feel free to contact me, but please include the reasons why you are interested in the topic, any relevant past experience that you might have, and a brief CV (including past projects and publications, related or unrelated).
Please keep in mind that I receive many emails from prospective students and I apologize that I do not have time to reply to all of them.
HMM Classification and Biosequencing
Earlier work on probabilistic failure diagnosis in finite state machines led quite naturally to the problem of classification of hidden Markov models (HMMs). This classification becomes more challenging when errors can corrupt the output sequence (e.g., via label insertions, deletions, and transpositions, which can be used to model various ways of data corruption). By exploiting the structure of the underlying HMMs, we have been able to obtain bounds on the a priori probability of HMM misclassification, i.e., the probability that we observe a sequence that is more likely generated from a model other than the one that actually generated it. Specifically, we have been able to show that, under certain (easy to check) conditions on the graphical structure of the HMMs, the bound on the probability of HMM misclassification goes down exponentially with the length of the observation sequence. These bounds on the probability of misclassification can also be used to characterize (bound) the “distance” or “difference” between two HMMs and can find application in many areas where HMMs are used. For example, existing work for protein structure prediction relies very heavily on identifying homologous sequences with known structure to be used as templates; one can therefore potentially apply the aforementioned techniques in order to evaluate our ability to discriminate between two different HMM templates. One question of particular importance is the computational complexity associated with the use of our methods in biosequencing applications. More generally, within the context of this project, we are interested in exploiting the implications of our classification algorithms and obtaining performance bounds on problems of sequence similarity, homology and alignment using probabilistic models.
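As a toy illustration of the classification task described above, the forward algorithm can be used to score an observation sequence under two candidate HMMs and pick the more likely one. The two 2-state models and all parameters below are made up for illustration; they are not tied to any specific result of this project:

```python
# Score an observation sequence under two hypothetical 2-state HMMs with the
# forward algorithm, and classify it to the model with the higher likelihood.

def forward_likelihood(pi, T, E, obs):
    """P(obs | model) via the forward recursion.
    pi: initial distribution, T: transition matrix, E: per-state emission probs."""
    n = len(pi)
    alpha = [pi[s] * E[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * T[r][s] for r in range(n)) * E[s][o]
                 for s in range(n)]
    return sum(alpha)

pi = [0.5, 0.5]
T  = [[0.7, 0.3], [0.3, 0.7]]
E_A = [{'a': 0.9, 'b': 0.1}, {'a': 0.6, 'b': 0.4}]   # model A favors 'a'
E_B = [{'a': 0.1, 'b': 0.9}, {'a': 0.4, 'b': 0.6}]   # model B favors 'b'

def classify(obs):
    """Return the label of the model more likely to have generated obs."""
    like_A = forward_likelihood(pi, T, E_A, obs)
    like_B = forward_likelihood(pi, T, E_B, obs)
    return 'A' if like_A > like_B else 'B'
```

For instance, `classify(list('aaaa'))` returns `'A'`. The log-likelihood gap between the two models typically grows with the sequence length, which is the mechanism behind misclassification bounds that decay exponentially in the length of the observation sequence.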
Diagnosis using Belief Propagation Algorithms
This project investigates how belief propagation algorithms can be adapted to the problem of diagnosing multiple diseases/faults/abnormalities based on a set of observed symptoms or findings. The work is motivated by a broad range of applications (such as network security, fault detection/isolation, and medical diagnosis). The core problem is described by a weighted bipartite graph that consists of a set of components (or diseases), a set of alarms (or symptoms) and a set of dependencies between them. The weighted connections represent the causal relationships from the diseases to the symptoms, and the goal is to find the combination of components that has the maximum a posteriori probability (MAP) based on the alarms (symptoms/findings) observed and the a priori probabilities of components, alarms and connection failures. Since the problem is in general NP-complete, the project studies computationally efficient iterative algorithms that can provide good solutions. In addition, by analyzing the structure of the underlying bipartite graph, analytical bounds on the performance of these algorithms can be obtained (e.g., in terms of the probability of making an erroneous diagnosis with respect to the MAP solution). We are particularly interested in applying our algorithms to the Quick Medical Reference (QMR) database, with the ultimate goal of providing decision support for medical diagnosis. We are also using theoretical machinery to obtain analytical bounds on the performance of these algorithms and to characterize maximally discriminatory tests.
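For intuition, the sketch below computes the exact MAP explanation by brute-force enumeration on a tiny hypothetical noisy-OR network (all priors, weights, and the leak probability are made-up numbers). Belief propagation is meant to approximate exactly this computation efficiently once enumeration over all disease subsets becomes intractable:

```python
from itertools import combinations

# Hypothetical bipartite noisy-OR model: 2 diseases, 3 symptoms.
priors = [0.1, 0.01]                    # a priori disease probabilities
# weights[s][d]: probability that disease d, when present, causes symptom s
weights = [{0: 0.9}, {0: 0.8}, {1: 0.9}]
leak = 0.01                             # chance a symptom fires on its own
observed = [1, 1, 0]                    # observed symptom values

def joint(disease_set):
    """Unnormalized posterior: prior of the set times symptom likelihoods."""
    p = 1.0
    for d, pr in enumerate(priors):
        p *= pr if d in disease_set else (1 - pr)
    for s, val in enumerate(observed):
        p_off = (1 - leak)
        for d, w in weights[s].items():
            if d in disease_set:
                p_off *= (1 - w)        # noisy-OR: each cause fails independently
        p *= (1 - p_off) if val == 1 else p_off
    return p

all_sets = [frozenset(c) for r in range(len(priors) + 1)
            for c in combinations(range(len(priors)), r)]
map_set = max(all_sets, key=joint)      # most probable disease combination
```

In this example disease 0 explains both positive symptoms cheaply, so `map_set` comes out as `frozenset({0})`; the enumeration over `2^n` subsets is exactly the cost that makes the general problem NP-complete.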
Verification of Diagnosability and Opacity
A system is diagnosable if any given fault that occurs at some point in time is guaranteed to be detected/identified after a finite number of event occurrences (which in turn generate the observations based on which we need to diagnose the fault). A system is (current-state) opaque if an outside observer can never conclude with certainty that its (current) state belongs to a given set of secret states; in other words, for any possible behavior of the system, the sequence of observations that is generated always allows for the possibility that the system lies in a state outside the secret set. The verification of the properties of diagnosability and opacity is rather well understood: both can be verified via the construction of an observer (i.e., a current-state estimator), which has complexity exponential in the number of states of the underlying system. However, it turns out that the verification of diagnosability can be achieved with complexity that is polynomial in the number of states of the system, whereas the verification of opacity is an inherently hard problem. In this project, we are interested in understanding how the verification of these important properties can be simplified if the underlying system is composed of a set of modules, each of which may have a significantly smaller number of states than the overall system. There already exist some promising results in this direction, but a lot remains to be accomplished, particularly in terms of how the various modules are combined.
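A minimal sketch of the observer construction mentioned above, under the simplifying assumption that all events are observable and using small made-up automata: a breadth-first search over current-state estimates (the subset construction) reveals whether some observation sequence pins the system inside the secret set.

```python
from collections import deque

def is_current_state_opaque(delta, events, initial, secret):
    """Subset construction: explore all reachable current-state estimates.
    The system is opaque iff no reachable nonempty estimate lies inside
    `secret`. delta maps (state, event) -> set of successor states."""
    start = frozenset(initial)
    seen, queue = {start}, deque([start])
    while queue:
        est = queue.popleft()
        if est <= secret:               # observer is certain: secret revealed
            return False
        for e in events:
            nxt = frozenset(s2 for s in est for s2 in delta.get((s, e), ()))
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# After 'a' the estimate is {1, 2}; after 'ab' it collapses to {2} = secret,
# so this automaton is NOT opaque.
delta1 = {(0, 'a'): {1, 2}, (1, 'b'): {2}, (2, 'b'): {2}}
opaque1 = is_current_state_opaque(delta1, ['a', 'b'], {0}, frozenset({2}))

# Changing (1, 'b') to stay at state 1 keeps the estimate at {1, 2}: opaque.
delta2 = {(0, 'a'): {1, 2}, (1, 'b'): {1}, (2, 'b'): {2}}
opaque2 = is_current_state_opaque(delta2, ['a', 'b'], {0}, frozenset({2}))
```

The `seen` set can grow exponentially in the number of system states, which is exactly the observer-construction cost the project seeks to avoid by exploiting modularity.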
Distributed Weight-Balancing in Directed Graphs
Distributed systems whose components (nodes) can exchange information via interconnections (links) that form an arbitrary communication topology (graph) that is not necessarily fully connected arise in many distributed control tasks, ranging from formation control and distributed averaging to consensus and distributed optimization. In many applications, it is imperative to obtain a weight assignment on the links, such that the resulting graph is balanced, i.e., for each node, the sum of the weights on its incoming links is equal to the sum of the weights of its outgoing links. Distributed methodologies for obtaining weight assignments that balance undirected graphs are rather trivial; however, the task is significantly more challenging for directed graphs and has recently started to draw the attention of the research community.
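The sketch below illustrates one simple imbalance-correcting iteration on a made-up strongly connected digraph: each node whose in-weight exceeds its out-weight spreads the surplus equally over its outgoing links. This is a simplified illustrative variant, not a faithful reproduction of any particular published algorithm, and its convergence in general requires careful analysis; on this small example it settles quickly.

```python
# Edges of a small strongly connected digraph, each with initial weight 1.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0, (0, 2): 1.0}

def imbalance(w, node):
    """In-weight minus out-weight of `node` under weight assignment w."""
    w_in = sum(v for (i, j), v in w.items() if j == node)
    w_out = sum(v for (i, j), v in w.items() if i == node)
    return w_in - w_out

def balance(w, iters=200):
    """Each positively imbalanced node adds its surplus, split equally,
    across its outgoing edge weights; repeat until balanced (or iters)."""
    w = dict(w)
    nodes = {n for e in w for n in e}
    for _ in range(iters):
        updates = {}
        for n in nodes:
            x = imbalance(w, n)
            if x > 1e-12:
                out = [e for e in w if e[0] == n]
                for e in out:
                    updates[e] = updates.get(e, 0.0) + x / len(out)
        if not updates:                 # every node already balanced
            break
        for e, inc in updates.items():
            w[e] += inc
    return w

w_bal = balance(edges)                  # balanced weight assignment
```

Note that the total imbalance over all nodes is always zero, so surplus moved out of one node reappears at its out-neighbors; the question studied in this line of work is how (and how fast) such local corrections drive every individual imbalance to zero on a directed graph.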
Some of Dr. Hadjicostis' Past Research Projects
Past: Control for Coordination of Distributed Systems
This project dealt with control for coordination of distributed systems and was motivated by case studies of (i) control for underwater vehicles, (ii) aerial vehicles, (iii) road control and communication networks, (iv) automated guided vehicles, and (v) complex machines. The research thrust was in control design and control synthesis, and specifically in control synthesis of a global coordinator of a distributed system, in communication for control, in informatics for control, and in tools for control design. Control design for the case studies, based on the research thrust, formed the main effort of the project and was disseminated to the user partners. Expected impact includes (a) enabling low-cost monitoring of the environment and of natural resources by underwater and aerial vehicles; (b) new services and applications for new markets, particularly for automated guided vehicles at container terminals, and for control and communication networks on motorway networks; and (c) improved performance of distributed systems.
The consortium consisted of four user partners and eight academic partners, which combined for very wide and deep expertise in many topics. Dr. Hadjicostis was part of a four-member team from the University of Cyprus, which was one of the academic partners. Within the scope of this project, Dr. Hadjicostis was the leader of the work package on Informatics for Control, which dealt primarily with the synthesis and design of distributed algorithms for control and failure detection in distributed systems. This involved understanding the role of existing models with respect to certain properties of interest, including observability and controllability properties; the ultimate goal was to develop formal mechanisms that can handle dynamically changing links and that are capable of accurately modeling real systems. Moreover, the research concerned energy-accuracy tradeoffs in geographically distributed systems, and the role of the underlying network structure in our ability to quickly and reliably calculate certain functions in a distributed system.
The publications that emanated from this project are included in Publications.
The UCY team also involved Prof. Charalambos Charalambous, Prof. Christos Panayiotou, and Prof. Marios Polycarpou.
Past: Resilient Network Control Systems
The compounding complexity of digital devices, the expansion of networks in size and diversity, and the ever-increasing dependency of business and government sectors alike on networked infrastructures have undoubtedly resulted in a pressing need for advanced design/analysis tools and for effective monitoring and control strategies. More critically, however, it has become urgently necessary to obtain scalable and effective methodologies for diagnosing faults, assessing and estimating system properties of interest, and operating these complex systems in uncertain environments and possibly in the presence of communication constraints, faults or adversaries. This project directly addressed these needs by focusing on networked control systems (initially within the context of interacting discrete event systems and eventually expanding to switched linear systems). The project concentrated on the following two objectives:
(i) Establishment of techniques for monitoring and diagnosing faults or, more generally, abnormal behavior and functional changes in dynamic systems and networks, under limited and possibly corrupted information. This aspect of the project was highly interdisciplinary and combined techniques from a variety of fields, including Systems and Control, Detection and Estimation, Computer Science, and Applied Mathematics.
(ii) Development of resiliency- and privacy-ensuring control strategies for networked control systems. The project focused on developing strategies that enable complex networked control systems to retain part of their input or internal state private (e.g., unknown to external observers that have partial access to the activity occurring in a given networked system). In the case of discrete event systems (namely, finite automata), we developed opacity enforcing control strategies using a combination of tools (ranging from computer security and supervisory control to distributed algorithms and Byzantine fault-tolerant communication protocols). These strategies have to be efficient (in terms of the use of computational and communication resources) and also be resilient to faulty components (subsystems or communication links) or even conspiracies by malicious nodes (that try to expose or influence the operation of the system via a coordinated attack).
The successful completion of this research will have far-reaching ramifications for testing, monitoring, maintaining, and controlling complex systems and networks, such as traffic networks, power distribution systems, and large networks of sensors and actuators. In particular, it will allow the automated operation of detection and control mechanisms, which can ultimately lead to resilient and safe operation of these complex systems despite the presence of malicious or non-malicious disruptions. Though some of these challenges have been addressed using centralized algorithms (e.g., monolithic diagnosers and controllers for supervisory control), the scientific challenge in the case of the large-scale networked control systems that emerge as a result of the proliferation of networking and digital technology is to extend these techniques to distributed/decentralized settings, understand the costs and performance tradeoffs involved, and (if necessary) develop new algorithms that can provide suboptimal but adequate performance at reasonable costs.
The publications that emanated from this project are included in Publications.
Past: Assessing and Building Trust in Next Generation Network Architectures
Sponsored by Cyprus Research Promotion Foundation
The main objective of this research was to develop a secure system that stores, distributes, and revokes certificates used for authentication in the next-generation Internet. The analysis used, as a basis example, the recently proposed store-and-forward architecture. The key management system in such architectures needs to integrate traditional methods of key management and trust over wired and wireless ad hoc networks, and has to provide new methods of executing key distribution. Ideas that emerged from the analysis in this project may drive the design of standards that can potentially be linked to current industry trends.
The publications that emanated from this project are included in Publications.
Joint project with Dr. George Hadjichristofi.
Past: Diagnosis and Assessment of Faults, Misbehavior and Threats in Distributed Systems and Networks
This project was a multi-university effort that aimed at developing theory and techniques for monitoring and diagnosing faults, hazards or, more generally, functional changes in dynamic systems and networks, under limited and possibly corrupted information. Its goal was to develop a unifying and multifaceted approach to this problem by decomposing the large body of fault diagnosis research into six topics.
Our research team involved researchers from Boston University, Massachusetts Institute of Technology, University of Illinois (lead), University of Oklahoma and Yale University, and leveraged its expertise in the areas of fault diagnosis, sequential detection, system-level diagnosis, distributed control, modeling, analysis and performance evaluation, applied probability, graph theory, belief propagation and model reduction to the problem of detecting, identifying and localizing faults and abnormalities in dynamically evolving environments.
This was a joint project with Profs. Carolyn Beck and R. Sreenivas at UIUC, Prof. Ioannis Paschalidis at Boston University, Prof. Sekhar Tatikonda at Yale, Prof. K. Thulasiraman at the University of Oklahoma and Prof. John Tsitsiklis at MIT.
Past: Diagnosis and Tolerance of Faults and Misbehavior in Distributed Systems via Structured Redundancy
This project investigated methodologies for detecting, locating and overcoming faults and misbehavior in networks and distributed systems. Starting from a generally applicable probabilistic description of faults and their effects, the project developed and analyzed efficient and effective heuristics for detecting and identifying (isolating) faults. Since fault detection/isolation in this context is generally NP-Complete, we focused on effective belief-propagation algorithms that have polynomial complexity and are amenable to distributed implementation. These algorithms allowed us to investigate the role of structured redundancy in the given distributed system and how it can be used in a variety of contexts, ranging from equipment diagnosis and medical diagnosis to multiple intrusion detection. The project also studied state estimation in distributed systems and analyzed its implications to trust and privacy. In particular, we considered notions of state opacity in distributed systems (i.e., the existence of a set of states that needs to be kept opaque --- secret --- from outside observers) and developed strategies to analyze and verify them, as well as supervisory control methodologies to enforce them while minimally restricting the system behavior. Finally, we studied systematic ways of overcoming faulty or malicious behavior when performing function calculation (including simple dissemination of information) in distributed systems. More specifically, we analyzed linear iterative strategies where each node updates at each time-step a local value to be a weighted average of its own previous local value and those of its neighbors. Such strategies not only allow (after a sufficiently large number of time-steps) each node to obtain enough information to calculate the desired function of the initial node values, but are also robust to faults or misbehavior by nodes in the network. 
The strategies developed in this project have potential applications in distributed system operation and maintenance.
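In the fault-free case, the linear iterative strategy described above can be sketched on a made-up three-node path graph with Metropolis-style weights (an assumed, standard choice that makes the update matrix doubly stochastic, so that every node converges to the average of the initial values):

```python
import numpy as np

# Each node repeatedly replaces its value by a weighted average of its own
# previous value and those of its neighbors: x(k+1) = W x(k).
# For the path graph 0 - 1 - 2, Metropolis weights yield a doubly stochastic
# W, so all node values converge to the average of the initial values.
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

x = np.array([1.0, 2.0, 3.0])   # initial node values; their average is 2.0
for _ in range(200):
    x = W @ x                   # one round of local weighted averaging
```

After 200 iterations every entry of `x` is within numerical precision of 2.0. A single node that misbehaves can hijack such plain averaging entirely, which is what motivates the robust linear iterative strategies studied in this project.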
Past: An Integrated Approach to Fault Tolerance in Discrete-Time Dynamic Systems
As the complexity of dynamic systems and
networks grows through the continuous deployment of embedded
systems and the availability of novel sensor and actuator
technologies, the likelihood of temporal or permanent failures
at certain components or communication links of the system
increases significantly and the consequences become highly
unpredictable and severe. Even within a single digital
device, the reduction of voltages and capacitances, the
shrinking of transistor sizes and the sheer number of gates
involved has led to a significant increase in the frequency of
so-called ``soft-errors,'' and has prompted leading
semiconductor manufacturers to admit that they may be facing
difficult challenges in the future. The occurrence of failures
becomes a major concern when the systems involved are
life-critical (such as military, transportation or medical
systems), or operate in remote or inaccessible environments
(where repair may be difficult or even impossible). This project
aimed at obtaining systematic approaches for modeling,
detecting, identifying and correcting faults in order to ensure
the proper functionality of discrete-time dynamic systems or
networks. Unlike traditional control where the goal is to
stabilize a given dynamic system (while perhaps maintaining some
sort of optimality in the applied control input), a
fault-tolerant design aims at ensuring that any deviation from
the expected system behavior is confined within a small time
interval (usually one discrete-time step). In addition, the
designer of a fault-tolerant system needs to account for the
possibility of failures in the sensors or communication links,
or even in the error detecting/correcting mechanism itself. This
project took a system-theoretic viewpoint towards the design of
fault-tolerant dynamic systems; the main goals were to obtain
resource-efficient fault-tolerant implementations and to
characterize their fundamental limitations by jointly exploiting
system-, coding- and information-theoretic techniques.
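One coding-theoretic idea in this spirit is to augment the state of a discrete-time linear system with parity symbols that evolve consistently with it, so that a transient fault leaves a nonzero syndrome. The toy example below is my own illustration under assumed matrices A and C, not the project's specific construction.

```python
# Toy sketch: protect the state of a linear system x[k+1] = A x[k] by
# carrying parity symbols p = C x alongside it.  In fault-free operation
# the syndrome p - C x stays zero; a fault that corrupts x (and not p)
# generally makes it nonzero, which can be detected at a checkpoint.
import numpy as np

A = np.array([[0.0, 1.0],
              [-0.5, 1.0]])    # example (assumed) system dynamics
C = np.array([[1.0, 1.0]])     # example (assumed) parity/checksum matrix

def step(x, p):
    """Advance the state and its parity one time-step."""
    return A @ x, C @ (A @ x)  # p tracks C x by construction

def syndrome(x, p):
    return p - C @ x           # zero when no fault has occurred

x = np.array([1.0, 2.0])
p = C @ x
x, p = step(x, p)
assert np.allclose(syndrome(x, p), 0.0)  # fault-free: syndrome is zero

x[0] += 0.3                    # inject a transient fault into the state
detected = not np.allclose(syndrome(x, p), 0.0)
```

A practical design would compute the parity through redundant hardware so that a single fault cannot corrupt the state and its checksum consistently, and would choose C so that the fault classes of interest fall outside its null space.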
Past: Enabling Novel Digital Sequential Circuit Designs through Error Control and Noise Tolerance Techniques
This project aimed at evaluating the practical implications of recently developed error control and noise tolerance techniques in the construction of reliable, high performance digital sequential circuits. The main focus was to explore how dynamic error correction (DEC) and algorithmic noise-tolerance (ANT) methodologies can enable next-generation sequential circuit architectures that are cost-effective and operate at speed and energy efficiencies that potentially exceed the limits imposed by current VLSI architectures.
Joint project with Prof. Naresh Shanbhag at UIUC.
Past: Fault-Tolerant Operation and Control of Energy Processing Systems
The high availability of networking and digital technologies has opened up a number of exciting possibilities for building reliable energy processing systems and automated fault detection and accommodation mechanisms. However, before traditional fault tolerance techniques (like modular redundancy and error-control coding) can proliferate in the context of energy processing systems, a number of questions need to be addressed. The main issues studied in this project pertained to the dynamics of the underlying energy processing system (including their coupling with the fault-tolerant procedures) and the reliability of the monitoring/correcting mechanisms (which can themselves malfunction due to a power failure). The main goal was to develop a comprehensive framework for dynamical state estimation, fault detection and fault accommodation in energy processing systems, such as terrestrial and autonomous power systems, electric drives and power electronic systems, as found in both civilian and military sectors. In particular, this project made connections with traditional fault tolerance techniques by developing distributed monitoring/correcting schemes and by explicitly accounting for the system dynamics in order to overcome faults that affect the functionality of the system.
Joint project with Prof. Alex Stankovic at Northeastern University.
Past: Architectures for Secure and Robust Distributed Infrastructures
This was a large project that involved
researchers from four different academic institutions (Caltech,
MIT, Stanford and UIUC) with a variety of backgrounds and
expertise. The following excerpt, taken from the project's
webpage, describes its main focus: "The major barrier
constraining the successful management and design of large-scale
distributed infrastructures is the conspicuous lack of knowledge
about their dynamical features and behaviors. Up until
very recently analysis of systems such as the Internet, or the
national air traffic system, have primarily relied on the use of
non-dynamical models, which neglect their complex, and
frequently subtle, inherent dynamical properties. These
traditional approaches have enjoyed considerable success while
systems are run in predominantly cooperative and ``friendly''
environments, and provided that their performance boundaries are
not approached. With the current proliferation of applications
using and relying on such infrastructures, these infrastructures
are becoming increasingly stressed, and as a result the
incentives for malicious attacks are heightening. The stunning
fact is that the fundamental assumptions under which all
significant large-scale distributed infrastructures have been
constructed and analyzed no longer hold; the invalidity of these
non-dynamical assumptions is witnessed with the greater
frequency of catastrophic failures in major infrastructures such
as the Internet, the power grid, the air traffic system, and
national-scale telecommunication systems."
Within the context of this project,
Dr. Hadjicostis' research focused on the challenges that arise
with regard to distributed or hierarchical control and
coordination, fault tolerance, safety and scalability. The goal
of the proposed research was to develop models and algorithms
appropriate for evaluating the sensitivity of a complex
interconnected system to failures or parameter perturbations. A
familiar example that illustrates the complexity of the issues
involved is the commercial air traffic network: flight and
ground operations scheduling are performed at several, highly
interacting levels, including the central flow management (FAA
systems command center), en-route traffic control, local flow
control, departure and arrival planning at the airports (TRACON
and airport tower facilities), and individual airline
constraints and ground personnel limitations (operation control
centers). Malicious attacks, accidental malfunctions, personnel
shortage, or delays at different components of this system can
have not only localized effects but can also manifest themselves
as a cascading failure in the overall system. The coupling
between different airports and air traffic operations enables a
single failure, perhaps as simple as a broken conveyor belt at
one airport, to potentially trigger a complicated failure mode
and result in a highly undesirable global behavior. Another
familiar example is the telephone network and its vulnerability
to minor failures. What is alarming in these large
interconnected systems is that an intelligent attacker that is
aware of the vulnerabilities of a certain critical system may be
able to cause severe economic damages or chaotic consequences
through a relatively "innocent" attack on a single component of
the system. Available techniques relied heavily on costly
simulations that provided little insight or intuition; the goal
of this part of the proposed research was to develop novel
models and algorithms for analyzing the effects of failures or
parameter perturbations in complex dynamic systems and networks.
The team at the University of Illinois also included Profs. Carolyn Beck and Geir Dullerud.
Past: Hierarchical and Reconfigurable Schemes for Distributed Control over Heterogeneous Networks
The main topic of this research project was the problem of reliable control of geographically distributed complex real-time systems over a heterogeneous communication network. In its most general form, the heterogeneous network can be viewed as the backbone for information exchange between sensors, computerized control sites and actuators. Technological advances in terms of cost-effective special-purpose computing architectures and high accessibility of network connectivity offered at the time exciting possibilities for computer-based control methodologies that can be applied either centrally or distributively/hierarchically, depending on the underlying application and objective. In the former case, a central control location uses the network to gather information from the various system sensors and to relay carefully computed actions to the actuators. In the case of distributed/hierarchical control, the network is used to exchange information between sensors, actuators, and multiple control sites. Each such processing site receives information from a (not necessarily exclusive or fixed) subset of sensors and is responsible for sending optimal control signals to the corresponding actuator(s).
Joint project with Profs. Tamer Basar, Geir Dullerud, Seth Hutchinson, Constantine Polychronopoulos, R. Srikant, and Petros Voulgaris.
Past: Enhanced Equalization and Decoding for EDGE, 3G and Beyond
Sponsored by Motorola
This project investigated problems in equalization and decoding for wireless communications as applicable to EDGE, 3G and future systems. Specifically, the project investigated the applicability of BAD and turbo-linear equalization algorithms to 3G and EDGE-type systems for wireless channels. The project also investigated space-time coding approaches for time-varying channels. Within the context of this project, Dr. Hadjicostis' research focused on soft-decision decoding algorithms for linear block codes.
Joint project with Profs. Ralf Koetter and Andrew Singer.
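To illustrate what soft-decision decoding of a linear block code means (a generic textbook example, not the project's specific algorithm), the sketch below brute-force decodes a small code by picking the codeword whose BPSK image is closest, in Euclidean distance, to the received real-valued samples; the generator matrix is an assumption chosen for brevity.

```python
# Brute-force soft-decision maximum-likelihood decoding of a small binary
# linear block code: choose the codeword closest (in Euclidean distance)
# to the received real-valued BPSK samples, instead of hard-slicing first.
import itertools
import numpy as np

G = np.array([[1, 0, 1],
              [0, 1, 1]])   # generator of a (3, 2) single-parity-check code

# Enumerate all codewords and their BPSK images (bit 0 -> +1, bit 1 -> -1).
codewords = [np.mod(np.array(m) @ G, 2)
             for m in itertools.product([0, 1], repeat=2)]
bpsk = [1.0 - 2.0 * c for c in codewords]

def ml_decode(r):
    """Return the codeword whose BPSK image is closest to received vector r."""
    i = int(np.argmin([np.sum((r - s) ** 2) for s in bpsk]))
    return codewords[i]

# Codeword [1, 1, 0] is sent as [-1, -1, +1]; noise flips the sign of the
# last sample, so hard decisions would give the non-codeword [1, 1, 1],
# but the soft magnitudes still favor the transmitted codeword.
r = np.array([-0.9, -1.1, -0.2])
c_hat = ml_decode(r)
```

Exhaustive search is only feasible for tiny codes; practical soft-decision decoders exploit code structure (e.g., trellis or algebraic structure) to avoid enumerating all codewords.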
Write to chadjic AT ucy.ac.edu