I am a software engineer at Human Longevity, Inc. (HLI), a genomics and cell-therapy-based diagnostic and therapeutic company. Using advances in genomic sequencing, the human microbiome, proteomics, informatics, computing, and cell therapy technologies, HLI is building the most comprehensive database of human genotypes and phenotypes to tackle the diseases associated with aging-related human biological decline. Before joining HLI, I re-engineered a human mutation mapping SaaS pipeline at Illumina, Inc., a developer and manufacturer of life science tools and integrated systems for the analysis of genetic variation and biological function.
From 2003 until mid-2013, I was a Research Staff Member at the IBM Almaden Research Center, where I most recently led the development of the IBM Neuro Synaptic Core Simulator (NSCS) as part of the IBM SyNAPSE project. NSCS models a reconfigurable cortical hardware circuit capable of capturing the various cognitive abilities of the brain, and is intended to evaluate the expected behavior of neuronal algorithms, such as image processing algorithms, when deployed on hardware implementations. Evaluations performed with NSCS demonstrated the potential and power of neuronal algorithms in advance of hardware implementations, thus enabling efficient research and development within this new problem-solving domain.
Prior to NSCS development, I was the research technical leader for the IBM Virtual Mission Bus (VMB) project. The VMB was a middleware system for supporting distributed, adaptive, hard real-time applications for a dynamic cluster of satellites, under the aegis of the DARPA System F6 program. I led a combined research and development team that designed and implemented the VMB, and that produced a successful technology demonstration of the VMB.
In general, my technical interests include: high-performance system simulations, lightweight distributed consistency control, secure group membership protocols, and algorithms for automatic resource reservation and management. Research projects I am working on (or have worked on), in descending chronological order, include:
Decentralized recovery for survivable storage systems (Doctoral thesis research at Carnegie Mellon University)
Cognitive Computing is an emerging field whose goal is to develop a coherent, unified, universal mechanism to engineer the mind. Cognitive computing seeks to implement a unified computational theory of the mind, taking advantage of the ability of the brain to integrate ambiguous sensory information, form spatiotemporal associations and abstract concepts, and make decisions and initiate sophisticated coordinated actions.
Our approach to cognitive computing is to develop dedicated hardware systems for implementing a canonical cortical circuit that can achieve tremendous gains in power and space efficiency when compared to traditional von Neumann circuits. Such efficiency is crucial when scaling such circuits to the size of a mammalian cortex. Our cortical circuit is a reconfigurable network of spiking neurons that is composed of neuron processing elements connected through synapse memory elements—both akin to the basic building blocks of the brain.
To validate and verify the configuration of our hardware, we have developed a simulator that can reproduce hardware functional behavior when testing circuits at the size of a mammalian cortex. Such a simulator also doubles as a research tool for developing and testing new cognitive computing algorithms for implementation on the hardware.
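To give a flavor of what such a simulator computes, here is a minimal sketch of discrete-time integrate-and-fire dynamics over a synaptic crossbar, in which neuron processing elements are connected through synapse memory elements. The function name, parameters, and the specific leak/reset rule are illustrative assumptions, not the actual NSCS neuron model.

```python
import numpy as np

def simulate(crossbar, thresholds, leak, input_spikes, steps):
    """Discrete-time integrate-and-fire simulation over a synaptic crossbar.

    crossbar[i, j] is the synapse weight from axon i to neuron j. Each tick,
    every neuron integrates its weighted input spikes, applies a constant
    leak, and fires (emitting a spike and resetting) when its membrane
    potential crosses threshold.
    """
    n_axons, n_neurons = crossbar.shape
    potential = np.zeros(n_neurons)
    out = []
    for t in range(steps):
        spikes_in = input_spikes[t]           # binary vector, one bit per axon
        potential += spikes_in @ crossbar     # integrate synaptic input
        potential -= leak                     # constant leak per tick
        fired = potential >= thresholds       # threshold comparison
        potential[fired] = 0.0                # reset neurons that fired
        potential = np.clip(potential, 0.0, None)
        out.append(fired.astype(int))
    return np.array(out)
```

Because the update rule is deterministic and bit-exact, a software model like this can serve both roles described above: verifying that a hardware configuration behaves as intended, and prototyping new algorithms before committing them to silicon.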
For more information, please visit:
Distributed, adaptive, hard real-time applications, such as process control or guidance systems, have requirements that go beyond those of traditional real-time systems: accommodation of a dynamic set of applications, autonomous adaptation as application requirements and system resources change, and security between applications from different organizations. Developers need a middleware with features that support developing and running these applications, especially as commercial and defense systems become more network-centric. The Virtual Mission Bus (VMB) middleware, targeted at both distributed IT systems and real-time systems, provides the essential basic services to support these applications and the tools for building more complex services, all while keeping the middleware kernel minimal enough for embedded system use. We successfully used the VMB to prototype a distributed spacecraft cluster system.
Storage systems for large and distributed clusters of compute servers are themselves large and distributed. Their complexity and scale make these systems hard to manage and, in particular, make it hard to ensure that applications using them get good, predictable performance. At the same time, shared access from multiple applications and users, together with competition from internal system activities, creates a need for predictable performance.
The storage quality-of-service project at the UCSC Storage Systems Research Center investigates mechanisms for improving storage system performance in large distributed storage systems through mechanisms that integrate the performance aspects of the path that I/O operations take through the system, from the application interface on the compute server, through the network, to the storage servers. We focus on five parts of the I/O path in a distributed storage system: I/O scheduling at the storage server, storage server cache management, client-to-server network flow control, client-to-server connection management, and client cache management.
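As a sketch of the first of these mechanisms, I/O scheduling at the storage server, here is a minimal weighted fair-share scheduler based on virtual finish tags. The class and its structure are illustrative assumptions for exposition; they are not the project's actual schedulers.

```python
import heapq

class FairShareScheduler:
    """Minimal weighted fair-share I/O scheduler (illustrative sketch).

    Each client has a relative weight; requests are tagged with virtual
    finish times so that, over time, each client receives service in
    proportion to its weight.
    """
    def __init__(self, weights):
        self.weights = weights                  # client -> relative share
        self.vtime = {c: 0.0 for c in weights}  # per-client virtual time
        self.queue = []                         # (finish_tag, seq, client, request)
        self.seq = 0                            # tie-breaker for equal tags

    def submit(self, client, request, cost=1.0):
        # Heavier-weighted clients accumulate virtual time more slowly,
        # so their requests sort earlier and they are served more often.
        self.vtime[client] += cost / self.weights[client]
        heapq.heappush(self.queue, (self.vtime[client], self.seq, client, request))
        self.seq += 1

    def dispatch(self):
        # Serve the pending request with the smallest virtual finish tag.
        if not self.queue:
            return None
        _, _, client, request = heapq.heappop(self.queue)
        return client, request
```

With weights {'a': 2, 'b': 1} and three requests from each client, dispatch order interleaves so that client a receives roughly twice the service of client b.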
The growth in the amount of data being stored and manipulated for commercial, scientific, and intelligence applications is worsening the manageability and reliability of data storage systems. The expansion of such large-scale storage systems into petabyte capacities puts pressure on cost, leading to systems built out of many cheap but relatively unreliable commodity storage servers. These systems are expensive and difficult to manage—current figures show that management and operation costs are often several times purchase cost—partly because of the number of components to configure and monitor, and partly because system management actions often have unexpected, system-wide side effects. Also, these systems are vulnerable to attack because they have many entry points, and because there are no mechanisms to contain the effects either of attacks or of subsystem failures.
Kybos is a distributed storage system that addresses these issues. It will provide manageable, available, reliable, and secure storage for large data collections, including data that is distributed over multiple geographical sites. Kybos is self-managing, which reduces the cost of administration by eliminating complex management operations and simplifying the model by which administrators configure and monitor the system. Kybos stores data redundantly across multiple commodity storage servers, so that the failure of any one server does not compromise data. Finally, Kybos is built as a loosely coupled federation of servers, so that the compromise or failure of some servers will not impede remaining servers from continuing to take collective action toward system goals.
Our primary application is the self-management of federated (but potentially unreliable) clusters of storage servers, but we anticipate that the algorithms we have developed (and will implement) will have broad applicability to the general class of problems involving the coordination of independent autonomous agents with a collective set of mission goals.
Modern society has produced a wealth of data to preserve for the long term. Some data we keep for cultural benefit, in order to make it available to future generations, while other data we keep because of legal imperatives. One way to preserve such data is to store it using survivable storage systems. Survivable storage is distinct from reliable storage in that it tolerates confidentiality failures in which unauthorized users compromise component storage servers, as well as crash failures of servers. Thus, a survivable storage system can guarantee both the availability and the confidentiality of stored data.
Research into survivable storage systems investigates the use of m-of-n threshold sharing schemes to distribute data to servers, in which each server receives a share of the data. Any m shares can be used to reconstruct the data, but any m - 1 shares reveal no information about the data. The central thesis of this dissertation is that to truly preserve data for the long term, a system that uses threshold schemes must incorporate recovery protocols able to overcome server failures, adapt to changing availability or confidentiality requirements, and operate in a decentralized manner.
To support the thesis, I present the design and experimental performance analysis of a verifiable secret redistribution protocol for threshold sharing schemes. The protocol redistributes shares of data from old to new, possibly disjoint, sets of servers, such that new shares generated by redistribution cannot be combined with old shares to reconstruct the original data. The protocol is decentralized, and does not require intermediate reconstruction of the data; thus, it does not introduce a central point of failure or risk the exposure of the data during execution. The protocol incorporates a verification capability that enables new servers to confirm that their shares can be used to reconstruct the original data.
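The m-of-n property underlying this work can be illustrated with Shamir's secret sharing over a prime field, sketched below. This shows only the basic split/reconstruct mechanism, not the verifiable redistribution protocol itself; the prime and function names are choices made here for exposition.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all share arithmetic is mod PRIME

def split(secret, m, n):
    """Split `secret` into n shares such that any m reconstruct it.

    A random polynomial of degree m-1 is chosen with the secret as its
    constant term; share i is the point (i, f(i)). Any m-1 shares reveal
    nothing, because they are consistent with every possible secret.
    """
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any m of the n shares suffice, so the servers holding the other n - m shares can crash, or even be compromised, without affecting availability or confidentiality; redistribution then replaces the share set without ever reassembling the secret in one place.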
[I began this research project while interning with the Storage Systems Program at Hewlett-Packard Labs.]
Modern high-end disk arrays often have several gigabytes of cache RAM. Unfortunately, most array caches use management policies that duplicate the same data blocks at both the client and array levels of the cache hierarchy: they are inclusive. Thus, the aggregate cache behaves as if it were only as big as the larger of the client and array caches, instead of as large as the sum of the two. Inclusiveness is wasteful: cache RAM is expensive.
We explore the benefits of a simple scheme to achieve exclusive caching, in which a data block is cached at either a client or the disk array, but not both. Exclusiveness helps to create the effect of a single, large unified cache. We introduce a DEMOTE operation to transfer data ejected from the client to the array, and explore its effectiveness with simulation studies. We quantify the benefits and overheads of demotions across both synthetic and real-life workloads. The results show that we can obtain useful (sometimes substantial) speedups.
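The exclusive-caching idea can be sketched as two LRU levels in which a block lives in at most one level: a client eviction DEMOTEs the victim to the array, and an array hit promotes the block back up and drops it from the array. The class names and structure below are illustrative, not the paper's simulator.

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity LRU cache over block identifiers."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, block):
        if block in self.data:
            self.data.move_to_end(block)   # mark most-recently used
            return True
        return False

    def put(self, block):
        """Insert a block; return the LRU victim evicted to make room, if any."""
        self.data[block] = True
        self.data.move_to_end(block)
        if len(self.data) > self.capacity:
            victim, _ = self.data.popitem(last=False)
            return victim
        return None

class ExclusiveHierarchy:
    """Client cache over an array cache with a DEMOTE path (sketch)."""
    def __init__(self, client_cap, array_cap):
        self.client = LRUCache(client_cap)
        self.array = LRUCache(array_cap)

    def read(self, block):
        if self.client.get(block):
            return 'client hit'
        if self.array.get(block):
            # Promote to the client and drop from the array: a block is
            # cached at exactly one level, so the two caches behave as
            # one large unified cache.
            del self.array.data[block]
            level = 'array hit'
        else:
            level = 'miss'                 # fetched from disk
        victim = self.client.put(block)
        if victim is not None:
            self.array.put(victim)         # DEMOTE evicted block to the array
        return level
```

With a one-block client cache over a one-block array cache, alternating reads of two blocks always hit in the array after the first pass, whereas an inclusive hierarchy of the same sizes would miss every time.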
During our investigation, we also developed some new cache-insertion algorithms that show promise for multi-client systems, and report on some of their properties.