
Posts Tagged ‘Engineering’:


Implementation and certification of a quality management system to a recognized international standard: Organizational advantages and benefits

Today's world of competition, in both the manufacturing and service sectors, necessitates that companies consider all aspects of customer satisfaction. That means, for one, that organizations must pay greater attention to how their overall operational system and its processes function to generate the level of quality in products and services that customers demand. To help accomplish this, a series of international Quality Management System standards has come into existence through the International Organization for Standardization. This dissertation provides background on some of the more popular standards and focuses on one of them, ISO 9001, and on four manufacturing company clients and the advantages and benefits they derived from implementing and becoming certified to that standard. The four companies realized notable improvements, as was also found in the review of literature on the topic, in return on assets, productivity/efficiency, sales, employee safety compliance, operational costs, scrap and rework, audits by customers, understanding of customer requirements, on-time delivery, communications/coordination, overall operations, product conformity controls, and documented processes/tasks. The advantages and benefits gained by the four companies ranged from 35% to 55% of the total of 31 categories of advantages and benefits cited in the literature. The conclusion drawn is that it would behoove all companies, in order to remain competitive, to consider implementing a Quality Management System standard such as ISO 9001. A suggested manner of accomplishing this is offered, including information on timeline steps, tasks and deliverables, project phases, and the application of ISO 9001 in a Quality Management System manual.



Employing Risk Management to Control Military Construction Costs

Systems acquisition inherently contains elements of uncertainty that must be effectively managed to meet project cost, schedule, and performance objectives. While the U.S. Department of Defense has a record of employing systems engineering technical management processes (including risk management) to address these uncertainties in major weapon systems acquisition, the application of risk management to Military Construction (MILCON) projects is a recent development. This research studies the use of a formal risk management program on a MILCON project and assesses whether such use influences the project's total cost growth relative to that of U.S. Army Corps of Engineers historical data. A case study methodology is employed, assessing the National Geospatial-Intelligence Agency's (NGA) multibillion-dollar NGA Campus East program. Keywords: Risk Management, Military Construction (MILCON), Construction, Risk, U.S. Army Corps of Engineers (USACE, COE), National Geospatial-Intelligence Agency (NGA)



A Bayesian Model for Controlling Cost Overrun in a Portfolio of Construction Projects

Planning and executing a successful capital project is one of the main objectives of every public agency. A successful capital project is defined as a project completed in accordance with a given scope, within budget, and on time. Due to the risks associated with complex projects, an owner agency usually adds an amount known as contingency to the estimated project cost to absorb the monetary impact of the risks and to prevent cost overrun. However, studies show that large capital infrastructure projects, especially transit projects around the globe, have mostly experienced cost and schedule overruns. Despite all efforts, and despite evolving new probabilistic methods for establishing a sufficient and optimum contingency budget, many agencies have not been able to provide adequate contingency for their large capital projects. For instance, nearly 50% of the large active transportation projects in the United States overran their initial budgets. Some agencies have reacted to this issue by employing approaches that result in too large a contingency budget. Having too much contingency can be just as undesirable as insufficient contingency, especially where the agency is dealing with a portfolio of projects rather than a single project. Assigning large contingencies will use up the agency's budget and reduce the number of projects that may receive funding. In this research, a new probabilistic model is proposed for the calculation of contingency in a portfolio of construction projects. A Bayesian approach is used to update historical contingency values based on new project data that becomes available as construction projects are completed. Most agencies dealing with a portfolio of infrastructure projects should define the level of confidence, gamma, for the portfolio budget based on available funding and the agency's policy goals. An important question is what level of confidence, eta, is needed at the individual project level to ensure that the portfolio budget will not overrun with a probability of more than 1 − gamma. This information is indispensable for conducting probabilistic risk assessment for individual projects. The mathematical model developed in this research provides an analytical tool for calculating contingency levels in such a way as to meet agency goals with respect to both individual projects and the project portfolio. The model assumes a hybrid normal distribution for the cost of individual projects and uses historical data to calculate the primary parameters of the model. The model defines the required confidence level for the risk assessment of individual projects with respect to the desired confidence level for sufficiency of the portfolio budget. The required increase in the portfolio budget is calculated based on the desired confidence level. The correlation between project costs is recognized, and a structured guideline, along with a mathematical method, is suggested for estimating correlation coefficients between the costs of projects in the portfolio. To consider the recent performance of projects and to update model characteristics based on new project data as it becomes available, a Bayesian approach is employed to update the model at regular intervals, such as once every two years. As more information becomes available, the required adjustment in the portfolio budget will be reduced, because the accuracy of the contingency estimate is improved. The proposed model is an effective tool for agencies to develop contingency budgets based on all the performance data historically available and the new data that becomes available in the future. Even though the proposed model is generic and can be used on any type of infrastructure project, the emphasis in this research is mostly on transit projects. Because of this, the funding process of the Federal Transit Administration (FTA) is analyzed, and the practical application of the model is based on transit project characteristics and costs.
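The interaction between the project-level confidence eta and the portfolio-level confidence gamma can be illustrated with a small Monte Carlo sketch. Everything here is an illustrative assumption, not the dissertation's hybrid-normal Bayesian model: project costs are taken as jointly normal with a common coefficient of variation and a common pairwise correlation, and each project's contingency is set at its eta-quantile.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def portfolio_overrun_prob(base_costs, cv, rho, eta, n_sims=100_000):
    """Monte Carlo estimate of the probability that a portfolio budget
    overruns when each project's contingency is set at the individual
    confidence level eta.  Assumes jointly normal project costs with a
    common coefficient of variation cv and a common pairwise correlation
    rho -- illustrative assumptions, not the dissertation's model."""
    base = np.asarray(base_costs, dtype=float)
    n = base.size
    sigma = cv * base
    corr = np.full((n, n), rho)
    np.fill_diagonal(corr, 1.0)
    cov = corr * np.outer(sigma, sigma)
    costs = rng.multivariate_normal(base, cov, size=n_sims)
    # Each project budgeted at its eta-quantile; portfolio budget is the sum
    portfolio_budget = (base + norm.ppf(eta) * sigma).sum()
    return float((costs.sum(axis=1) > portfolio_budget).mean())
```

With independent projects, budgeting each at eta = 0.8 leaves the portfolio far safer than 80% because of diversification, while strong positive correlation erodes that margin; this is exactly why the project-level eta must be chosen jointly with the portfolio-level gamma.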



On the economic viability of network systems and architectures

Understanding the relationship between technology and economics is fundamental to making judicious policy and design decisions. Many technologies that are successful in meeting their technical goals fail to get adopted due to economic factors. This holds true even for networked systems, e.g., the Internet, which witnessed failures in the adoption of QoS solutions, IPv6 migration, etc., due to factors such as high costs, lack of demand, and the weight of incumbency. To gain better insight into these issues, researchers need access to analytical frameworks that account for both technological and economic factors and provide useful design guidelines. This dissertation was motivated primarily by the need to take such a holistic, multidisciplinary approach to creating such frameworks. We focus on three important aspects related to the deployment, adoption, and design of network systems and architectures. The Internet has been one of the most successful network technologies, serving both as a shared platform for easy deployment of new services and as a driver for their adoption. But recent trends in the convergence of voice, video, and data services, along with advances in virtualization technologies, raise the question of whether deploying heterogeneous services on a shared network is the right choice, especially given the operational complexity and costs involved. We develop a model to investigate the trade-offs between shared and dedicated infrastructures and identify the operational metrics that influence which infrastructure choice benefits more from resource reprovisioning. Closely related to the issue of network service deployment is that of its successful adoption, which serves as the second topic of this dissertation. An entrant's success hinges not only on technical superiority but also on other factors, including its ability to win over an incumbent's installed base by using gateways. Our model for the adoption of competing technologies reveals several interesting behaviors, including the possibility for converters to reduce overall market penetration across both technologies and to prevent the convergence of the adoption process to a stable state. Lastly, we consider the issue of network platform design. The emergence and adoption of new technologies depend on the functional capabilities provided by the underlying network platform. Answering whether the minimalist design of the Internet is still relevant as it evolves into an ecosystem of software services requires a cost-benefit analysis of choosing between a functionality-rich and a minimalist design. We develop a two-sided market model to show how this design choice crucially depends on the relationship between the cost of adding features to the platform and the benefits that application developers derive from them. The frameworks developed in this dissertation have the potential for application in many different network settings and can spur further research on various topics in network economics.
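The role gateways play in competition between an entrant and an incumbent can be sketched with a toy simulation. This is a generic network-effects model with hypothetical structure and parameters, not the dissertation's adoption model: each technology's appeal grows with its own installed base plus the rival's base discounted by a converter efficiency.

```python
def simulate_adoption(steps=200, dt=0.05, alpha=(1.0, 0.8), gamma=0.3):
    """Toy dynamics for two competing technologies with network effects.
    Each technology's appeal depends on its own installed base plus the
    rival's base discounted by converter (gateway) efficiency gamma.
    All structure and parameter values here are hypothetical."""
    x, y = 0.10, 0.05                      # initial adoption shares
    for _ in range(steps):
        u1 = alpha[0] * (x + gamma * y)    # appeal of tech 1, with gateways
        u2 = alpha[1] * (y + gamma * x)    # appeal of tech 2, with gateways
        pot = 1.0 - x - y                  # population yet to adopt
        x += dt * pot * u1                 # forward-Euler adoption step
        y += dt * pot * u2
    return x, y
```

Sweeping gamma in such a toy model is one way to probe whether converters help or hurt total penetration; the dissertation's actual model is richer and can exhibit non-convergent behavior that this simple sketch does not capture.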



Two Essays on Oil Futures Markets

The first chapter of this dissertation estimates the relative contributions of two major crude oil futures exchanges, the Chicago Mercantile Exchange (CME) and the Intercontinental Exchange (ICE), to the price discovery process, using trade-by-trade data from 2008. The study also empirically analyzes the effects of trading characteristics on the information share of these two markets. The trading characteristics examined include trading volume, trade size, and trading costs. On average, CME is characterized by greater volume and trade size but also a slightly greater bid-ask spread. CME leads the process of price discovery; this leadership is driven by relative trade size and volatility before the financial crisis of 2008, whereas in the post-crisis period it is driven by trading volume. Moreover, this study presents evidence that, in times of large uncertainty in the market, the market maker charges a greater bid-ask spread in the more informative market. The second chapter examines the influence of expected oil price volatility, the behavior of the Organization of Petroleum Exporting Countries (OPEC), and US Dollar exchange rate volatility on the backwardation of crude oil futures during the period from January 1986 to December 2008. The results indicate that oil futures are strongly and weakly backwardated 57% and 69% of the time, respectively. The regression analysis of weak backwardation shows that oil volatility, OPEC overproduction (the difference between quota and actual production), and the volatility of the US Dollar against the Japanese Yen have a significant positive effect on oil backwardation, while the OPEC production quota imposed on its members has a significant negative effect. However, the volatility of the US Dollar against the British Pound has no significant effect on oil backwardation. The regression analysis of strong backwardation produces qualitatively the same results, except that volatility has no effect. In a sub-period analysis, evidence also indicates that the trading volume of oil funds and backwardation are negatively related, suggesting that oil funds increase the demand for futures relative to that for spot. Keywords: Oil futures, price discovery, trading characteristics, bid-ask spread, financial crisis, backwardation, OPEC, oil funds, exchange rate.
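The distinction between strong and weak backwardation can be made concrete in a few lines. The definitions below follow one common convention (an assumption on our part, since the abstract does not restate them): strong backwardation when the futures price is below the spot price, weak backwardation when the discounted futures price is below spot.

```python
import math

def classify_backwardation(spot, futures, r, t):
    """Classify a futures observation under one common convention
    (assumed here): strong backwardation if the futures price is below
    spot; weak backwardation if the discounted futures price is below
    spot.  r is the risk-free rate, t the time to maturity in years."""
    discounted = futures * math.exp(-r * t)
    return {"strong": futures < spot, "weak": discounted < spot}
```

Under these definitions strong backwardation implies weak backwardation, which is consistent with the reported frequencies of 57% (strong) and 69% (weak).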



Discrete-Time H2 Guaranteed Cost Control

In this dissertation, we first use the techniques of guaranteed cost control to derive an upper bound on the worst-case H2 performance of a discrete-time LTI system with causal unstructured norm-bounded dynamic uncertainty. This upper bound, which we call the H2 guaranteed cost of the system, can be computed either by solving a semi-definite program (SDP) or by using an iteration of discrete algebraic Riccati equation (DARE) solutions. We give empirical evidence suggesting that the DARE approach is superior to the SDP approach in terms of the speed and accuracy with which the H2 guaranteed cost of a system can be determined. We then examine the optimal full information H2 guaranteed cost control problem, which is a generalization of the state feedback control problem in which the H2 guaranteed cost is optimized. First, we show that this problem can be solved either by using an SDP or, under three regularity conditions, by using an iteration of DARE solutions. We then give empirical evidence suggesting that the DARE approach is superior to the SDP approach in terms of the speed and accuracy with which we can solve the optimal full information H2 guaranteed cost control problem. The final control problem we consider in this dissertation is the output feedback H2 guaranteed cost control problem. This control problem corresponds to a nonconvex optimization problem and is thus “difficult” to solve. We give two heuristics for solving this optimization problem. The first heuristic is based entirely on the solution of SDPs, whereas the second heuristic exploits DARE structure to reduce the number and complexity of the SDPs that must be solved. The remaining SDPs that must be solved for the second heuristic correspond to the design of filter gains for an estimator. To show the effectiveness of the output feedback control design heuristics, we apply them to the track-following control of hard disk drives. For this example, we show that the heuristic that exploits DARE structure achieves slightly better accuracy and is more than 90 times faster than the heuristic based entirely on SDP solutions. Finally, we mention how the results of this dissertation extend to a number of other system types, including linear periodically time-varying systems, systems with structured uncertainty, and finite horizon linear systems.



The Relationship of Equipment Reliability Maintenance Allocation to Productivity and Quality

Unscheduled equipment downtime (UDT) in U.S. factories accounts for more than $300 billion in plant maintenance and operations spending annually, with 80% of this amount spent to correct chronic failures of machines and systems that cause production stoppages. This problem of unscheduled equipment downtime, which violates the third theoretical principle of lean philosophy, was the focus of this study. The purpose of this study was to address the need for effective maintenance practices to reduce UDT in a large manufacturing plant in the United States. The research questions asked whether more careful maintenance scheduling could reduce unscheduled downtime, thus increasing factory productivity and improving product quality as measured by decreased late deliveries. The overall quantitative research design was a correlational case study involving equipment reliability maintenance allocation, productivity, and product quality. Maintenance records for over 6,000 machines covering a period of 1 year were analyzed using Pearson correlations to identify key contributors to UDT. These analyses documented the significant relationship linking UDT to late deliveries, and showed that newly implemented maintenance scheduling strategies were associated with significantly lower UDT and fewer late deliveries when a six-month period in 2009, before the schedule change, was contrasted with the same period in 2010, after the change. The result was a clear decrease in month-by-month UDT, yielding a 9% improvement in productivity and a 12% decrease in late deliveries. The positive social impact of this study lies in decreased costs for the manufacturer through improved productivity, leading to business growth and increased employment, and in improved product quality and increased customer satisfaction.
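The correlational core of such a study is straightforward to sketch. The data below are synthetic and purely illustrative (not the study's maintenance records): monthly UDT hours and late-delivery counts with a built-in linear relationship, tested with a Pearson correlation.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Synthetic monthly data, illustrative only -- not the study's records.
months = 24
udt_hours = rng.normal(500.0, 60.0, size=months)   # downtime hours per month
# Late deliveries constructed to depend on downtime, plus noise
late_deliveries = 0.05 * udt_hours + rng.normal(0.0, 3.0, size=months)

r, p_value = pearsonr(udt_hours, late_deliveries)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```

A significant positive r in such an analysis is what would justify treating UDT reduction as a lever on delivery performance, which is the logic the study applies to its before/after comparison.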



Dynamic modeling and forecasting algorithms for financial data systems

It is a fair question why a control systems engineer would be interested in financial instruments. Financial instruments involving option theory are elegant, mathematically oriented, and practical. These mathematical tools have created a new industry, known variously as the derivatives industry, the hedge-fund industry, or the risk-management industry. This thesis is aimed at developing investment strategies that address decision-making needs via control system techniques. The problem, in general, is computationally challenging, particularly when investment in many securities is involved, resulting in a high-dimensional computational framework. Furthermore, complications may arise due to realistic restrictions and non-linearities. The various areas of financial engineering are very fertile ground for the application of systems methodology and control theory techniques. The modeling, optimization, identification, and computational methods used in systems engineering can be successfully applied to financial instruments. The ideas developed in this thesis concern scientific reasoning about financial instruments rather than specific situations alone. The major contribution of this thesis is the time series optimal prediction filter and the development of the Dynamic Modeling and Forecasting Algorithm (DMFA). The proposed algorithm predicts the next data point of a financial time series while dynamically computing the parameters from existing data. The computation of the parameters is optimized by use of a recursive matrix inversion algorithm. The system is solved via an innovative inversion technique that avoids explicit inversion of any matrix larger than 2 × 2 and the computation of higher-dimensional determinants and cofactors. This yields new contributions to computational finance and numerical methodology, along with arbitrage decision and hedging strategies under market uncertainty, as well as robust control applications. The minimum mean-square algorithm used assures system stability by keeping poles within the unit circle. The DMFA is a superior autoregression (AR) model over a general system of time-series realizations, calculating the coefficients that fit the model for better prediction. Theoretical models and market-specific volatility models, with updated volatility computations, are derived from the observation data.
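The underlying idea of predicting the next point of a time series from dynamically estimated AR coefficients can be sketched as follows. This is a simplified stand-in using ordinary least squares; the DMFA's recursive matrix inversion scheme, which avoids inverting anything larger than 2 × 2, is not reproduced here.

```python
import numpy as np

def ar_predict_next(series, p=3):
    """One-step-ahead forecast from an AR(p) model fitted by ordinary
    least squares -- a simplified stand-in for the DMFA's dynamically
    updated prediction filter (the recursive inversion scheme itself
    is not reproduced here)."""
    x = np.asarray(series, dtype=float)
    # Row t of the design matrix holds the p lags x[t-1], ..., x[t-p]
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Forecast from the most recent p observations, newest first
    return float(coeffs @ x[-1 : -p - 1 : -1])
```

On a noiseless geometric series x[t] = 0.9 x[t−1], the fitted filter recovers the exact one-step forecast, which is a useful sanity check before applying such a model to noisy market data.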



Analyses of highway project construction risks, performance, and contingency

Past studies have highlighted the importance of risk assessment and management in construction projects and the transportation industry, and have identified cost and time as the most important risks that transportation professionals want to understand and manage. The main focus of this study is to comprehensively analyze transportation construction risk drivers and identify the correlation of the significant risk drivers with project characteristics, cost growth, schedule growth, and project contingency. This study has adopted 31 relevant and significant programmatic and project-specific risk drivers from different past studies. These risk drivers have been analyzed and evaluated using survey responses from professionals in the context of highway transportation projects. Risk assessments, including ratings of the encountered risk drivers and their correlation with project characteristics, have been carried out within the context of highway construction projects in the United States. Correlating the construction project performance or risk measures, cost growth percentage, and schedule growth percentage with the rating values of the identified risk drivers has enabled a better understanding of the impacts of risks and of the risk assessment process for highway transportation projects. The impact of significant risk drivers on reported construction cost contingency amounts has also been analyzed. The purpose of this effort was to assess the impact of the ratings for cost impact, schedule impact, and relative importance of the identified risk drivers on contingency amounts. A predetermined method is the common way to calculate contingency amounts in transportation projects. In this study, parametric modeling has been used to analyze the relationship between predetermined contingency amounts in transportation projects and perceived risk rating values, in order to understand how expert judgments regarding risk ratings can be used in the determination of contingency amounts.



Building a system dynamics simulation model in support of ERP project implementation

An Enterprise Resource Planning (ERP) system is classified as a cross-functional, enterprise-wide information system that integrates all departments and automates business processes across a company. This integrated approach can bring a tremendous payback if companies have planned well for implementation. Previous studies show that the challenges in ERP project implementation can affect most major departments and tend to create changes in many business processes. The problems related to ERP project implementation have become increasingly multifaceted, uncertain, and interrelated. This research develops a theoretically sound and comprehensive system model for the simultaneous analysis of ERP implementation processes. The study examines the critical factors in ERP implementation challenges in order to identify the key components causing failures. These key components serve as foundational information for building a conceptual system model that analyzes the ERP project implementation system components and their cause-effect relationships. In addition, the study constructs a dynamic, graph-based, operational System Dynamics model that can simulate the process of ERP system implementation to explore different implementation system behaviors under different policy settings. The System Dynamics simulation model can demonstrate the impact of ERP implementation issues and serve as an information system implementation laboratory for policy formulation and testing by evaluating the performance and behavior of model variables. The results of the study contribute to the understanding of the changes brought about by an ERP system implementation and can guide ERP decision makers and administrators for planning and strategic purposes.
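The stock-and-flow style of a System Dynamics model can be illustrated with a minimal rework loop. The structure and rates below are hypothetical, chosen only to show the mechanism, and are not the study's model: implementation work flows to "done", but a fraction of completed work is defective and, once discovered, flows back into the work stock.

```python
def simulate_erp_rollout(months=48, completion_rate=0.10,
                         defect_fraction=0.25, discovery_rate=0.30):
    """Minimal stock-and-flow rework loop in the System Dynamics style.
    Work flows from 'work_to_do' to 'done', but a fraction of completed
    work is defective; defects sit undiscovered until found, then flow
    back into the work stock.  Structure and rates are illustrative
    only, not the study's model."""
    work_to_do, undiscovered_rework, done = 100.0, 0.0, 0.0
    trajectory = []
    for _ in range(months):
        completed = completion_rate * work_to_do
        discovered = discovery_rate * undiscovered_rework
        work_to_do += discovered - completed          # rework returns
        undiscovered_rework += defect_fraction * completed - discovered
        done += (1.0 - defect_fraction) * completed   # good work accrues
        trajectory.append(done)
    return trajectory
```

Changing a single rate, such as the defect fraction, and re-running the simulation is the kind of policy experiment the study's implementation laboratory is meant to support, on a far richer model.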


