Reducing Adoption Barriers of Agile Development Processes in Large and Geographically Distributed Organizations

Agile software development processes have received much attention from the software development industry in the last decade. The goal of agile processes is to emphasize people as the primary contributors to a project and to reduce the administrative overhead of producing working code for the project's stakeholders. This paper explores some of the explicit and implied constraints of agile software development processes. It focuses on several common practices of agile processes, particularly those that might limit their adoption by large and geographically distributed organizations, and it makes recommendations to reduce the barriers to adoption by these types of organizations. It attempts to answer questions such as: Is it possible for a large organization with many established business and development processes to incrementally adopt an agile process? Is it possible to adapt agile development processes to work for individuals who are physically isolated, such as work-at-home software developers? Is it possible to adapt agile development processes to work for a large team, divided into many sub-teams that are geographically distributed and possibly working in different time zones?

Extreme Programming is probably the most widely recognized agile software development process today. It was introduced in the late 1990s by Kent Beck and eventually published as a book (Beck, 2005). Beck's approach documented the values, principles, and practices necessary to deliver lower-defect, working software with less formal process and more focus on the skills of the people and community that produce it. Extreme Programming is targeted at small, collocated teams of about twelve people. Other proponents of agile software development processes recognized the increasing interest in their approaches by the software industry and followed with the Manifesto for Agile Software Development. The contributors to the Manifesto were the creators of many different agile, iterative, and incremental software development processes, and their goal was to unify the principles they shared in common. The work was authored by "[…] representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others […]" (Manifesto, 2001).

Beck and Andres (2005) present the primary practices of Extreme Programming in their book. Two practices stand out as limitations to scaling Extreme Programming to teams in multiple locations, or even to work-at-home employees: Sit Together and Pair Programming. Sit Together is a practice that encourages the team to work in a unified area, such as a large, open room that promotes easy communication. Pair Programming is a technique where two developers sit together at a single workstation and take turns designing and writing code. As one developer writes code, the other observes, asks questions, and offers suggestions as the current piece of work progresses. The goal of these two practices is to lower the defect rate through the constantly available communication and collaboration of developers sharing the same physical space. Beck and Andres (2005) also discuss the importance of team size in a project that uses Extreme Programming. They recommend a team size of about twelve people. The reason for this size has as much to do with coordination of development activities as it does with the psychological needs of being part of a team.
The larger a team grows, the less personal the connections between team members become. Faces are more difficult to remember, and communication among all members tends to become infrequent. These challenges are amplified for work-at-home software developers, who may only be in the physical presence of other team members a few times a year at specific events such as all-hands meetings. Active and regular communication is a requirement of agile software development. Ramesh (2006) describes the perceived advantages of teams distributed across time zones and continuous development: as one team ends its day and goes to bed, another is coming to work to pick up where the last left off. In practice, however, there is a communication disconnect between geographically distributed teams in this situation, and the teams are forced into a mode of asynchronous communication, potentially slowing down progress. This problem relates to two principles of the Manifesto for Agile Software Development (2001) that present a challenge to geographically distributed teams. The first is "Business people and developers must work together daily throughout the project." The second is "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation." Both principles are concerned with communication among developers, management, stakeholders, and users of the project.

Lindvall (2004) points out that incremental adoption of agile practices into an existing large organization can be challenging. An established organization typically expects its existing business and development processes to be followed regardless of project size and the process used. Educating those outside of the agile pilot project and resetting their expectations about following the established processes can create tension. A specific example is that development in agile-driven projects usually starts with a subset of the requirements set. This is a quality of agile development processes and reflects working on what is understood to be the goal of the project today. As working builds are created and delivered to stakeholders, the requirements set can be appended and refined until there is agreement that a reasonable goal has been established. Murthi (2002) documents a case of using Extreme Programming on a 50-person development project and cites the ease of starting early with a partial requirements set, then using the subsequent working results for two goals: showing stakeholders working software to build confidence in the development team, and giving stakeholders something to help refine their own understanding of their needs.

Incrementally developed requirements and the constantly refined budgeting and financial burn rate that are typical of agile development process management can present a unique challenge to a project that is completely or partially outsourced. Cusumano (2008) details the need for an iterative contract between the customer and the outsourcing provider. A fixed-price contract can be nearly impossible to design when agile development processes are in use by either party. Boehm (2005) also discusses the problem of using agile processes within the realm of contracting to the private and public sector. Problems can be encountered when measuring progress toward a contract's completion. For a consumer following an agile process, the requirements can remain a moving target well into the project's life cycle.
For a provider following agile development processes, it can become nearly impossible to provide final system architectural details early in the life cycle to the consumer for review. Boehm also points out the difficulties providers utilizing agile processes must overcome when seeking certification against CMMI and ISO-related international standards.

The barriers to adoption of agile development processes by a large or geographically distributed organization can be reduced by a combination of two approaches. The first is the application of tooling and technologies that support the practices of agile software development and scale to an organization's needs. The second is to continuously refine, over time, the practices that conflict with the organization's existing mode of operation. An example of practice refinement through technology adoption involves the Sit Together and Pair Programming practices from Extreme Programming, along with daily collaboration and face-to-face interaction among customers and developers as recommended by the principles of the Manifesto for Agile Software Development. These practices and principles are the most obvious barrier to adoption of pure agile development processes within a large or geographically distributed team.

The essence of the Sit Together practice is to provide a means for team members to communicate at will. Technologies that help support this practice in distributed environments include instant messaging systems, which provide a mechanism for short question-and-answer sessions between two or more participants in the project at once. Longer conversations among the team can be supported through VoIP solutions, reservation-less teleconference services, Skype, and XMPP-based messaging solutions that allow several team members at a time to make impromptu contact and discuss project issues. Speakerphones allow collocated sub-teams to participate in conversations about the project across geographic locations. In all the examples cited, full-duplex voice communication is essential for effective discussion among several team members at once. This type of communication allows the audio channels to work in both directions simultaneously, which means someone can interrupt the current speaker just as they could in person. Many inexpensive speakerphones are half duplex: these devices block the receiving audio channel while the local person is speaking, so someone wanting to stop the speaker to clarify a point cannot do so until the speaker pauses. Background noise, such as a loud computer fan or air conditioner, can cause similar problems for half-duplex communication systems.

Pair Programming can be performed through a combination of voice communication and desktop screen-sharing technology. Individuals working within the same network or virtual private network can use solutions like Microsoft NetMeeting or Virtual Network Computing (VNC) to share, view, and work within each other's development environments and perform pair programming over any distance. Web-based and wide-area-network tooling to support the incremental development and tracking of plans, requirements, and defects is available from several vendors, such as IBM and Rally Software Development Corporation. Gamma (2005) presented The Eclipse Way at EclipseCon several years ago.
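Before looking more closely at the Eclipse example, it is worth noting that all of these synchronous tools assume the distributed team has at least some overlapping working hours. The sketch below is purely illustrative and not drawn from any of the cited papers; it uses Python's standard zoneinfo module (Python 3.9+), and the locations and working hours are hypothetical.

```python
# Illustrative only: check whether a distributed team shares a real-time
# "virtual sit-together" window by intersecting each member's local working
# hours in UTC. Locations and hours below are hypothetical examples.
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

team = {
    "Denver":    ("America/Denver", time(9), time(17)),
    "Bangalore": ("Asia/Kolkata",   time(9), time(17)),
    "Warsaw":    ("Europe/Warsaw",  time(9), time(17)),
}

def utc_window(tz_name, start, end, day):
    """Convert one member's local working hours on `day` to a UTC interval."""
    tz = ZoneInfo(tz_name)
    begin = datetime.combine(day, start, tzinfo=tz).astimezone(timezone.utc)
    finish = datetime.combine(day, end, tzinfo=tz).astimezone(timezone.utc)
    return begin, finish

def shared_window(members, day):
    """Intersect all members' UTC windows; return None if there is no overlap."""
    windows = [utc_window(tz, s, e, day) for tz, s, e in members.values()]
    latest_start = max(w[0] for w in windows)
    earliest_end = min(w[1] for w in windows)
    return (latest_start, earliest_end) if latest_start < earliest_end else None

if __name__ == "__main__":
    overlap = shared_window(team, datetime(2008, 10, 13).date())
    if overlap:
        print("Shared window (UTC):", overlap[0].time(), "to", overlap[1].time())
    else:
        print("No common working hours; consider rotating meeting times.")
```

When no common window exists, the team is pushed toward the asynchronous mode of communication described earlier, or toward rotating synchronized meetings, an approach discussed below.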
The motivation behind Gamma's presentation was the many requests he received from users of the Eclipse environment to understand how a team distributed throughout the world could continue to release as planned and with a low defect rate. The Eclipse Foundation has a centralized data center in Canada for several of its activities, including continuous integration and automated testing of nightly builds. The build and testing process of the Eclipse environment is fully automated for each platform it supports. Additionally, end users are encouraged to install and use nightly builds after they pass the automated suite of tests.

Other barriers to adopting agile development processes cannot be solved with tooling alone. Ramesh (2006) found that the solution to working across multiple time zones is to synchronize some meetings and rotate the meeting time, so that each group takes its turn at an extraordinarily early or late meeting and everyone on the project can communicate live. Solving the opposing forces in contract negotiation requires creativity. Boehm (2005) recommends disbursing "[…] payments upon delivery of working running software or demonstration of progress rather than completion of artifacts or reviews." According to Boehm, there is not yet a well-defined solution that reconciles agility in process with ISO or CMMI-related certification. Lindvall (2004) concluded that adoption of agile development processes by large organizations is best accomplished through hybrid integration with the existing processes, particularly the established quality processes. With this approach, the existing quality processes can be used to measure the effectiveness of the agile software development process under pilot.

This paper described several of the qualities shared by different agile software development processes. It focused on those aspects that potentially limit agile process adoption by large and geographically distributed organizations. The recommendations made in this paper include technology solutions to improve collaboration and communication among distributed developers and consumers of the project. The technology considerations also help alleviate management concerns such as incremental planning and budgeting of agile projects. Recommendations were also provided for large organizations with established processes, describing approaches that pilot projects utilizing agile development can take to leverage those processes and demonstrate their value. It is possible to adopt agile software development processes in large and geographically distributed organizations. Adoption requires thoughtful and careful application, integration, and refinement of the practices at the core of these agile processes for a successful outcome.

References

Beck, K., Andres, C. (2005). Extreme Programming Explained. Second Edition. Pearson Education, Inc.
Boehm, B., Turner, R. (2005). Management Challenges to Implementing Agile Processes in Traditional Development Organizations. IEEE Software. 0740-7459/05.
Cusumano, M.A. (2008). Managing Software Development in Globally Distributed Teams. Communications of the ACM. February 2008, Vol. 51, No. 2.
Gamma, E., Wiegand, J. (2005). Presentation: The Eclipse Way, Processes That Adapt. EclipseCon 2005. International Business Machines.
Leffingwell, D. (2007). Scaling Software Agility: Best Practices for Large Enterprises. Pearson Education, Inc.
Lindvall, M., Muthig, D., Dagnino, A., Wallin, C., Stupperich, M., Kiefer, D., May, J., Kahkonen, T. (2004). IEEE Computer. 0018-9162/04.
Manifesto. (2001). Manifesto for Agile Software Development. Retrieved 2 October 2008 from http://agilemanifesto.org/.
Murthi, S. (2002). Scaling Agile Methods - Can Extreme Programming Work for Large Projects? www.newarchitectmag.com. October 2002.
Ramesh, B., Cao, L., Mohan, K., Xu, P. (2006). Can Distributed Software Development Be Agile? Communications of the ACM. October 2006, Vol. 49, No. 10.

October 12, 2008 · 10 min · 2104 words · Jim Thario

Applicability of DoDAF in Documenting Business Enterprise Architectures

As of 2005, the Department of Defense employed over 3 million uniformed and civilian people and had a combined $400 billion fiscal budget (Coffee, 2005). The war-fighting arm of the government has had enormous buying power since the Cold War, and the complexity of technologies used in military situations continues to increase. To make optimal use of its dollars and to reduce rework and delays in the delivery of complex solutions, the DoD needed to standardize the way providers described and documented their systems. The DoD also needed to promote and enhance the reuse of existing, proven architectures for new solutions. The Department of Defense Architecture Framework (DoDAF) is used to document architectures of systems used within the branches of the Department of Defense. "The DoDAF provides the guidance and rules for developing, representing, and understanding architectures based on a common denominator across DoD, Joint, and multinational boundaries." (DODAF1, 2007). DoDAF has roots in other enterprise architecture frameworks such as the Zachman Framework for Information Systems Architecture (Zachman, 1987) and Scott Bernard's EA-cubed framework (Bernard, 2005). Zachman's and Bernard's architecture frameworks have been widely adopted by business organizations to document IT architectures and corporate information enterprises.

Private sector businesses supplying solutions to the DoD must use the DoDAF to document the architectures of those systems. These suppliers may not be applying concepts of enterprise architecture to their own business, or they may be applying a different framework internally with an established history of use in the business IT sector. The rigor defined in DoDAF version 1.5 is intended for documenting war-fighting and business architectures within the Department of Defense. The comprehensive nature of DoDAF, including the required views, strategic guidance, and data exchange format, also makes it applicable to business environments. For those organizations in the private sector that must use the DoDAF to document their deliverables to the DoD, it makes sense to approach adoption of DoDAF in a holistic manner and extend its use into their own organization if they intend to adopt any enterprise architecture framework for this purpose.

The Department of Defense Architecture Framework is the successor to C4ISR. "The Command, Control, Communications, Computers, and Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture Framework v1.0 was created in response to the passage of the Clinger-Cohen Act and addressed in the 1995 Deputy Secretary of Defense directive that a DoD-wide effort be undertaken to define and develop a better means and process for ensuring that C4ISR capabilities were interoperable and met the needs of the war fighter." (DODAF1, 2007). In October 2003, DoDAF Version 1.0 was released and replaced the C4ISR framework. Version 1.5 of DoDAF was released in April of 2007. DoDAF solves several problems with the acquisition and ongoing operations of branches within the Department of Defense. Primarily, it serves to reduce the amount of misinterpretation in both directions of communication between system suppliers outside of the DoD and consumers within the DoD. The DoDAF defines a common language, in the form of architectural views, for evaluating the same solution from multiple vendors.
The framework is regularly refined through committee and supports the notion of top-down architecture that is driven from a conceptual viewpoint down to the technical implementation. Version 1.5 of DoDAF includes transitional improvements to support the DoD's Net-Centric vision. "[Net-Centric Warfare] focuses on generating combat power from the effective linking or networking of the war fighting enterprise, and making essential information available to authenticated, authorized users when and where they need it." (DODAF1, 2007). The Net-Centric Warfare initiative defines simple guidance within DoDAF 1.5 to support the vision of the initiative and guide the qualities of the architecture under proposal. The guidance provided within DoDAF includes a shift toward a service-oriented architecture, which we often read about in relation to the business sector. It also encourages architectures to accommodate unexpected but authorized users of the system. This is related to scaling the solution and loose coupling of the system components used in communication of data. Finally, the Net-Centric guidance encourages the use of open standards and protocols such as established vocabularies, taxonomies of data, and data interchange standards. These capabilities help promote integrating systems into larger, more information-intensive solutions. As this paper is written, Version 2.0 of DoDAF is being developed, with no timeline yet defined for its release.

DoDAF defines a layered set of views of a system architecture. The views progress from conceptual to technical. Additionally, a standards view containing process, technical, and quality requirements constrains the system being described. The topmost level is the All Views. This level contains the AV-1 product description and the AV-2 integrated dictionary. AV-1 can be thought of as the executive summary of the system's architecture: it is the strategic plan that defines the problem space and vision for the solution. The AV-2 is the project glossary. It is refined throughout the life of the system as terminology is enhanced or expanded. The next level is the Operational View. This level can be thought of as the business and data layer of the DoDAF framework. The artifacts captured within this view include process descriptions, data models, state transition diagrams of significant elements, and inter-component dependencies. Data interchange requirements and capabilities are defined within this view. Example artifacts from the Operational View include the High-Level Operational Concept Graphic (OV-1), Operational Node Connectivity Description (OV-2), and Operational Activity Model (OV-5). The third level is the Systems and Services View. This view describes technical communications and data interchange capabilities. This level of the architecture is where network services (SOA) are documented. Physical technical aspects of the system are described at this level as well, including those components of the system that have a geographical requirement. Some artifacts from the Systems and Services View include the Systems/Services Interface Description (SV-1), Systems/Services Communications Description (SV-2), Systems/Services Data Exchange Matrix (SV-6), and Physical Schema (SV-11).

DoDAF shares many of the beneficial qualities of other IT and enterprise architecture frameworks. A unique strength of DoDAF is the requirement of a glossary as a top-level artifact in describing the architecture of a system (RATL1, 2006).
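The view levels and example artifacts named above can be summarized informally as a simple mapping. The structure below is only an aid to discussion; it is not the official DoDAF data interchange format, and the labels are taken from the descriptions in this paper.

```python
# Informal summary of the DoDAF view levels and example artifacts discussed
# above. This is an illustration for discussion, not an official DoDAF format.
dodaf_views = {
    "All Views (AV)": {
        "AV-1": "Product description (executive summary and strategic plan)",
        "AV-2": "Integrated dictionary (project glossary)",
    },
    "Operational View (OV)": {
        "OV-1": "High-Level Operational Concept Graphic",
        "OV-2": "Operational Node Connectivity Description",
        "OV-5": "Operational Activity Model",
    },
    "Systems and Services View (SV)": {
        "SV-1": "Systems/Services Interface Description",
        "SV-2": "Systems/Services Communications Description",
        "SV-6": "Systems/Services Data Exchange Matrix",
        "SV-11": "Physical Schema",
    },
}

for level, artifacts in dodaf_views.items():
    print(level)
    for code, title in artifacts.items():
        print(f"  {code}: {title}")
```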
Almost in tandem with trends in the business IT environment toward service-oriented architectures, DoDAF 1.5 has shifted more focus to a data-centric approach and network presence through the Net-Centric Warfare initiative. This shift is motivated by the need to share operational information with internal and external participants who are actors in the system. It is also motivated by the desire to assemble and reuse larger systems-level components to build more complex war-fighting solutions. As with other frameworks, DoDAF's primary strength is the prescription of a common set of views to compare capabilities of similar systems. The views enable objective comparisons between two different systems that intend to provide the same solution. They enable faster understanding and integration of systems delivered from provider to consumer. They also allow for cataloging and assembling potentially compatible systems into new solutions perhaps unforeseen by the original provider. The DoDAF views can thus reduce deployment costs and lower the possibility of reinventing the same system due to lack of awareness about existing solutions. A final unique strength of DoDAF is that it defines a format for data exchange between the repositories and tools used in manipulating the architectural artifacts. The (DODAF2, 2007) specification defines, with each view, the data interchange requirements and the format to be used when exporting the data into the common format. This inclusion in the framework supports the other strengths, most importantly automation of discovery and reuse of existing architectures.

Some weaknesses of DoDAF appear when it is applied outside of its intended domain. Foremost, DoDAF was not designed as a holistic, all-encompassing enterprise architecture framework. DoDAF does not capture the business and technical architecture of the entire Department of Defense. Instead, it captures the architectures of systems (process and technical) that support the operations and strategy of the DoD. This means there may be yet another level of enterprise view that relates the many DoDAF-documented systems within the DoD into a unified view of participating components. This is not a permanent limitation of the DoDAF itself, but a choice of initial direction and maximum impact in the early stages of its maturity. The focus of DoDAF today is to document architectures of complex systems that participate in the overall wartime and business operations of the Department of Defense. A final weakness of DoDAF is the lack of business-financial artifacts such as a business plan, investment plan, and return-on-investment plan.

It is the author's observation that the learning curve for Zachman is potentially smaller than for DoDAF. Zachman's basic IS architecture framework method is captured in a single paper of less than 30 pages, while the DoDAF specification spans several volumes and exceeds 300 pages. Zachman's concept of a two-dimensional grid with cells for specific subjects of documentation and models is easier as an introduction to enterprise architecture. It has historically been developed and applied in business information technology situations. Zachman's experience in sales and marketing at IBM motivated him to develop a standardized IS documentation method. There are more commonalities than differences in the artifacts used in the DoDAF and Zachman methods.
Zachman does not explicitly recommend a Concept of Operations Scenario, which is an abstract flow of events, a storyboard, or an artistic rendering of the problem space and desired outcome. This does not mean a CONOPS (Bernard, 2005) view could not be developed for a Zachman documentation effort. Business process modeling, use-case modeling, and state transition modeling are all part of the DoDAF, Zachman, and Bernard EA-cubed frameworks (Bernard, 2005).

The EA-cubed framework developed by Scott A. Bernard was heavily influenced by Zachman's Framework for Information Systems Architecture. Bernard scaled the grid idea to support enterprise architecture for multiple lines of business with more detail than was possible with a two-dimensional grid. The EA-cubed framework uses a grid similar to Zachman's with an additional dimension of depth. The extra dimension allows each line of business within the enterprise to have its own two-dimensional grid to document its business and IT architecture. Cross-cutting through the cube allows architects to identify components potentially common to all lines of business - a way to optimize cost and reduce redundant business processes and IT systems. The EA-cubed framework includes business-oriented artifacts for the business plan, investment case, ROI, and product impact of architecture development. As mentioned above, DoDAF does not include many business-specific artifacts, specifically those dealing with financials. Both Zachman and EA-cubed have more layers and recommended artifacts than DoDAF. EA-cubed has specific artifacts for the physical network level and security crosscutting components, as an example. The Systems and Services View of DoDAF recommends a Physical Schema artifact to capture this information if needed. In the case of DoDAF, vendors may not know in advance the physical communication medium deployed with their system, such as satellite, microwave, or wired networks. In these cases, the Net-Centric Warfare guidance within DoDAF encourages the support of open protocols and data representation standards.

DoDAF is not a good starting point for beginners to enterprise architecture concepts. The bulk of the volumes of the specification can be intimidating to digest and understand without clear examples and case studies to reference. Searching for material on Zachman on the Internet produces volumes of information, case studies, extensions, and tutorials on the topic. DoDAF was not designed as a business enterprise architecture framework. The forces driving its development include standardizing documentation of systems proposed or acquired through vendors, enabling reuse of existing, proven architectures, and reducing the time to deploy systems-of-systems built from cataloged systems already available. Many of the documentation artifacts that Zachman and EA-cubed include in their frameworks are also prescribed in DoDAF, with different formal names but essentially the same semantics. The framework recommends more conceptual-level artifacts than Zachman. This could be attributed to the number of stakeholders involved in deciding whether a solution meets the need. DoDAF includes a requirement for a glossary and provides architectural guidance with each view based on current DoD strategy. Much of the guidance provided in DoDAF is directly applicable to the business world. The Net-Centric Warfare strategy, which is discussed within the guidance, is similar to the service-oriented architecture shift happening now in the private sector.
Lack of business-strategic artifacts such as a business plan, investment plan, and ROI estimates would force an organization to supplement the prescribed DoDAF artifacts with several of its own or with artifacts from another framework. The Department of Defense Architecture Framework was designed to assist in the acquisition of systems from suppliers. In terms of its refinement for use with large enterprises, DoDAF today resembles Zachman at a similar point in its evolution. DoDAF could potentially benefit from an approach similar to Bernard's, in which the flat tabular view is scaled up with depth. An extension of DoDAF with a third dimension could be used to document the architectures of multiple lines of business within an enterprise with more detail than is possible with a single artifact set. With minor enhancements, the DoDAF is a viable candidate for business enterprise architecture efforts.

References

Armour, F.J., Kaisler, S.H., Liu, S.Y. (1999). A Big-Picture Look at Enterprise Architectures. IT Professional, vol. 1, no. 1, pp. 35-42. Retrieved from http://doi.ieeecomputersociety.org/10.1109/6294.774792.
Bernard, S.A. (2005). An Introduction to Enterprise Architecture. (2nd ed.) Bloomington, IN: Author House.
Coffee, P. (2005). Mastering DODAF will reap dividends. eWeek, 22(1), 38-39. Retrieved August 3, 2008, from Academic Search Premier database.
Dizard, W. P. (2007). Taking a cue from Britain: Pentagon's tweaked data architecture adds views covering acquisition, strategy. Government Computer News, 26, 11, p. 14(1). Retrieved August 2, 2008, from Academic OneFile via Gale: http://find.galegroup.com.dml.regis.edu/itx/start.do?prodId=AONE.
DoDAF1. (2007). DoD Architecture Framework Version 1.5. Volume I: Definitions and Guidelines. Retrieved 31 July 2008 from http://www.defenselink.mil/cio-nii/docs/DoDAF_Volume_I.pdf.
DoDAF2. (2007). DoD Architecture Framework Version 1.5. Volume II: Product Descriptions. Retrieved 31 July 2008 from http://www.defenselink.mil/cio-nii/docs/DoDAF_Volume_II.pdf.
IBM. (2006). An IBM Rational Approach to the Department of Defense Architecture Framework (DoDAF). Retrieved 2 August 2008 from ftp://ftp.software.ibm.com/software/rational/web/whitepapers/G507-1903-00_v5_LoRes.pdf.
Leist, S., Zellner, G. (2006). Evaluation of current architecture frameworks. In Proceedings of the 2006 ACM Symposium on Applied Computing (Dijon, France, April 23-27, 2006). SAC '06. ACM, New York, NY, 1546-1553. DOI: http://doi.acm.org/10.1145/1141277.1141635.
RATL1. (2006). An IBM Rational approach to the Department of Defense Architecture Framework (DoDAF) - Part 1: Operational view. Retrieved 1 August 2008 from http://www.ibm.com/developerworks/rational/library/mar06/widney/.
RATL2. (2006). An IBM Rational approach to the Department of Defense Architecture Framework (DoDAF) - Part 2: Systems View. Retrieved 1 August 2008 from http://www.ibm.com/developerworks/rational/library/apr06/widney/.
Zachman, J.A. (1987). A framework for information systems architecture. IBM Systems Journal, Vol. 26, No. 3. Retrieved July 2008 from http://www.research.ibm.com/journal/sj/263/ibmsj2603E.pdf.

August 9, 2008 · 12 min · 2423 words · Jim Thario

Issues of Data Privacy in Overseas Outsourcing Arrangements

Outsourcing is a business concept that has received much attention in the new millennium. According to Dictionary.com (2008), the term outsourcing means to obtain goods or services from an outside source. The practice of outsourcing a portion of a business' work or material needs to an outside provider or subcontractor has been occurring for a long time. The information technology industry and outsourcing have been the focus of editorials and commentaries regarding the movement of technical jobs from the United States to overseas providers. The globalization of business through expanding voice and data communication has forged new international partnerships and has increased the amount of outsourcing happening today. Businesses in the U.S. and Europe spend billions in outsourcing agreements with overseas service providers. According to Sharma (2008), spending on outsourcing in the European Union is almost 150 billion (GBP) in 2008. The overriding goal in outsourcing work to a local or overseas provider is to reduce the operating cost of a particular part of the business. Many countries, such as India and China, have lower wages, and businesses in the U.S. and Europe can save money by hiring an overseas contractor to perform a portion of their work. Outsourcing is gaining popularity in the information age by assisting information technology companies in performing some of their business tasks. This can include data processing, and call routing and handling.

With the growth of the technology industry also comes the problem of maintaining and protecting private information about individuals, such as medical history or financial data. Many jurisdictions, such as the United States and the European Union, have mandatory personal data privacy laws. These laws do not automatically translate to the national laws of the country where the outsourcing service provider is located, or potentially the service provider's subcontractors. This paper discusses the issues of outsourcing work to an overseas provider when personal data is involved in the outsourced tasks. It presents several solutions to help manage the risk of data breaches caused by disparate laws in countries currently popular for information technology outsourcing.

The most common types of work outsourced to overseas service providers include bulk data processing, call center handling, and paralegal outsourcing. The last example can include work such as legal research, contract and brief writing, and transcription. Outsourcing firms typically do not have a U.S. law license, which limits the extent of their involvement in legal work. The United States is expanding national information protection laws. Two of the most prominent laws are the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLB). The U.S. Congress enacted HIPAA in 1996. It relates to the protection of health information that can be used to identify someone or disclose a medical condition. "The data privacy and security requirements of HIPAA apply to health plans, health care providers, and health care clearinghouses. Among its many requirements, HIPAA mandates the creation and distribution of privacy policies that explain how all individually identifiable health information is collected, used, and shared." (Klosek, 2005). The U.S. Congress enacted the GLB Act in 1999.
The Financial Privacy Rule of the Act relates to documenting and auditing the processes an organization uses to assure the privacy of information that can identify persons, as in HIPAA, as well as private data about their finances. Both HIPAA and GLB require the organization to publish its information privacy policy and notify the consumer each time it changes. "[…] The GLB Act focuses upon privacy of the non-public information of individuals who are customers of financial institutions." (Klosek, 2005). The U.S. is not considered to be at the forefront of privacy protection law, and many countries have no privacy protection laws for their citizens at all. The European Union is one of the strictest regions with respect to data privacy and outsourcing work that handles private information. The privacy directive for the entire EU was passed in 1998. It specifies a minimum standard for all member countries to follow in handling private personal data and transferring it between companies inside and outside of the European Union. "The EU privacy directive 1998 aims to protect the privacy of citizens when their personal data is being processed. […] One of the provisions of this directive […] addresses the transfer of personal data to any country outside of the EU." (Balaji, 2005). In most cases, European companies transferring personal data to an overseas outsourcing provider need to assure that the contractor follows the EU rules for handling and processing the data. The EU is also in the process of pre-certifying certain countries for properly handling personal data according to the directive's standards.

Businesses in the Philippines have been providing outsourcing solutions for information technology businesses for over a decade. Estavillo (2006) states that the government has increased its focus on keeping the outsourcing landscape fertile in the Philippines. It has created an optional certification program for local businesses based on the government's own guidelines for protection of information used in data processing and communications systems. The government hopes to continue to expand its reach in enforcing data protection by penalizing unlawful activities such as data breaches and unauthorized access to data-intensive systems. Recently, ISO started an international certification effort called ISO 27001. The purpose of the certification is to demonstrate that a company documents and follows information security practices and controls. Ely (2008) points out that an ISO 27001 audit is performed against processes of the outsourcing provider's own choosing, so the client must make sure the outsourcing firm follows the industry's best practices and the compliance guidelines of the home country, and that it deeply understands them. Often an overseas company will adopt HIPAA or Payment Card Industry (PCI) standards for handling personal data and be certified against that standard for ISO 27001. Any size of company can be certified under this standard, and there are no international restrictions regarding who may be certified.

Outsourcing work in the information technology industry almost always includes the access or transfer of data between the client organization and the outsourcing provider. Voice conversations and movement of data over an international connection can be subject to interception and monitoring by U.S. and foreign surveillance programs.
Ramstack (2008) finds that "[…] paralegal firms in India are doing a booming business handling the routine legal work of American law firms, such as drafting contracts, writing patents, indexing documents or researching laws." A lawsuit filed in May of 2008 requests a hold on new legal outsourcing work until outsourcing companies can provide assurances that data transferred overseas can be protected against interception by U.S. and foreign intelligence collection agencies. The fear is that private legal information about citizens could be transferred from intelligence agencies to law enforcement agencies in the same or allied countries. The mix of international standards and laws offers little hope of legal action across borders when personal data is misused or illegally accessed. The flood of competition among overseas outsourcing companies does offer some hope, because reputations are extremely important for winning sensitive outsourcing agreements. Once an outsourcing provider has been tainted with a bad reference for bulk data processing of foreign citizens' medical information, for example, its financial upside will be limited until its reputation can be rebuilt.

All of the focus should not be on the outsourcing provider alone. It is important for an organization to define and understand its own processes involving data privacy internally before beginning an outsourcing agreement. People within the business who work around and regularly handle private data should be included early in the process of defining the requirements for outsourcing information-related work. These contributors can include the IT and business controls staff members and staff supporting the efforts of the CIO's office. A cross-company team should define the conditions needed to work with private data regardless of the outsourcing group, local or overseas. They can also help define the constraints placed on the outsourcing service provider. "Ensure that the contractual arrangement covers security and privacy obligations. Include language in the contract to articulate your expectations and stringent penalties for violations. Review your provider's organizational policies and awareness training for its employees." (Balaji, 2004).

Large outsourcing providers may choose to outsource their work to smaller companies in their local country. It is important to be able to control the primary outsourcing company's ability to subcontract work to other providers, or to require that the data handling standards in the contract are transitive to all subcontractors who may become involved, at the risk of the original outsourcing provider. In this case it is also important to have the outsourcing service provider identify in advance all or most of the subcontractors involved, in order to obtain references. It is equally important to define in the outsourcing contract what happens when the relationship terminates. The transition plan for the end of the outsourcing agreement must include a process for regaining control of data transferred from the customer organization to the outsourcing provider. There should be a way to return the data to the customer organization or to assure its destruction on the outsourcing provider's information systems. Although it has been a part of business for as long as there has been business, outsourcing in the information age brings with it new risks as well as opportunities for business cost optimization and scaling.
Risks in outsourcing information services that involve private data can be partially mitigated through a detailed contract and outsourcing vendor transparency. The best way to ensure compliance with contractual terms is to be sure the customer organization understands its own data privacy standards and treats all outsourcing situations with the same requirements followed internally. The customer organization should perform or obtain third-party audit reports of the outsourcing provider's processes and systems for ongoing reassurance of proper handling of private data.

References

Balaji, S. (2004). Plan for data protection rules when moving IT work offshore. Computer Weekly. 30 November 2004, p. 26.
Ely, A. (2008). Show Up Data Sent Offshore. InformationWeek, Tech Tracker. 2 June 2008, p. 37.
Estavillo, M. E., Alave, K. L. (2006). Trade department prods outsourcing services to improve data security. BusinessWorld. 9 August 2006, p. S1/1.
Klosek, J. (2005). Data privacy and security are a significant part of the outsourcing equation. Intellectual Property & Technology Law Journal. June 2005, 17.6, p. 15.
Outsourcing. (n.d.). Dictionary.com Unabridged. Retrieved June 23, 2008, from Dictionary.com website: http://dictionary.reference.com/browse/outsourcing.
Ramstack, T. (2008). Legal outsourcing suit spotlights surveillance fears. The Washington Times. 31 May 2008, p. 1, A01.
Sharma, A. (2008). Mind your own business. Accountancy Age. 14 February 2008, p. 18.

June 28, 2008 · 9 min · 1754 words · Jim Thario

Concepts and Value of the "4+1" View Model of Software Architecture

This essay describes the concepts and value of the "4+1" View Model of Software Architecture described by Philippe Kruchten in 1995. The purpose of the 4+1 view model is to provide a means to capture the specification of a software architecture in a model of diagrams, organized into views. Each view represents a different concern, and diagrams within each view use a diagramming notation suitable for that diagram's purpose. Each view answers questions related to the structure, packaging, concurrency, distribution, and behavior of the software system. The "+1" is a view of the scenarios and behavior of the software being described; this view drives development of the other views. The value the 4+1 view model brings to software architecture is that it is not specific to any class of software system. The principles behind the 4+1 view model can be applied to any scale of software system, from embedded software to web applications distributed across many collaborating servers. The software architecture of business IT systems can be represented using the 4+1 view model.

What is a model? "A model plays the analogous role in software development that blueprints and other plans (site maps, elevations, physical models) play in the building of a skyscraper." (OMG, 2005) Software can be specified using only textual requirements, or it can be shown as a model: collections of diagrams with textual notes describing specific details. Models provide a filter for humans dealing with a lot of information at one time. Models give us the big picture, just as a blueprint does. Diagrams within a model can be organized by subject, purpose, or locality within a system. For building construction, a single page in a roll of blueprints might describe the routing plan for plumbing or electrical conduits. A different page might detail the foundation. Likewise, a diagram within a model might show us the structure of the database, while a different diagram shows where each piece of the software runs on a network. The content of diagrams in models can be at any level of "zoom" needed to describe parts of the software. Simple data structures can be described in a diagram, as can complex scenarios carried out by several servers in synchronization. Kruchten's purpose in the 4+1 view model is to capture and document the software's architecture using diagrams organized in several views.

What is software architecture? "Software architecture is the principled study of the overall structure of software systems, especially the relationship among subsystems and components." (Shaw, 2001) I interpret the word "relationship" in this context to mean many possible kinds of relationships. One kind of relationship between subsystems is where one subsystem relies on the services of another subsystem. There can be a behavioral relationship among subsystems, where the protocol of messages between them must be documented. Another type of relationship among subsystems is collocation: how do they communicate? Can they communicate? What is the mechanism used to store transaction data, and are the interfaces and support code packaged within each subsystem to allow data storage to happen? These are all questions answered by information at the level of software architecture. "Software architecture is concerned with the high-level structures of a software system, the relationships among them, and their properties of interest.
These high-level structures represent the loci of computation, communication, and implementation." (Garlan, 2006) A driving force behind the 4+1 view model is that a single diagram cannot communicate information about all the different kinds of relationships within a software system. A diagram that showed all the different concerns of a software architecture simultaneously would be overwhelming. Each view in the 4+1 view model has a different concern or subject. Multiple diagrams can exist within each view, like files within a folder organized by subject. Modeling and diagramming tools are used to create and organize diagrams when applying the 4+1 view model. Many tools exist to build diagrams, including Microsoft Visio (VISIO, 2008), Enterprise Architect (EA, 2008), and Rational Software Architect (RSA, 2008). Kruchten uses the Booch notation in his paper to capture information in the diagrams for each view. Since Kruchten wrote his paper over ten years ago, the Booch notation has been refined and was contributed into the Unified Modeling Language specification from the Object Management Group.

The 4+1 views are the logical view, process view, development view, and physical view. The "+1" view contains the scenarios that represent the system's interaction with the outside world. The scenarios are requirements; they drive the development of the other views of the architecture. The logical view contains the decomposition of the system into functions, structures, classes, data, components, and layers. Kruchten points out that several different types of diagrams might be necessary within the logical view to represent code, data, or other types of decomposition of the requirements. The scenarios, or "+1" view, mainly influence the development of this view. The logical view is needed by the development and process views. The process view is concerned with the actual running processes in the deployed system. Processes are connected to each other through communication channels, such as remote procedure calls or socket connections. Elements within the logical view run on processes, so there is traceability from the process view back to the logical view. Some projects, like the development of a code editor, will not require a process view since there is only one process involved. The third view is the development view. The scenarios and the elements in the logical view drive the contents of the development view. The development view documents the relationships and packaging of the elements from the logical view into components, subsystems, and libraries. Diagrams within a development view might show which classes or functions are packaged into a single archive for installation. The diagrams within the development view should allow someone to trace back from a package of code to elements in the logical view. Dependencies among packages of code are documented in this view as well. The fourth view is the physical view, and it is created from the scenarios, the process view, and the development view. The physical view shows the allocation of packages of code, data, and processes to processing nodes, e.g. computers. The relationship between nodes is also shown in this view, usually in the form of physical networks or other physical data channels that allow processes on different nodes to communicate. The final "+1" view is the scenarios, which represent requirements for the behavior of the system. Kruchten's paper shows examples using object scenario and object interaction diagrams.
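To make the dependencies among the views just described a little more concrete, the following toy sketch, which is not from Kruchten's paper, models each view and the views that drive it.

```python
# A toy model (not from Kruchten's paper) of the 4+1 views and the
# traceability between them, as described in the text above.
from dataclasses import dataclass, field

@dataclass
class View:
    name: str
    concern: str
    driven_by: list = field(default_factory=list)  # views whose content feeds this one

scenarios   = View("Scenarios (+1)", "requirements and externally visible behavior")
logical     = View("Logical", "decomposition into classes, data, and components", [scenarios])
process     = View("Process", "running processes and their communication channels", [scenarios, logical])
development = View("Development", "packaging of logical elements into components and libraries", [scenarios, logical])
physical    = View("Physical", "allocation of packages and processes to nodes", [scenarios, process, development])

for view in (logical, process, development, physical):
    feeds = ", ".join(v.name for v in view.driven_by)
    print(f"{view.name} view is driven by: {feeds}")
```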
One could also use classic flow charts, use cases, or UML activity diagrams to capture the scenarios of the software system. At a minimum, the scenarios should document how the system behaves and interacts with the outside world, either with people or with other systems. The information captured within a "4+1" View Model of Software Architecture is common to all software systems and can be applied as a general approach to document and communicate about information systems. Business information systems are very often database-centric and use fat-client or web-based interfaces to enter, search, update, and remove data. A business system can enforce a workflow of approvals before it allows a transaction to complete. Data warehousing solutions exist to archive, profile, and find patterns in data. Many businesses are deploying self-service web sites so customers can interact with the business without being constrained to specific times a transaction can take place. Each of these qualities of business systems can be captured with one or more views of the "4+1" model. A logical view can be used to document the database schema, code modules, and even individual pages of content within a web solution. The development view for a J2EE solution would document how HTML files, JSP files, and Java code are packaged into archive files before deployment to the application server. The process view for a client-server database system would show code modules assigned to the user's application process. The database schema and stored procedures would be assigned to the relational database server processes. Finally, a physical view of a web-based database application would show separate servers for the web and database tiers. The web server process from the process view would be assigned to the web server node, as would the packages of HTML, CGI, and other code from the development view. The physical view would show similar traceability for the database server node.

The value of the "4+1" View Model of Software Architecture is that it serves as a set of general guiding principles answering the question of what, at a minimum, needs to be documented when describing a software architecture. Each view within the model has a well-defined subject or concern for the diagrams organized within it. All software can be described in terms of behavior, structure, packaging, and where it executes. These are the basic qualities the 4+1 view model intends to document for easier human consumption. There are no official constraints on the notation styles that can be used by diagrams in each view. When applied to larger systems, the logical view will contain many types of diagrams. The notation independence makes it a very flexible approach for many styles of software. When it is taught to a team along with diagramming skills, it can be used as a significant form of communication and provide clarity among software project team members when creating new IT projects or documenting legacy ones.

References

Garlan, D., Schmerl, B. (2006). Architecture-driven Modeling and Analysis. 11th Australian Workshop on Safety Related Programmable Systems (SCS '06).
Kruchten, P. (1995). Architectural Blueprints - The "4+1" View Model of Software Architecture. IEEE Software, 12(6), 42-50.
Object Management Group. (2005). Introduction to OMG UML. Retrieved May 10, 2008 from http://www.omg.org/gettingstarted/what_is_uml.htm.
Rational Software Architect product page. (2008). Retrieved May 10, 2008 from http://www-306.ibm.com/software/awdtools/architect/swarchitect.
Shaw, M. (2001). The Coming-of-Age of Software Architecture Research. IEEE. 0-7695-1050-7/01.
Sparx Systems home page. (2008). Retrieved May 10, 2008 from http://www.sparxsystems.com.au.

May 30, 2008 · 8 min · 1665 words · Jim Thario

The Differentiator

Are you a software engineer? Today is a good day. Have you read the news? Read here for a quick review of the SM-3 missile versus the USA-193 satellite smack-down that took place over the Pacific Ocean. This event was not exciting to me because it was a demonstration of American military capability - I mean, it was that, but my interest in the event has a different motivation. It was exciting to me because this was a hammering success for the software engineers who modified the Navy's systems to pop that satellite over a hundred miles above the planet without a warhead. It wasn't like the Navy just had to get close enough to detonate the missile. They had to be dead on, because this was a kinetic kill at closing speeds over 20,000 MPH. This event was a strong example of software as a differentiator. Missiles and rockets are becoming commodity items. Russia has them, China has them, and the Middle East has them or is testing them. In fact, most countries with a vowel in their name have missile capabilities. A missile is not a big deal - a tube with propellant. Light it off and it might go up, sideways, spin wildly, or just fall over and explode. The SM-3 has been around for a few years, but the military has never admitted to trying to use it to shoot down a satellite in orbit. The SM-3 was originally designed to go nose-to-nose with incoming short and medium range missiles. The exciting story-behind-the-story for me is that software brought that satellite down, and the SM-3 missile provided a reliable and high-performance lift for the software to find its target.

Today software is the key differentiator in a world of commodity technology. Think about it. A majority of us have cell phones. They are shrinking in size and expanding in capability. Where does that capability ultimately come from? Why would you buy one phone over another? I select a phone and carrier based on features. Where do the features come from? Is it in the case, the antenna, the battery, the screen, or the memory card? All cell phones have these in one form or another. What differentiates them from one another is the software. The phone's software realizes the capability to share a chat in your social network, send a ring tone to a friend, find an archived text message from your sibling, and learn about the latest discounts at the stores in your area reported by the GPS.

Have you seen Ford's new commercials recently? They are touting the Sync system. In fact, they are spending a lot of money showcasing that and not MPG, crash tests, 60-0 stopping ability, etc. Whether or not that is a good idea is yet to be seen. The hardware that goes into Sync goes into a lot of in-car entertainment and phone systems: speakers, radio, CD, MP3 player, microphone, antenna, LCD screen, and little buttons on the steering wheel. Big deal. Commodity items. What differentiates Sync is the voice recognition system and the integration of pieces inside and outside the system. So, what is Sync? It is the software that realizes the features and value proposition of the Sync concept.

Today software is what differentiates individual pieces and parts from something innovative that creates new value. My final example is the Toyota Prius. It has something called Hybrid Synergy Drive. It is not enough to say it is a gas and electric hybrid. That wouldn't do it justice. It is a drive-by-wire system, and at its heart is software making the decisions about when to go electric, when to use gas, when to recharge, and much more.
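Just to make the point, here is a grossly simplified toy sketch of the kind of continuous decision that software like this makes many times per second. It is purely illustrative and is in no way Toyota's actual Hybrid Synergy Drive logic; the thresholds and mode names are made up.

```python
# Toy illustration only -- NOT Toyota's actual Hybrid Synergy Drive logic.
# The thresholds and mode names are invented for the sake of the example.
def choose_power_mode(battery_pct, demanded_kw, braking):
    if braking:
        return "regenerative braking"   # recover energy while slowing down
    if demanded_kw > 30:
        return "gas + electric"         # hard acceleration: combine both sources
    if battery_pct > 40 and demanded_kw < 15:
        return "electric only"          # light load, healthy battery: stay silent
    if battery_pct < 25:
        return "gas + recharge"         # run the engine and top up the battery
    return "gas only"

# Cruising gently with a healthy battery -> "electric only"
print(choose_power_mode(battery_pct=55, demanded_kw=10, braking=False))
```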
It would not be practical, or even possible, for a human driver to make these continuous decisions about how to generate power most efficiently from all the available choices.

Today is one more good day for software engineers - the people behind the Wizard’s curtain. Well done. I’ll see you at the bar for a toast. Without you, it’s just a box of pieces and parts. ...

February 21, 2008 · 4 min · 671 words · Jim Thario

Thoughts on the relationship between Rational Method Composer and EPF Composer

This seems to be a topic of increasing discussion, both inside IBM and within the Eclipse Process Framework community. Questions such as “Which offering will get feature XYZ first?”, “Are they functionally equivalent?”, and “Should the customer buy Rational Method Composer, or will EPF Composer do the same thing?” are asked weekly. To refresh everyone: Rational Method Composer is a commercial tool from IBM Rational Software for authoring method content and for publishing configurations of method content as processes. EPF Composer is a subset of the RMC code and was donated by IBM to the Eclipse Foundation as open source. The idea over time is that EPF Composer will be a core component of RMC, while RMC adds value through proprietary features and support that might not be possible in a purely open source offering.

I would like to see the relationship between EPF Composer and Rational Method Composer develop in the same way the relationship between Red Hat Enterprise Linux and Fedora Core Linux has evolved. Red Hat Enterprise Linux and Fedora Core Linux are the result of Red Hat’s experience developing, maintaining, and selling Linux distributions over more than a decade. Red Hat Enterprise Linux is a commercial distribution of Linux that is sold by Red Hat; you cannot download RHEL executable code for free. Each major release of Red Hat Enterprise Linux is stable and evolves conservatively, which works very well if you are an IT administrator who does not want to deal with constant architectural churn in your server operating system. Fedora Core Linux, on the other hand, is entirely open source and is available in source or binary form for download by anyone. Fedora Core Linux pushes the technology barrier to the bleeding edge. One could consider Fedora Core Linux unstable in terms of constant change, yet revolutionary in terms of the capabilities it incorporates with this regular cycle of change. An example is the Xen virtualization technology recently added to Fedora Core 5. Xen was developed at the University of Cambridge. Imagine having virtual machine technology, like what mainframes have had for decades, as a standard feature of your PC operating system. How would the ability to partition the operating system into multiple, independent virtual systems change the landscape of data center design? It will. Once it is there, administrators will begin to count on it. Xen is not quite stable, yet adding it to Fedora Core 5 will push Xen toward stability by making it accessible in a highly popular Linux distribution. As cutting-edge features are added to Fedora Core Linux and stabilized, they are eventually consumed by Red Hat Enterprise Linux and supported over the long term [years] by the RHEL teams. We will see Xen show up in a future release of Red Hat Enterprise Linux when it has stabilized enough for commercial adoption. Additionally, proprietary features such as hardware device drivers and other closed-source capabilities can be found in RHEL but will never make it to Fedora Core Linux.

Let’s project this idea onto Rational Method Composer and EPF Composer. Imagine EPF Composer is where new, experimental ideas are realized in the tool for authoring and publishing software processes. Risks would be taken here, changes would happen quickly, and the essence of the tool would represent the cutting edge of ideas in the IT process authoring space from experts in business and academia.
As new concepts stabilize in EPF Composer and are deemed fit for commercial inclusion, they are consumed by Rational Method Composer and supported by the world’s largest Information Technology company and the service professionals behind it. This does not mean that Rational Method Composer would be behind the times in terms of features. It means the features taken from EPF Composer and added to Rational Method Composer would be supported over the long term [years], allowing a predictable maintenance path for CIOs, on-site technical support, and formal training professionals. Additionally, Rational Method Composer might gain capabilities that are not applicable to an entirely open source tool. A partnership with another vendor might allow Rational Method Composer to import and export data with another commercial, closed-source tool; such an agreement would not be possible in open source.

I think it is important to define the nature of the relationship between these two offerings and how they will benefit from each other’s existence. This is one possible approach for how that relationship might evolve. ...

March 22, 2006 · 4 min · 741 words · Jim Thario

OPEN Process Framework Repository

The following message was received today on the epf-dev mailing list for the Eclipse Process Framework. This is an exciting announcement from Donald Firesmith because it is another example of the process engineering community, both commercial and academic, bringing the content it has been developing for years to EPF to take advantage of the standardization of metamodel and tooling to author and publish the material.

On behalf of the OPEN Process Framework Repository Organization (www.opfro.org) and the OPEN Consortium (http://www.open.org.au/), I would like to officially announce that we will be donating our complete OPFRO repository of over 1,100 reusable, open-source method components to the eclipse epf project as an additional third repository. Currently, our repository is based on the OPEN Metamodel, but we will shortly begin translating it to fit the epf SPEM metamodel and associated xml xsd. We will also be working over the next few weeks to determine what level of effort support we can donate to epf.

Donald Firesmith
Chair, OPFRO ...

March 17, 2006 · 1 min · 160 words · Jim Thario

Eclipse Process Framework

I am a committer on the Eclipse Process Framework (EPF) open source project. The code and content that make up EPF were donated from the Rational Method Composer product and the Rational Unified Process. The open source version of RUP is called BUP, which stands for Basic Unified Process. Today you can download EPF Composer from the web site and begin authoring your own method content and publishing process configurations, or you can use the BUP method content and customize it for your own development project. A published version of BUP is also available for download. EPF Composer and the published BUP web site are available from the EPF download page. ...

February 15, 2006 · 1 min · 114 words · Jim Thario

Rational Method Composer

This past year I joined the Rational Method Composer (RMC) team at IBM. Rational Method Composer is a tool to author method content and configure that method content into processes. RMC can be used for authoring software development processes, IT operations processes, or any complex business process that requires documentation and consistency. Processes can be published and distributed as HTML sites. What I like about RMC is that it brings the concept of knowledge reuse to process engineering. Method content consists of roles, tasks, and work products, which are essentially smaller, generic pieces of a process. Those pieces can then be assembled into a process configuration and published. Using the same library of method content, a process author could build a configuration for a new software project and also a configuration for product maintenance. ...
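To make the knowledge-reuse idea concrete, here is a minimal, hypothetical sketch in Java. The class names (Role, Task, WorkProduct, ProcessConfiguration) and the toy publish() method are my own illustration, not RMC’s actual metamodel or API; the point is only that a single library of method content can be assembled into more than one process configuration and then published.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of reusable method content, for illustration only.
class Role { final String name; Role(String name) { this.name = name; } }
class WorkProduct { final String name; WorkProduct(String name) { this.name = name; } }

class Task {
    final String name;
    final Role performer;
    final WorkProduct output;
    Task(String name, Role performer, WorkProduct output) {
        this.name = name; this.performer = performer; this.output = output;
    }
}

// A process configuration assembles existing method content into an ordered process.
class ProcessConfiguration {
    final String name;
    final List<Task> tasks = new ArrayList<>();
    ProcessConfiguration(String name) { this.name = name; }
    void add(Task t) { tasks.add(t); }

    // "Publishing" here just prints the process; RMC publishes browsable HTML sites.
    void publish() {
        System.out.println("Process: " + name);
        for (Task t : tasks) {
            System.out.printf("  %s (%s) -> %s%n", t.name, t.performer.name, t.output.name);
        }
    }
}

public class MethodLibraryDemo {
    public static void main(String[] args) {
        // A shared library of method content.
        Role analyst = new Role("Requirements Analyst");
        Role developer = new Role("Developer");
        Task gatherReqs = new Task("Gather requirements", analyst, new WorkProduct("Requirements List"));
        Task fixDefect = new Task("Fix defect", developer, new WorkProduct("Patched Build"));

        // Two configurations reuse the same content for different purposes.
        ProcessConfiguration newProject = new ProcessConfiguration("New Software Project");
        newProject.add(gatherReqs);
        newProject.add(fixDefect);

        ProcessConfiguration maintenance = new ProcessConfiguration("Product Maintenance");
        maintenance.add(fixDefect);

        newProject.publish();
        maintenance.publish();
    }
}
```

Running the sketch prints both configurations, showing the same defect-fixing task reused once in a new-project process and again in a maintenance process, which is the reuse pattern described above.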

February 14, 2006 · 1 min · 136 words · Jim Thario

From history or current-day society, select five famous people that you would use to build the perfect team.

For my perfect team I want to build a software development team and staff the lead roles. There are many roles involved in the creation and sale of a software product. I am going to focus on the team responsible for the creation of the solution. The roles I chose to staff are the project management, requirements analysis, engineering, content and documentation, and customer support leads. Many people can share a single role, or each person can have multiple roles. For my case, each person gets a single role.

The project manager is responsible for monitoring progress, timelines, and budgets, and in general doing what needs to be done to see the project reach its conclusion. The project manager is often a central figure of communication between the development team and other groups. My project manager is Meg Whitman from eBay. [1] Meg has turned eBay into an online mainstay with $4 billion a year in revenue and a $60 billion market capitalization.

The requirements analyst uses a variety of techniques to understand the problem from firsthand contact with stakeholders inside and outside the organization. Grace Hopper [2] lived from 1906 to 1992. She is responsible for such ideas as compiled source languages and was deeply involved in trying to make computers easier for developers and operators. She often placed herself in the problematic situation to understand it and help propose a solution.

The engineering lead is a broad role incorporating all of the technical aspects and control systems in place for the project. For this role I will choose Alan Cox [3] from the team of Linux contributors. Alan was responsible for many of the improvements to Linux that helped it gain respect as a reliable platform. Although a deeply technical person, Alan has an MBA that I believe gives him insight into the economics of engineering problems.

The content and documentation specialist is responsible for all information included with the solution that is needed by the consumer. This role is also responsible for any included templates or other information that can jump-start the solution for the user. Carl Sagan [4] will be my content and documentation producer. Carl Sagan taught science and wrote about it his entire life. He contributed to the popularization of science in America.

Customer support provides help, receives and records defect reports and enhancement requests, and provides assistance with unique problems or environments. Blake W. Nordstrom [5] of the Nordstrom department stores will be in charge of my customer service organization. Nordstrom has a reputation for excellent service and has been aggressively applying technology to improve their customers’ experience.

[1] http://money.cnn.com/2005/10/31/news/newsmakers/top50_women_fortune_111405/?cnn=yes
[2] http://www.sdsc.edu/ScienceWomen/hopper.html
[3] http://en.wikipedia.org/wiki/Alan_Cox
[4] http://en.wikipedia.org/wiki/Carl_sagan
[5] http://www.referenceforbusiness.com/biography/M-R/Nordstrom-Blake-W-1961.html ...

November 6, 2005 · 3 min · 447 words · Jim Thario