Human Versus Machine Code Analysis

I see human code reviews as one tool in the quality toolbox. My opinion is that to keep code reviews interesting and engaging, humans should be the last link in the chain and get the most interesting problems. What I mean is that if the code review is burdened with pointing out that an opened resource was not closed or that a specific path through the code will never happen, code reviews become draining and boring. I also believe that code reviews need to scale up to teams that are not co-located. That might mean using an asynchronous process, like a workflow system, or using collaboration tools to do the code review through teleconferences and screen sharing. A workflow system can prevent code from being promoted into the mainline build until one or more reviewers have accepted it. To keep the code reviews interesting and challenging, I give the grunt work to the machines and use static analysis and profiling tools first. Before you can involve the humans, your code needs to pass the suite of static analysis tests at the prescribed level. This weeds out the typical mistakes that are larger than what a compiler finds. There are many analysis and profiling tools available in open source and commercially. Most of my development work is in server-side Java, and my analysis tools of choice are FindBugs, PMD, and the profiling tool in Rational Software Architect. FindBugs is a bytecode analyzer, so it looks at what the Java compiler produces and is less concerned with the form of the source code. PMD analyzes source code. Both tools have configurable thresholds for problem severity, and both can accept custom problem patterns. PMD has a large library of problem patterns, including things like overly complex or long functions or methods. The RSA profiling tool measures timing only down to the method level of classes. It can quickly help a developer focus on where the sluggish parts of a system are hiding, which is valuable information going into a review. Once the code makes it through this array of automated tests, bring the humans in to look at it and get their input. I have found that, in our case, this approach changes the review from a potentially adversarial situation into one with an educational tone. The review meeting, if it happens synchronously, is not overtaken by small problems and pointing out basic mistakes. It is concerned with making recommendations at a higher level to improve the larger design.
FindBugs, U. of Maryland, http://findbugs.sourceforge.net/
PMD, SourceForge, http://pmd.sourceforge.net/
Rational Software Architect for WebSphere Software, http://www-01.ibm.com/software/awdtools/swarchitect/websphere/ ...
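To make the division of labor concrete, here is a minimal Java sketch, with hypothetical class and method names, of the unclosed-resource defect mentioned above: the first method is the form a tool such as FindBugs or PMD would flag, and the second is the fix that keeps the finding out of the human review entirely.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {

    // The kind of finding the automated pass should catch: if readLine() throws,
    // close() is never reached and the underlying file handle leaks.
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();
        reader.close(); // skipped whenever the line above throws
        return line;
    }

    // Corrected form: try-with-resources closes the reader on every path,
    // so reviewers can spend their time on the larger design instead.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}

In a workflow-gated setup like the one described, a finding of this kind would block promotion into the mainline build until it was fixed, before any reviewer was asked to look at the change.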

December 17, 2009 · 3 min · 432 words · Jim Thario

Easing into Agile

The article I found this week was written by two individuals working for Nokia Networks. They were involved in training product development staff in agile practices. Vodde and Koskela (2007) discussed Nokia's development environment over the past decades and their experiences in introducing test-driven development into the organization. The implication in the article is that because of the size of the organization and the amount of retraining necessary to move toward agile development, Nokia is adopting agile practices a piece at a time (small bites) rather than dropping the waterfall approach entirely and throwing the development teams into a completely new and unfamiliar situation. Vodde and Koskela also point out the benefit they found in using hands-on instruction for TDD versus lecture-based education. The authors made a few observations while teaching TDD to experienced software developers. One important observation was, "TDD is a great way to develop software and can change the way you think about software and software development, but the developer's skill and confidence still play a big role in ensuring the outcome's quality." The exercise the authors used in their course was to develop a program to count lines of code in source files, along with tests to verify the program's operation. Each session would add a new requirement in the form of a new type of source file. The students were forced into an evolutionary, emergent situation in which the design had to change a little as the current and new problems of each requirement were solved. What the students speculated the design would be at the beginning and what they actually ended with were different. The authors conclude with some recommendations for successful TDD adoption, whether alongside other agile practices or as an isolated practice in a legacy environment:
- Removing external dependencies helps improve testability
- Reflective thinking promotes emergent design
- A well-factored design and good test coverage also help new designs emerge
Reference
Vodde, B., Koskela, L. (2007). Learning Test-Driven Development by Counting Lines. IEEE Software. 0740-7459/07. ...
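As a minimal sketch of how the counting exercise described above might begin under TDD, here is a first pair of tests and just enough implementation to pass them, assuming JUnit 4 and hypothetical class names; the later requirements (new source file types) are what would force the design to evolve.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LineCounterTest {

    @Test
    public void countsEachLineOfAJavaSource() {
        assertEquals(2, new LineCounter().count("class A {\n}\n"));
    }

    @Test
    public void emptyInputHasNoLines() {
        assertEquals(0, new LineCounter().count(""));
    }
}

// The simplest implementation that satisfies the tests written so far; the next
// requirement (a new type of source file) would drive the next design change.
class LineCounter {
    int count(String source) {
        if (source.isEmpty()) {
            return 0;
        }
        String[] parts = source.split("\n", -1);
        return source.endsWith("\n") ? parts.length - 1 : parts.length;
    }
}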

December 23, 2009 · 2 min · 324 words · Jim Thario

Reducing Adoption Barriers of Agile Development Processes in Large and Geographically Distributed Organizations

Agile software development processes have received much attention from the software development industry in the last decade. The goal of agile processes is to emphasize people as the primary contributors to the project and to reduce the administrative overhead of producing working code for the project's stakeholders. This paper explores some of the explicit and implied constraints of agile software development processes. It focuses on several common practices of agile processes, particularly those that might limit their adoption by large and geographically distributed organizations. This paper makes recommendations to reduce the barriers to adoption of agile processes by these types of organizations. It attempts to answer questions such as: Is it possible for a large organization with many established business and development processes to incrementally adopt an agile process? Is it possible to adapt agile development processes to work for many individuals who are physically isolated, such as work-at-home software developers? Is it possible to adapt agile development processes to work for a large team, divided into many sub-teams that are geographically distributed and possibly working in different time zones? Extreme Programming is probably one of the most recognized agile software development processes today. It was introduced in the late 1990s by Kent Beck and eventually published as a book (Beck, 2005). Beck's approach documented the values, principles, and practices necessary to deliver lower-defect, working software with less formal process and more focus on the skills of the people and community that produce it. Extreme Programming is targeted at small, collocated teams of about twelve people. Other proponents of agile software development processes understood the increasing interest in their approaches by the software industry and followed with the Manifesto for Agile Software Development. The contributors to the Manifesto were the creators of many different agile, iterative, and incremental software development processes. Their goal was to unify the principles they shared. The work was authored by "[…] representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others […]" (Manifesto, 2001). Beck and Andres (2005) present the primary practices of Extreme Programming in their book. Two practices stand out as limitations when scaling Extreme Programming to teams in multiple locations, or even to work-at-home employees: Sit Together and Pair Programming. Sit Together is a practice that encourages the team to work in a unified area, such as a large, open room that promotes easy communication. Pair Programming is a technique where two developers sit together at a single workstation and take turns designing and writing code. As one developer is writing code, the other is observing, asking questions, and offering suggestions as the current piece of work progresses. The goal of these two practices is to lower the defect rate through constantly available communication and collaboration among developers sharing the same physical space. Beck and Andres (2005) also discuss the importance of team size in a project that uses Extreme Programming. They recommend a team size of about twelve people. The reason for this size has as much to do with coordination of development activities as it does with the psychological need to feel part of a team.
The larger a team grows, the less personal the connections between team members become. Faces become more difficult to remember, and communication among all members becomes less frequent. These challenges of team size are amplified for work-at-home software developers, who may be in the physical presence of other members of the team only a few times a year at specific events such as all-hands meetings. Active and regular communication is a requirement of agile software development. Ramesh (2006) describes the perceived advantages of teams distributed across time zones, such as continuous development: as one team ends its day and goes to bed, another is coming to work to pick up where the last left off. In practice, however, there is a communication disconnect between geographically distributed teams in this situation, and the teams are forced into a mode of asynchronous communication, potentially slowing progress. This problem relates to two principles of the Manifesto for Agile Software Development (2001) that present a challenge to geographically distributed teams. The first is "Business people and developers must work together daily throughout the project." The second is "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation." Both principles are related to communication among developers, management, stakeholders, and users of the project. Lindvall (2004) points out that incremental adoption of agile practices into an existing large organization can be challenging. An existing organization typically expects its established business and development processes to be followed regardless of project size or the process used. Educating those outside the agile pilot project and resetting their expectations about following the established processes can create tension. A specific example is that development in agile-driven projects usually starts with a subset of the requirements set. This is a quality of agile development processes and has to do with working on what is understood to be the goal of the project today. As working builds are created and delivered to stakeholders, the requirements set can be appended and refined until there is agreement that a reasonable goal has been established. Murthi (2002) documents a case of using Extreme Programming on a 50-person development project and cites the ease of starting early with a partial requirements set and using the subsequent working results for two goals: showing stakeholders working software to build confidence in the development team, and giving stakeholders something to help refine their own understanding of their needs. The incrementally developed requirements and the constantly refined budgeting and financial burn rate that are typical of agile project management can present a unique challenge to a project that is completely or partially outsourced. Cusumano (2008) details the need for an iterative contract between the customer and the outsourcing provider. A fixed-price contract can be nearly impossible to design when agile development processes are in use by either party. Boehm (2005) also discusses the problem of using agile processes within the realm of contracting to the private and public sector. Problems can be encountered when measuring progress toward a contract's completion. For a consumer following an agile process, the requirements can remain a moving target well into the project's life cycle.
For a provider following agile development processes, it can be nearly impossible to give the consumer final system architectural details for review early in the life cycle. Boehm also points out the difficulties providers using agile processes must overcome when seeking certification against CMMI and ISO-related international standards. The barriers to adoption of agile development processes by a large or geographically distributed organization can be reduced by a combination of two approaches. The first approach is the application of tooling and technologies that support the practices of agile software development and scale to the organization's needs. The second approach is to continuously refine, over time, the practices that conflict with the organization's existing mode of operation. Examples of practices that call for this kind of refinement through technology adoption are the Sit Together and Pair Programming practices from Extreme Programming, along with the daily collaboration and face-to-face interaction among customers and developers recommended as principles of the Manifesto for Agile Software Development. These practices and principles are the most obvious barrier to adoption of pure agile development processes within a large or geographically distributed team. The essence of the Sit Together practice is to provide a means for team members to communicate at will. Technologies that help support this practice in distributed environments include instant messaging systems, which provide a mechanism for short question-and-answer sessions between two or more project participants at once. Longer conversations among the team can be supported through VoIP solutions, reservation-less teleconferencing, Skype, and XMPP-based messaging solutions that give several team members at a time impromptu opportunities for contact and discussion of project issues. Speakerphones allow collocated sub-teams to participate in conversations about the project across geographic locations. In all the examples cited, full-duplex voice communication is essential for effective discussion among several team members at once. This type of communication allows the audio channels to work in both directions simultaneously, which means a listener can interrupt the speaker just as they could in person. Many inexpensive speakerphones are half duplex. These devices block the receiving audio channel while the local person is speaking, so someone wanting to stop the speaker to clarify a point cannot do so until the speaker pauses. Background noise, such as a loud computer fan or air conditioner, can cause similar problems for half-duplex communication systems. Pair Programming can be performed through a combination of voice communication and desktop screen-sharing technology. Individuals working within the same network or virtual private network can use solutions like Microsoft NetMeeting or Virtual Network Computing (VNC) to share, view, and work within each other's development environment and perform pair programming over any distance. Web-based and wide-area-network tooling to support the incremental development and tracking of plans, requirements, and defects is available from several vendors such as IBM and Rally Software Development Corporation. Gamma (2005) presented The Eclipse Way at EclipseCon several years ago.
The motivation behind his presentation was the many requests he received from users of the Eclipse environment to understand how a team distributed throughout the world could continue to release as planned and with a low defect rate. The Eclipse Foundation has a centralized data center in Canada for several of its activities, including continuous integration and automated testing of nightly builds. The build and testing process of the Eclipse environment is fully automated for each platform it supports. Additionally, end users are encouraged to install and use nightly builds after they pass the automated suite of tests. Other barriers to adopting agile development processes cannot be solved with tooling alone. Ramesh (2006) found that the solution to working across multiple time zones is to synchronize some meetings and rotate their times, so that each group takes its turn suffering an extraordinarily early or late meeting and everyone on the project can communicate live. Resolving the opposing forces in contract negotiation requires creativity. Boehm (2005) recommends disbursing "[…] payments upon delivery of working running software or demonstration of progress rather than completion of artifacts or reviews." According to Boehm, there is not yet a well-defined way to reconcile process agility with ISO or CMMI-related certification. Lindvall (2004) concluded that adoption of agile development processes by large organizations is best accomplished through hybrid integration with the existing processes, particularly the established quality processes. With this approach, the existing quality processes can be used to measure the effectiveness of the agile software development process under pilot. This paper described several of the qualities shared by different agile software development processes. It focused on those aspects that potentially limit agile process adoption by large and geographically distributed organizations. The recommendations made in this paper include technology solutions to improve collaboration and communication among distributed developers and consumers of the project. The technology considerations also help alleviate management concerns such as incremental planning and budgeting of agile projects. Recommendations were also provided for large organizations with established processes, along with approaches that pilot projects utilizing agile development can take to leverage those processes and demonstrate their value. It is possible to adopt agile software development processes in large and geographically distributed organizations. Adoption requires thoughtful and careful application, integration, and refinement of the practices at the core of these agile processes for a successful outcome.
REFERENCES
Beck, K., Andres, C. (2005). Extreme Programming Explained. Second Edition. Copyright 2005, Pearson Education, Inc.
Boehm, B., Turner, R. (2005). Management Challenges to Implementing Agile Processes in Traditional Development Organizations. IEEE Software. 0740-7459/05.
Cusumano, M.A. (2008). Managing Software Development in Globally Distributed Teams. Communications of the ACM. February 2008/Vol. 51, No. 2.
Gamma, E., Wiegand, J. (2005). Presentation: The Eclipse Way, Processes That Adapt. EclipseCon 2005. Copyright 2005 by International Business Machines.
Leffingwell, D. (2007). Scaling Software Agility: Best Practices for Large Enterprises. Copyright 2007 by Pearson Education, Inc.
Lindvall, M., Muthig, D., Dagnino, A., Wallin, C., Stupperich, M., Kiefer, D., May, J., Kahkonen, T. (2004). IEEE Computer. 0018-9162/04.
Manifesto. (2001). Manifesto for Agile Software Development. Retrieved 2 October 2008 from http://agilemanifesto.org/.
Murthi, S. (2002). Scaling Agile Methods - Can Extreme Programming Work for Large Projects? www.newarchitectmag.com. October 2002.
Ramesh, B., Cao, L., Mohan, K., Xu, P. (2006). Can Distributed Software Development Be Agile? Communications of the ACM. October 2006/Vol. 49, No. 10. ...

October 12, 2008 · 10 min · 2104 words · Jim Thario

Applicability of DoDAF in Documenting Business Enterprise Architectures

As of 2005, the Department of Defense employed over 3 million uniformed and civilian people and had a combined $400 billion fiscal budget (Coffee, 2005). The war-fighting arm of the government has had enormous buying power since the Cold War, and the complexity of technologies used in military situations continues to increase. To make optimal use of the dollars it spends, and to reduce rework and delays in delivering complex solutions, the DoD needed to standardize the way providers describe and document their systems. The DoD also needed to promote and enhance the reuse of existing, proven architectures for new solutions. The Department of Defense Architecture Framework (DoDAF) is used to document architectures of systems used within the branches of the Department of Defense. "The DoDAF provides the guidance and rules for developing, representing, and understanding architectures based on a common denominator across DoD, Joint, and multinational boundaries." (DODAF1, 2007). DoDAF has roots in other enterprise architecture frameworks such as the Zachman Framework for Information Systems Architecture (Zachman, 1987) and Scott Bernard's EA-cubed framework described in (Bernard, 2005). Zachman's and Bernard's architecture frameworks have been largely adopted by business organizations to document IT architectures and corporate information enterprises. Private-sector businesses supplying solutions to the DoD must use the DoDAF to document the architectures of those systems. These suppliers may not be applying concepts of enterprise architecture to their own business, or they may be applying a different framework internally with an established history of use in the business IT sector. The rigor defined in DoDAF version 1.5 is intended for documenting war-fighting and business architectures within the Department of Defense. The comprehensive nature of DoDAF, including the required views, strategic guidance, and data exchange format, also makes it applicable to business environments. For those organizations in the private sector that must use the DoDAF to document their deliverables to the DoD, it makes sense to approach DoDAF adoption holistically and extend its use into their own organization if they intend to adopt any enterprise architecture framework for this purpose. The Department of Defense Architecture Framework is the successor to C4ISR. "The Command, Control, Communications, Computers, and Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture Framework v1.0 was created in response to the passage of the Clinger-Cohen Act and addressed in the 1995 Deputy Secretary of Defense directive that a DoD-wide effort be undertaken to define and develop a better means and process for ensuring that C4ISR capabilities were interoperable and met the needs of the war fighter." (DODAF1, 2007). In October 2003, DoDAF Version 1.0 was released and replaced the C4ISR framework. Version 1.5 of DoDAF was released in April 2007. DoDAF solves several problems with the acquisition and ongoing operations of branches within the Department of Defense. Primarily, it serves to reduce misinterpretation in both directions of communication between system suppliers outside the DoD and consumers within it. The DoDAF defines a common language in the form of architectural views for evaluating the same solution from multiple vendors.
The framework is regularly refined through committee and supports the notion of top-down architecture that is driven from a conceptual viewpoint down to the technical implementation. Version 1.5 of DoDAF includes transitional improvements to support the DoD's Net-Centric vision. "[Net-Centric Warfare] focuses on generating combat power from the effective linking or networking of the war fighting enterprise, and making essential information available to authenticated, authorized users when and where they need it." (DODAF1, 2007). The Net-Centric Warfare initiative defines simple guidance within DoDAF 1.5 to support its vision and guide the qualities of a proposed architecture. The guidance provided within DoDAF includes a shift toward a Service-Oriented Architecture, which we often read about in relation to the business sector. It also encourages architectures to accommodate unexpected but authorized users of the system. This is related to scaling the solution and to loose coupling of the system components used in data communication. Finally, the Net-Centric guidance encourages the use of open standards and protocols such as established vocabularies, taxonomies of data, and data interchange standards. These capabilities will help promote integrating systems into larger, more information-intensive solutions. As this paper is written, Version 2.0 of DoDAF is being developed. There is currently no timeline defined for its release. DoDAF defines a layered set of views of a system architecture. The views progress from conceptual to technical. Additionally, a standards view containing process, technical, and quality requirements constrains the system being described. The topmost level is the All Views. This view contains the AV-1 product description and the AV-2 integrated dictionary. AV-1 can be thought of as the executive summary of the system's architecture. It is the strategic plan that defines the problem space and vision for the solution. The AV-2 is the project glossary. It is refined throughout the life of the system as terminology is enhanced or expanded. The next level is the Operational Views. This level can be thought of as the business and data layer of the DoDAF framework. The artifacts captured within this view include process descriptions, data models, state transition diagrams of significant elements, and inter-component dependencies. Data interchange requirements and capabilities are defined within this view. Example artifacts from the operational view include the High-Level Operational Concept Graphic (OV-1), Operational Node Connectivity Description (OV-2), and Operational Activity Model (OV-5). The third level is the Systems and Services View. This view describes technical communications and data interchange capabilities. This level of the architecture is where network services (SOA) are documented. Physical technical aspects of the system are described at this level as well, including those components of the system that have a geographical requirement. Some artifacts from the Systems and Services View include the Systems/Services Interface Description (SV-1), Systems/Services Communications Description (SV-2), Systems/Services Data Exchange Matrix (SV-6), and Physical Schema (SV-11). DoDAF shares many of the beneficial qualities of other IT and enterprise architecture frameworks. A unique strength of DoDAF is the requirement of a glossary as a top-level artifact in describing the architecture of a system (RATL1, 2006).
Almost in tandem with trends in the business IT environment toward Service-Oriented Architectures, DoDAF 1.5 has shifted more focus to a data-centric approach and network presence through the Net-Centric Warfare initiative. This shift is motivated by the need to share operational information with internal and external participants who are actors in the system. It is also motivated by the desire to assemble and reuse larger systems-level components to build more complex war-fighting solutions. As with other frameworks, DoDAF's primary strength is its prescription of a common set of views for comparing the capabilities of similar systems. The views enable objective comparisons between two different systems that intend to provide the same solution. They enable faster understanding and integration of systems delivered from provider to consumer. They also allow for cataloging and assembling potentially compatible systems into new solutions perhaps unforeseen by the original provider. The DoDAF views can reduce deployment costs and lower the likelihood of reinventing the same system due to a lack of awareness of existing solutions. A final unique strength of DoDAF is that it defines a format for data exchange between repositories and tools used in manipulating the architectural artifacts. The specification (DODAF2, 2007) defines, for each view, the data interchange requirements and the format to be used when exporting the data into the common format. This inclusion in the framework supports the other strengths, most importantly automated discovery and reuse of existing architectures. Some weaknesses of DoDAF can be found when it is applied outside of its intended domain. Foremost, DoDAF was not designed as a holistic, all-encompassing enterprise architecture framework. DoDAF does not capture the business and technical architecture of the entire Department of Defense. Instead, it captures the architectures of systems (process and technical) that support the operations and strategy of the DoD. This means there may be yet another level of enterprise view that relates the many DoDAF-documented systems within the DoD into a unified view of participating components. This is not a permanent limitation of the DoDAF itself, but a choice of initial direction and maximum impact in the early stages of its maturity. The focus of DoDAF today is to document architectures of complex systems that participate in the overall wartime and business operations of the Department of Defense. A final weakness of DoDAF is the lack of business-financial artifacts such as a business plan, investment plan, and return-on-investment plan. It is the author's observation that the learning curve for Zachman is potentially smaller than for DoDAF. Zachman's basic IS architecture framework method is captured in a single paper of less than 30 pages, while the DoDAF specification spans several volumes and exceeds 300 pages. Zachman's concept of a two-dimensional grid with cells for specific subjects of documentation and models is an easier introduction to enterprise architecture. It has historically been developed and applied in business information technology situations. Zachman's experience in sales and marketing at IBM motivated him to develop a standardized IS documentation method. There are more commonalities than differences in the artifacts used in the DoDAF and Zachman methods.
Zachman does not explicitly recommend a Concept of Operations Scenario, which is an abstract flow of events, a storyboard, or an artistic rendering of the problem space and desired outcome. This does not mean a CONOPS (Bernard, 2005) view could not be developed for a Zachman documentation effort. Business process modeling, use-case modeling, and state transition modeling are all part of the DoDAF, Zachman, and Bernard EA-cubed frameworks (Bernard, 2005). The EA-cubed framework developed by Scott A. Bernard was heavily influenced by Zachman's Framework for Information Systems Architecture. Bernard scaled the grid idea to support enterprise architecture for multiple lines of business with more detail than was possible with a two-dimensional grid. The EA-cubed framework uses a grid similar to Zachman's with an additional dimension of depth. The extra dimension allows each line of business within the enterprise to have its own two-dimensional grid to document its business and IT architecture. Cross-cutting through the cube allows architects to identify components potentially common to all lines of business - a way to optimize cost and reduce redundant business processes and IT systems. The EA-cubed framework includes business-oriented artifacts for the business plan, investment case, ROI, and product impact of architecture development. As mentioned above, DoDAF does not include many business-specific artifacts, specifically those dealing with financials. Both Zachman and EA-cubed have more layers and recommended artifacts than DoDAF. EA-cubed has specific artifacts for the physical network level and for security as a crosscutting component, for example. The Systems and Services View of DoDAF recommends a Physical Schema artifact to capture this information if needed. In the case of DoDAF, vendors may not know in advance the physical communication medium deployed with their system, such as satellite, microwave, or wired networks. In these cases, the Net-Centric Warfare guidance within DoDAF encourages the support of open protocols and data representation standards. DoDAF is not a good starting point for beginners to enterprise architecture concepts. The bulk of the volumes of the specification can be intimidating to digest and understand without clear examples and case studies to reference. Searching for material on Zachman on the Internet produces volumes of information, case studies, extensions, and tutorials on the topic. DoDAF was not designed as a business enterprise architecture framework. The forces driving its development include standardizing the documentation of systems proposed or acquired through vendors, enabling reuse of existing, proven architectures, and reducing the time to deploy systems-of-systems built from cataloged systems already available. Many of the documentation artifacts that Zachman and EA-cubed include in their frameworks are also prescribed in DoDAF, with different formal names but essentially the same semantics. The framework recommends more conceptual-level artifacts than Zachman. This could be attributed to the number of stakeholders involved in deciding whether a solution meets the need. DoDAF includes a requirement for a glossary and provides architectural guidance with each view based on current DoD strategy. Much of the guidance provided in DoDAF is directly applicable to the business world. The Net-Centric Warfare strategy, which is discussed within the guidance, is similar to the Service-Oriented Architecture shift happening now in the private sector.
The lack of business-strategic artifacts such as a business plan, investment plan, and ROI estimates would force an organization to supplement the prescribed DoDAF artifacts with several of its own or with artifacts from another framework. The Department of Defense Architecture Framework was designed to assist in the acquisition of systems from suppliers. There are many point-in-time similarities between Zachman and DoDAF in terms of DoDAF's level of refinement for use with large enterprises. DoDAF could potentially benefit from an approach similar to Bernard's, in which the flat tabular view is scaled up with depth. An extension of DoDAF with a third dimension could be used to document the architectures of multiple lines of business within an enterprise with more detail than is possible with a single artifact set. With minor enhancements, the DoDAF is a viable candidate for business enterprise architecture efforts.
References
Armour, F.J., Kaisler, S.H., Liu, S.Y. (1999). A Big-Picture Look at Enterprise Architectures. IT Professional, vol. 1, no. 1, pp. 35-42. Retrieved from http://doi.ieeecomputersociety.org/10.1109/6294.774792.
Bernard, S.A. (2005). An Introduction to Enterprise Architecture (2nd ed.). Bloomington, IN: AuthorHouse.
Coffee, P. (2005). Mastering DODAF will reap dividends. eWeek, 22(1), 38-39. Retrieved August 3, 2008, from Academic Search Premier database.
Dizard, W. P. (2007). Taking a cue from Britain: Pentagon's tweaked data architecture adds views covering acquisition, strategy. Government Computer News, 26, 11, p. 14(1). Retrieved August 2, 2008, from Academic OneFile via Gale: http://find.galegroup.com.dml.regis.edu/itx/start.do?prodId=AONE
DoDAF1. (2007). DoD Architecture Framework Version 1.5. Volume I: Definitions and Guidelines. Retrieved 31 July 2008 from http://www.defenselink.mil/cio-nii/docs/DoDAF_Volume_I.pdf.
DoDAF2. (2007). DoD Architecture Framework Version 1.5. Volume II: Product Descriptions. Retrieved 31 July 2008 from http://www.defenselink.mil/cio-nii/docs/DoDAF_Volume_II.pdf.
IBM. (2006). An IBM Rational Approach to the Department of Defense Architecture Framework (DoDAF). Retrieved 2 August 2008 from ftp://ftp.software.ibm.com/software/rational/web/whitepapers/G507-1903-00_v5_LoRes.pdf.
Leist, S., Zellner, G. (2006). Evaluation of current architecture frameworks. In Proceedings of the 2006 ACM Symposium on Applied Computing (Dijon, France, April 23-27, 2006). SAC '06. ACM, New York, NY, 1546-1553. DOI: http://doi.acm.org/10.1145/1141277.1141635.
RATL1. (2006). An IBM Rational approach to the Department of Defense Architecture Framework (DoDAF) - Part 1: Operational view. Retrieved 1 August 2008 from http://www.ibm.com/developerworks/rational/library/mar06/widney/.
RATL2. (2006). An IBM Rational approach to the Department of Defense Architecture Framework (DoDAF) - Part 2: Systems View. Retrieved 1 August 2008 from http://www.ibm.com/developerworks/rational/library/apr06/widney/.
Zachman, J.A. (1987). A framework for information systems architecture. IBM Systems Journal, Vol. 26, No. 3, 1987. Retrieved July 2008 from http://www.research.ibm.com/journal/sj/263/ibmsj2603E.pdf. ...

August 9, 2008 · 12 min · 2423 words · Jim Thario

Thoughts on the relationship between Rational Method Composer and EPF Composer

This seems to be a topic of increasing discussion both inside IBM and within the Eclipse Process Framework community. Questions such as "Which offering will get feature XYZ first?" "Are they functionally equivalent?" "Should the customer buy Rational Method Composer or will EPF Composer do the same thing?" are asked weekly. To refresh everyone, Rational Method Composer is a commercial tool by IBM Rational Software for the authoring of method content and for publishing configurations of method content as processes. EPF Composer is a subset of RMC code and was donated by IBM to the Eclipse Foundation as open source. The idea over time is that EPF Composer will be a core component of RMC, while RMC will add value through proprietary features and support that might not be possible in a purely open source offering. I would like to see the relationship between EPF Composer and Rational Method Composer develop in the same way the relationship between Red Hat Enterprise Linux and Fedora Core Linux has evolved. Red Hat Enterprise Linux and Fedora Core Linux are the result of Red Hat's experience in developing, maintaining, and selling Linux distributions over more than a decade. Red Hat Enterprise Linux is a commercial distribution of Linux that is sold by Red Hat. You cannot download RHEL executable code for free. Each major release of Red Hat Enterprise Linux is stable and evolves conservatively, which works very well if you are an IT administrator who does not want to deal with constant architectural churn in your server operating system. Fedora Core Linux, on the other hand, is entirely open source and is available in source or binary form for download by anyone. Fedora Core Linux pushes the technology to the bleeding edge. One could consider Fedora Core Linux unstable in terms of constant change, yet revolutionary in terms of the capabilities it incorporates with this regular cycle of change. An example is the Xen virtualization technology recently added to Fedora Core 5. Xen was developed at the University of Cambridge. Imagine having virtual machine technology, like what mainframes have had for decades, as a standard feature of your PC operating system. How would having the ability to partition the operating system into multiple, independent virtual systems change the landscape of data center design? It will. Once it is there, administrators will begin to count on it. Xen is not quite stable, yet adding it to Fedora Core 5 will push Xen toward stability by making it accessible in a highly popular Linux distribution. As cutting-edge features are added to Fedora Core Linux and stabilized, they are eventually consumed by Red Hat Enterprise Linux and supported over the long term [years] by the RHEL teams. We will see Xen show up in a future release of Red Hat Enterprise Linux when it has stabilized enough for commercial adoption. Additionally, proprietary features such as hardware device drivers and other closed-source capabilities can be found in RHEL, but will never make it to Fedora Core Linux. Let's project this idea onto Rational Method Composer and EPF Composer. Imagine EPF Composer is where new experimental ideas are realized in the tool for authoring and publishing software processes. Risks would be taken here, changes would happen quickly, and the essence of the tool would represent the cutting edge of ideas in the IT process authoring space from experts in business and academia.
As new concepts are stabilized in EPF Composer and deemed fit for commercial inclusion, they are consumed by Rational Method Composer and supported by the world's largest Information Technology company and the service professionals behind it. This would not mean that Rational Method Composer would be behind the times in terms of features. It means those features taken from EPF Composer and added into Rational Method Composer would be supported over the long term [years] and allow for a predictable maintenance path for CIOs, on-site technical support and formal training professionals. Additionally, Rational Method Composer might get capabilities that are not applicable to an entirely open source tool. A partnership with another vendor might allow Rational Method Composer to import and export data with another commercial closed-source tool. Such an agreement would not be possible in open source. I think it is important to define the nature of the relationship between these two offerings and how they will benefit from each other's existence. This is one possible approach for how that relationship might evolve. ...

March 22, 2006 · 4 min · 741 words · Jim Thario

OPEN Process Framework Repository

The following message was received today on the epf-dev mailing list for the Eclipse Process Framework. This is an exciting announcement from Donald Firesmith because it is another example of the process engineering community, both commercial and academic, bringing the content it has been developing for years to EPF to take advantage of the standardization of metamodel and tooling to author and publish the material.
On behalf of the OPEN Process Framework Repository Organization (www.opfro.org) and the OPEN Consortium (http://www.open.org.au/), I would like to officially announce that we will be donating our complete OPFRO repository of over 1,100 reusable, open-source method components to the eclipse epf project as an additional third repository. Currently, our repository is based on the OPEN Metamodel, but we will shortly begin translating it to fit the epf SPEM metamodel and associated xml xsd. We will also be working over the next few weeks to determine what level of effort support we can donate to epf.
Donald Firesmith
Chair, OPFRO ...

March 17, 2006 · 1 min · 160 words · Jim Thario

Eclipse Process Framework

I am a committer on the Eclipse Process Framework (EPF) open source project. The code and content that make up EPF were donated from the Rational Method Composer product and the Rational Unified Process. The open source version of RUP is called BUP, which stands for Basic Unified Process. Today you can download EPF Composer from the web site and begin authoring your own method content and publishing process configurations, or you can use the BUP method content and customize it for your own development project. A published version of BUP is also available for download. EPF Composer and the published BUP web site are available from the EPF download page. ...

February 15, 2006 · 1 min · 114 words · Jim Thario

Rational Method Composer

This past year I joined the Rational Method Composer (RMC) team at IBM. Rational Method Composer is a tool to author method content and configure that method content into processes. RMC can be used for authoring software development processes, IT operations processes, or any complex business process that requires documentation and consistency. Processes can be published and distributed via HTML sites. What I like about RMC is that it brings the concept of knowledge reuse to process engineering. Method content consists of roles, tasks, and work products, which are essentially small, generic pieces of a process. Those pieces can then be assembled into a process configuration and published. Using the same library of method content, a process author could build a configuration for a new software project and also a configuration for product maintenance. ...
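To illustrate the knowledge-reuse idea, here is a toy Java sketch, not the actual RMC or UMA model, in which a few generic method elements from a single library are assembled into two different process configurations, one for a new project and one for maintenance, as described above.

import java.util.ArrayList;
import java.util.List;

// Toy model only: method content (a task with its role and work product) lives in
// one library and is reused across multiple process configurations.
public class MethodLibraryDemo {

    static class Task {
        final String name;
        final String role;
        final String workProduct;

        Task(String name, String role, String workProduct) {
            this.name = name;
            this.role = role;
            this.workProduct = workProduct;
        }
    }

    static class ProcessConfiguration {
        final String name;
        final List<Task> tasks = new ArrayList<Task>();

        ProcessConfiguration(String name) {
            this.name = name;
        }

        ProcessConfiguration add(Task task) {
            tasks.add(task);
            return this;
        }

        // Stand-in for publishing the configuration as a browsable HTML site.
        void publish() {
            System.out.println("Process: " + name);
            for (Task t : tasks) {
                System.out.println("  " + t.role + " performs '" + t.name
                        + "' producing " + t.workProduct);
            }
        }
    }

    public static void main(String[] args) {
        // One shared library of method content...
        Task design = new Task("Outline architecture", "Architect", "Design document");
        Task fix = new Task("Resolve defect", "Developer", "Patched build");

        // ...assembled into two different configurations, as the post describes.
        new ProcessConfiguration("New software project").add(design).add(fix).publish();
        new ProcessConfiguration("Product maintenance").add(fix).publish();
    }
}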

February 14, 2006 · 1 min · 136 words · Jim Thario