Testing the Proximity Sensor of iPhone 4

The proximity sensor problem with iPhone 4 is a topic of much debate on discussion boards, blogs and news sites. The proximity sensor is used by the phone to determine if the user is holding the phone to her ear during a call. The phone uses input from the proximity sensor to decide whether to activate the screen and allow touch input. Many owners of the phone have reported the screen re-enabling while they hold the phone to their ear during a call, while others have reported no problems. I am one of the unfortunate owners who has inadvertently placed a caller on hold or interrupted other callers with touch tones emanating from my end of the call. As of today I am on my second iPhone 4 and am disappointed to report that my experience has not improved. There are plenty of emotional calls for Apple to quickly address this problem. I want to take a different approach. In this essay, I discuss testing approaches and what they mean for complex systems. I use the proximity sensor as a real-world example to demonstrate the problem many have experienced and the difficulty involved in testing for it.

Inside the iPhone is a complex hardware system arranged in a hierarchy of command and control: a microprocessor, memory, storage, and transceivers for Wi-Fi, cellular and Bluetooth networks. It has touch, light, sound and proximity sensor input. It has external interfaces for the dock, a headset and the SIM card. It has a single display integrated with the touch sensor input. The software distributed through these components is a system of collaborating state machines, each one working continuously to keep the outside world pleased with the experience of interfacing with the phone. It is not just a single human the iPhone must keep satisfied. The cellular networks, Wi-Fi access points, Bluetooth devices, iTunes and other external systems are part of this interactive picture as well. This is oversimplified, but you can begin to appreciate the enormous burden of testing such a small, complex device used by millions of people. How does a team even start to tackle such a problem?

Meyer (2008) presents seven principles in the planning, creation, execution, analysis and assessment of a testing regimen. Meyer writes that the purpose of the testing process, above and beyond any other, “is to uncover faults by triggering failures.” The more failures are triggered and fixed before a product is delivered to the end user, the less expensive they are to fix than after release. Humans are a required yet flawed variable in the planning and execution of test suites for complex systems like iPhone. Identifying all possible triggers for failure can be nearly impossible. Savor (2008) argues that “The number of invariants to consider [in test design] is typically beyond the comprehension of a human for a practical system.” How do we test the multitude of scenarios and their variations in complex systems without fully comprehending usage patterns and subtle timing requirements for failure in advance? Meyer (2008) argues that testing time can be a more important criterion than the absolute number of tests. Combining a time budget with random testing, which deliberately invites what are known as test escapes, opens the possibility of uncovering more faults than a huge, fixed suite of tests repeated continuously without deviation.

Test escapes, as defined by Chernak (2001), are defects that the fixed testing suite was not able to find, but that were instead found later by chance, by an unassociated test, or by an end user after the project was delivered to production; in other words, they surface through the introduction of randomness. Now that we have some background information and terminology, let’s design a test that could make iPhone’s proximity sensor fail to behave correctly. Consider an obvious test case for the proximity sensor:

1. Initiate or accept a call.
2. Hold the phone against the ear. Expect the screen to turn off and disable touch input.
3. Hold the phone away from the ear. Expect the screen to turn on and enable touch input.
4. End the call.

This test case can be verified in a few seconds. Do you see a problem with it? It is a valid test, but not a terribly realistic one. The problem with this test case is that it does not reflect what really happens during a call. We do not sit frozen with all of our joints locked into place, refusing to move until the call has completed. To improve the test case, we add some physical action during the call:

1. Initiate or accept a call.
2. Hold the phone against the ear. Expect the screen to turn off and disable touch input.
3. Keep the phone still for 30 seconds.
4. Change the rotation, angle and distance of the phone to the ear while never exceeding 0.25 inches from the side of the caller’s head. Expect the screen to remain off and touch input to remain disabled.
5. Return to step 3 if the call length is less than ten minutes.
6. Hold the phone away from the ear. Expect the screen to turn on and enable touch input.
7. End the call.

Now the test case reflects more reality. There are still some problems with it. When I am on a call, I often transfer the phone between ears. Holding a phone to the same ear for a long time gets uncomfortable. During lulls in the conversation, I pull the phone away from my ear to check the battery and signal levels, and then I bring it back to my ear. These two actions need to be added to the test case. Additionally, all of the timing in the test case is fixed. Because of the complex nature of the phone, small variations in timing anywhere can have an impact on the successful completion of our test case. Introducing some variability to the test case may raise the chances of finding a failure. In other words, we will purposely create test escapes through random combinations of action and timing:

1. Initiate or accept a call.
2. Hold the phone against the ear. Expect the screen to turn off and disable touch input.
3. Keep the phone still for [A] seconds.
4. Randomly choose step 5, 6 or 7.
5. Change the rotation, angle and distance of the phone to the ear while never exceeding 0.25 inches from the side of the caller’s head. Expect the screen to remain off and touch input to remain disabled.
6. Pull the phone away from the ear for [B] seconds and return the phone to the ear. Expect the screen to turn on and then off at the conclusion of the action.
7. Move the phone to the opposite ear. Do not exceed [C] seconds during the transfer. Expect the screen to turn on during the transfer and then off at the conclusion of the transfer.
8. Return to step 3 if the call length is less than [D] minutes.
9. Hold the phone away from the ear. Expect the screen to turn on and enable touch input.
10. End the call.

There are four variables in this test case. It is possible that certain combinations of [A], [B], [C] and [D] will cause the screen to re-enable during a call and cause the test case to fail. Have fun with this one. There are in fact combinations that induce proximity failure on iPhone 4 regardless of the version of iOS, including 4.1.
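
To show how the parameterized test case above might be driven by automation, here is a minimal sketch in Java that rolls values for [A], [B], [C] and [D] and emits a scenario script. It is an assumption-laden illustration rather than real test tooling: the class name, the value ranges and the idea of handing the script to a human tester or a robotic fixture are all mine. The seed is printed so a failing combination can be replayed.

```java
import java.util.Random;

// Hypothetical sketch: generates one randomized proximity-sensor scenario from
// the parameterized test case above. The ranges for [A]-[D] and the output
// format are assumptions, not actual Apple or carrier test tooling.
public class ProximityScenarioGenerator {

    public static void main(String[] args) {
        long seed = args.length > 0 ? Long.parseLong(args[0]) : System.currentTimeMillis();
        Random random = new Random(seed);
        System.out.println("# scenario seed: " + seed + " (pass the seed back in to replay a failure)");

        int callMinutes = 1 + random.nextInt(10);   // [D]: total call length, 1-10 minutes
        long callMillis = callMinutes * 60_000L;
        long elapsed = 0;

        System.out.println("Initiate or accept a call.");
        System.out.println("Hold the phone against the ear; expect screen off, touch disabled.");

        while (elapsed < callMillis) {
            int stillSeconds = 5 + random.nextInt(56);          // [A]: 5-60 seconds of stillness
            elapsed += stillSeconds * 1000L;
            System.out.printf("Keep the phone still for %d s.%n", stillSeconds);

            int action = random.nextInt(3);                     // randomly choose step 5, 6 or 7
            if (action == 0) {
                System.out.println("Vary rotation, angle and distance within 0.25 in of the head; expect screen to stay off.");
            } else if (action == 1) {
                int awaySeconds = 1 + random.nextInt(10);       // [B]: 1-10 seconds away from the ear
                elapsed += awaySeconds * 1000L;
                System.out.printf("Pull the phone away for %d s and return it; expect screen on, then off.%n", awaySeconds);
            } else {
                int transferSeconds = 1 + random.nextInt(5);    // [C]: 1-5 seconds to switch ears
                elapsed += transferSeconds * 1000L;
                System.out.printf("Move the phone to the opposite ear within %d s; expect screen on, then off.%n", transferSeconds);
            }
        }
        System.out.println("Hold the phone away from the ear; expect screen on, touch enabled.");
        System.out.println("End the call.");
    }
}
```

Running it repeatedly under a time budget, rather than for a fixed number of iterations, follows the spirit of Meyer’s argument; every seed whose scenario re-enables the screen is a reproducible failure worth reporting.
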

Finally, an important part of test design is the inclusion of negative test cases. Chernak (2001) writes, “A test case is negative if it exercises abnormal conditions by using either invalid data input or the wrong user action.” For a device like iPhone, tapping the screen constantly while it is disabled, making a call while holding the phone upside down, or using a faulty docking cable can all be considered negative test cases.

Testing complex systems, regardless of physical size, is an incredibly difficult task. Some of it can be performed by humans and some through automated systems. Finding failures in highly integrated systems requires a combination of fixed test suites, test cases that reflect real usage scenarios, and the introduction of test escapes through creative randomization.

References

Chernak, Y. (2001). Validating and improving test case effectiveness. IEEE Software, January/February 2001.

Meyer, B. (2008). Seven principles of software testing. Computer, August 2008.

Savor, T. (2008). Testing feature-rich reactive systems. IEEE Software, July/August 2008.

September 23, 2010 · 7 min · 1340 words · Jim Thario

XML's Role in Creating and Solving Information Security Problems

XML provides a means to communicate data across networks and among heterogeneous applications. XML is a common information technology acronym in 2010 and is supported in a large variety of applications and software development tooling. XML’s wide adoption into many technologies means it is likely being used in places not originally imagined by its designers. The resulting potential for misuse, erroneous configuration or lack of awareness of basic security issues is compounded by the speed and ease with which XML can be incorporated into new software systems. This paper presents a survey of the security and privacy issues related to XML technology use and deployment in an information technology system.

The XML Working Group was established in 1996 by the W3C; it was originally named the SGML Editorial Review Board (Eastlake, 2002). Today XML work at the W3C spans ten working groups, focused on areas including the core specifications, namespaces, scripting, queries and schema, and service modeling. XML is a descendant of SGML and allows the creation of entirely new, domain-specific vocabularies of elements, organized in hierarchical tree structures called documents. XML elements can represent anything related to data or behavior. An XML document can represent a customer’s contact information. It can represent a strategy to format information on a printer or screen. It can represent musical notes in a symphony. XML is being used today for a variety of purposes, including business-to-business and business-to-consumer interactions. It is used for the migration of data from legacy repositories to modern database management systems. XML is used in the syndication of news and literary content, as in the application of Atom and RSS feeds by web sites.

The flexibility and potential of XML use in information technology received increasing attention when web services technology was introduced. Web services communicate using XML. They can be queried by client programs to learn their methods, parameters and return data. They are self-describing, which means the approach of security through obscurity cannot apply if a web service is discovered running on a publicly accessible server. An attacker can ask the service for its method signatures, and it will respond with specifications of how to invoke it. This does not mean the attacker will have the necessary information, such as an authentication credential or a special element of data, to gain access to the web service. Treese (2002) summarizes the primary security concerns involved with deploying any communications system that must transmit and receive sensitive data:

- Confidentiality, to ensure that only the sender and receiver can read the message
- Authentication, to identify the sender and receiver of a message
- Integrity, to ensure that the message has not been tampered with
- Non-repudiation, to ensure that the sender cannot later deny having sent a message
- Authorization, to ensure that only “the right people” are able to read a message
- Key management, to ensure proper creation, storage, use, and destruction of sensitive cryptographic keys

Web services are a recent technology, but they fall prey to attacks similar to those used against past and current Internet technologies. Web services are vulnerable to many of the same attacks as browser-based applications, according to Goodin (2006). Parsing and validation of data provided inside a transmitted XML document must be performed regardless of the source of the transmission.
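
To give one concrete example of defensive parsing, the sketch below configures Java’s standard JAXP DOM parser to reject document type declarations and external entities before it touches untrusted input. The feature URIs are the ones commonly documented for Xerces-based parsers; treat this as a starting point to verify against your own parser’s documentation rather than a complete defense.

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.InputStream;

// Builds a DOM parser locked down for untrusted XML input.
public final class HardenedXmlParser {

    public static Document parseUntrusted(InputStream xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

        // Refuse DOCTYPE declarations entirely; this blocks most entity-based attacks.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and suspenders: disable external general and parameter entities.
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        // Enable the JAXP secure-processing limits (entity expansion caps, etc.).
        factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        factory.setXIncludeAware(false);
        factory.setExpandEntityReferences(false);

        DocumentBuilder builder = factory.newDocumentBuilder();
        return builder.parse(xml);
    }
}
```

The same hardening ideas carry over to the SAX and StAX parsers used inside SOAP stacks and AJAX endpoints.
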

DTDs or XML schemas that are not strict enough in their matching constraints can leave an open path for parsing attacks. Goodin (2006) details the primary attacks on web services as:

- Injection attacks that use XML to hide malicious content, such as using character encoding to hide the content of strings
- Buffer overflow vulnerabilities in SOAP and XML parsers running on the system providing the web service
- XML entity attacks, where input references an invalid external file such as a CSS or schema, causing the parser or other parts of the application to crash in unexpected ways

Lawton (2007) details a similar problem with AJAX technology. AJAX stands for Asynchronous JavaScript and XML. It is not so much a specific technology as a technique to reduce the number of whole-page loads performed by a browser. An AJAX-enabled application can update portions of a browser page with data from a server. The data transmitted between browser and server in an AJAX communication is formatted in XML. The server side of an AJAX application can be vulnerable to the same attacks described for web services above: overflows, injection of encoded data, and invalid documents. Goodin (2006) recommends that IT staff periodically scan the publicly facing systems of an enterprise for undocumented web services, and scan known web services and applications with analysis tools such as Rational’s AppScan (2009). Lawton (2007) also recommends the use of vulnerability scanners for source code and deployed systems.

A common mistake made even today in the deployment of web services or web applications is failing to use the HTTPS or TLS protocol to secure the transmission of data between the client and server. All data transmitted across the Internet passes through an unknown number of routers and hosts before arriving at the destination. The format of an XML document makes it easy for eavesdroppers to identify and potentially capture a copy of this data as it passes through networking equipment. The easiest solution to this problem is to host the web service or web application over the HTTPS protocol. HTTPS is HTTP over SSL, which encrypts the data during transmission. HTTPS will not protect data before it leaves the source or after it arrives at the destination.

Long et al. (2003) discuss some of the challenges of bringing XML-encoded transactions to the financial services industry. Privacy is a primary concern for electronic financial transactions. Long states that simply using SSL to encrypt transmissions from system to system is not enough to satisfy the security needs of the financial sector. There is also a need to encrypt portions of an XML document differently, so that sensitive content has different visibility depending on the system or person accessing it. The XML Encryption Syntax and Processing standard allows any portion of an XML document, or the entire document, to be encrypted with a key and then placed within an XML document for transmission or storage. The encrypted document remains a well-formed XML document. Eastlake (2002) describes the Encryption Syntax Processing and Signature Syntax Processing recommendations for XML. Using the ESP recommendation, portions of the document can be encrypted with different keys, thus allowing different people or applications to read the portions of the document for which they have keys. This approach provides a form of multi-level security within a single XML document.
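
The fragment below is not an implementation of the W3C XML Encryption syntax; it is a minimal sketch of the underlying idea, using the standard javax.crypto API to replace the text of one element with base64 ciphertext while the document stays well formed. A real ESP deployment would use an XML Encryption library that emits the proper EncryptedData structure, and, as Eastlake describes, each section of the document could be encrypted under a different key. The element names and key handling here are invented for illustration.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Sketch only: encrypts the text of one element with AES-GCM and replaces it
// with base64 ciphertext. Real deployments should use an XML Encryption
// library that emits the standard <EncryptedData> structure.
public class SelectiveElementEncryption {

    public static void main(String[] args) throws Exception {
        String xml = "<payment><account>1234-5678</account><amount>25.00</amount></payment>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // One key per audience: the account number could use a different key than other sections.
        SecretKey accountKey = KeyGenerator.getInstance("AES").generateKey();

        Element account = (Element) doc.getElementsByTagName("account").item(0);
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, accountKey, new GCMParameterSpec(128, iv));
        byte[] cipherText = cipher.doFinal(account.getTextContent().getBytes(StandardCharsets.UTF_8));

        // Replace the clear text with ciphertext and carry the IV as an attribute.
        account.setTextContent(Base64.getEncoder().encodeToString(cipherText));
        account.setAttribute("enc", "aes-gcm");
        account.setAttribute("iv", Base64.getEncoder().encodeToString(iv));

        // Serialize the still well-formed document.
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}
```
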

With web services comes the problem of knowing which ones to trust and use. Even more difficult is the problem of giving that determination to a computer. Carminati, Ferrari and Hung (2005) describe the problem of automating the evaluation of the privacy policies of web services in today’s world of data storage, cloud, banking and financial institutions and multi-player gaming businesses that exist entirely on the Internet. They reason that systems discovered in web services directories may not operate with privacy policies compatible with those required by the consumer’s organization or local laws. They propose three solutions for handling this problem. The first is basic access control from a third party that evaluates and quantifies the privacy policy of a service provider. The next is cryptography in the services directory, so that the consumer decodes only compatible services. The final solution is a hash-based approach, which looks for flags supplied by the web service provider describing its support for specific aspects of privacy policy.

As with the problem of transmitting sensitive XML data across the Internet unencrypted, there is also the problem of authenticating the source of an XML document. How does a person or system verify the document’s originator? The Signature Syntax Processing recommendation briefly mentioned above provides a method to enclose any number of elements in a digital signature. This method uses public key cryptography to sign a portion of the document’s data. The originator of the document provides a public key to the recipient through a secure channel (for example, on a flash drive) in advance of transmitting the data. The originator uses their secret key to sign the document data, which produces a new, smaller block of data called a digital signature. The signature is embedded in XML around the protected elements. The signature and the XML data are used by the recipient to determine if the data was changed in transmission. The signature is also used to verify the identity of the signer. Both authentication steps require the recipient to have the sender’s public key.

The problem of securing documents through path-based access control was addressed early in XML’s lifetime. Damiani et al. (2001) describe an access control mechanism specifically designed for XML documents. Their Access Control Processor for XML uses XPath to describe the target location within a schema for access, along with the rights associated with groups or specific users of the system. Additionally, Böttcher and Hartel (2009) describe the design of an auditing system to determine if confidential information was accessed directly or indirectly. They use a patient records system as an example scenario for their design. Their system is unique in that it can analyze “[…] the problem of whether the seen data is or is not sufficient to derive the disclosed secret information.” The authors do not discuss whether their design is transportable to non-XML data sources, such as relational databases.

In 2010, we have technologies to use with XML in several combinations to secure document content during transmission and in long-term storage. The use of SSL and the Encryption Syntax Processing and Signature Syntax Processing recommendations provides a rich foundation for creating secure XML applications. The maturity of web servers, the availability of code analyzers and the increasing sophistication of IT security tools decrease the risk of infrastructure falling to an XML-centric attack.
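
Since the Signature Syntax Processing recommendation is implemented directly in the Java platform (the JSR 105 javax.xml.crypto.dsig API), producing an enveloped signature takes only a few dozen lines. The sketch below signs a small in-memory document with a throwaway RSA key pair; the document content is invented, and a real system would use a managed key and deliver the public half to recipients out of band, as described above.

```java
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;

import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.keyinfo.KeyValue;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;

// Minimal enveloped-signature sketch using the JDK's JSR 105 API.
// The document and the throwaway RSA key pair are for illustration only.
public class EnvelopedSignatureExample {

    public static void main(String[] args) throws Exception {
        String xml = "<order id=\"42\"><item>widget</item></order>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for XML Signature processing
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keyPair = kpg.generateKeyPair();

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");

        // Sign the whole document (URI "") with an enveloped transform.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(
                        fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);

        SignedInfo signedInfo = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                // rsa-sha256 given by URI; newer JDKs also expose it as a constant
                fac.newSignatureMethod("http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));

        // Publish the signer's public key inside the signature's KeyInfo.
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyValue keyValue = kif.newKeyValue(keyPair.getPublic());
        KeyInfo keyInfo = kif.newKeyInfo(Collections.singletonList(keyValue));

        DOMSignContext signContext = new DOMSignContext(keyPair.getPrivate(), doc.getDocumentElement());
        fac.newXMLSignature(signedInfo, keyInfo).sign(signContext);

        // Emit the signed, still well-formed document.
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}
```

Verification on the receiving side uses the same factory with a DOMValidateContext and, importantly, a public key obtained through a trusted channel rather than whatever KeyInfo happens to be embedded in the document.
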

With the technical problems of securing XML addressed through various W3C recommendations, code libraries and tools, the remaining security issue in XML-related technologies becomes one of education, precedent in use, and organizational standards for their application. This is a recurring problem with many disruptive technologies: awareness. Goodin (2006) says, “[…] the security of web services depends on an increased awareness of the developers who create them, and that will require a major shift in thinking.” XML has introduced, and solved, many of its own security problems through the application of its own technology. It now becomes important for the industry to document and share the experiences and practices of deploying secure XML-based Internet applications using the technologies recommended by the W3C and elsewhere.

References

Böttcher, S., Hartel, R. (2009). Information disclosure by answers to XPath queries. Journal of Computer Security, 17 (2009), 69-99.

Carminati, B., Ferrari, E., Hung, P. C. K. (2005). Exploring privacy issues in web services discovery agencies. IEEE Security and Privacy, 2005, 14-21.

Damiani, E., Samarati, P., De Capitani di Vimercati, S., Paraboschi, S. (2001). Controlling access to XML documents. IEEE Internet Computing, November-December 2001, 18-28.

Eastlake, D. E. III, Niles, K. (2002). Secure XML: The New Syntax for Signatures and Encryption. Addison-Wesley Professional. July 19, 2002. ISBN-13: 978-0-201-75605-0.

Geer, D. (2003). Taking steps to secure web services. Computer, October 2003, 14-16.

Goodin, D. (2006). Shielding web services from attack. Infoworld.com, 11.27.06, 27-32.

Lawton, G. (2007). Web 2.0 creates security challenges. Computer, October 2007, 13-16.

Long, J., Yuan, M. J., Whinston, A. B. (2003). Securing a new era of financial services. IT Pro, July-August 2003, 15-21. 1520-9202/03.

Naedele, M. (2003). Standards for XML and web services security. Computer, April 2003, 96-98.

Rational AppScan. (2009). IBM Rational Web application security. Retrieved 14 February 2009 from http://www-01.ibm.com/software/rational/offerings/websecurity/webappsecurity.html.

Treese, W. (2002). XML, web services, and XML. NW, Putting it together, September 2002, 9-12.

March 14, 2010 · 10 min · 1934 words · Jim Thario

Creating Tools that Create Art

I recently developed and installed a creation called Short Attention Span Collaborative Imagery in the Annex at Core New Art Space in Denver. Some people have called it art, while I call it a tool for generating art. The SASCI piece runs on two Internet-connected computers in the gallery. It uses Twitter trends and specific search terms to drive the continuous creation of collages of images and text on two wall-facing projectors. Input from Twitter, specifically the current and daily trends and a search for the words Denver and Art, is the source of the imagery. It uses the Stanford Natural Language Parser, Creative Commons-licensed images from Flickr and text from Wikipedia. I wrote the programs in Java and JavaFX. About every 30 minutes, background tasks collect the latest terms and matching messages from Twitter. A different program using the Stanford NLP parses the messages looking for interesting nouns, and collects images and text associated with the source words from Flickr and Wikipedia. Each collage takes anywhere from 2 to 5 minutes to build in front of the audience. It is never the same. The collages abstractly reflect people’s conversations on Twitter as recent as the last 30 minutes.

If you are in the area, please check it out. Core New Art Space is located at 900 Santa Fe Drive in Denver. Call or browse the web site for gallery hours. 303-297-8428. http://corenewartspace.com.
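
Under the hood, the piece is a scheduled fetch-parse-fetch loop. The sketch below is a simplified, hypothetical reconstruction in plain Java rather than the actual SASCI source: the TrendSource, NounExtractor and MediaSource interfaces stand in for the Twitter client, the Stanford parser wrapper and the Flickr/Wikipedia clients, and only the 30-minute refresh period is taken from the description above.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the SASCI collection loop; the three interfaces are
// stand-ins for the real Twitter, Stanford NLP and Flickr/Wikipedia clients.
public class CollagePipeline {

    interface TrendSource { List<String> latestMessages(); }                          // e.g. Twitter trends and searches
    interface NounExtractor { List<String> interestingNouns(List<String> messages); } // e.g. Stanford NLP wrapper
    interface MediaSource { List<String> mediaFor(String noun); }                     // e.g. Flickr images, Wikipedia text

    private final TrendSource trends;
    private final NounExtractor parser;
    private final MediaSource media;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    CollagePipeline(TrendSource trends, NounExtractor parser, MediaSource media) {
        this.trends = trends;
        this.parser = parser;
        this.media = media;
    }

    // Refresh source material every 30 minutes; the renderer draws from whatever was last collected.
    void start() {
        scheduler.scheduleAtFixedRate(this::refresh, 0, 30, TimeUnit.MINUTES);
    }

    private void refresh() {
        List<String> messages = trends.latestMessages();
        for (String noun : parser.interestingNouns(messages)) {
            List<String> assets = media.mediaFor(noun);
            // hand the assets off to the renderer that builds the collage over the next few minutes
            System.out.println(noun + " -> " + assets.size() + " assets queued");
        }
    }

    public static void main(String[] args) {
        CollagePipeline demo = new CollagePipeline(
                () -> List.of("Denver art walk tonight", "new mural on Santa Fe"),
                msgs -> List.of("mural", "Denver"),
                noun -> List.of("image:" + noun, "text:" + noun));
        demo.start();
    }
}
```

In the installation itself, the collected assets feed the JavaFX renderer that builds each collage in front of the audience rather than being printed.
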

August 23, 2009 · 2 min · 231 words · Jim Thario

Security Benefits and Liabilities of Virtualization Technology

This paper provides a broad discussion of the security issues related to virtualization technology, such as the offerings by VMware, Microsoft and IBM. It presents an overview of virtualization, the various types of virtualization, and a detailed discussion of full computer virtualization technology. The benefits of virtualization technology are presented from a position of security, convenience and cost. The paper continues with a discussion of the security liabilities of virtualization. It provides examples of recent attempts by security researchers to design attacks directed at the virtual machine manager, also known as the hypervisor. A look at trends in the application of virtualization technology concludes the discussion.

Virtualization is a type of abstraction of resources. In computer technology, virtualization can be used to simulate the presence of memory, disk, video or entire computers where they exist partially or not at all. The first virtualization technology dates back to the 1960s, when IBM and other computing pioneers created operating systems and storage systems that presented an isolated environment to the user that appeared as a single-user system. Today our desktop operating systems use memory virtualization to provide a larger runtime space for applications than there is random access memory. The operating system uses a combination of solid-state memory and a paging file on disk to move data blocks between the two media depending on their frequency of use. Enterprise storage virtualization, such as the solutions provided by IBM, EMC and Sun, creates an illusion of massive consolidated storage space by combining solid-state, magnetic disk and streaming tape into a single logical direct-access image. Less frequently accessed data blocks are migrated to slower media while often-accessed data blocks are maintained on faster media. All storage appears online and ready to access. The recent popularity of virtual machines for running Java and .NET software allows a common runtime environment regardless of the actual hardware and operating system hosting the virtual machine. This approach reduces the work required by the software provider to create a solution capable of running on a variety of platforms.

Cardwell (2007) defines computer virtualization as a computer within a computer. Virtualization software simulates a computer to the guest operating system, including the processor, hardware components and BIOS. The guest operating system running within the virtualized environment should not know or care that its hardware resources are not physical resources but are instead simulated through software. The two types of computer virtualization are called full virtualization and para-virtualization. Wong (2005) discusses the differences between full virtualization and para-virtualization. Full virtualization does not require changes to the guest operating system. Products such as VMware provide full virtualization. This type of virtualization requires support in the host system’s processor to trap and help emulate privileged instructions executed by the guest operating system. Para-virtualization requires modifications to the guest OS to run on the virtual machine manager. Open source operating systems, such as Linux, can be modified to support a para-virtualized environment. This type of virtualization often performs better than full virtualization, but it is restricted to guest operating systems that have been modified to run in this specific environment.

Today there are many popular, contemporary and affordable virtualization products on the market. VMware is the most widely known, but IBM has the longest history with virtualization technologies. As mentioned previously, virtualization for mainframe systems dates back to the 1960s. VMware has targeted Intel platform virtualization since the 1990s. Microsoft acquired Virtual PC as the market for virtualization grew from VMware’s popularity. Xen is an open source virtualization solution. Xen supports full and para-virtualized systems. It is popular with Linux distributions, which often provide para-virtualized kernels ready to deploy as guest operating systems. IBM’s two primary virtualization platforms are the System z mainframe and Power systems. “The latest version of z/VM […] will now support up to 32 processors and offer users 128 GB of memory, which will allow the software to host more than 1,000 virtual […] Linux servers.” (Ferguson, 2007).

Virtualization technology, which was originally used on centralized systems to share resources and provide a partitioned view to a single user, is now popular on server and workstation platforms running Intel x86 hardware. Cardwell (2007) presents several use cases of virtualization benefits, including consolidation of servers, quick enterprise solutions, software development, and sales demonstrations. Separate physical servers running periodically accessed services can be virtualized and run together on a single physical system. Short-lived server systems, such as those for conferences, can be created as virtual machines without the need to acquire physical servers to host the solution. Software developers often need multiple systems to develop server-based solutions, or they require several versions of tools that may conflict when installed together. Sales demonstrations can be configured and distributed to customer-facing staff as virtual machines. Many different configurations can be created and options demonstrated to customers on demand to see how various solutions apply to their environment.

As processing capability increases on the desktop and virtualization providers offer cost-effective software to create virtualized environments, this is a primary growth area for the technology. Burt (2006) says the mobility of virtual machines is a huge benefit of desktop virtualization. Virtual machines can be stored on portable media such as USB hard disks or flash storage. They can be paused on a host system at an office, taken on a plane to the customer’s location and then resumed on a new host. This can happen while keeping the virtualized operating system completely oblivious to its actual location and host hardware. Testing and quality assurance organizations have also adopted virtualization technology widely. According to Tiller (2006), the benefits of virtualization include the ability to react to and test vulnerabilities and patches in a much shorter timeframe. Single virtualized systems can be dedicated to an individual task in a network of systems. Upgrading or relocating any virtualized system can be performed without affecting other parts of the entire solution. There is a large benefit to security and availability with virtualization technology. Virtual machines are separated from the host operating system. Viruses, malware and software defects that affect the virtualized system are contained and, in most cases, cannot spread to the host operating system.

Disaster recovery planning has the potential for simplification under a virtualized infrastructure. Virtual machine images, such as those used by VMware, are stored on the host operating system as files. Backing up or relocating virtual machines from one host to another can be as simple as suspending the running virtual machine, moving the set of files across the network and resuming the virtual machine. Virtual machine images can be briefly suspended and stored to tape or mirrored to a remote location as a disaster recovery process. Duntemann (2005) points out that a virtual machine, with its operating system and installed applications, is commonly stored as disk files and can be archived, distributed, or restored to an initial state using the virtual machine manager. These files are also subject to attack and potential modification if the host system is compromised. A successful attack against the host system can make the virtual machines vulnerable to modification or other penetration.

Virtualization is also known as a system multiplier technology. “It is very likely that IT managers will have to increase the number and expertise of security personnel devoted to security policy creation and maintenance as the percentage of VMs increase in the data center.” (Sturdevant, 2008). Where a virus would previously attack a single operating system running on a physical host, a virus can now land on the host or any of its virtualized guests. The potential of creating an army of infected systems now exists with just a single physical host. A Windows operating system running in a virtual machine is just as vulnerable to flaws and exploits as the same operating system running on a physical host. “At a broad level, virtualized environments require the same physical and network security precautions as any non-virtualized IT resource.” (Peterson, 2007). “[…] because of the rush to adopt virtualization for server consolidation, many security issues are overlooked and best practices are not applied.”

There are fundamental problems for IT administrators adopting virtualization technology within their labs and data centers. Products such as VMware have internal virtual networks that exist only within the host system. This network allows the virtualized systems and the host to communicate without having to use the external, physical network. The difficulty is that monitoring the internal, virtual network requires the installation of tools that are designed for virtualized systems. Edwards (2009) points out the need for management tools to monitor communication among virtual machines and their host operating system in detail. Each host would require monitoring tools, versus a single installation on a network of only physical systems. Discovery and management of virtualized systems will place more burdens on IT staff, according to Tiller (2006). The ease with which virtual machines can be instantiated, relocated and destroyed will require a “quantum shift in security strategy and willingness to adapt.”

As the popularity of virtualization on a smaller scale has increased, a new class of attack on virtual machines and their host virtual machine managers has received more attention. Virtual machines have unique hardware signatures that can be used to identify them and help an attacker tailor an exploit. “As it is, virtualization vendors have some work to do to protect virtual machine instances from being discovered as virtual.” (Yager, 2006).
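
To make the idea of hardware signatures concrete, the sketch below reads the DMI identification strings Linux exposes under /sys and matches them against well-known hypervisor vendor names. It is an assumption-laden illustration, not a tool from any of the articles cited here: the paths are Linux-specific, the marker list is deliberately incomplete, and serious malware relies on lower-level checks such as the CPUID hypervisor bit.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Rough sketch: guess whether we are inside a VM by reading Linux DMI strings.
// The vendor list is intentionally partial; real detection (and real malware)
// also checks CPUID leaves, device driver names, MAC address prefixes, etc.
public class VirtualMachineSniffer {

    private static final String[] DMI_FILES = {
            "/sys/class/dmi/id/sys_vendor",
            "/sys/class/dmi/id/product_name"
    };

    private static final String[] VM_MARKERS = {
            "VMware", "VirtualBox", "innotek", "QEMU", "KVM", "Xen", "Microsoft Corporation"
    };

    public static void main(String[] args) {
        for (String file : DMI_FILES) {
            Path path = Paths.get(file);
            if (!Files.isReadable(path)) {
                continue; // not Linux, or not permitted to read DMI data
            }
            try {
                String value = new String(Files.readAllBytes(path), StandardCharsets.UTF_8).trim();
                for (String marker : VM_MARKERS) {
                    if (value.contains(marker)) {
                        System.out.println(file + " reports '" + value + "' -> looks virtualized");
                        return;
                    }
                }
            } catch (IOException e) {
                System.err.println("could not read " + file + ": " + e.getMessage());
            }
        }
        System.out.println("no obvious hypervisor signature found");
    }
}
```
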

The CPU model and various device drivers loaded by the operating system can identify a virtualized system. In fact, many virtualization vendors supply device drivers for guest operating systems to take better advantage of the virtualized environment. These device drivers are just as susceptible to flaws and vulnerabilities as their non-virtualized counterparts. The host virtual machine managers, also known as hypervisors, are being targeted by new types of attacks as well. Vijayan (2007) points out that dedicated hypervisors, running directly above the hardware of a computer, can be used to attack the operating systems and applications they host with little or no possibility of detection. The SubVirt research project by the University of Michigan and Microsoft uses virtual machine technology to install a rootkit and take control of multiple virtual machines. Finally, attacks using virtualization technology do not require hypervisor or virtual machine manager software at all. Technology present in today’s microprocessors that is utilized by hypervisors can also be utilized by malware, such as rootkits and viruses, to take over a machine at the lowest level of control possible. “Security researcher Joanna Rutkowska presented a proof of concept attack known as ‘blue pill’ in 2006, that she said virtualized an operating system and was undetectable. […] Rutkowska and other have continued with such research, and this year she posited a new attack focusing on hypervisors.” (Bradbury, 2008).

Virtualization is not new to information technology. It dates back over four decades to the early mainframes and large storage systems built to protect and better utilize available computing resources. This paper detailed the kinds, benefits and security liabilities of virtualization technology. Information about the nature of attacks against hosts and guests in a virtualized infrastructure was presented. New virtualization products for modern, powerful servers and desktop hardware are helping satisfy the renewed interest in making better use of resources during times of tightening budgets. The benefits of this updated technology must be weighed against the challenges of securing and protecting the proliferation of virtual machines. Adaptation and transformation of policies and approach within IT organizations must be proactive to stay ahead of the disruptive change currently taking place with virtualization.

References

Bradbury, D. (2008). Virtually secure? Engineering & Technology. 8 November - 21 November, 2008. Pg. 54.

Burt, J., Spooner, J. G. (2006). Virtualization edges toward PCs. eWeek. February 20, 2006. Pg. 24.

Cardwell, T. (2007). Virtualization: an overview of the hottest technology that is changing the way we use computers. www.japaninc.com. November/December, 2007. Pg. 26.

Duntemann, J. (2005). Inside the virtual machine. PC Magazine. September 20, 2005. Pg. 66.

Edwards, J. (2009). Securing your virtualized environment. Computerworld. March 16, 2009. Pg. 26.

Ferguson, S. (2007). IBM launches new virtualization tools. eWeek. February 12/19, 2007. Pg. 18.

Peterson, J. (2007). Security rules have changed. Communications News. May, 2007. Pg. 18.

PowerVM. (2009). IBM PowerVM: The virtualization platform for UNIX, Linux and IBM i clients. Retrieved July 25, 2009 from http://www-03.ibm.com/systems/power/software/virtualization/index.html.

Sturdevant, C. (2008). Security in a virtualized world. eWeek. September 22, 2008. Pg. 35.

Tiller, J. (2006). Virtual security: the new security tool? Information Systems Security. July/August, 2006. Pg. 2.

Wong, W. (2005). Platforms strive for virtual security. Electronic Design. August 4, 2005. Pg. 44.

Yager, T. (2006). Virtualization and security. Infoworld. November 20, 2006. Pg. 16.

Vijayan, J. (2007). Virtualization increases IT security pressures. Computerworld. August 27, 2007. Pg. 14.

August 1, 2009 · 10 min · 2125 words · Jim Thario

Use of Cryptography in Securing Database Access and Content

This research paper explores the use of cryptography in database security. It specifically covers applications of encryption in authentication, transmission of data between client and server, and protection of stored content. The paper begins with an overview of encryption techniques, specifically symmetric and asymmetric encryption. It follows with a specific discussion about the use of cryptography in database solutions. The paper concludes with a short summary of commercial solutions intended for increasing the security of database content and client/server transactions.

Whitfield Diffie, a cryptographic researcher and Sun Microsystems CSO, says, “Cryptography is the most flexible way we know of protecting [data and] communications in channels that we don’t control.” (Carpenter, 2007). Cryptography is “the enciphering [encryption] and deciphering [decryption] of messages in secret code or cipher; the computerized encoding and decoding of information.” (CRYPTO, 2009). There are two primary means of encryption in use today: symmetric key encryption and asymmetric key encryption. Symmetric key encryption uses a single key to encrypt and decrypt information. Asymmetric key encryption, also known as public key cryptography, uses two keys: one to encrypt information and a second to decrypt it. In addition to encryption and decryption, public key cryptography can be used to create and verify digital signatures of blocks of text or binary data without encrypting them. A digital signature is a small block of information cryptographically generated from content, like an email message or an installation program for software. The private key in the asymmetric solution is used to create a digital signature of the data, while the public key verifies the integrity of the data against the signature that was created with the private key.

The main advantage of public key cryptography over the symmetric key system is that the public key can be given away; as the name implies, it is made public. Anyone with a public key can encrypt a message, and only the holder of the matching private key can decrypt that message. In the symmetric system, all parties must hold the same key. Public key cryptography can also be used to verify the identity of an individual, application or computer system. As a simple example, let us say I have an asymmetric key pair and provide you with my public key. You can be a human or a software application. As long as I keep my private key protected so that no one else can obtain it, only I can generate a digital signature that you can use with my public key to prove mathematically that the signature came from me. This approach is much more robust and less susceptible to attack than the traditional username and password approach.

Application of cryptography does not come without the overhead of ongoing management of the technology. In a past interview (Carpenter, 2007), Whitfield Diffie, a co-inventor of public key cryptography, says the main obstacle to widespread adoption of strong encryption within IT infrastructures is key management: managing the small strings of data that keep encrypted data from being deciphered by others. Proper integration of cryptographic technologies into a database infrastructure can provide protection beyond username and password authentication and authorization. It can prevent unauthorized parties from reading sensitive data during transmission or while it is stored on media.
Some U.S. government standards require the use of encryption for stored and transmitted personal information. Grimes (2006) details the recent laws passed in the United States requiring the protection of personal data. These laws include the Gramm-Leach-Bliley Act for protection of consumer financial data, the Health Insurance Portability and Accountability Act for personal health-related data, and the Electronic Communications Privacy Act, which gives broad legal protection to electronically transmitted data.

As discussed above, public key cryptography can be used to authenticate a person, application or computer using digital signature technology. A database management system enhanced to use public keys for authentication would store those keys and associate them with specific users. The client would use its private key to sign a small block of data randomly chosen by the server. The client would return a digital signature of that data, which the server could verify using the stored public keys of the various users. A verification match would identify the specific user (a sketch of this exchange appears below).

The second application of encryption technology in database security is protecting the transmission of data between a client and server. The client may be a web-based application running on a separate server and communicating over a local network, or it may be a fat client located in another department or at some other location on the Internet. A technology called TLS can be used to provide confidentiality of all communications between the client and server, i.e. the database connection. “Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide security and data integrity for communications over networks such as the Internet.” (TLS, 2009). Web servers and browsers use the TLS protocol to protect data transmissions such as credit card numbers or other personal information. The technology can be used to protect any data transmission for any type of client-server solution, including database systems. TLS also has authentication capability using public key cryptography. This type of authentication would allow only known public keys to make a connection. This approach is not integrated at a higher level in the solution, such as the application level.

Finally, cryptography can be used to protect the entire content of database storage, specific tables or columns of table data. Encrypting stored content can protect sensitive data from access within the database management system, from loss of the storage media, and from an external process that reads raw data blocks from the media. The extent to which stored content is encrypted must be weighed against the overhead of encrypting and decrypting data in transaction-intense systems. Britt (2006) stresses the importance of selectively encrypting only those portions of the content that are evaluated to be a security risk if released to the public. He says a “[…] misconception is that adding encryption will put a tremendous strain on database performance during queries and loads.” This type of protection often uses symmetric key encryption because it is much faster than the public key solution. Marwitz (2008) describes several levels of database content encryption available in Microsoft SQL Server 2005 and 2008. SQL Server 2008 provides the ability to use public key authentication directly in the access control subsystem.
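
To make the challenge-response exchange described above concrete, here is a minimal sketch using only the java.security package: the server keeps a user's registered public key, issues a random challenge, and verifies the signature the client produces with its private key. Key storage, transport, replay protection and the surrounding database protocol are omitted, and the names are illustrative rather than drawn from any particular product.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;

// Sketch of public key challenge-response authentication for a database login.
// Omits key storage, transport and replay protection; for illustration only.
public class ChallengeResponseDemo {

    // Server side: issue a random challenge for the client to sign.
    static byte[] newChallenge() {
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);
        return challenge;
    }

    // Client side: sign the challenge with the user's private key.
    static byte[] sign(byte[] challenge, PrivateKey privateKey) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(privateKey);
        signer.update(challenge);
        return signer.sign();
    }

    // Server side: verify the signature against the public key registered for this user.
    static boolean verify(byte[] challenge, byte[] signature, PublicKey registeredKey) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(registeredKey);
        verifier.update(challenge);
        return verifier.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        // Enrollment: the user generates a key pair and registers only the public half.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair user = kpg.generateKeyPair();

        // Login: server challenges, client signs, server verifies and maps the key to a user id.
        byte[] challenge = newChallenge();
        byte[] signature = sign(challenge, user.getPrivate());
        System.out.println("authenticated: " + verify(challenge, signature, user.getPublic()));
    }
}
```

A successful verification identifies the user without a reusable password ever crossing the network.
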

Additionally, the entire database server storage, individual databases and table columns can be encrypted using public key encryption (SQLS, 2009). Table columns, such as those used to store social security numbers, credit card numbers, or other sensitive personal information, are a good choice for performance-sensitive systems. Use of this capability means that the only way to obtain access to the unencrypted data within a column of a database table protected in this manner is to use the private key of an individual who has been granted access. The user’s private key is used to authenticate and gain access to information in the database. Extra protection is gained since the private key is never co-located with the encrypted data.

IBM’s DB2 product supports a number of different cryptographic capabilities and attempts to leverage as many of the capabilities present in the hosting operating system as possible, whether Intel-based, minicomputer or mainframe. Authentication to the database from a client can be performed over a variety of encrypted connection types or using Kerberos key exchange. DB2 also supports the concept of authentication plug-ins that can be used with encrypted connections. After authentication has succeeded, DB2 can provide client-server data transmission over a TLS connection and optionally validate the connection using public key cryptography. Like Microsoft SQL Server, the most recent releases of DB2 can encrypt the entire storage area, single databases, or specific columns within the database (DB2, 2009).

This paper provided a broad survey of how cryptographic technologies can raise the security posture of database solutions. Cryptography is becoming a common tool to solve many problems of privacy and protection of sensitive information in growing warehouses of online personal information. This paper described the use of cryptography in database client authentication, transmission of transaction data, and protection of stored content. Two commercial products’ cryptographic capabilities were explored in the concluding discussion. There are more commercial, free and open source solutions for protecting database systems not mentioned in this paper. As citizens and government continue to place pressure on institutions to protect private information, expect to see the landscape of cryptographic technologies for database management systems expand.

References

Britt, P. (2006). The encryption code. Information Today. March 2006, vol. 23, issue 3.

Carpenter, J. (2007). The grill: an interview with Whitfield Diffie. Computerworld. August 27, 2007. Page 24.

CRYPTO. (2009). Definition of cryptography. Retrieved 18 July 2009 from http://www.merriam-webster.com/dictionary/cryptography.

DB2. (2009). DB2 security model overview. Retrieved 18 July 2009 from http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.sec.doc/doc/c0021804.html.

Grimes, R. A. (2006). End-to-end encryption strategies. Infoworld. September 4, 2006. Page 31.

Marwitz, C. (2008). Database encryption solutions: protect your databases - and your company - from attacks and leaks. SQL Server Magazine. September 2008.

SQLS. (2009). Cryptography in SQL Server. Retrieved 18 July 2009 from http://technet.microsoft.com/en-us/library/cc837966.aspx.

TLS. (2009). Transport layer security. Retrieved 18 July 2009 from http://en.wikipedia.org/wiki/Transport_Layer_Security.

July 22, 2009 · 8 min · 1533 words · Jim Thario

Applicability of DoDAF in Documenting Business Enterprise Architectures

As of 2005, the Department of Defense employed over 3 million uniformed and civilian people and had a combined $400 billion fiscal budget (Coffee, 2005). The war-fighting arm of the government has had enormous buying power since the cold war, and the complexity of technologies used in military situations continues to increase. To make optimal use of its dollars and to reduce rework and delays in the delivery of complex solutions, the DoD needed to standardize the way providers described and documented their systems. The DoD also needed to promote and enhance the reuse of existing, proven architectures for new solutions. The Department of Defense Architecture Framework (DoDAF) is used to document architectures of systems used within the branches of the Department of Defense. “The DoDAF provides the guidance and rules for developing, representing, and understanding architectures based on a common denominator across DoD, Joint, and multinational boundaries.” (DoDAF1, 2007).

DoDAF has roots in other enterprise architecture frameworks such as the Zachman Framework for Information Systems Architecture (Zachman, 1987) and Scott Bernard’s EA-cubed framework (Bernard, 2005). Zachman’s and Bernard’s architecture frameworks have been widely adopted by business organizations to document IT architectures and corporate information enterprises. Private sector businesses supplying solutions to the DoD must use the DoDAF to document the architectures of those systems. These suppliers may not be applying concepts of enterprise architecture to their own business, or they may be applying a different framework internally with an established history of use in the business IT sector. The rigor defined in DoDAF version 1.5 is intended for documenting war fighting and business architectures within the Department of Defense. The comprehensive nature of DoDAF, including the required views, strategic guidance, and data exchange format, also makes it applicable to business environments. For those organizations in the private sector that must use the DoDAF to document their deliverables to the DoD, it makes sense to approach adoption of DoDAF in a holistic manner and extend its use into their own organization if they intend to adopt any enterprise architecture framework for this purpose.

The Department of Defense Architecture Framework is the successor to C4ISR. “The Command, Control, Communications, Computers, and Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture Framework v1.0 was created in response to the passage of the Clinger-Cohen Act and addressed in the 1995 Deputy Secretary of Defense directive that a DoD-wide effort be undertaken to define and develop a better means and process for ensuring that C4ISR capabilities were interoperable and met the needs of the war fighter.” (DoDAF1, 2007). In October 2003, DoDAF Version 1.0 was released and replaced the C4ISR framework. Version 1.5 of DoDAF was released in April of 2007. DoDAF solves several problems with the acquisition and ongoing operations of branches within the Department of Defense. Primarily, it serves to reduce the amount of misinterpretation in both directions of communication between system suppliers outside the DoD and consumers within the DoD. The DoDAF defines a common language in the form of architectural views for evaluating the same solution from multiple vendors.
The framework is regularly refined through committee and supports the notion of top-down architecture that is driven from a conceptual viewpoint down to the technical implementation.

Version 1.5 of DoDAF includes transitional improvements to support the DoD’s Net-Centric vision. “[Net-Centric Warfare] focuses on generating combat power from the effective linking or networking of the war fighting enterprise, and making essential information available to authenticated, authorized users when and where they need it.” (DoDAF1, 2007). The Net-Centric Warfare initiative defines simple guidance within DoDAF 1.5 to support the vision of the initiative and to guide the qualities of the architecture under proposal. The guidance provided within DoDAF includes a shift toward a Service-Oriented Architecture, which we often read about in relation to the business sector. It also encourages architectures to accommodate unexpected but authorized users of the system. This is related to scaling the solution and to loose coupling of the system components used in communication of data. Finally, the Net-Centric guidance encourages the use of open standards and protocols such as established vocabularies, taxonomies of data, and data interchange standards. These capabilities will help promote integrating systems into larger, more information-intensive solutions. As this paper is written, Version 2.0 of DoDAF is being developed. There is currently no timeline defined for release.

DoDAF defines a layered set of views of a system architecture. The views progress from conceptual to technical. Additionally, a standards view containing process, technical, and quality requirements constrains the system being described. The topmost level of view is the All Views. This view contains the AV-1 product description and the AV-2 integrated dictionary. AV-1 can be thought of as the executive summary of the system’s architecture. It is the strategic plan that defines the problem space and vision for the solution. The AV-2 is the project glossary. It is refined throughout the life of the system as terminology is enhanced or expanded. The next level of view is the Operational Views. This level can be thought of as the business and data layer of the DoDAF framework. The artifacts captured within this view include process descriptions, data models, state transition diagrams of significant elements, and inter-component dependencies. Data interchange requirements and capabilities are defined within this view. Example artifacts from the operational view include the High-Level Operational Concept Graphic (OV-1), Operational Node Connectivity Description (OV-2), and Operational Activity Model (OV-5). The third level of views is the Systems and Services View. This view describes technical communications and data interchange capabilities. This level of the architecture is where network services (SOA) are documented. Physical technical aspects of the system are described in this level as well, including those components of the system that have a geographical requirement. Some artifacts from the Systems and Services View include the Systems/Services Interface Description (SV-1), Systems/Services Communications Description (SV-2), Systems/Services Data Exchange Matrix (SV-6), and Physical Schema (SV-11).

DoDAF shares many of the beneficial qualities of other IT and enterprise architecture frameworks. A unique strength of DoDAF is the requirement of a glossary as a top-level artifact in describing the architecture of a system (RATL1, 2006).
Almost in tandem with trends in the business IT environment toward Service-Oriented Architectures, DoDAF 1.5 has shifted more focus to a data-centric approach and network presence in the Net-Centric Warfare initiative. This shift is motivated by the need to share operational information with internal and external participants who are actors in the system. It is also motivated by the desire to assemble and reuse larger systems-level components to build more complex war fighting solutions. As with other frameworks, DoDAF’s primary strength is in the prescription of a common set of views to compare capabilities of similar systems. The views enable objective comparisons between two different systems that intend to provide the same solution. The views enable faster understanding and integration of systems delivered from provider to consumer. The views also allow for cataloging and assembling potentially compatible systems into new solutions perhaps unforeseen by the original provider. The DoDAF views can reduce deployment costs and lower the possibility of reinventing the same system due to lack of awareness of existing solutions. A final unique strength of DoDAF is that it defines a format for data exchange between the repositories and tools used in manipulating the architectural artifacts. The DoDAF specification (DoDAF2, 2007) defines, with each view, the data interchange requirements and the format to be used when exporting the data into the common format. This inclusion in the framework supports the other strengths, most importantly automation of discovery and reuse of existing architectures.

Some weaknesses of DoDAF can be found when it is applied outside of its intended domain. Foremost, DoDAF was not designed as a holistic, all-encompassing enterprise architecture framework. DoDAF does not capture the business and technical architecture of the entire Department of Defense. Instead it captures the architectures of systems (process and technical) that support the operations and strategy of the DoD. This means there may be yet another level of enterprise view that relates the many DoDAF-documented systems within the DoD into a unified view of participating components. This is not a permanent limitation of the DoDAF itself, but a choice of initial direction and maximum impact in the early stages of its maturity. The focus of DoDAF today is to document architectures of complex systems that participate in the overall wartime and business operations of the Department of Defense. A final weakness of DoDAF is the lack of business-financial artifacts such as a business plan, investment plan and return-on-investment plan.

It is the author’s observation that the learning curve for Zachman is potentially smaller than that for DoDAF. Zachman’s basic IS architecture framework method is captured in a single paper of less than 30 pages, while the DoDAF specification spans several volumes and exceeds 300 pages. Zachman’s concept of a two-dimensional grid with cells for specific subjects of documentation and models is an easier introduction to enterprise architecture. It has historically been developed and applied in business information technology situations. Zachman’s experience in sales and marketing at IBM motivated him to develop a standardized IS documentation method. There are more commonalities than differences in the artifacts used in both the DoDAF and Zachman methods.
Zachman does not explicitly recommend a Concept of Operations Scenario, which is an abstract flow of events, a storyboard, or an artistic rendering of the problem space and desired outcome. This does not mean a CONOPS (Bernard, 2005) view could not be developed for a Zachman documentation effort. Business process modeling, use-case modeling, and state transition modeling are all part of the DoDAF, Zachman, and Bernard EA-cubed frameworks (Bernard, 2005).

The EA-cubed framework developed by Scott A. Bernard was heavily influenced by Zachman's Framework for Information Systems Architecture. Bernard scaled the grid idea to support enterprise architecture for multiple lines of business with more detail than was possible with a two-dimensional grid. The EA-cubed framework uses a grid similar to Zachman's with an additional dimension of depth. The extra dimension allows each line of business within the enterprise to have its own two-dimensional grid to document its business and IT architecture. Cross-cutting through the cube allows architects to identify components potentially common to all lines of business - a way to optimize cost and reduce redundant business processes and IT systems. The EA-cubed framework includes business-oriented artifacts for the business plan, investment case, ROI, and product impact of architecture development. As mentioned above, DoDAF does not include many business-specific artifacts, specifically those dealing with financials. Both Zachman and EA-cubed have more layers and recommended artifacts than DoDAF. As an example, EA-cubed has specific artifacts for the physical network level and for security as a crosscutting component. The Systems and Services View of DoDAF recommends a Physical Schema artifact to capture this information if needed. In the case of DoDAF, vendors may not know in advance the physical communication medium deployed with their system, such as satellite, microwave or wired networks. In these cases, the Net-Centric Warfare guidance within DoDAF encourages the support of open protocols and data representation standards.

DoDAF is not a good starting point for beginners to enterprise architecture concepts. The bulk of the specification's volumes can be intimidating to digest and understand without clear examples and case studies to reference. Searching for material on Zachman on the Internet, by contrast, produces volumes of information, case studies, extensions and tutorials on the topic. DoDAF was not designed as a business enterprise architecture framework. The forces driving its development include standardizing the documentation of systems proposed or acquired through vendors, enabling reuse of existing, proven architectures, and reducing the time to deploy systems-of-systems built from cataloged systems already available. Many of the documentation artifacts that Zachman and EA-cubed include in their frameworks are also prescribed in DoDAF, with different formal names but essentially the same semantics. The framework recommends more conceptual-level artifacts than Zachman. This could be attributed to the number of stakeholders involved in deciding whether a solution meets the need. DoDAF includes a requirement for a glossary and provides architectural guidance with each view based on current DoD strategy. Much of the guidance provided in DoDAF is directly applicable to the business world. The Net-Centric Warfare strategy, which is discussed within the guidance, is similar to the Service-Oriented Architecture shift happening now in the private sector. 
The lack of business-strategic artifacts such as a business plan, an investment plan, and ROI estimates would force an organization to supplement the prescribed DoDAF artifacts with several of its own or with artifacts from another framework. The Department of Defense Architecture Framework was designed to assist in the acquisition of systems from suppliers. In its current level of refinement for use with large enterprises, DoDAF shows many similarities to Zachman's framework at a comparable point in its evolution. DoDAF could potentially benefit from an approach similar to Bernard's, in which the flat tabular view is scaled up with depth. An extension of DoDAF with a third dimension could be used to document the architectures of multiple lines of business within an enterprise with more detail than is possible with a single artifact set. With minor enhancements, DoDAF is a viable candidate for business enterprise architecture efforts.

References

Armour, F.J., Kaisler, S.H., Liu, S.Y. (1999). A Big-Picture Look at Enterprise Architectures. IT Professional, vol. 1, no. 1, pp. 35-42. Retrieved from http://doi.ieeecomputersociety.org/10.1109/6294.774792.

Bernard, S.A. (2005). An introduction to enterprise architecture (2nd ed.). Bloomington, IN: Author House.

Coffee, P. (2005). Mastering DODAF will reap dividends. eWeek, 22(1), 38-39. Retrieved August 3, 2008, from Academic Search Premier database.

Dizard, W. P. (2007). Taking a cue from Britain: Pentagon's tweaked data architecture adds views covering acquisition, strategy. Government Computer News, 26(11), p. 14. Retrieved August 2, 2008, from Academic OneFile via Gale: http://find.galegroup.com.dml.regis.edu/itx/start.do?prodId=AONE.

DoDAF1. (2007). DoD Architecture Framework Version 1.5, Volume I: Definitions and Guidelines. Retrieved 31 July 2008 from http://www.defenselink.mil/cio-nii/docs/DoDAF_Volume_I.pdf.

DoDAF2. (2007). DoD Architecture Framework Version 1.5, Volume II: Product Descriptions. Retrieved 31 July 2008 from http://www.defenselink.mil/cio-nii/docs/DoDAF_Volume_II.pdf.

IBM. (2006). An IBM Rational Approach to the Department of Defense Architecture Framework (DoDAF). Retrieved 2 August 2008 from ftp://ftp.software.ibm.com/software/rational/web/whitepapers/G507-1903-00_v5_LoRes.pdf.

Leist, S., Zellner, G. (2006). Evaluation of current architecture frameworks. In Proceedings of the 2006 ACM Symposium on Applied Computing (Dijon, France, April 23-27, 2006), SAC '06. ACM, New York, NY, 1546-1553. DOI: http://doi.acm.org/10.1145/1141277.1141635.

RATL1. (2006). An IBM Rational approach to the Department of Defense Architecture Framework (DoDAF), Part 1: Operational view. Retrieved 1 August 2008 from http://www.ibm.com/developerworks/rational/library/mar06/widney/.

RATL2. (2006). An IBM Rational approach to the Department of Defense Architecture Framework (DoDAF), Part 2: Systems view. Retrieved 1 August 2008 from http://www.ibm.com/developerworks/rational/library/apr06/widney/.

Zachman, J.A. (1987). A framework for information systems architecture. IBM Systems Journal, 26(3). Retrieved July 2008 from http://www.research.ibm.com/journal/sj/263/ibmsj2603E.pdf.

August 9, 2008 · 12 min · 2423 words · Jim Thario

Issues of Data Privacy in Overseas Outsourcing Arrangements

Outsourcing is a business concept that has received much attention in the new millennium. According to Dictionary.com (2008), the term outsourcing means to obtain goods or services from an outside source. The practice of outsourcing a portion of a business' work or material needs to an outside provider or subcontractor has been occurring for a long time. The information technology industry and outsourcing have been the focus of editorials and commentaries regarding the movement of technical jobs from the United States to overseas providers. The globalization of business through expanding voice and data communication has forged new international partnerships and has increased the amount of outsourcing happening today. Businesses in the U.S. and Europe spend billions in outsourcing agreements with overseas service providers. According to Sharma (2008), spending on outsourcing in the European Union is almost 150 billion (GBP) in 2008. The overriding goal in outsourcing work to a local or overseas provider is to reduce the operating cost of a particular part of the business. Many countries, such as India and China, have lower wages, and businesses in the U.S. and Europe can save money by hiring an overseas contractor to perform a portion of their work. Outsourcing is gaining popularity in the information age by assisting information technology companies in performing some of their business tasks. This can include data processing as well as call routing and handling.

With the growth of the technology industry also comes the problem of maintaining and protecting private information about individuals, such as medical history or financial data. Many jurisdictions, such as the United States and the European Union, have mandatory personal data privacy laws. These laws do not automatically carry over to the national laws of the country where the outsourcing service provider, or potentially the provider's subcontractors, is located. This paper discusses the issues of outsourcing work to an overseas provider when personal data is involved in the outsourced tasks. It presents several solutions to help manage the risk of data breaches caused by disparate laws in countries currently popular for information technology outsourcing.

The most common types of work outsourced to overseas service providers include bulk data processing, call center handling, and paralegal outsourcing. The last of these can include work such as legal research, contract and brief writing, and transcription. Outsourcing firms typically do not have a U.S. law license, which limits the extent of their involvement in legal work. The United States is expanding its national information protection laws. Two of the most prominent are the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLB). The U.S. Congress enacted HIPAA in 1996. It addresses the protection of health information that can be used to identify someone or disclose a medical condition. "The data privacy and security requirements of HIPAA apply to health plans, health care providers, and health care clearinghouses. Among its many requirements, HIPAA mandates the creation and distribution of privacy policies that explain how all individually identifiable health information is collected, used, and shared." (Klosek, 2005). The U.S. Congress enacted the GLB Act in 1999. 
The Financial Privacy Rule of the Act covers documenting and auditing the processes an organization uses to assure the privacy of information that can identify persons, as in HIPAA, along with private data about their finances. Both HIPAA and GLB require the organization to publish its information privacy policy and notify the consumer each time it changes. "[…] The GLB Act focuses upon privacy of the non-public information of individuals who are customers of financial institutions." (Klosek, 2005). The U.S. is not considered to be at the forefront of privacy protection laws, and many countries have no privacy protection laws at all for their citizens. The European Union is one of the strictest regions with respect to data privacy and to outsourcing work that handles private information. The privacy directive for the entire EU was passed in 1998. It specifies a minimum standard for all member countries to follow in handling private personal data and transferring it between companies inside and outside of the European Union. "The EU privacy directive 1998 aims to protect the privacy of citizens when their personal data is being processed. […] One of the provisions of this directive […] addresses the transfer of personal data to any country outside of the EU." (Balaji, 2005). In most cases, European companies transferring personal data to an overseas outsourcing provider would need to assure that the contractor follows the EU rules for handling and processing the data. The EU is also in the process of pre-certifying certain countries for properly handling personal data according to the directive's standards.

Businesses in the Philippines have been providing outsourcing solutions for information technology businesses for over a decade. Estavillo (2006) states that the Philippine government has increased its focus on keeping the local outsourcing landscape fertile. It has created an optional certification program for local businesses based on the government's own guidelines for protecting information used in data processing and communications systems. The government hopes to continue expanding its reach into enforcing data protection by penalizing unlawful activities such as data breaches and unauthorized access to data-intensive systems. Recently, ISO started an international certification effort called ISO 27001. The purpose of the certification is to demonstrate that a company documents and follows information security practices and controls. Ely (2008) points out that an ISO 27001 audit is performed against processes of the outsourcing provider's own choosing, so the client must make sure the outsourcing firm follows, and deeply understands, the industry's best practices and the compliance guidelines of the client's home country. Often an overseas company will adopt HIPAA or Payment Card Industry (PCI) standards for handling personal data and be certified against that standard under ISO 27001. A company of any size can be certified under this standard, and there are no international restrictions regarding who may be certified.

Outsourcing work in the information technology industry almost always includes the access or transfer of data between the client organization and the outsourcing provider. Voice conversations and movement of data over an international connection can be subject to interception and monitoring by U.S. and foreign surveillance programs. 
Ramstack (2008) finds that "[…] paralegal firms in India are doing a booming business handling the routine legal work of American law firms, such as drafting contracts, writing patents, indexing documents or researching laws." A lawsuit filed in May of 2008 requests a hold on new legal outsourcing work until outsourcing companies can provide assurances that data transferred overseas can be protected against interception by U.S. and foreign intelligence collection agencies. The fear is that private legal information about citizens could be transferred from intelligence agencies to law enforcement agencies in the same or allied countries. The mix of international standards and laws offers little hope of legal action across borders when personal data is misused or illegally accessed. The flood of competition among overseas outsourcing companies does offer some hope, because reputation is extremely important when competing for sensitive outsourcing agreements. Once an outsourcing provider has been tainted by a bad reference for bulk data processing of foreign citizens' medical information, for example, the firm's financial upside will be limited until its reputation can be rebuilt.

Not all of the focus should be on the outsourcing provider. It is important for an organization to define and understand its own internal processes involving data privacy before beginning an outsourcing agreement. People within the business who work around and regularly handle private data should be included early in the process of defining the requirements for outsourcing information-related work. These contributors can include IT and business controls staff members and staff supporting the efforts of the CIO's office. A cross-company team should define the conditions required to work with private data regardless of the outsourcing group - local or overseas. They can also help define constraints placed on the outsourcing service provider. "Ensure that the contractual arrangement covers security and privacy obligations. Include language in the contract to articulate your expectations and stringent penalties for violations. Review your provider's organizational policies and awareness training for its employees." (Balaji, 2004).

Large outsourcing providers may choose to outsource their work to smaller companies in their local country. It is important to be able to control the primary outsourcing company's ability to subcontract work to other providers, or to require that the data handling standards in the contract carry over to all subcontractors who may become involved, with the original outsourcing provider bearing the risk. In that case, it is also important to have the outsourcing service provider identify in advance all or most of the subcontractors involved, so references can be obtained. It is equally important to define in the outsourcing contract what happens when the relationship terminates. The transition plan for the end of the outsourcing agreement must include a process for regaining control of data transferred from the customer organization to the outsourcing provider. There should be a way to return the data to the customer organization or to assure its destruction on the outsourcing provider's information systems. Although it has been a part of business for as long as there has been business, outsourcing in the information age brings with it new risks as well as opportunities for business cost optimization and scaling. 
Risks in outsourcing information services involving private data can be partially mitigated through a detailed contract and through outsourcing vendor transparency. The best way to ensure compliance with contractual terms is for the customer organization to understand its own data privacy standards and to hold all outsourcing arrangements to the same requirements it follows internally. The customer organization should also perform or obtain third-party audit reports of the outsourcing provider's processes and systems for ongoing reassurance that private data is being handled properly.

References

Balaji, S. (2004). Plan for data protection rules when moving IT work offshore. Computer Weekly, 30 November 2004, p. 26.

Ely, A. (2008). Show Up Data Sent Offshore. InformationWeek, Tech Tracker, 2 June 2008, p. 37.

Estavillo, M. E., Alave, K. L. (2006). Trade department prods outsourcing services to improve data security. BusinessWorld, 9 August 2006, p. S1/1.

Klosek, J. (2005). Data privacy and security are a significant part of the outsourcing equation. Intellectual Property & Technology Law Journal, 17(6), June 2005, p. 15.

Outsourcing. (n.d.). Dictionary.com Unabridged. Retrieved June 23, 2008, from Dictionary.com website: http://dictionary.reference.com/browse/outsourcing.

Ramstack, T. (2008). Legal outsourcing suit spotlights surveillance fears. The Washington Times, 31 May 2008, p. 1, A01.

Sharma, A. (2008). Mind your own business. Accountancy Age, 14 February 2008, p. 18.

June 28, 2008 · 9 min · 1754 words · Jim Thario

Research Essay on Signaling System 7

This research paper describes a telecommunications standard called Signaling System 7 (SS7). This technology defines a signaling system for control and routing of voice calls between telephone switches and switching locations. SS7 uses out-of-band signaling to place and control calls. It replaces an older system of in-band signaling to control telephone equipment. In-band signaling means the audio channel is used as a control channel for telephone switches. Operators would use tones over the audio channel to connect switches and open paths to the call destination. The use of out-of-band signaling means that control of creating an audio path through telephone switches is performed through a separate data channel that connects the switches together. The caller does not have access to this signaling channel, as they do for in-band signaling. SS7 can also carry data to switching locations about the calls they route. This data can include information for purposes of billing network time back to the call’s originating network and the caller’s account. “Signaling System 7 (SS7) is a set of telephony signaling protocols that are used to set up and route a majority of the world’s land line and mobile public switched telephone network (PSTN) telephone calls.” (Ulasien, 2007). SS7 provides more efficiency and reliability for call handling than in-band signaling. SS7 controlled calls can verify that the audio path for a call is ready to initiate, for example, and not create the audio path until the call is answered at the other end. Another example is if the destination phone number returns a busy signal, no audio path needs to be established and the switch directly connected to the caller can generate the busy sound. The strategy of delaying the creation of the audio path until the last moment prevents wasted bandwidth within the switching infrastructure. This scenario would not be possible with in-band signaling, since in-band signaling depends on having an audio path established prior to anyone answering the other end of the call. SS7 allows the creation of innovative customer features and the use of rules-based capabilities for call routing that were previously impossible with in-band signaling technology. Signaling System 7 began development in the 1970s and saw wide deployment beginning in the early 1990s. The technology research and development was sponsored by AT&T and originally named the Common Channel Signaling System (CCSS). AT&T proposed it to the International Telecommunications Union as a standard beginning in 1975. SS7 was issued as a standard in 1980 and has been refined three times since. The ITU Telecommunications Standardization Sector (ITU-TS) develops global SS7 standards. The ITU allows different countries or organizations to make their own refinements and extensions to the global SS7 standard. The American National Standards Institute (ANSI) and Bellcore define a regional SS7 standard for North America and Regional Bell Operating Companies (RBOCs). Before the adoption of Signaling System 7, the only path between telephone switches was the audio channel. Telephone operators would use in-band signaling to set up long distance calls, or route international calls over cable or satellite using touch-tones. Maintenance crews would put telephone switches into special modes using sequences of tones to turn off accounting or allow operations a normal user would not be able to perform. In-band signaling is not just used to control telephone switches. 
We encounter in-band signaling often through the use of telephone-based services from vendors. Call routing through most of today's large corporate phone systems requires extensive use of the touch-tone keypad. Most voicemail systems require us to enter our personal identification numbers using tones to access messages. Your bank might provide a system to check your balances or transfer money through a phone-based system that uses touch-tones to enter your account information and direct your choices. In-band signaling works well for low-bandwidth situations, such as entering an account code or choosing a menu item. Routing instructions to telephone switches, however, can result in a complex series of tones representing access codes and phone numbers. Although in-band signaling is useful for vendors providing self-service capabilities to customers, its use in mission-critical systems, such as unprotected telephone switching networks, has been exploited. Exposure of the signaling channel meant that callers would sometimes discover and record the in-band signaling tones used to route calls and control switches. Sometimes the audio signals were discovered completely by accident. During the 1970s and 1980s, people such as John Draper (Captain Crunch) were known for their little home-built boxes that could connect to telephone jacks and send sequences of tones to obtain free long distance calls. These were known as black boxes or blue boxes. A whistle that came as a prize in his cereal inspired John Draper's blue box creation. "The box blasted a 2600-Hz tone after a call had been placed. That emulated the signal the line recognized to mean that it was idle, so it would then wait for routing instructions. The phreaker would put a key pulse (KP) and a start (ST) tone on either end of the number being called; this compromised the routing instructions, and the call could be routed and billed as a toll-free call. Being able to access the special line was the basic equivalent to having root access into Bell Telephone." (Cross, 2007).

Signaling System 7 moves the signaling channel out of the audio channel, so it is no longer accessible to the parties participating in the call. SS7 specifies that telephone switches connect to each other using a dedicated digital network used only for signaling and managing calls. The signaling network among switches is similar to a traditional computer network. It can be designed for redundancy and does not need to take the same physical path as the voice data paths. In addition to relocating the signaling channel, the protocol allows for the creation of new and innovative features related to how calls are controlled and routed through the network. The Intelligent Network is a telecommunications industry term described by Zeichick (1998) as meaning more reliance on digital technologies, more contextual information about calls in addition to the voice data, and more control provided to end users over how their telephone experience works. Caller ID works, for example, because the originating caller information is passed from switch to switch through the signaling channels. As mobile phone callers move around, the SS7 signaling protocol helps switches find the proper route for calls to a person's phone; the destination switch for a mobile phone moving in a train or automobile can change quickly. Call routing between switches is optimized with SS7's definition of shared databases that are accessed through the signaling network. 
The databases contain rules about how calls should be routed to their destination. Switches on an SS7 network can query shared databases to find out which provider owns a phone number and how to route the call to that number. The databases can also contain feature-specific information. This aspect of the SS7 implementation has been characterized as client-server, meaning the switches act as clients to the shared databases that hold rules and other information for managing calls. "SS7 links the telephone system with a client-server computer architecture to create a distributed, efficient and easily modified telephone infrastructure. The computers use information from common databases to control call switching and to allow the transfer of messages within the system." (Krasner, 1997).

New technologies are testing the longevity of the Signaling System 7 protocol. Packet-switched voice over IP is causing some disruption in the SS7 space. However, there is more emphasis on integration and signaling gateways than on replacing existing SS7 infrastructure with something more recent. Session Initiation Protocol (SIP) is a signaling protocol for controlling audio and video connections over Internet Protocol networks. It can be implemented in hardware or software. SIP can be used for voice, video conferencing, instant messaging, and other types of streaming multimedia. H.323 is another streaming multimedia signaling protocol used for audio and video over Internet packet networks. Microsoft's NetMeeting application uses H.323 as its protocol to connect NetMeeting nodes together in a wide-area conference. H.323 is also a recommendation by the ITU-TS.

The business value of SS7 is that it provides opportunities for security, efficiency and optimization of call routing, and it provides the foundation to build innovative call handling features using contextual information about calls and shared databases. It is a standards-based protocol and has been used by the world's established telecommunications providers for over a decade. The protocol defines the means by which telephone switches exchange call routing and feature information - it does not assume voice data is carried on any particular medium as calls are transferred through the system. This simple abstraction allows SS7 to work with new technologies as they arrive in the mainstream. It is possible for SS7 to work within a mixed-technology environment including circuit-switched and packet-switched data networks. Ulasien (2007) says that the extensibility of SS7 allows an organization to migrate incrementally from circuit-switched to packet-switched calls. The voice network is turning into the streaming media network, and SS7 will continue to be tested in its role of connection maker and gateway to more recent communication technologies such as VOIP and video conferencing.

References

Cross, Michael. (2007). Developer's Guide to Web Application Security. Syngress Publishing, 2007. ISBN 9781597490610.

Hewett, Jeff. (1996). Signaling System 7: the mystery of instant worldwide telephony is exposed. Electronics Now, 67(4), April 1996, pp. 29(7).

Krasner, J. L., Hughes, P. & Klapfish, M. (1997). SS7 in transition. Telephony, 233(14), October 6, 1997, pp. 54(4).

Ulasien, Paul. (2007). Signaling System 7 (SS7) Market Trends. Faulkner Information Services, Document 00011475, July 2007.

Zeichick, Alan. (1998). Lesson 125: Signaling System 7. Network Magazine, December 1, 1998.

May 30, 2008 · 8 min · 1612 words · Jim Thario

m0n0wall traffic shaping

In this article I will discuss my configuration for traffic shaping using m0n0wall. My goals for traffic shaping are to give priority to VOIP traffic leaving my network and to limit the combined incoming traffic speed destined for my servers. My assumptions are that you know how to configure your LAN and WAN networks in m0n0wall, you have NAT configured for your outbound LAN network traffic, and you are using the DHCP server for your LAN. The following image shows my LAN network configuration.

The DHCP server for my LAN network is configured to offer addresses from 192.168.85.100-192.168.85.199. I can't ever imagine having more than 100 clients on my network. I use the addresses below .100 for static assignments on my LAN. My three servers are configured for static addresses on the LAN - they do not use DHCP. In addition to the three servers, the wireless access points are configured for static LAN addresses and the VOIP telephone adapter uses a fixed DHCP LAN address. I use inbound NAT for my Internet services to redirect HTTP, HTTPS and SMTP from the public firewall IP address to the desired server on the LAN. The following image shows the inbound NAT configuration. You will see that HTTP and HTTPS are redirected to one server and SMTP is redirected to another server. In addition to these rules, m0n0wall will add rules to the firewall to allow this traffic to pass.

The VOIP telephone adapter uses DHCP by default, and I wanted to maintain the provider's default configuration for the device. My strategy was to determine the network MAC address of the VOIP device and set the m0n0wall DHCP server to always offer the device the same LAN IP address. The following image shows the settings for the m0n0wall DHCP server for the VOIP adapter. With this configuration, I can now create rules in the traffic shaper to manage inbound and outbound traffic speed based on the LAN IP address.

The first task is to define the pipes that will control inbound and outbound traffic. I have two pipes defined - one for all outbound traffic and one for inbound server traffic. I was able to verify my outbound Internet speed at about 1.5 Mbit. I subtracted about 6% from that and came up with 1434 Kbit. I talk about why you should do this in a previous article. The basic idea is that you only want packets to queue in your m0n0wall, and you want to prevent packets from queuing in your ISP router or any other device before they leave your location. The only way to be sure is to throttle down your outbound speed by a few percent. Your connection may need more or less, and you should experiment and re-test your settings once or twice a year. The second pipe is used to limit the maximum speed of incoming data to the servers. I want to limit the combined inbound traffic to all three servers to about 1 Mbit. The traffic that would pass through this pipe includes incoming mail delivery and incoming requests to the web server. This pipe will not impact web server responses, i.e. page content returned. Mail delivery between servers on the Internet happens asynchronously, so client workstations will not care whether a message delivery takes 1 second or 15 seconds to occur. Client workstations interact with the servers over the local network, so they will not feel any of the shaping.

The strategy for outbound traffic is to give top priority to VOIP, second priority to workstations, and last priority to outbound server traffic. 
To accomplish this I need three queues in the m0n0wall traffic shaper section. The three queues correspond to the three outbound priorities previously mentioned. The first queue is for VOIP and has a weight of 50. The second queue is for workstation traffic and has a weight of 40. The last queue is for outbound server traffic and has a weight of 10. The total weight of the three queues adds up to 100, and the weights are purely relative. All three queues are connected to the outbound 1434 Kbit pipe. If there is no outbound VOIP and workstation traffic, the server queue with the weight of 10 will get the entire 1434 Kbit outbound pipe. See the following image for the queues.

The reality is that the VOIP traffic only takes about 100 Kbit of the outbound bandwidth when in use. Even though the weight of the high priority queue is set to 50, it will never use 50% of the 1434 Kbit outbound pipe; the weight simply guarantees that the VOIP service will get all the outbound bandwidth it needs. The final piece of the traffic shaping strategy is the rules that place outbound packets in a specific queue, or place inbound server traffic into the server pipe. Inbound VOIP and workstation traffic does not get shaped. The rules I use are based on traffic leaving a specific interface. Traffic leaving the WAN interface is traffic sent out to the Internet. Traffic leaving the LAN interface is traffic received from the Internet. With that, see the following image.

The first five rules are for outbound traffic destined for the Internet. Rule 1 places outbound VOIP traffic in the queue with weight 50. Rules 2-4 place outbound server traffic in the queue with weight 10. Rule 5 is a catch-all and places all other outbound traffic in the medium priority queue with weight 40. Rules 6-8 are for traffic leaving the LAN interface, in other words, inbound traffic from the Internet. These rules place traffic destined for my three servers into the 1 Mbit inbound pipe and constrain the combined inbound traffic to these servers to 1 Mbit. Only the inbound server traffic is shaped. With these pipes, queues and rules, I've accomplished my goal - VOIP traffic leaves first, workstation traffic leaves second, server traffic leaves last, and inbound server traffic is limited to 1 Mbit.

How can I tell if these rules are working? m0n0wall has a status.php page where you can see the byte and packet counts for these rules. To see these statistics, sign in to your m0n0wall web console and add status.php to the address in the browser. The page you will see is just a textual dump of various internal statistics. The statistic you want to review is the ipfw show listing. The following image shows the statistics for my traffic shaper rules.

In this image you can see the queue and pipe rules with their packet and byte counts. Take note of the out via dc0 and out via dc1 parts of the rules, which refer to my WAN and LAN network adapters. The first two rules and the very last rule are automatically added by the m0n0wall software. You can see the queue 1 rule for high priority outbound VOIP traffic, coming from a specific LAN address. The next three rules, for queue 3, are for low priority outbound server traffic, again based on LAN address. The queue 2 rule is the catch-all rule for outbound workstation traffic at medium priority. The next three rules are for inbound server traffic that is sent to the 1 Mbit pipe. All other inbound traffic is not shaped and matches the last rule. 
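If you prefer to see the shaper configuration as text rather than screenshots, the sketch below shows roughly equivalent raw ipfw/dummynet commands for the pipes, queues and rules described above. m0n0wall generates its own rule set from the web GUI, so treat this only as an illustration: the rule numbers and the server and VOIP addresses are made-up examples, and dc0/dc1 stand in for my WAN and LAN interfaces.

# Sketch only - not the exact rules m0n0wall emits. Addresses and rule
# numbers are hypothetical; dc0 = WAN, dc1 = LAN.

# Pipes: 1434 Kbit/s for all outbound traffic, 1 Mbit/s for inbound server traffic.
ipfw pipe 1 config bw 1434Kbit/s
ipfw pipe 2 config bw 1Mbit/s

# Queues sharing the outbound pipe: VOIP weight 50, workstations 40, servers 10.
ipfw queue 1 config pipe 1 weight 50
ipfw queue 2 config pipe 1 weight 40
ipfw queue 3 config pipe 1 weight 10

# Outbound rules (traffic leaving the WAN interface).
ipfw add 1000 queue 1 ip from 192.168.85.20 to any out via dc0   # VOIP adapter (example address)
ipfw add 1010 queue 3 ip from 192.168.85.10 to any out via dc0   # server 1 (example address)
ipfw add 1020 queue 3 ip from 192.168.85.11 to any out via dc0   # server 2
ipfw add 1030 queue 3 ip from 192.168.85.12 to any out via dc0   # server 3
ipfw add 1040 queue 2 ip from any to any out via dc0             # catch-all: workstation traffic

# Inbound server rules (traffic leaving the LAN interface toward the servers).
ipfw add 1100 pipe 2 ip from any to 192.168.85.10 out via dc1
ipfw add 1110 pipe 2 ip from any to 192.168.85.11 out via dc1
ipfw add 1120 pipe 2 ip from any to 192.168.85.12 out via dc1

Running ipfw show, as the status.php page does, lists rules like these along with their packet and byte counters.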

March 4, 2008 · 6 min · 1231 words · Jim Thario

m0n0wall hardware and bootstrap

In this article I will discuss the hardware used in my home-brewed firewall and what I did to bootstrap the firewall with the m0n0wall software image. My m0n0wall firewall is based on an older Dell Dimension V400. To get an idea of the machine's age, the first photo shows the original stickers promoting the Pentium II, Windows NT and Windows 98.

This machine has a 400 MHz processor and 128 MB of RAM. I removed the hard disk and disconnected the floppy drive. The older CD-ROM drive was replaced with a spare Sony CD-RW; the tray on the original CD-ROM had started to make grinding noises and stopped opening when the button was pressed. The machine started with one network adapter and I added two more Linksys LNE-100 PCI adapters. You can see all three 100 Mb PCI network adapters in the following photo.

The most educational part of the project for me was the installation of the compact flash IDE adapter and memory card. This device plugs directly into the IDE cable connector on the motherboard and can be used in place of a hard disk. A compact flash device won't suffer a head crash or any other type of physical damage associated with a moving, mechanical hard disk. I wanted to eliminate the primary causes of a firewall crash, so it was this approach or a pair of mirrored hard disks. The memory card solution was much less expensive and provided me with some experience in case I wanted to move to a Soekris or LogicSupply solid-state PC later. I used a compact flash IDE adapter from StarTech, model IDE2CFINT. You can find them for less than $20; I bought mine from Amazon with a 2 GB memory card. StarTech's site has several good close-up images of the adapter. In the following photo, you can see the compact flash IDE adapter plugged into the PC's motherboard IDE cable connector. Along the right side of the compact flash IDE adapter is the memory card, which is plugged into a pin header. Above the memory card is a floppy drive power cable. The power for the adapter can come from the motherboard or from a floppy drive power cable; there is a jumper on the adapter to specify the source of power. I set it up this way initially, and it worked, so I left it.

This machine has two IDE channels; the first is used by the compact flash IDE adapter and the second by the CD-RW drive. You can see in the blurry background of the above photo the CD-RW cable connected to the motherboard's second IDE channel below the compact flash IDE adapter. The cable comes up to the left of the compact flash IDE adapter and continues up to the CD-RW. The next step was to power up the machine and see what the Dell's BIOS thought of these hardware changes. After I powered on the machine and entered the setup screen, the BIOS automatically detected the compact flash IDE adapter and memory card as a 2 GB hard disk. It also recognized the Sony CD-RW. That's it! Save settings and exit.

The next interesting task was to write the m0n0wall software image to the memory card in the PC. I have only the one compact flash IDE adapter, so my approach to loading the software was somewhat improvisational, based on the machine I was using and the resources I had available the evening I decided to take this on. To load the m0n0wall software, I had to boot the machine with an operating system from the CD-RW and then transfer the m0n0wall image directly from some media to the compact flash IDE adapter. 
I decided the easy approach would be to boot a FreeBSD or Linux installation disk, enter a rescue mode and get to a command prompt where I would have the basic tools available. For example, the CentOS 5.1 rescue mode on disk 1 has the dd and gunzip utilities I need to write the m0n0wall software image to the memory card. What media would I get the m0n0wall software image from? At this point it was sitting on my PowerBook's file system after downloading it from the m0n0wall web site. The Dell PC I am using for the firewall has two USB connectors on the back. Since I didn't want to create a custom boot CD, I decided to boot from the CentOS 5.1 disk in the CD-RW and use a USB memory stick with a FAT file system to hold the m0n0wall software image file.

I formatted a USB memory stick with a FAT file system and simply copied the m0n0wall generic PC image to it. I plugged the USB memory stick into the back of the Dell and booted the CentOS 5.1 disk 1 from the CD-RW. I selected the rescue mode and made my way to a Bash command prompt after a couple of questions. Once at the command line, I used the dmesg command to see if the kernel had recognized the USB memory stick during boot and whether it had been assigned a device name. The kernel did find it and created it as a pseudo-SCSI device. The next step was to mount the FAT file system of the USB stick into the rescue file system. The root of the CentOS rescue file system is a RAM disk, so this was no problem. I created a directory called /tmp/usb and mounted the USB device there. I could now see the m0n0wall image file. Section 3.2.2 of the m0n0wall handbook provides the basic template for the dd command in Linux to write the image to the memory card. I needed to take note of the different device names and the location of the file containing the m0n0wall image:

gunzip -c /tmp/usb/generic-pc-1.2XX.img | dd of=/dev/hdX bs=16k

This took just a few seconds to complete. During the transfer of data, I could see the activity light on the StarTech IDE2CFINT flickering, so I knew something was really happening. I got a prompt back and a summary from dd of how much data was written. I pulled the CentOS disk from the CD-RW, removed the USB stick from the back of the PC, and rebooted. I watched the Dell POST complete and soon after saw the familiar spinning cursor of the FreeBSD boot loader, followed by kernel messages, and finally the m0n0wall console menu. The Dell PC had booted from a compact flash memory card and m0n0wall was ready to be configured. 
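For reference, here is the whole rescue-shell session condensed into one sketch. The USB stick's device name (/dev/sda1 here) and the target disk (/dev/hdX) are assumptions and placeholders - check the dmesg output on your own machine before running dd, since writing to the wrong device will destroy its contents.

# Condensed sketch of the steps above, run from the CentOS 5.1 rescue shell.
# Device names and the image version are placeholders - verify with dmesg first.
dmesg | less                        # identify the USB stick (e.g. /dev/sda1) and the CF card (e.g. /dev/hda)
mkdir /tmp/usb
mount -t vfat /dev/sda1 /tmp/usb    # mount the FAT-formatted USB stick (assumed device name)
ls /tmp/usb                         # confirm the m0n0wall image file is present
gunzip -c /tmp/usb/generic-pc-1.2XX.img | dd of=/dev/hdX bs=16k
sync                                # flush buffers before rebooting
umount /tmp/usb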

March 2, 2008 · 6 min · 1102 words · Jim Thario