AT&T's HSPA in Denver is Fast

The following measurement was taken today with Speakeasy's speed test page while connected to the Internet over AT&T's HSPA network in Denver. My system was a notebook PC running Windows 7 with a Mercury USB adapter. I am about a mile outside of the downtown area and the speed is good. The screen snip below represents the best of about five runs while I tried to time the image capture. Each run had inbound speed of 3.5 Mbps or faster. I also believe running this test early on Saturday morning helped avoid some network congestion and gave me favorable results. At about 11 AM, I checked the speeds again, this time with speedtest.net. Here is the general location for these results. My testing conditions are a stationary location in a second-floor office of a 100+ year old brick structure. ...

June 26, 2010 · 1 min · 146 words · Jim Thario

XML's Role in Creating and Solving Information Security Problems

XML provides a means to communicate data across networks and among heterogeneous applications. XML is a common information technology acronym in 2010 and is supported by a large variety of applications and software development tooling. XML's wide adoption into many technologies means it is likely being used in places not originally imagined by its designers. The resulting potential for misuse, erroneous configuration or lack of awareness of basic security issues is compounded by the speed and ease with which XML can be incorporated into new software systems. This paper presents a survey of the security and privacy issues related to XML technology use and deployment in an information technology system.

The XML Working Group was established in 1996 by the W3C and was originally named the SGML Editorial Review Board (Eastlake, 2002). Today XML has ten working groups, focused on areas including the core specifications, namespaces, scripting, queries and schema, and service modeling. XML is a descendant of SGML, and allows the creation of entirely new, domain-specific vocabularies of elements, organized in hierarchical tree structures called documents. XML elements can represent anything related to data or behavior. An XML document can represent a customer's contact information. It can represent a strategy to format information on a printer or screen. It can represent musical notes in a symphony. XML is being used today for a variety of purposes, including business-to-business and business-to-consumer interactions. It is used for the migration of data from legacy repositories to modern database management systems. XML is used in the syndication of news and literary content, as in the application of Atom and RSS feeds by web sites.

The flexibility and potential of XML use in information technology received increasing attention when web services technology was introduced. Web services communicate using XML. They can be queried by client programs to learn their methods, parameters and return data. They are self-describing, which means the approach of security-through-obscurity cannot apply if a web service is discovered running on a publicly accessible server. An attacker can ask the service for its method signatures and it will respond with specifications of how to invoke it. This does not mean the attacker will have the necessary information, such as an authentication credential or special element of data, to gain access to the web service.

Treese (2002) summarizes the primary security concerns involved with deploying any communications system that must transmit and receive sensitive data:

- Confidentiality, to ensure that only the sender and receiver can read the message
- Authentication, to identify the sender and receiver of a message
- Integrity, to ensure that the message has not been tampered with
- Non-repudiation, to ensure that the sender cannot later deny having sent a message
- Authorization, to ensure that only "the right people" are able to read a message
- Key management, to ensure proper creation, storage, use, and destruction of sensitive cryptographic keys

Web services are a recent technology, but fall prey to attacks similar to those used against past and current Internet technologies. Web services are vulnerable to many of the same attacks as browser-based applications according to Goodin (2006). Parsing and validation of data provided inside a transmitted XML document must be performed regardless of the source of the transmission.
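As a hedged illustration of that last point, the sketch below shows one way a JAXP DOM parser might be locked down before it touches untrusted input, refusing DOCTYPE declarations and external entity resolution. The feature URIs are the Xerces-style strings commonly honored by the JDK's default parser; whether each one is supported depends on the parser implementation in use.

```java
import java.io.File;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class HardenedParser {
    /** Parse an untrusted XML document with entity processing disabled. */
    public static Document parseUntrusted(File input) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Refuse any DOCTYPE declaration outright; this blocks most entity-based attacks.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and suspenders: do not resolve external general or parameter entities.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        // Avoid XInclude processing and entity reference expansion as well.
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);

        DocumentBuilder builder = dbf.newDocumentBuilder();
        return builder.parse(input);
    }
}
```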
DTDs or XML schemas that are not strict enough in their matching constraints can leave an open path for parsing attacks. Goodin (2006) details the primary attacks on web services as:

- Injection attacks that use XML to hide malicious content, such as using character encoding to hide the content of strings
- Buffer overflow vulnerabilities in SOAP and XML parsers running on the system providing the web service
- XML entity attacks, where input references an invalid external file such as a CSS or schema, causing the parser or another part of the application to crash in unexpected ways

Lawton (2007) details a similar problem with AJAX technology. AJAX stands for Asynchronous JavaScript and XML. It is not so much a specific technology as a technique to reduce the number of whole page loads performed by a browser. An AJAX-enabled application can update portions of a browser page with data from a server. The data transmitted between browser and server in an AJAX communication is formatted in XML. The server side of an AJAX application can be vulnerable to the same attacks described for web services above: overflows, injection of encoded data, and invalid documents. Goodin (2006) recommends that IT staff periodically scan the publicly facing systems of an enterprise for undocumented web services, and scan known web services and applications with analysis tools such as Rational AppScan (2009). Lawton (2007) also recommends the use of vulnerability scanners for source code and deployed systems.

A common mistake made even today in the deployment of web services or web applications is failing to use the HTTPS or TLS protocol to secure the transmission of data between the client and server. All data transmitted across the Internet passes through an unknown number of routers and hosts before arriving at the destination. The format of an XML document makes it easy for eavesdroppers to identify and potentially capture a copy of this data as it passes through networking equipment. The easiest solution to this problem is to host the web service or web application over the HTTPS protocol. HTTPS is HTTP over SSL, which encrypts the data during transmission. HTTPS will not protect data before it leaves the source or after it arrives at the destination.

Long et al. (2003) discuss some of the challenges of bringing XML-encoded transactions to the financial services industry. Privacy is a primary concern for electronic financial transactions. Long states that simply using SSL to encrypt transmissions from system to system is not enough to satisfy the security needs of the financial sector. There is also a need to encrypt portions of an XML document differently, so that sensitive content has different visibility depending on the system or person accessing it. The XML Encryption Syntax and Processing standard allows any portion of an XML document, or the entire document, to be encrypted with a key and then placed within an XML document for transmission or storage. The encrypted document remains a well-formed XML document. Eastlake (2002) describes the Encryption Syntax Processing and Signature Syntax Processing recommendations for XML. Using the ESP recommendation, portions of the document can be encrypted with different keys, thus allowing different people or applications to read the portions of the document for which they have keys. This approach provides a form of multi-level security within a single XML document.
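To make the element-level idea concrete, here is a minimal sketch, not the full W3C XML Encryption syntax, of encrypting the content of a single sensitive element with a symmetric key using the standard JCE API. The element names, the plain AES transformation and the Base64 wrapping are illustrative assumptions; a production system would use the XML Encryption vocabulary and an authenticated cipher mode.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class ElementEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Sensitive content of one element in a larger document,
        // e.g. <AccountNumber>1234-5678</AccountNumber>.
        String sensitiveText = "1234-5678";

        // A symmetric key shared only with the parties allowed to read this element;
        // other elements in the same document could be encrypted under different keys.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey accountKey = keyGen.generateKey();

        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, accountKey);
        byte[] cipherBytes = cipher.doFinal(sensitiveText.getBytes(StandardCharsets.UTF_8));

        // Re-embed the ciphertext so the document stays well-formed XML.
        String encryptedElement = "<EncryptedData Algorithm=\"AES\">"
                + Base64.getEncoder().encodeToString(cipherBytes)
                + "</EncryptedData>";
        System.out.println(encryptedElement);

        // A recipient holding accountKey reverses the process.
        Cipher decipher = Cipher.getInstance("AES");
        decipher.init(Cipher.DECRYPT_MODE, accountKey);
        System.out.println(new String(decipher.doFinal(cipherBytes), StandardCharsets.UTF_8));
    }
}
```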
With web services comes the problem of knowing which ones to trust and use. Even more difficult is the problem of giving that determination to a computer. Carminati, Ferrari and Hung (2005) describe the problem of automating the evaluation of web service privacy policies in today's world of data storage, cloud, banking, financial and multi-player gaming businesses that exist entirely on the Internet. They reason that systems discovered in web services directories may not operate with privacy policies compatible with those required by the consumer's organization or local laws. They propose three solutions for handling this problem. The first is basic access control from a third party that evaluates and quantifies the privacy policy for a service provider. The next is cryptography in the services directory, so that the consumer decodes only compatible services. The final solution is hash-based, and looks for flags supplied by the web service provider describing its support for specific aspects of privacy policy.

As with the problem of transmitting sensitive XML data across the Internet unencrypted, there is also a problem of authenticating the source of an XML document. How does a person or system verify the document's originator? The Signature Syntax Processing recommendation briefly mentioned above provides a method to enclose any number of elements in a digital signature (a code sketch follows below). This method uses public key cryptography to sign a portion of the document's data. The originator of the document provides a public key to the recipient through a secure channel (for example, on a flash drive) in advance of transmitting the data. The originator uses their secret key to sign the document data, which produces a new, smaller block of data called a digital signature. The signature is embedded in XML around the protected elements. The signature and the XML data are used by the recipient to determine if the data was changed in transmission. The signature is also used to verify the identity of the signer. Both authentication steps require the recipient to have the sender's public key.

The problem of securing documents through path-based access control was addressed early in XML's lifetime. Damiani et al. (2001) describe an access control mechanism specifically designed for XML documents. Their Access Control Processor for XML uses XPath to describe the target location within a schema for access, along with the rights associated with groups or specific users of the system. Additionally, Böttcher and Hartel (2009) describe the design of an auditing system to determine if confidential information was accessed directly or indirectly. They use a patient records system as an example scenario for their design. Their system is unique in that it can analyze "[…] the problem of whether the seen data is or is not sufficient to derive the disclosed secret information." The authors do not discuss whether their design is transportable to non-XML data sources, such as relational databases.

In 2010, we have technologies to use with XML in several combinations to secure document content during transmission and in long-term storage. The use of SSL together with the Encryption Syntax Processing and Signature Syntax Processing recommendations provides a rich foundation for creating secure XML applications. The maturity of web servers, the availability of code analyzers and the increasing sophistication of IT security tools decrease the risk of infrastructure falling to an XML-centric attack.
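As a hedged sketch of the Signature Syntax Processing approach described above, the standard Java XML Digital Signature API (JSR 105) can attach an enveloped signature to a DOM document. The document contents, key size and the RSA-SHA256 algorithm URI below are illustrative choices, not details taken from the article.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;

import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.keyinfo.KeyValue;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class EnvelopedSignatureSketch {
    static void sign(Document doc, KeyPair keyPair) throws Exception {
        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");

        // Reference the whole document ("") and apply the enveloped-signature transform
        // so the <Signature> element itself is excluded from the digest.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(
                        fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);

        SignedInfo signedInfo = fac.newSignedInfo(
                fac.newCanonicalizationMethod(
                        CanonicalizationMethod.INCLUSIVE, (C14NMethodParameterSpec) null),
                fac.newSignatureMethod("http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));

        // Publish the signer's public key inside the signature's KeyInfo element.
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyValue keyValue = kif.newKeyValue(keyPair.getPublic());
        KeyInfo keyInfo = kif.newKeyInfo(Collections.singletonList(keyValue));

        // Sign with the private key and append the <Signature> element to the document root.
        DOMSignContext signContext = new DOMSignContext(keyPair.getPrivate(), doc.getDocumentElement());
        fac.newXMLSignature(signedInfo, keyInfo).sign(signContext);
    }

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().newDocument();
        doc.appendChild(doc.createElement("Order"));

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        sign(doc, kpg.generateKeyPair());
        System.out.println("Signed root element: " + doc.getDocumentElement().getNodeName());
    }
}
```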
With the technical problems of securing XML addressed through various W3C recommendations, code libraries and tools, the new security issue in XML-related technologies becomes one of education, precedent in use and organizational standards for applying them. This is a recurring problem with many disruptive technologies: awareness. Goodin (2006) says, "[…] the security of web services depends on an increased awareness of the developers who create them, and that will require a major shift in thinking." XML has introduced, and solved, many of its own security problems through application of its own technology. It now becomes important for the industry to document and share the experiences and practices of deploying secure XML-based Internet applications using the technologies recommended by the W3C and elsewhere.

References

Böttcher, S., Hartel, R. (2009). Information disclosure by answers to XPath queries. Journal of Computer Security, 17 (2009), 69-99.

Carminati, B., Ferrari, E., Hung, P. C. K. (2005). Exploring privacy issues in web services discovery agencies. IEEE Security and Privacy, 2005, 14-21.

Damiani, E., Samarati, P., De Capitani di Vimercati, S., Paraboschi, S. (2001). Controlling access to XML documents. IEEE Internet Computing, November-December 2001, 18-28.

Eastlake, D. E. III, Niles, K. (2002). Secure XML: The New Syntax for Signatures and Encryption. Addison-Wesley Professional. July 19, 2002. ISBN-13: 978-0-201-75605-0.

Geer, D. (2003). Taking steps to secure web services. Computer, October 2003, 14-16.

Goodin, D. (2006). Shielding web services from attack. InfoWorld.com, 11.27.06, 27-32.

Lawton, G. (2007). Web 2.0 creates security challenges. Computer, October 2007, 13-16.

Long, J., Yuan, M. J., Whinston, A. B. (2003). Securing a new era of financial services. IT Pro, July-August 2003, 15-21. 1520-9202/03.

Naedele, M. (2003). Standards for XML and web services security. Computer, April 2003, 96-98.

Rational AppScan. (2009). IBM Rational web application security. Retrieved 14 February 2009 from http://www-01.ibm.com/software/rational/offerings/websecurity/webappsecurity.html.

Treese, W. (2002). XML, web services, and XML. NW, Putting it together, September 2002, 9-12. ...

March 14, 2010 · 10 min · 1934 words · Jim Thario

Automated Dynamic Testing

In researching some testing solutions for my own work, I found an article in the IEEE library from a group of Microsoft researchers about automating the software testing process (Godefroid et al., 2008). They are taking the concepts of static analysis to the next level by researching and prototyping methods of generating harnesses for automated dynamic testing. They discuss four different projects for test automation, but the most interesting one for me in the article was a project called SAGE (scalable, automated, guided execution). The SAGE project is based on white box fuzz testing and is intended to help reduce the number of defects related to security. "Security vulnerabilities (like buffer overflows) are a class of dangerous software defects that can let an attacker cause unintended behavior in a software component by sending it particularly crafted inputs."

The solution is white box because the program under test runs under a debugger-like monitor. The monitor observes and catches runtime exceptions generated by the program as the testing suite exercises it with a variety of dynamically generated invalid input data. The tester and monitor programs can record, pause and replay for engineers the history of events leading up to the exception that crashed the program. An early version of SAGE was able to find a defect in a Windows kernel-level library responsible for parsing animated cursor image files. The tool generated over 7,700 test cases based on sample input data from testers and exercised the library for a little more than seven hours before the defect was uncovered. After analysis of the SAGE data, a fix for the defect was released as an out-of-band security patch for Windows. The authors write, "SAGE is currently being used internally at Microsoft and has already found tens of previously unknown security-related bugs in various products."

Reference

Godefroid, P., de Halleux, P., Levin, M. Y., Nori, A. V., Rajamani, S. K., Schulte, W., Tillmann, N. (2008). Automating software testing using program analysis. IEEE Software. 0740-7459/08. ...
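SAGE itself derives new inputs by solving constraints gathered from instrumented executions, which is well beyond a short example. The toy harness below only sketches the simpler mutation-fuzzing idea under discussion: flip a few bytes of a known-good seed, feed the result to the component under test, and treat anything other than a clean rejection as a crash worth recording for replay. The seed document, iteration count and crash file naming are assumptions made for illustration.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Random;

import javax.xml.parsers.DocumentBuilderFactory;

public class TinyMutationFuzzer {
    private static final Random RANDOM = new Random(42);

    // Randomly corrupt a few bytes of a known-good seed input.
    static byte[] mutate(byte[] seed) {
        byte[] copy = seed.clone();
        int flips = 1 + RANDOM.nextInt(4);
        for (int i = 0; i < flips; i++) {
            copy[RANDOM.nextInt(copy.length)] = (byte) RANDOM.nextInt(256);
        }
        return copy;
    }

    public static void main(String[] args) throws Exception {
        byte[] seed = "<order id=\"42\"><item>widget</item></order>".getBytes(StandardCharsets.UTF_8);
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();

        for (int run = 0; run < 10_000; run++) {
            byte[] candidate = mutate(seed);
            try {
                // The component under test; here, simply an XML parse.
                dbf.newDocumentBuilder().parse(new ByteArrayInputStream(candidate));
            } catch (org.xml.sax.SAXException | java.io.IOException rejected) {
                // Cleanly rejecting malformed input is the expected, correct behavior.
            } catch (RuntimeException | Error crash) {
                // Anything else is the kind of defect a fuzzing monitor records for replay.
                System.err.printf("run %d crashed: %s%n", run, crash);
                Files.write(Paths.get("crash-" + run + ".bin"), candidate);
            }
        }
    }
}
```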

December 23, 2009 · 2 min · 333 words · Jim Thario

Easing into Agile

The article I found this week was written by two individuals working for Nokia Networks who were involved in training product development staff in agile practices. Vodde and Koskela (2007) discuss Nokia's environment over the past decades and their experiences introducing test-driven development into the organization. The implication in the article is that, because of the size of the organization and the amount of retraining necessary to move toward agile development, Nokia is adopting agile practices a piece at a time (small bites) rather than dropping the waterfall approach entirely and throwing the development teams into a completely new and unfamiliar situation. Vodde and Koskela also point out the benefit they found in using hands-on instruction for TDD versus lecture-based education.

The authors made a few observations while teaching TDD to experienced software developers. One important observation was, "TDD is a great way to develop software and can change the way you think about software and software development, but the developer's skill and confidence still play a big role in ensuring the outcome's quality." The exercise the authors used in their course was to develop a program to count lines of code in source files, along with tests to verify the program's operation (a sketch of such a test appears below). Each session would add a new requirement in the form of a new type of source file. The students were forced into an evolutionary/emergent situation in which the design had to change a little as the current and new problems of each requirement were solved. What the students speculated the design would be at the beginning and what they actually ended up with were different.

The authors conclude with some recommendations for successful TDD adoption, with other agile practices or as an isolated practice in a legacy environment:

- Removing external dependencies helps improve testability
- Reflective thinking promotes emergent design
- A well-factored design and good test coverage also help new designs emerge

Reference

Vodde, B., Koskela, L. (2007). Learning test-driven development by counting lines. IEEE Software. 0740-7459/07. ...
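For flavor, a first test in a line-counting exercise like the one described might look something like the sketch below. The LineCounter class, its countLines method and the JUnit 4 style are my own assumptions, not code from the article; in TDD the test would be written first and the implementation grown to make it pass.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LineCounterTest {

    // In the course exercise this class would not exist yet; the failing test
    // drives out the simplest implementation. A minimal version is included
    // here so the sketch is self-contained.
    static class LineCounter {
        int countLines(String source) {
            if (source.isEmpty()) {
                return 0;
            }
            // Count physical lines, not counting a trailing newline as an extra line.
            return source.split("\r?\n", -1).length - (source.endsWith("\n") ? 1 : 0);
        }
    }

    @Test
    public void countsPhysicalLinesOfAPlainJavaSource() {
        String source = "public class Hello {\n"
                      + "    // a comment\n"
                      + "    int x = 1;\n"
                      + "}\n";
        assertEquals(4, new LineCounter().countLines(source));
    }
}
```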

December 23, 2009 · 2 min · 324 words · Jim Thario

Software Engineering

Many of us in the IT industry aspire to create a Software Engineering discipline. We work continually to mature our understanding of what it is and should become, and work to increase the external trust of the profession. Are we there yet in relation to other engineering disciplines? Probably not. Whether or not it is there today does not matter as much to me. What matters to me is that at this time we are trying to take it there. My feeling is that Software Engineering is a pursuit, not an endpoint. I also believe software craftsmanship exists, and there is a place for it. I do not want a craftsman designing my antilock brakes, getting creative with my future (hopefully distant) artificial heart, liver or whatever code, or the algorithm for measuring the carbon monoxide levels in my home. I would like an engineer knowledgeable in precedence and predictability to create these things.

Denning and Riehle (2009) point out some interesting areas where Software Engineering is weak compared to other disciplines:

- Predictable outcomes (principle of least surprise)
- Design metrics, including design to tolerances
- Failure tolerances
- Separation of design from implementation
- Reconciliation of conflicting forces and constraints
- Adapting to changing environments

I think an additional challenge we deal with in developing a Software Engineering discipline is that software - code - is unlike any material previously available to us. Add to this that the various forms and structures the material can take change every five to ten years - Java, C#, client/server, web services, hosted, distributed, etc. We are trying to build a stable practice around an unstable material. For example, our environment is beginning an architectural shift toward large multi-core processors (Merritt, 2008). Our tools, thinking and education may require a refresh to adapt our software design approaches to this change. (See http://clojure.org/state).

In short, I believe in Software Engineering. It is out there and we are chasing it down. We make some right and wrong turns along the way. Each time we get a little closer to it, our world of technology changes dramatically and it just slips out of our grasp. The longer we hunt for it, the more mature, disciplined and predictable our profession becomes.

References

Denning, P., & Riehle, R. (2009). The profession of IT: Is software engineering engineering? Communications of the ACM, 52(3), 24-26.

Merritt, R. (2008). CPU designers debate multi-core future. EE Times. Retrieved 24 October 2009 from http://www.eetimes.com/showArticle.jhtml?articleID=206105179 ...

October 25, 2009 · 2 min · 402 words · Jim Thario

Creating Tools that Create Art

I recently developed and installed a creation called Short Attention Span Collaborative Imagery in the Annex at Core New Art Space in Denver. Some people have called it art, while I call it a tool for generating art. The SASCI piece runs on two Internet-connected computers in the gallery. It uses Twitter trends and specific search terms to drive the continuous creation of collages of images and text on two wall-facing projectors. Input from Twitter, specifically the current and daily trends and a search for the words Denver and Art, is the source of the imagery. It uses the Stanford Natural Language Parser, Creative Commons-licensed images from Flickr and text from Wikipedia. I wrote the programs in Java and JavaFX. About every 30 minutes, background tasks collect the latest terms and matching messages from Twitter. A different program using the Stanford NLP parses the messages looking for interesting nouns, and collects images and text associated with the source words from Flickr and Wikipedia. Each collage takes anywhere from 2-5 minutes to build in front of the audience. It is never the same. The collages abstractly reflect people's conversations on Twitter as recent as the last 30 minutes. If you are in the area, please check it out. Core New Art Space is located at 900 Santa Fe Drive in Denver. Call or browse the web site for gallery hours. 303-297-8428. http://corenewartspace.com. ...
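The noun-harvesting step might look roughly like the sketch below, which uses the Stanford POS tagger to keep the noun tokens from a batch of tweets so they can seed the Flickr and Wikipedia searches. The model path, class name and wiring are my assumptions; the actual SASCI code is not shown in this post.

```java
import java.util.ArrayList;
import java.util.List;

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class NounHarvester {
    // Path to a tagger model shipped with the Stanford POS tagger distribution (assumed location).
    private final MaxentTagger tagger =
            new MaxentTagger("models/english-left3words-distsim.tagger");

    /** Pull the nouns out of a batch of tweets so they can seed image and text searches. */
    public List<String> interestingNouns(List<String> tweets) {
        List<String> nouns = new ArrayList<>();
        for (String tweet : tweets) {
            // tagString returns tokens annotated as word_TAG, e.g. "Denver_NNP art_NN".
            for (String token : tagger.tagString(tweet).split("\\s+")) {
                int split = token.lastIndexOf('_');
                if (split > 0 && token.substring(split + 1).startsWith("NN")) {
                    nouns.add(token.substring(0, split));
                }
            }
        }
        return nouns;
    }
}
```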

August 23, 2009 · 2 min · 231 words · Jim Thario

Follow-up: Qwest VDSL2 Service in Denver

Rock solid, fast, affordable: get it if you can. I had VDSL2 installed by Qwest this past August 3rd. I am a work-at-home IT Specialist, which means I live and die by my Internet connection to communicate with co-workers, gain access to the corporate network, design software and deploy it to servers in different parts of the country. Since the VDSL2 installation almost two weeks ago, the service has been used for web browsing, email, connecting to work through my employer's VPN service, screen sharing with co-workers, backing up computers via Jungle Disk and Tivoli Storage Manager, listening to Pandora Radio, watching some TV through our Roku player and playing a couple of games of BZFlag. To recap, we are getting 20 Mbps downstream and 5 Mbps upstream. Our residence is in the 80205 zip code and less than 0.5 km from the fiber node. We are qualified for 40 Mbps downstream in this location. The connection has been up continuously since installation and we have yet to experience any network congestion during the day or evening. Here are some metrics from the Q1000. Today I performed a new speed test from Denver to Dallas: ...

August 15, 2009 · 1 min · 200 words · Jim Thario

Privacy Issues Related to DNS and Service Providers

This research paper details some recent concerns regarding DNS services and consumer privacy. This paper summarizes the concepts of DNS. It discusses how DNS is used on the Internet. It discusses how DNS services are provided to consumers and what types of entities provide the service for daily use. This paper continues with a discussion of how DNS has been and is currently being used as a mechanism to collect and profile the behavior of users on the Internet, and how these mechanisms can be abused. The alternatives available to consumers for DNS are presented in closing, and suggestions are made for finding a balance between privacy and useful Internet service.

DNS is an acronym for Domain Name System. It is one of the most fundamental and important services provided throughout the Internet. Nearly every networked client that uses a symbolic name to access a web server, email server or any other service depends on DNS. The domain name system translates symbolic names like www.ibm.com or mail.google.com into 32-bit Internet Protocol (IP) addresses. DNS also translates IP addresses back into domain names. The translation process from a name to an address is called forward lookup. The translation process from an address back into a symbolic name is called reverse lookup. Forward lookup is used more often than reverse lookup.

The DNS concept dates back to 1987. RFC 1034 and RFC 1035 define the concepts, specification and implementation of the domain name system and protocol we use today on the Internet. According to the specification (RFC1034, 2009), the DNS has three major components:

- the domain name space and resource records, which are specifications for a tree-structured name space and data associated with the names
- name servers, which are server programs that hold information about the domain tree's structure and set information
- resolvers, which are programs that extract information from name servers in response to client requests

In the simplest form, the servers providing resolution of domain names and addresses are organized into a hierarchy. Resolving a name to an IP address may take many queries across several domain name servers located in different places on the Internet to complete the process. Resolving a domain name to an IP address happens from right to left. For a name such as www.gap.com, the server or servers handling the root domain for .com are queried first. They are queried for the servers of the next component to the left. The .com root servers are queried for the gap name and will return one or more servers that handle the sub-domains for the gap.com domain. The gap.com servers are then queried for an address of www within the domain. Through recursive querying of servers from root domain to specific sub-domain, the IP address of www.gap.com is found. Some details have been left out in this example, but this is in essence what happens.

Performing this query each time a client asks for the IP of www.gap.com would place too much burden on the communications infrastructure of the Internet, so caching of DNS information happens as well. Each resolution result carries an amount of time, from a few seconds to days, for which that information remains current. Clients and servers can retain this resolution data in memory until it expires, and then query for it again from the source servers. Caching allows repeated queries for the same domain name to resolve almost instantaneously.
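A minimal, hedged example of both directions of lookup, using the article's www.gap.com name and whatever resolver the operating system is configured to use:

```java
import java.net.InetAddress;

public class LookupDemo {
    public static void main(String[] args) throws Exception {
        // Forward lookup: symbolic name to IP address.
        InetAddress address = InetAddress.getByName("www.gap.com");
        System.out.println("forward: " + address.getHostAddress());

        // Reverse lookup: IP address back to a symbolic name. If no PTR record is
        // published for the address, the textual address is returned instead.
        InetAddress byAddress = InetAddress.getByAddress(address.getAddress());
        System.out.println("reverse: " + byAddress.getCanonicalHostName());
    }
}
```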
Caching of DNS information can happen at several levels of scale, starting at the workstation, then the local network and up to the Internet service provider. As mentioned above, there are nameservers and resolvers. Nameservers are the servers that are queried to provide translation from name to address or from address to name. Resolvers are built into our workstations and other Internet-capable devices. A resolver implements the client side of the DNS protocol and can ask a nameserver to perform a translation. A caching nameserver is a hybrid: it provides service to resolvers - DNS clients - and also acts as a resolver itself, querying servers upstream to perform forward or reverse resolution. Caching nameservers can be found in the consumer firewall devices we use in our homes. They are very often used by large organizations, including Internet service providers, as a convenience to their subscribers. The main purpose of caching nameservers is to provide a resolution service closer to the client and reduce the number of queries traveling across the Internet. Caching nameservers are a performance optimization.

Internet service providers are the most common providers of the caching DNS services that consumers use to resolve domain names to IP addresses. Your employer, if they have a large enough IT department, may elect to run their own caching DNS system for performance reasons. Your workstation or notebook at the office may be using a DNS server that runs on the local area network. That server queries other servers on the Internet as needed to perform forward and reverse resolution.

Recently, several alternative, value-added DNS providers have increased their presence. One of the more popular services is called OpenDNS. In addition to providing name and address resolution services for free, they maintain a system that prevents name resolution of sites known to distribute malware and viruses. They also allow a customer of OpenDNS to tailor what categories of sites on the Internet they will resolve. For example, a parent of a family with young children can elect to prevent OpenDNS from resolving sites with violent or sexually explicit content. Instead of providing an address for the objectionable site, the user's browser is redirected to a page within OpenDNS's network explaining why they have arrived there. What is important to note here is that a consumer must elect to use OpenDNS, and it is implied they understand how the service will behave.

Not all consumers are informed about or understand how their provider's DNS service will perform for them. Most consumer DSL and cable routers will pull their configuration from the service provider. That configuration will include one or more addresses of DNS servers. DSL and cable routers will also act as Dynamic Host Configuration Protocol servers for internal networks. The router will provide IP addresses to each client. The router will also do one of two things: pass along to each client the DNS addresses it was given, or act as a caching nameserver itself and hand out its own address to each client as the DNS server. Unless you have taken action to use a different DNS server, there is a good chance you are using the DNS servers supplied by your Internet service provider.
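For readers who want to see which answers a specific resolver gives, rather than whatever the router handed out over DHCP, the JDK's built-in JNDI DNS provider can be pointed at an explicit server. This is only a sketch: it relies on an internal com.sun factory class, uses one of OpenDNS's published resolver addresses, and queries an arbitrarily chosen host name.

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class ExplicitResolverQuery {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        // Ask OpenDNS directly instead of the resolver configured by the ISP or router.
        env.put(Context.PROVIDER_URL, "dns://208.67.222.222");

        DirContext ctx = new InitialDirContext(env);
        Attributes records = ctx.getAttributes("www.opendns.com", new String[] { "A" });
        System.out.println(records.get("A"));
        ctx.close();
    }
}
```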
The privacy issues for DNS are different depending on whose services are used. Let us assume a consumer is at home and the default configuration of their Internet connection uses the DNS servers provided by their ISP. The ISP may also be the telephone company and television company of this user. The ISP issues the IP address to the consumer's cable or DSL router. When queries are made to the ISP's DNS servers, the source IP address will be that of the customer's router. Using relational database technology, the sites queried from the home router can be stored and analyzed to form a behavioral profile of this customer's interests. That information can be used to market new telecommunications products to them, or it can be sold to other businesses or potentially provided to government entities to help understand this family's patterns of Internet usage. This is possible because of the ability to relate key elements of information - DNS queries, router address, and existing personal data on file - back to a customer and others in the customer's home.

Recently, Internet service providers have tried a new approach in using DNS to help generate revenue streams. "Several consumer ISPs such as Cablevision's Optimum Online, Comcast, Time Warner, Rogers, and Bell Sympatico have also started the practice of DNS hijacking on non-existent domain names, for the purpose of making money by displaying advertisements. This practice violates the RFC standard for DNS (NXDOMAIN) responses, and can potentially open users to cross-site scripting attacks." (HIJACK, 2009). This technique redirects a user's browser from an error page to a search page or advertisement page when a non-existent domain name is requested through DNS. There have been documented cases of redirecting legitimate addresses to an alternate web site as well. Most of these approaches require the manipulation of established Internet protocols such as DNS. Not surprisingly, they are met with consumer hostility. According to Kirk (2009), "ISPs are trying to find revenue streams other than simply providing Internet access to subscribers for a monthly fee. Some have investigated behavioral advertising systems, which monitor a person's Web surfing in order to deliver targeted ads. Those systems have largely failed to take hold due to privacy concerns."

Because the deployments of these DNS and web-based redirection systems require the manipulation of Internet protocols on several levels, some have been found to be vulnerable to manipulation for client exploit and attack. "Kaminsky demonstrated [a] vulnerability by finding a way to insert a YouTube video from 80s pop star Rick Astley into Facebook and PayPal domains. But a black hat hacker could instead embed a password-stealing Trojan. The attack might also allow hackers to pretend to be a logged-in user, or to send e-mails and add friends to a Facebook account." (Singel, 2008).

The unfortunate reality is that there are not many alternatives for DNS available to consumers. The most involved method, and the one with the least disclosure, is to run a professional caching DNS server on your local area network and have it query the root domains directly. Software such as BIND under UNIX, Linux or BSD, or Microsoft's domain name server on Windows Server, can provide this solution. This approach would eliminate all third-party DNS services from the hierarchy of queries. The next alternative is to research and find the least offensive DNS provider for your needs. This may in fact be your Internet service provider. Research their privacy policy. Test your ISP's DNS resolution behavior. If you enter a bad domain name in your browser and you are redirected to a "suggestion" page, be suspicious and find out more details.
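One rough way to test that last point programmatically is to resolve a name that should not exist and see whether an answer comes back anyway. The sketch assumes a randomly generated .com name is not registered, which a UUID makes overwhelmingly likely.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;

public class NxdomainHijackCheck {
    public static void main(String[] args) {
        // A random label under .com; it should not exist and should fail to resolve.
        String bogusName = "nonexistent-" + UUID.randomUUID() + ".com";
        try {
            InetAddress answer = InetAddress.getByName(bogusName);
            // An honest resolver returns NXDOMAIN here. Getting an address back suggests
            // the resolver is rewriting NXDOMAIN responses, for example to an ad page.
            System.out.println("Suspicious: " + bogusName + " resolved to " + answer.getHostAddress());
        } catch (UnknownHostException expected) {
            System.out.println("No answer for " + bogusName + ", as expected.");
        }
    }
}
```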
As mentioned above, OpenDNS generates revenue from the profile data it collects from its customers' use. Their privacy policy (OPENDNS, 2009) is documented on the web site. Additionally, they provide customizable filtering services to protect your network from malware or offensive content.

This paper detailed some recent concerns regarding DNS and privacy. In addition to discussing the concepts of DNS, it detailed how DNS services are provided to consumers and by whom. A discussion of how DNS can be leveraged as a mechanism to collect and profile consumer behavior followed, along with the alternatives available to consumers to limit the collection of their behavioral data. Internet service providers are under pressure to increase and discover new avenues of income. Consumers are likewise under constant pressure to maintain their guard against subtle privacy violations. For now, consumers retain the ability to limit this manipulation of Internet standards and avoid unknowingly leaking personal and behavioral information to a wider audience. As discussed in this paper, methods are available to reduce the risk of privacy invasion that occurs without consumers' full knowledge.

References

HIJACK. (2009). DNS hijacking. Retrieved August 9, 2009 from http://en.wikipedia.org/wiki/DNS_hijacking.

Kirk, J. (2009). Comcast redirects bad URLs to pages with advertising. PC World, Business Center. Retrieved August 8, 2009 from http://www.pcworld.com/businesscenter/article/169723/comcast_redirects_bad_urls_to_pages_with_advertising.html

RFC1034. (2009). Request for Comments: 1034, Domain names - concepts and facilities. Retrieved August 8, 2009 from http://www.ietf.org/rfc/rfc1034.txt.

RFC1035. (2009). Request for Comments: 1035, Domain names - implementation and specification. Retrieved August 8, 2009 from http://www.ietf.org/rfc/rfc1035.txt.

OPENDNS. (2009). OpenDNS privacy policy. Retrieved August 7, 2009 from http://www.opendns.com/privacy/.

Singel, R. (2008). ISPs' error page ads let hackers hijack entire web, researcher discloses. Privacy, Crime and Security Online. Wired. April 19, 2008. Retrieved August 7, 2009 from http://www.wired.com/threatlevel/2008/04/isps-error-page/. ...

August 15, 2009 · 10 min · 1985 words · Jim Thario

Quantifying Risk and Return for IT Security Investments

This research paper explores the issues related to defining and quantifying risk and return for capital investments in security solutions for information technology. This work begins by defining some of the most common types of attacks and breaches occurring against commercial and institutional information technology systems. It follows with a discussion of approaches to analyze and estimate the level of financial, legal and reputation risk around IT security events. Finally, the paper concludes by providing guidelines for estimating a budget for IT security initiatives, reporting results and relating the security initiatives to the strategic goals of the organization.

There are several common types of security breaches and events in commercial and institutional IT systems. Defacement of web sites involves the compromise of servers responsible for providing web pages. This breach can be caused by improperly configured web server software or flaws in the software responsible for generating dynamic web pages. Web page defacement is often in response to a corporate or political policy. A denial of service attack does not cause a breach of systems, but floods the resources of the target organization. The result of a denial of service attack is to prevent legitimate users from accessing the target's network and services. A denial of service attack can occur against the networking infrastructure, web servers, database servers or any other finite resource of the organization. A distributed denial of service attack is a network attack that floods the target organization's network with packets. Like web page defacement, this attack is often in response to a corporate or political policy. Systemic malware attacks involve the spreading of a virus, worm or other malware throughout the workstation resources of an organization. This type of attack is less likely to be directly targeted at a specific organization. It may occur because of a "zero day" vulnerability in workstation software that has not yet been patched by the vendor or blocked by the security software provider. Corruption, theft or accidental release of information has the potential to draw the most attention and create the most liability for an organization. This type of breach may involve the release of intellectual property, private information about individuals working for the organization, or customers of the organization.

Several factors contribute to the decision or requirement to publicize a security breach. If personal information of employees or clients was released, the organization may be legally required to notify the individuals affected by the breach. In the case of a denial of service attack, customers or business partners of the organization may not be able to interact with the IT systems as expected. "[…] unless there is some publicly observable consequence such as shutdown of a Web site or litigation, the press may not become aware of a breach. Thus, some breaches with the most potentially severe economic consequences (such as employee initiated breaches that may compromise proprietary information) may not be reported in a timely fashion." (Campbell, 2003).

There is no established formula or process for determining in advance the amount of risk potential or financial exposure from a security breach. Braithwaite (2002) contrasts the traditional loss estimate model for replacement or recovery of resources with that of today. There is much higher dependence on information technology systems today.
In many cases, those systems are the business. The loss from downtime or breach is much larger than just the replacement cost of the physical systems and their corresponding software. It was estimated in 2002 that losses to an online brokerage system could be as high as $6.5 million (US) per hour. A credit-card service bureau could lose as much as $2.5 million per hour. Garg (2003) estimates financial losses to a publicly traded company through decreased trust could be from 0.5 to as much as 1.0% of annual revenues. Based on this simple formula, a company with $1 billion (US) in annual revenues could experience as much as $10 million in loss from a single incident.

The cost of a security-related event is far reaching. Repair of the organization's reputation, legal responsibilities and hardening of IT systems address only the issues at the surface. Garg's estimate includes the cost of the breach plus the resulting impact on the perception of trust by partners, investors and customers. The additional risk to publicly traded companies is the spillover effect on the company's stock price and long-term investment outlook. Cavusoglu (2004) estimates that an organization can lose as much as 2.1% of its market value on average within two days of reporting a breach to the public. For example, a company with a market capitalization of $100 billion (US) could lose as much as $2 billion in value within a few days after reporting the theft of customer personal information. This amount does not include follow-on investment in technology and process development to remedy the problem, legal costs and investments to repair damage to the organization's reputation. "These potential costs include: (1) lost business (both immediate and long term as a consequence of negative reputation effects), (2) activities associated with detecting and correcting the breaches, and (3) potential legal liability." (Campbell, 2003). Publicly reporting a breach in general is not something that negatively influences the view of the company or institution. There is, however, a significant negative response from consumers, partners and investors when the security event is related to the release of confidential information.

The estimation of risk related to material, legal and market image damage helps scope the problem of determining a budget for information security expenditures. There are several areas of investment to reduce security risk. Braithwaite (2002) describes a security investment approach based on a balanced strategy of prevention, detection and response. A recent trend related to prevention and response is the cyber-insurance policy. These policies provide financial relief to an organization following a security breach. Providers of larger policies often require regular security audits by third parties to help establish the level of risk of a future security problem. "According to the 2006 CSI/FBI Computer Crime and Security Survey, 29 percent of U.S. companies say they have external insurance policies to manage cyber security risks, up from 25 percent in 2005." (Brody, 2007). However, John Pescatore of Gartner states, "[…] the price of the policies is too close to the cost of an actual event. You may be better off just spending the money to avoid an incident."

In determining a budget for IT security expenditures, it is important to identify and place a value on non-quantifiable assets and processes such as intellectual property and customer data.
The executive staff needs to be involved in this process and help adjust and agree on the valuation. The valuation needs to be revisited as the organization changes scope and size. Additionally, it is important to place a value on the company's reputation from a security and trust standpoint. Braithwaite (2002) recommends two areas for consideration: the adverse impact of publicized incidents involving the company, and how the organization is judged by its involvement in support of national and industry security concerns. As mentioned earlier, Garg's (2003) estimate of potential revenue loss to the business can be used as a coarse-grained starting point to gauge financial commitment to IT security initiatives.

Brandel (2006) makes several recommendations on how to present and maintain funding levels for an IT security budget. Avoid scare tactics with executives. Use past security incidents as reference points within a business case for funding. Plan the organization's funding requirements for 12 to 24 months into the future. Avoid repeated tactical requests for each security project, as that could give an impression of reactionary rather than proactive planning. Explain the investments in terms of business goals and initiatives rather than the technical language of security.

Estimating and reporting the results of security initiatives can be difficult to articulate. Benefits from security expenditures are indirect. There are no revenue streams from installing firewalls, compartmentalizing network segments or auditing workstations for compliance with IT policies. Brandel (2006) claims, "Investing in security rarely yields a return on investment, so promising [a] ROI will sound ill-informed to a senior executive. […] It's possible to discuss other benefits of security spending, such as protecting the company's ability to generate revenue, keep market share or retain its reputation." Reporting on benefits from past security investments maintains the attention of executive sponsorship. Consider developing metrics from measurements like attacks stopped at the firewalls, viruses scrubbed from inbound email, or the rate of a malware outbreak on the Internet compared to the corporate intranet. Choose metrics carefully and be sure they reflect the business's goals and language. Investing in and reporting on IT security does not need to be solely focused on preventing exploits, the spread of malware or the unintended release of confidential information. It can also include high availability of IT systems, reliability of communications and ensuring the integrity of critical business information for ongoing operations. According to Drugescu (2006), metrics must measure organizationally meaningful things, be reproducible and consistent, be objective and unbiased, and measure some type of progression toward the identified strategic goal.

This paper analyzed the issues, recent opinions and research related to estimating and quantifying risk and return for IT security solutions. The most common types of security attacks and breaches against commercial and institutional information technology systems were described. A discussion of approaches to analyze and estimate the level of financial, legal and reputation risk around IT security events was provided. This paper provided guidelines for estimating a budget for IT security initiatives, and recommended regular reporting of security metrics and relating those metrics to the business goals of the organization.
Day-to-day industry is becoming more dependent on information technology. As each year passes, the transformation of worldwide business to a platform of high-speed connectivity, data storage and Internet service exchanges expands the need to accurately quantify risk from downtime and loss. It is vital to gauge the level of investment in security prevention, detection and response for an organization's survival in the online, interconnected world.

References

Brandel, M. (2006). Avoid spending fatigue. Computerworld. April 17, 2006. Pg. 34.

Braithwaite, T. (2002). Executives need to know: The arguments to include in a benefits justification for increased cyber security spending. Security Management Practices. September/October 2002. Pg. 35.

Brody, D. (2007). Full coverage: How to hedge your cyber risk. Inc. Magazine. April 2007. Pg. 47.

Campbell, K., Gordon, L. A., Loeb, M. P., Zhou, L. (2003). The economic cost of publicly announced information security breaches: Empirical evidence from the stock market. Journal of Computer Security. 11 (2003), 431-448.

Cavusoglu, H., Mishra, B., Raghunathan, S. (2004). A model for evaluating IT security investments. Communications of the ACM. July 2004, Vol. 47, No. 7.

Drugescu, C., Etges, R. (2006). Maximizing the return on investment of information security programs: Program governance and metrics. Information Systems Security. December 2006. Pg. 30.

Garg, A., Curtis, J., Halper, H. (2003). The financial impact of IT security breaches: What do investors think? Information Systems Security. March/April 2003. Pg. 22.

Roberds, W., Schreft, S. L. (2009). Data security, privacy, and identity theft: The economics behind the policy debates. Federal Reserve Bank of Chicago. 1Q/2009, Economic Perspectives. Pg. 22. ...

August 10, 2009 · 9 min · 1853 words · Jim Thario

Qwest VDSL2 Service in Denver

Today our home Internet service was upgraded to VDSL2 from Qwest. We are located in the Whittier neighborhood of Denver, specifically in the 80205 zip code. I was told by Qwest this area is now qualified for up to 40 Mbps downstream and 5 Mbps upstream VDSL2 service. I started with ADSL service about 2 months ago at 7 Mbps downstream and 896 Kbps upstream. I chose to move to the 20 Mbps downstream and 5 Mbps upstream tier. The ADSL service was cut off this morning while work was performed at the fiber node, and before 3 PM the installer came by to hook up the new modem. The Qwest installer said my home was less than three blocks from the cabinet. The modem connected at the correct speed immediately. Below is a screen snip of the SNR numbers from the Q1000 modem. These numbers are more than double the SNR reported by the M1000 ADSL modem. The ADSL link had a much longer haul over copper than the VDSL2 link. I was surprised to see the 0 dB attenuation in both directions. I had 20-30 dB attenuation with ADSL. The final step before letting the installer go was to speed test the link back to Qwest. I would call it a success. ...

August 3, 2009 · 2 min · 230 words · Jim Thario