Testing the Proximity Sensor of iPhone 4

The proximity sensor problem with iPhone 4 is a topic of much debate on discussion boards, blogs and news sites. The proximity sensor is used by the phone to determine whether the user is holding the phone to her ear during a call. The phone uses input from the proximity sensor to decide whether to activate the screen and allow touch input. Many owners of the phone have reported the screen re-enabling while they hold the phone to their ear during a call, while others have reported no problems. I am one of the unfortunate owners who has inadvertently placed a caller on hold or interrupted other callers with touch tones emanating from my end of the call. As of today I am on my second iPhone 4 and disappointed to report my experience has not improved. There are plenty of emotional calls for Apple to quickly address this problem. I want to take a different approach. In this essay, I will discuss testing approaches and what they mean for complex systems. I use the proximity sensor as a real-world example to demonstrate the problem many have experienced and the difficulty involved in testing for it.

Inside iPhone is a complex hardware system arranged in a hierarchy of command and control: a microprocessor, memory, storage, and transceivers for wi-fi, cellular, and bluetooth networks. It has touch, light, sound and proximity sensor input. It has external interfaces for the dock, a headset, and the SIM card. It has a single display integrated with the touch sensor input. The software distributed through these components is a system of collaborating state machines, each one working continuously to keep the outside world pleased with the experience of interfacing with the phone. It is not just a single human the iPhone must keep satisfied. The cellular networks, wi-fi access points, bluetooth devices, iTunes and other external systems are part of this interactive picture as well.
This is oversimplified, but you can begin to appreciate the enormous burden of testing such a small, complex device used by millions of people. How does a team even start to tackle such a problem? Meyer (2008) presents seven principles in the planning, creation, execution, analysis and assessment of a testing regimen. Meyer writes that the reason for the testing process, above and beyond any other, "is to uncover faults by triggering failures." The more failures that are triggered and fixed before a product is delivered to the end user, the less expensive they are to repair than failures found later. Humans are a required yet flawed variable in the planning and execution of test suites for complex systems like iPhone. Identifying all possible triggers for failure can be nearly impossible. Savor (2008) argues that "the number of invariants to consider [in test design] is typically beyond the comprehension of a human for a practical system." How do we test the multitude of scenarios and their variations in complex systems without fully comprehending usage patterns and subtle timing requirements for failure in advance?

Meyer (2008) argues that testing time can be a more important criterion than the absolute number of tests. Combining time with random testing raises the possibility of uncovering more faults than continuously repeating a huge, fixed suite of tests without deviation. Test escapes, as defined by Chernak (2001), are defects that the fixed testing suite was not able to find, but that were instead found later by chance, by an unassociated test, or by an end user after the project was delivered to production.

Now that we have some background information and terminology, let's design a test that could make iPhone's proximity sensor fail to behave correctly. Consider an obvious test case for the proximity sensor:

1. Initiate or accept a call.
2. Hold the phone against ear. Expect the screen to turn off and disable touch input.
3. Hold the phone away from ear. Expect the screen to turn on and enable touch input.
4. End call.

This test case can be verified in a few seconds. Do you see a problem with it? It is a valid test, but not a terribly realistic one. The problem with this test case is that it does not reflect what really happens during a call. We do not sit frozen with all of our joints locked into place, refusing to move until the call has completed. To improve the test case, we add some physical action during the call:

1. Initiate or accept a call.
2. Hold the phone against ear. Expect the screen to turn off and disable touch input.
3. Keep the phone still for 30 seconds.
4. Change rotation, angle and distance of phone to ear while never exceeding 0.25 inches from the side of the caller's head. Expect the screen to remain off and touch input to remain disabled.
5. Return to step 3 if call length is less than ten minutes.
6. Hold the phone away from ear. Expect the screen to turn on and enable touch input.
7. End call.

Now the test case reflects more reality. There are still some problems with it. When I am on a call, I often transfer the phone between ears. Holding a phone to the same ear for a long time gets uncomfortable. During lulls in the conversation, I pull the phone away from my ear to check the battery and signal levels, and then I bring it back to my ear. These two actions need to be added to the test case. Additionally, all of the timing in the test case is fixed. Because of the complex nature of the phone, small variations in timing anywhere can have an impact on successful completion of our test case. Introducing some variability to the test case may raise the chances of finding a failure. In other words, we will purposely create test escapes through random combinations of action and timing.

1. Initiate or accept a call.
2. Hold the phone against ear. Expect the screen to turn off and disable touch input.
3. Keep the phone still for [A] seconds.
4. Randomly choose step 5, 6 or 7:
5. Change rotation, angle and distance of phone to ear while never exceeding 0.25 inches from the side of the caller's head. Expect the screen to remain off and touch input to remain disabled.
6. Pull phone away from ear for [B] seconds and return phone to ear. Expect the screen to turn on and then off at the conclusion of the action.
7. Move phone to opposite ear. Do not exceed [C] seconds during the transfer. Expect the screen to turn on during the transfer and then off at the conclusion of the transfer.
8. Return to step 3 if call length is less than [D] minutes.
9. Hold the phone away from ear. Expect the screen to turn on and enable touch input.
10. End call.

There are four variables in this test case. It is possible that certain combinations of [A], [B], [C] and [D] will cause the screen to re-enable during a call and cause the test case to fail. Have fun with this one. There are in fact combinations that induce proximity failure on iPhone 4 regardless of the version of iOS, including 4.1.

Finally, an important part of test design is the inclusion of negative test cases. Chernak (2001) writes, "A test case is negative if it exercises abnormal conditions by using either invalid data input or the wrong user action." For a device like iPhone, tapping the screen constantly while it is disabled, making a call while holding it upside down, or using a faulty docking cable can all be considered negative test cases.

Testing complex systems, regardless of physical size, is an incredibly difficult task. Some of this can be performed by humans and some through automated systems. Finding failures in highly integrated systems requires a combination of fixed test suites, test cases that reflect real usage scenarios, and the introduction of test escapes through creative randomization.

References

Chernak, Y. (2001). Validating and improving test case effectiveness. IEEE Software, January/February 2001.

Meyer, B. (2008). Seven principles of software testing. Computer, August 2008.

Savor, T. (2008). Testing feature-rich reactive systems. IEEE Software, July/August 2008.
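As a thought experiment, the randomized test case above could be driven by a small script that draws [A], [B], [C] and [D] at random for each run. The following Python sketch is my own illustration, not Apple's test harness: the action names and value ranges are invented, and it only generates the step list. Actually exercising a phone would require a robotic fixture or an instrumented build.

```python
import random

# Hypothetical ranges for the four variables; illustrative values only.
A_RANGE = (1, 30)   # [A] seconds to keep the phone still
B_RANGE = (1, 5)    # [B] seconds to hold the phone away from the ear
C_RANGE = (1, 3)    # [C] seconds allowed for an ear-to-ear transfer
D_RANGE = (1, 10)   # [D] minimum call length in minutes

def generate_test_case(rng):
    """Build one randomized test case as a list of (action, seconds) steps."""
    a = rng.uniform(*A_RANGE)
    d = rng.uniform(*D_RANGE)
    steps = [("initiate_call", 0), ("hold_to_ear", 0)]
    elapsed = 0.0
    while elapsed < d * 60:
        steps.append(("hold_still", a))
        elapsed += a
        # Randomly choose one of the three mid-call actions (steps 5-7).
        action = rng.choice(["rotate_at_ear", "pull_away", "switch_ears"])
        if action == "pull_away":
            duration = rng.uniform(*B_RANGE)
        elif action == "switch_ears":
            duration = rng.uniform(*C_RANGE)
        else:  # rotate_at_ear: keep moving for another [A]-length interval
            duration = a
        steps.append((action, duration))
        elapsed += duration
    steps.append(("hold_away", 0))
    steps.append(("end_call", 0))
    return steps
```

Running this repeatedly with a fresh random seed gives a different action sequence each time, which is exactly the deliberate injection of variability described above. A checker watching the screen state during playback would flag any run where the display re-enabled mid-call.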

September 23, 2010 · 7 min · 1340 words · Jim Thario

Creating Tools that Create Art

I recently developed and installed a creation called Short Attention Span Collaborative Imagery in the Annex at Core New Art Space in Denver. Some people have called it art, while I call it a tool for generating art. The SASCI piece runs on two Internet-connected computers in the gallery. It uses Twitter trends and specific search terms to drive the continuous creation of collages of images and text on two wall-facing projectors. Input from Twitter, specifically the current and daily trends and a search for the words Denver and Art, is the source of the imagery. The piece uses the Stanford Natural Language Parser, Creative Commons-licensed images from Flickr, and text from Wikipedia. I wrote the programs in Java and JavaFX.

About every 30 minutes, background tasks collect the latest terms and matching messages from Twitter. A different program using the Stanford NLP parses the messages looking for interesting nouns, and collects images and text associated with the source words from Flickr and Wikipedia. Each collage takes anywhere from 2-5 minutes to build in front of the audience. It is never the same. The collages abstractly reflect people's conversations on Twitter, as recent as the last 30 minutes.

If you are in the area, please check it out. Core New Art Space is located at 900 Santa Fe Drive in Denver. Call or browse the web site for gallery hours: 303-297-8428, http://corenewartspace.com.
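The overall cycle described above can be sketched in a few lines. This is a Python outline of the control flow only; the actual piece was written in Java and JavaFX, and the fetch functions here are hypothetical stand-ins rather than real Twitter, Flickr or Wikipedia API calls.

```python
import random

# Hypothetical stand-ins for the real data sources. The actual piece
# queried Twitter trends and searches, ran the Stanford NLP parser over
# the messages, and fetched media from Flickr and Wikipedia. These stubs
# return canned data so the control flow can run anywhere.
def fetch_twitter_terms():
    return ["denver", "art", "autumn"]

def extract_interesting_nouns(terms):
    # The real piece used the Stanford NLP parser on Twitter messages.
    return sorted(set(terms))

def fetch_media_for(noun):
    return {"image": f"flickr:{noun}.jpg", "text": f"wikipedia:{noun}"}

def build_collage(media, duration_minutes):
    # The real piece composes imagery on the projectors over 2-5 minutes.
    return [m["image"] for m in media]

def run_cycle(rng):
    """One roughly 30-minute cycle: collect terms, parse, gather media, build."""
    nouns = extract_interesting_nouns(fetch_twitter_terms())
    media = [fetch_media_for(n) for n in nouns]
    return build_collage(media, duration_minutes=rng.uniform(2, 5))
```

Each cycle starts from whatever Twitter is talking about at that moment, which is why no two collages are the same.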

August 23, 2009 · 2 min · 231 words · Jim Thario

Research Project Proposal: Model-Driven Information Repository Transformation and Migration

This project will apply the Unified Modeling Language to the visual definition of data transformation rules that direct the execution of data migration from one or more source information repositories to a target information repository. The result will be a UML profile optimized for defining data transformation and migration among repositories. I believe that a visual approach to specifying and maintaining the rules of data movement between the source and target repositories will decrease the time required to define these rules, enable less technical individuals to adopt them, and provide a motivation to reuse these models to accelerate future migration and consolidation efforts.

Problem Statement and Background

My role in this project includes project planning and task management, and serving as primary researcher and developer of the deliverables of the project. My technical background includes certification by IBM as an OOAD designer in the Unified Modeling Language and nearly two decades as a software engineer. I have recently been involved in the migration of several custom knowledge data repositories to an installation of IBM Rational Asset Manager.

This project will use a constructive ontology and epistemology to create a new solution in the problem space of the project. These are the most appropriate research ontology and epistemology because there is little precedent available in exactly this area of research. Visual modeling of program specifications has been studied in other problem domains and continues to be an area of interest. This particular problem space is unique, relatively untouched, and in an area of considerable interest to me. One possible constraint of the project is shortcomings of the UML metamodel rules in allowing the extension and definition of an effective rules-based data transformation and migration language. A second constraint of the project may be identification of one or more source repositories as candidates for moving to a new system.
For the second constraint, one or more simulated repositories may need to be created.

This study is relevant to software engineering practitioners, information technology professionals, database administrators and enterprise architects who wish to consolidate data repositories into a single instance. The Unified Modeling Language (UML) is primarily used today in information technology to visually specify requirements, architectures and designs of systems, to verify and create test scenarios, and to perform code generation. The UML metamodel was designed to make the language extensible, with the ability to support profiles that allow the language to be customized for specific problem domains. Researchers and practitioners are finding innovative uses for UML as a visual specification language. Zulkernine, Graves, Umair and Khan (2007) recently published their results in using UML to visually specify rules for a network intrusion detection system. Devos and Steegmans (2005) also published their results in using the Unified Modeling Language in tandem with the Object Constraint Language to specify business process rules with validation and error checking.

This project will contribute to at least two fields of information technology: visual modeling languages, and information consolidation and management. It will make a unique contribution to the subject area of domain-specific visual languages for the definition of rules. Additionally, a successful outcome from this project will contribute to knowledge in the area of lowering the complexity of consolidating repositories to save operations costs and increase modernization of data access systems. An opposing approach to this project would be a federated solution to data consolidation.
A federated solution would continue to maintain multiple data repositories and connect their operations via programming interfaces so that clients could access them and combine their data to create the appearance of a unified source.

The project's area of focus was motivated by my desire to create a visual system for complete migration of a source repository of technical data, such as a technical support knowledge base, to a new product called Rational Asset Manager. My overall goal was to drive the entire migration visually using a single model specification. This specification would visually specify the rules for migrating and transforming data from one system to another, as well as visually select the technical mechanisms used to communicate with each information repository, such as SQL databases, web services, XML translation, etc. In addition, I wanted to generate some executable code from the models that would carry out some or all of the movement of data between repositories. In scaling this broad problem area down, I decided to focus on using the model as a specification that would be read by an existing program to carry out the instructions in the model. This program already exists, but does not yet know how to read models. Finally, in narrowing to a specific part of the visual specification, I decided to focus on an aspect of the model that locates data in one system, potentially re-maps or transforms it, and places it into the target system. The initial research focus would take the form of a UML profile that could be used to specify this aspect of the solution, along with an extension of the existing migration program to use the model to perform its work.

Project Approach and Methodology

This project will use a design science methodology to iteratively create, test, and refine the deliverables of the project's outcome.
The design science methodology defines five process steps in achieving the outcome of a research project: awareness of problem, suggestion, development, evaluation, and conclusion. This project is currently at the awareness of the problem phase. The inputs to this phase have been my experiences in working within the problem space for the last several years and the secondary research into the problem area performed thus far. I have encountered shortcomings in automation to help accelerate solutions in this problem space. At the same time, I have observed closely related problems overcome using visual and declarative technologies. Additional secondary research is being conducted to understand the body of knowledge associated with this area of visual modeling. The output at this phase is this proposal for a project to develop a visual language to help accelerate solutions in this problem space. Significant elements of the proposal include the overall vision of the project, the risks of the project, tools and resources required to carry out the project, and the initial schedule to complete the project. Following an accepted proposal, the next phase of this methodology is the suggestion phase, which involves a detailed analysis and design of the proposed solution. During the suggestion phase, several project artifacts will be created and updated with new information. Updated artifacts include the project risks and a refined schedule for completion of the project. New artifacts produced at this phase include early UML and migration tool prototypes to explore various technical alternatives, detailed test and validation plans, and most importantly the design plans for the following phase of the project. 
A significant activity performed at this phase is the acquisition and preparation of the project resources, such as physical labs, input test data from candidate repositories, access to networked systems to acquire the test data, and installation of hardware and software tools.

The development phase of the project uses the design plans established in the suggestion phase to focus on construction of the first iteration of the solution. Experiences during this phase also drive refinements to the project schedule, detailed test and validation plans, risks, and the design plan of the solution. The deliverable of this phase is the first generation of the UML profile and extensions to the existing migration tool to support parsing and using models created with the profile. The test specification models are used to move a larger portion of the candidate source repositories to the target repository. After the conclusion of this phase, the project may return to an earlier phase to refine plans or project scope based on what is learned during development of the solution. If acceptable progress is demonstrated at the conclusion of this phase, the project will continue to the evaluation phase.

The evaluation phase focuses most of its effort on formal testing and validation of the solution produced in the development phase. The evaluation of the work against the thesis includes working with specific individuals to determine whether this is indeed an approach that will save time and simplify the specification of data migration and transformation rules. Documentation of the testing outcome and comparison to the anticipated outcome may cause the project to return to an earlier phase to adjust scope or expectations.
If it is decided the project has met its goals, or that the goals are not achievable by the project's approach, the effort will conclude.

The conclusion of this project will involve final documentation of the outcome and packaging of all the project's artifacts for future research studies. The project's artifact package will be placed in a public location for others to review and use.

As mentioned above, this project will require several physical resources and cooperation from technical experts. The study will require access to two or more legacy data repositories as sources of information. The source repositories should ideally utilize different underlying database technologies and implement different information schemas, to test variations of the proposed modeling language as it is developed and tested. Access to the technical administrators of the source repositories will be necessary to understand the repositories' schemas and obtain read-only access or a copy of their information. It would be preferred that the repositories be accessed read-only via a network, or that the information be relocated to a computing system directly available to the research project. The study will require at least one server system running IBM Rational Asset Manager. This system will act as the target data repository. Data transformed from the source repositories will migrate into Rational Asset Manager, driven by a migration application that uses the visual specifications as direction. The study will also require a single workstation with IBM Rational Software Architect for development of the visual modeling language and extension of the existing migration programs to read the visual models and perform the migration work from the source to target repositories.

Determining the project's success requires measuring the savings in time to build a migration solution with and without visual specifications.
The migration problems need to be varied as well, from simple one-to-one mappings from a single source repository to a single target repository, to more exotic migration scenarios, such as consolidating multiple source repositories into a single target repository and re-mapping values from the source to the target. Additionally, the reusability of previous solutions will be measured. This aspect of the project's outcome will quantify how easily a specification model can be reused from a previous solution.

Definition of the End Product of Project

This project will produce several artifacts during its life and at conclusion. Most importantly, a UML profile will be developed that can be imported into Rational Software Architect or Rational Software Modeler. The profile will include usage documentation and example models that demonstrate the various types of rules that may be specified in a visual model and how such a model is read and executed by the migration program. The migration program will be a reference implementation of an existing tool that can read a model configured with the UML profile and generate events for extension points on which to act.

In addition to technical deliverables, all project planning and process artifacts, such as the project plan, design plan, risks and mitigation notes, test criteria and test result data, will be made available. The project will conclude with the development of at least one article or paper for submission to a research journal to document the project's challenges and achievements, and an annotated bibliography of secondary research related to the project will be provided.

If successful, this project will contribute to simplifying part of the process of developing a migration solution without having to recreate the existing tool used today. The project will add a new component to the migration tool, and consumers of the tool can choose to use this new component.
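To make the idea concrete, the kind of rule the profile is meant to capture can be illustrated in code. The sketch below is Python with entirely hypothetical field names and transformations; the real deliverable is a UML profile interpreted by the migration tool, not this code. It shows a declarative source-to-target mapping, which is roughly what a model built with the profile would specify.

```python
# Each rule maps a source field to a target field with an optional
# transformation. The field names below are hypothetical examples of
# migrating a support knowledge base into an asset repository.
RULES = [
    {"source": "ticket_title", "target": "asset_name", "transform": str.strip},
    {"source": "ticket_body", "target": "asset_description", "transform": None},
    {"source": "severity", "target": "priority",
     "transform": lambda s: {"1": "high", "2": "medium", "3": "low"}[s]},
]

def migrate_record(source_record, rules):
    """Apply each mapping rule to one source record, producing a target record."""
    target = {}
    for rule in rules:
        value = source_record[rule["source"]]
        if rule["transform"] is not None:
            value = rule["transform"](value)
        target[rule["target"]] = value
    return target
```

The point of the visual approach is that a table like RULES would be drawn and maintained as a model rather than written by hand, and the migration program would read the model to drive the same loop.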
An assumption made in this research project is that the UML profile developed as a deliverable will be an approachable alternative for less experienced IT professionals and software engineers. This will be a challenge for the project's results.

References

Devos, F., & Steegmans, E. (2005). Specifying business rules in object-oriented analysis. Software and Systems Modeling, 4, 297-309. DOI 10.1007/s10270-004-0064-z.

Zulkernine, M., Graves, M., Umair, M., & Khan, A. (2007). Integrating software specifications into intrusion detection. International Journal of Information Security, 6, 345-357. DOI 10.1007/s10207-007-0023-0.

December 10, 2008 · 10 min · 2066 words · Jim Thario

What are the elements of a good Web page design?

I think this can be answered from the user's perspective and from the developer's perspective. A page can be considered well designed if it looks good, works with many browsers, and can be maintained by others besides the original author.

From the user's perspective, I was able to come up with the following list:

- Accessibility - the page is compatible with screen readers and alternate input devices. At work we recently went through a remediation process with one of our web sites. We needed to assure HR the site was compatible with accessibility utilities. I think about 75% of this can be handled by writing good HTML source. In addition, testing tools such as WebKing can help identify other problems that can prevent the web code from working in certain situations.
- Navigation - the page is easy to leave. Another way to say it is that the page should have the necessary links to navigate away to other major areas, if it is part of a larger web site.
- Placement - the page is easy to find in the site and navigate to.
- Compatibility - the page can be loaded and properly displayed in popular browsers. In e-commerce, it is important to give this item some amount of priority. You want to encourage visitors to browse and buy regardless of the specific brand or version of their technical resources. This is also important to consider if your viewer base consists of users with handhelds or Internet-capable cell phones.
- Organization - information on the page is presented in a visually appealing way, including text style choice and page positioning.

From the developer's perspective:

- Documentation - comments in the code or a short design note help the author remember what they did and help others maintain the page later.
- Organization - the page's source is consistently organized and formatted into blocks. I think with today's tools that can reformat source code, this is less of a problem.

August 7, 2005 · 2 min · 330 words · Jim Thario

Name two differences between designing for a Web page and for print-based media

The difference that draws my attention is that print media is static - ink or other compound is bonded to a page and is permanently fixed. Unlike a web site, there is no hope of that print jumping up and rearranging itself if the user wants to see a different layout. The first difference, then, is that web publishing has the possibility of introducing dynamic content to the user in a number of different ways. Web sites used for e-commerce have the ability to show customized content based on the user's past purchase history, or on a particular preference for how the page is arranged. My Yahoo is another example, where each user can have a customized view of the information they choose.

The other primary difference I can think of between web and print media is that designing for a newspaper, for example, is a controlled process from end to end, unlike a web page, in which the rendering and quality of the final product is out of the control of the publisher of the content. The newspaper publisher chooses layout, fonts and other aspects of style just as a web publisher would, but the similarity stops there. A print publisher also chooses the rendering mechanism and the paper it is printed on. In web publishing, that last step is somewhat variable, in that browser differences have the possibility of producing different output from the same HTML code.

August 7, 2005 · 2 min · 239 words · Jim Thario