Mobile Crowd Sensing

Authored by: Manoop Talasila , Reza Curtmola , Cristian Borcea

Handbook of Sensor Networking

Print publication date:  January  2015
Online publication date:  January  2015

Print ISBN: 9781466569713
eBook ISBN: 9781466569720

10.1201/b18001-5

 

Abstract

Mobile sensors such as smartphones and vehicular systems represent a new type of geographically distributed sensing infrastructure that enables mobile people-centric sensing (Riva and Borcea, 2007; Sensor Lab at Dartmouth, 2013; Urban Sensing at UCLA, 2013). According to a forecast for global smartphone shipments from 2010 to 2017, more than 1.5 billion phones are expected to be shipped worldwide (Statista, 2014). Smartphones already have several sensors: camera, microphone, GPS, accelerometer, digital compass, light sensor, and Bluetooth as a proximity sensor (Mardenfeld et al., 2010; Reality Mining Project, 2014), and in the near future, they are envisioned to include health and pollution monitoring sensors (Garmin, 2014; MIT News, 2014). Vehicular systems have access to several hundred sensors embedded in cars, and recent vehicles come equipped with new types of sensors such as radar and camera. Compared to the tiny, energy-constrained sensors of static sensor networks, smartphones and vehicular systems can support more complex computations, have significant memory and storage, and offer direct access to the Internet. Therefore, mobile people-centric sensing can be a scalable and cost-effective alternative to deploying static wireless sensor networks for dense sensing coverage across large areas.


3.1  Introduction

Mobile sensors such as smartphones and vehicular systems represent a new type of geographically distributed sensing infrastructure that enables mobile people-centric sensing (Riva and Borcea, 2007; Sensor Lab at Dartmouth, 2013; Urban Sensing at UCLA, 2013). According to a forecast for global smartphone shipments from 2010 to 2017, more than 1.5 billion phones are expected to be shipped worldwide (Statista, 2014). Smartphones already have several sensors: camera, microphone, GPS, accelerometer, digital compass, light sensor, and Bluetooth as a proximity sensor (Mardenfeld et al., 2010; Reality Mining Project, 2014), and in the near future, they are envisioned to include health and pollution monitoring sensors (Garmin, 2014; MIT News, 2014). Vehicular systems have access to several hundred sensors embedded in cars, and recent vehicles come equipped with new types of sensors such as radar and camera. Compared to the tiny, energy-constrained sensors of static sensor networks, smartphones and vehicular systems can support more complex computations, have significant memory and storage, and offer direct access to the Internet. Therefore, mobile people-centric sensing can be a scalable and cost-effective alternative to deploying static wireless sensor networks for dense sensing coverage across large areas.

Smartphones have already enabled a plethora of mobile sensing applications (Abdelzaher et al., 2007; Campbell et al., 2006; Honicky et al., 2008; Mohan et al., 2008) in gaming, smart environments, surveillance, emergency response, and social networks. In particular, activity recognition through mobile sensing and wearable sensors has led to many health-care applications, such as fitness monitoring, elder care support, and cognitive assistance (Choudhury et al., 2008). The expanding sensing capabilities of mobile phones have gone beyond the sensor networks’ focus on environmental and infrastructure monitoring: people are now the carriers of sensing devices, the sources, and the consumers of sensed events (Azizyan et al., 2009; Kansal et al., 2007; Lu et al., 2009; Miluzzo et al., 2008; Siewiorek et al., 2003).

Despite its benefits, mobile people-centric sensing has two main issues: (1) incentivizing the participants and (2) reliability of the sensed data. Mobile crowd sensing has been proposed as a solution for the first issue. A mobile crowd sensing platform plays a role similar to the one played by Amazon’s Mechanical Turk (MTurk) (Amazon Mechanical Turk, 2013) or ChaCha (2013) in crowdsourcing (Gupta et al., 2012; Narula et al., 2011): it allows individuals and organizations (clients) to access a large number of people (providers) willing to execute simple sensing tasks for which they are paid. Unlike MTurk’s tasks, which are executed on personal computers and always require human work, mobile sensing tasks are executed on mobile devices that satisfy certain context/sensing requirements (e.g., location, time, and specific sensors) and often do not require human work (i.e., automatic sensing tasks). Many organizations and individuals could act as crowd sensing clients. For example, local, state, and federal agencies could greatly benefit from this new sensing infrastructure, as it gives them access to valuable data from the physical world. Commercial organizations may be very interested in collecting mobile sensing data to learn more about customer behavior. Researchers in many fields of science and engineering could collect large amounts of sensed data for various experiments. Ultimately, all of us could act as clients through many mobile apps (e.g., to find out the traffic conditions ahead on the highway).

Regarding the data reliability issue, the sensed data submitted by participants in crowd sensing are not always reliable: participants can submit false data to earn money without executing the actual task. This problem will be illustrated in Section 3.4, and our solution will be discussed extensively in this chapter.

3.2  Mobile Crowd Sensing Applications

In the following, we present several application domains that can benefit from mobile crowd sensing as well as a number of applications (some of them already prototyped) for each domain.

3.2.1  Smart Cities

Worldwide, high population density and a very large number of interconnected issues make effective city management a challenging task. Therefore, several significant government and industrial research efforts are currently underway to exploit the full potential of sensing data: smart city systems aim to improve city efficiency by deploying smarter grids and water management systems (London’s Water Supply Monitoring, 2012) and, ultimately, to foster social progress (IBM Smarter Planet, 2014). For example, the government of South Korea is building the Songdo Business District, a green low-carbon area that aims at becoming the first full-scale realization of a smart city (Songdo Smart City, 2014). Despite their potential benefits, many of these efforts could be costly. Crowd sensing can reduce the costs associated with large-scale sensing and, at the same time, provide additional human-related data. For example, our recent work on ParticipACTION (Cardone et al., 2013) proposes to leverage crowd sensing to directly engage citizens in the management of smart cities; people can actively participate in sensing campaigns to make their cities safer and cleaner.

3.2.2  Road Transportation

Departments of transportation can collect fine-grained and large-scale data about traffic patterns in the country/state using location and speed data provided by GPS sensors embedded in cars. These data can then be used for traffic engineering, construction of new roads, etc. Drivers can receive real-time traffic information based on the same type of data collected from smartphones (Mobile Millennium Project, 2014). Drivers can also benefit from real-time parking data collected from cars equipped with ultrasonic sensors (Mathur et al., 2010). Transportation agencies or municipalities can efficiently collect pothole data using GPS and accelerometer sensors (Eriksson et al., 2008) in order to quickly repair the roads. Similarly, photos (i.e., camera sensor data) taken by people during/after snowstorms can be analyzed automatically to prioritize snow cleaning and removal.

3.2.3  Health Care and Well-Being

Wireless sensors worn by people for heart rate monitoring (Garmin, 2014) and blood pressure monitoring (MIT News, 2014) can communicate their information to the owners’ smartphones. Typically, this is done for both real-time and long-term health monitoring of individuals. Mobile sensing can leverage these existing data into large-scale health-care studies that seamlessly collect data from various groups of people, which can be selected based on location, age, etc. A specific example involves collecting data from people who regularly eat fast food. Phones can perform activity recognition and determine the level of physical exercise done by people, which has been shown to directly influence people’s health. As a result of such a study in a city, the municipality may decide to create more bike lanes to encourage people to do more physical activities. Similarly, phones can determine the level of social interaction of certain groups of people (e.g., using Bluetooth scanning, GPS, or audio sensor). For example, a university may discover that students (or students from certain departments) are not interacting with each other enough; consequently, it may decide to organize more social events on campus. The same mechanism coupled with information from human sensors can be used to monitor the spreading of epidemic diseases.

3.2.4  Marketing/Advertising

Real-time location or mobility traces/patterns can be used by vendors/advertisers to target certain categories of people (Google Mobile Ads, 2014; Mobads, 2014). Similarly, they can run context-aware surveys (as a function of location, time, etc.). For example, one question in such a survey could ask people attending a concert which artists they would like to see in the future.

3.3  Mobile Crowd Sensing Applications and Platforms

Recently, several mobile crowdsourcing projects tried to leverage traditional crowdsourcing platforms for mass adoption of people-centric sensing: Twitter (2014) has been used as a publish/subscribe medium to build a crowdsourced weather radar and a participatory noise-mapping application (Demirbas et al., 2010); mCrowd (Yan et al., 2009) is an iPhone-based platform that was used to build an image search system for mobile phones, which relies on Amazon’s MTurk (Amazon Mechanical Turk, 2013) for real-time human validation (Yan, 2010). This has the advantage of leveraging the popularity of existing crowdsourcing platforms (tens of thousands of available workers) but does not allow for truly mobile sensing tasks to be performed by workers (i.e., tasks that can only be performed using sensors on mobile phones). PEIR is an application for participatory sensing that exploits mobile phones to evaluate if users have been exposed to airborne pollution, enables data sharing to encourage community participation, and estimates the impact of individual user/community behaviors on the surrounding environment (Mun et al., 2009). Medusa is a mobile crowd sensing framework that uses a high-level domain-specific programming language to define sensing tasks and workflows that are promoted with monetary incentives to encourage user participation (Ra et al., 2012). So far, none of these existing applications and platforms have addressed the reliability of the sensed data.

3.4  Data Reliability Issues in Sensed Data

By leveraging smartphones, we can seamlessly collect sensing data from various groups of people at different locations using mobile crowd sensing. As the sensing tasks are associated with monetary incentives, participants may try to fool the mobile crowd sensing system to earn money. Therefore, there is a need for mechanisms to efficiently validate the collected data. In the following, we motivate the need for such a mechanism by presenting several scenarios involving malicious behavior.

3.4.1  Traffic Jam Alerts

Suppose that the Department of Transportation uses a mobile crowd sensing system to collect alerts from people driving on congested roads and then distributes the alerts to other drivers (Herrera et al., 2010; White et al., 2011). In this way, drivers on the other roads can benefit from real-time traffic information. However, the system has to ensure the alert validity because malicious users may try to proactively divert the traffic on roads ahead in order to empty these roads for themselves.

3.4.2  Citizen Journalism

Citizens can report real-time data in the form of photos, video, and text from public events or disaster areas (Photo Journalism, 2013; Toyama et al., 2003). In this way, real-time information from anywhere across the globe can be shared with the public as soon as the event happens. But malicious users may try to earn easy money by claiming that an event is happening at a certain location while being somewhere else.

3.4.3  Environment

Environment protection agencies can use pollution sensors installed in phones to map with high accuracy the pollution zones around the country (Intel Labs, 2010; Sensordrone, 2013). However, participants may try to hurt business competitors by submitting sensed pollution data associated with false locations, thus faking pollution in those areas.

Ultimately, the validation of sensed data is important in a mobile crowd sensing system to provide confidence to its clients who use the sensed data. However, it is challenging to validate each and every sensed data point of each participant because sensing measurements are highly dependent on context. One approach to handle this issue is to validate the location associated with the sensed data point in order to achieve a certain degree of reliability on the sensed data. Still, we need to overcome a major challenge: how to validate the location of data points in a scalable and cost-effective way without help from the wireless carrier? Let us note that wireless carriers may not help with location validation for legal reasons related to user privacy or even commercial interests.

To achieve reliability on participants' location data, there are a few traditional solutions such as using Trusted Platform Modules (TPMs) (2013) on smartphones or duplicating the tasks among multiple participants. However, these solutions cannot be used directly for a variety of reasons. For example, it is not cost effective to have TPMs on every smartphone, while task replication may not be feasible at some locations due to a lack of additional users there. Another solution is to verify location through the use of secure location verification mechanisms (Capkun and Hubaux, 2005; Capkun et al., 2006; Sastry et al., 2003; Talasila et al., 2010) in real time when the participant is trying to submit the sensing data location. Unfortunately, this solution requires infrastructure support or adds significant overhead on user phones if it is applied for each sensed data point.

The rest of the chapter is organized as follows: Section 3.5 presents the overview of McSense, our mobile crowd sensing platform, and its prototype implementation. Section 3.6 describes our improving location reliability (ILR) scheme to achieve data reliability in McSense and analyzes ILR’s security. The experimental evaluation and simulation results for ILR are presented in Sections 3.7 and 3.8, respectively. In Section 3.9, we discuss a number of lessons learned from our McSense field study as well as potential improvements for ILR. Finally, Section 3.10 concludes the chapter.

3.5  McSense: A Mobile Crowd Sensing Platform

We have designed and implemented McSense (Talasila et al., 2013), a mobile crowd sensing platform that allows clients to collect many types of sensing data from smartphones carried by mobile users. The interacting entities in our mobile crowd sensing architecture are as follows:

  • McSense: A centralized mobile crowd sensing system that receives sensing requests from clients and delivers them to providers; these entities are defined next.
  • Client: The organization or group who is interested in collecting sensing data from smartphones using the mobile crowd sensing system.
  • Provider: A mobile user who participates in mobile crowd sensing to provide the sensing data requested by the client.

3.5.1  System Architecture and Processes Involved

The architecture of McSense, illustrated in Figure 3.1, has two main components: (1) the server platform that accepts tasks from clients and schedules the individual tasks for execution at mobile providers and (2) the mobile platform (at the providers) that accepts individual tasks from the server, performs sensing, and submits the sensed data to the server. The communication among all these components takes place over the Internet. Next, we discuss the overall process in more detail.

Figure 3.1   McSense architecture.

3.5.1.1  User Registration

The McSense application on the smartphones shows a registration screen for first-time users, prompting them to enter an e-mail address and a password. During the registration process, the user phone’s mobile equipment identifier (MEID) is captured and saved in the server’s database along with the user’s e-mail address and password. We chose to store the phone’s MEID in order to restrict one user registration per device. In addition, the server also avoids duplicate registrations when users try registering with the same e-mail address again.
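The two uniqueness checks described above (one registration per device, no duplicate e-mail addresses) can be sketched as follows. This is an illustrative sketch only: the function and field names are invented here, and McSense actually stores these records in a server-side database rather than in memory.

```python
# Illustrative sketch of the McSense registration uniqueness checks.
# The in-memory list stands in for the server's database; all names
# (register_user, the record fields) are hypothetical.

registered = []  # records: {"meid": ..., "email": ..., "password": ...}

def register_user(meid, email, password):
    """Reject a registration if the device (MEID) or e-mail is already used."""
    for rec in registered:
        if rec["meid"] == meid:
            return "rejected: one registration per device"
        if rec["email"] == email:
            return "rejected: e-mail already registered"
    registered.append({"meid": meid, "email": email, "password": password})
    return "registered"
```

A second registration attempt from the same phone (same MEID) or with the same e-mail address is rejected, matching the behavior described above.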

3.5.1.2  Posting New Sensing Tasks

New sensing tasks can be posted by clients using a web interface that’s running on the McSense server. The sensing task details are entered on this web page by the client and submitted to the server’s database. Once a new task is posted, the background notification service running on the provider’s phone identifies the new available tasks and notifies the provider with a vibrate action on the phone. Providers can check the notification and can open the McSense application to view the new available tasks. When the application is loaded, the providers can see four tabs (Available, Accepted, Completed, and Earnings). The providers can view the list of tasks in the respective tabs (Figure 3.2a) and can click on each task from the list to view the entire task details (type, status, description, accepted time, elapsed time, completion time, expiration time, and payment amount).

3.5.1.3  Life Cycle of a Task

The life cycle starts from the Available tasks tab. When a provider selects an available task and clicks on the Accept button, the task is moved to the Accepted tab. Once a task is accepted, then that task is not available to others anymore (Figure 3.2b). When the accepted task is completed according to its requirements, the task is moved to the Completed tasks tab. Finally, the providers view their aggregated total dollars earned for successfully completed tasks under the Earnings tab. If the accepted task expires before completing successfully according to its requirements, it is moved to the Completed tasks tab and marked as unsuccessfully completed. The providers do not earn money for the tasks that are completed unsuccessfully.
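The life cycle above amounts to a small state machine; a minimal sketch follows. The state and event names are illustrative, not taken from the McSense code, and payment applies only to tasks reaching the successfully completed state.

```python
# Sketch of the task life cycle described above: Available -> Accepted ->
# Completed (successfully or not). Names are illustrative.

TRANSITIONS = {
    ("available", "accept"): "accepted",         # provider clicks Accept
    ("accepted", "complete"): "completed",       # finished per requirements; paid
    ("accepted", "expire"): "completed_failed",  # expired first; not paid
}

def advance(state, event):
    """Return the next state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")
```

Note that an available task cannot be completed directly: it must be accepted first, which is also what removes it from other providers' Available tabs.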

Figure 3.2   McSense android application showing tabs (a) and task screen for a photo task (b).

3.5.1.4  Background Services on Phone

When the network is not available, a completed task is marked as pending upload. A background service on the phone periodically checks for the network connection. When the connection becomes available, the pending data are uploaded, and finally, these tasks are marked as successfully completed. If the provider phone is restarted, either manually or due to a mobile OS crash, then all in-progress sensing tasks are automatically resumed by the Android BroadcastReceiver service registered for the McSense application. Furthermore, the Accepted and the Completed tabs' task lists are cached locally and are synchronized with the server. If the server is not reachable, the users can still see the tasks that were last cached locally.

3.5.2  Prototype Implementation

The McSense application, shown in Figure 3.2, has been implemented in Android and is compatible with smartphones running Android OS 2.2 or higher. The application was tested successfully using Motorola Droid 2 phones, which have 512 MB RAM, a 1 GHz processor, Bluetooth 2.1, Wi-Fi 802.11 b/g/n, 8 GB on-board storage, and 8 GB microSD storage. The McSense (2013) Android application was deployed to Google Play (2014) to make it available to campus students. The server side of McSense is implemented in Java/J2EE using the model–view–controller (MVC) framework. The Derby database is used to store the registered user accounts and assigned task details. The server-side Java code is deployed on GlassFish, an open-source application server.

3.5.3  User Study and Tasks Developed for McSense

To evaluate McSense and its data reliability protocol (see Section 3.7), we ran a user study at our campus for approximately 2 months. Over 50 students participated in this study. Participants were asked to download the McSense application from the Android market and install it on their phones. On the application server, we periodically posted various tasks. Some tasks had a monetary value associated with them, paid upon the task’s successful completion; a few other tasks offered no monetary incentives, in order to observe provider participation when collecting free sensing data. As tasks were submitted to the application server, they also appeared on the phones where our application had been installed. Each task contained a task description, its duration, and a certain amount of money. The students used their phones to sign up to perform the tasks. Upon successful completion of a task, the students accumulated credits (payable in cash after the study terminated). The sensing tasks that we chose to use for this study fall into two categories:

  1. Manual tasks, for example, photo tasks
  2. Automated tasks, for example, sensing tasks using accelerometer and GPS sensors and sensing tasks using Bluetooth

3.5.3.1  Manual Photo Sensing Task

Registered users are asked to take photos from events on campus. Once the user captures a photo, she needs to click on the Complete Task button to upload the photo and to complete the task. When the photo is successfully uploaded to the server, the task is considered successfully completed. These uploaded photos can be used by the university news department for current news articles.

3.5.3.2  Automated Sensing Task Using Accelerometer and GPS Sensors

The accelerometer sensor readings and GPS location readings are collected at 1 min intervals. The sensed data are collected along with the user ID and timestamp, and they are stored in a file in the phone’s internal storage, which can be accessed only by the McSense application. These data are then uploaded to the application server on task completion (a completed task consists of many data points). Using the collected accelerometer and GPS readings, we can identify user activities such as walking, running, and driving or locations that are important to the user. By observing the daily activities, we could find out how much exercise each student is getting daily and derive interesting statistics such as which departments have the most active and healthy students.
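The kind of activity inference mentioned above can be sketched with a simple rule-based classifier over accelerometer variance and GPS speed. The thresholds below are invented for illustration; the chapter does not specify a classifier, and a real system would train one on labeled traces.

```python
# Illustrative sketch (not McSense's method) of inferring activity from
# accelerometer variance (m/s^2 units, made-up thresholds) and GPS speed.

def classify_activity(accel_variance, gps_speed_mps):
    """Guess the user's activity from two simple features."""
    if gps_speed_mps > 8.0:       # faster than a person on foot: driving
        return "driving"
    if accel_variance > 6.0:      # large, fast body movement
        return "running"
    if accel_variance > 1.0:      # moderate periodic movement
        return "walking"
    return "idle"
```

Summing the time spent in "walking" and "running" over a day would give the per-student exercise estimate the text describes.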

3.5.3.3  Automated Sensing Task Using Bluetooth Radio

In this automated sensing task, the user’s Bluetooth radio is used to perform periodic Bluetooth scans (every 5 min) until the task expires; on its completion, the task reports the discovered Bluetooth devices with their locations back to the McSense server. The sensed data from Bluetooth scans can provide interesting social information such as how often McSense users are near each other. The scans can also identify groups who are frequently together, in order to determine the level of social interaction of certain people (Mardenfeld et al., 2010).

3.5.3.4  Automated Resources Usage Sensing Task

In this automated sensing task, the usage of the user’s smartphone resources is sensed and reported back to the McSense server. Specifically, the report contains the mobile applications’ usage, the network usage, the periodic Wi-Fi scans, and the battery level of the smartphone. While logging the network usage details, this automated task also logs the overall device network traffic and the per-application network traffic.

3.6  ILR in Crowd-Sensed Data

This section presents ILR, a scheme that improves the location reliability of mobile crowd-sensed data with minimal human effort. We also describe the validation process used by McSense to detect false location claims from malicious providers.

3.6.1  Assumptions

We assume that the sensed data are already collected by McSense from providers at different locations. However, these sensed data are awaiting validation before being sent to the clients who requested these data. We assume that every provider performs Bluetooth scans at each location where it is collecting sensing data. We also assume that the sensed data reported by providers for a given task always include location, time, and a Bluetooth scan. Note that Bluetooth scans can have a much lower frequency than the sensor sampling frequency.
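Under these assumptions, each report handled by ILR carries three things besides the sensed value: a location, a timestamp, and the Bluetooth scan taken there. A minimal record type makes this concrete; the field names are illustrative, not McSense's schema.

```python
# Sketch of the per-task report assumed by ILR: every sensed data point
# includes location, time, and the Bluetooth scan taken at that spot.
# Field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SensedDataPoint:
    task_id: str
    provider_id: str
    location: tuple                               # (latitude, longitude)
    timestamp: float                              # Unix time of the reading
    bt_scan: list = field(default_factory=list)   # provider IDs seen in the scan
```

Because Bluetooth scans may run less often than the sensors are sampled, several consecutive data points from one provider may share the same `bt_scan`.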

3.6.2  Adversarial Model

We assume all the mobile devices are capable of determining their location using GPS. We also assume McSense is trusted and the communication between mobile users and McSense is secure. In our threat model, we consider that any provider may act maliciously and may lie about their location.

A malicious provider can program the device to spoof a GPS location (Humphreys et al., 2008) and start providing wrong location data for all the crowd sensing data requested by clients. Accordingly, we consider three threat scenarios: (1) the provider does not submit the location and Bluetooth scan with a sensing data point, (2) the provider submits a Bluetooth scan associated with a sensing task but claims a false location, and (3) the provider submits both a false location and a fake Bluetooth scan associated with a sensing data point. In Section 3.6.4, we will discuss how these scenarios are addressed by ILR.

We do not consider colluding attack scenarios, in which a malicious provider colludes with other providers to appear in their Bluetooth colocation data. In practice, it is not easy for a malicious provider to employ a colluding user at each sensing location. Additionally, these colluding attacks can be mitigated by increasing the minimum node degree requirement on the colocation data of each provider (i.e., a provider P must appear in the Bluetooth scans of at least a minimum number of other providers at her claimed location and time). As this requirement grows, it becomes difficult for a malicious provider to create a falsely high node degree, because she would have to collude with many people actually colocated at the given location and time.

Finally, the other class of attacks that is out of scope for our current scheme is attacks in which a provider submits the right location and Bluetooth scan associated with this sensing task but is able to fool the sensors to create false readings (e.g., using the flame of a lighter to create the false impression of a high temperature).

3.6.3  ILR Design

The main idea of our scheme is to corroborate data collected from manual (photo) tasks with colocation data from Bluetooth scans. We describe next an example of how ILR uses the photos and colocation data.

3.6.3.1  Example of ILR in Action

Figure 3.3 maps the data collected by several different tasks in McSense. The figure shows 9 photo tasks (marked A–I) and 15 sensing tasks (marked 1–15) performed by different providers at different locations. For each of these tasks, providers also report neighbors discovered through Bluetooth scans. All these tasks are grouped into small circles using colocation data found in Bluetooth scans within a time interval t. For example, photo task A and sensing tasks 1, 2, and 3 are identified as colocated and grouped into one circle because they are discovered in each other’s Bluetooth scans.

In this example, McSense does not need to validate all the photo tasks mapped in the figure. Instead, McSense first considers the photo tasks with the highest node degree (NodeDegree) by examining the colocated groups for photo task providers who have seen the highest number of other providers in Bluetooth scans around them. In this example, we consider NodeDegree ≥ 3. Hence, we see that photo tasks A, B, C, D, and G have discovered the highest number of providers around their locations. Therefore, McSense chooses these five photo tasks for validation. These selected photo tasks are validated either manually or automatically (we discuss this in detail in Section 3.6.3.2). When validating these photo tasks, invalid photos are rejected, and McSense ignores the Bluetooth scans associated with them. If a photo is valid, then McSense considers the location of the validated photo as trusted because the validated photo was actually taken from the physical location requested in the task. However, it is not always possible to categorize every photo as valid or fake. Therefore, some photos are categorized as unknown when a decision cannot be made.

In this example, we assume that these five selected photos are successfully validated through manual verification. Next, using the transitive trust property, McSense extends the location trust of validated photos to other colocated providers’ tasks, which are found in the Bluetooth scans of the A, B, C, D, and G photo tasks. For example, A extends the trust to the tasks 1, 2, and 3, while B extends the trust to tasks 4, 5, and 6. Then task 6 extends its trust to tasks 13 and 14. Finally, at the end of this process, McSense has 21 successfully validated tasks out of a total of 24 tasks. In this example, McSense required manual validation for just 5 photo tasks, but using the transitive trust property, it was able to extend the trust to 16 additional tasks automatically. Only 3 tasks (E, F, and 12) are not validated as they lack colocation data around them.

Figure 3.3   Example of McSense collected photo tasks [A–I] and sensing tasks [1–15] on the campus map, grouped using Bluetooth discovery colocation data.

3.6.3.2  ILR Phases

The ILR scheme has two phases, as shown in Figure 3.4. Phase 1, photo selection, selects the photo tasks to be validated. Phase 2, transitive trust, extends the trust to data points colocated with the tasks selected in Phase 1.

3.6.3.2.1  Phase 1—Photo Selection

Using collected data from the Bluetooth scans of providers, ILR constructs a connected graph of colocated data points for a given location and within a time interval t (these are the same groups represented in circles in Figure 3.3). From these graphs, we select the photo tasks that have node degrees greater than a threshold (NodeDegree).
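Phase 1 can be sketched as follows, assuming each report carries a kind (photo or sensing), a timestamp, and a Bluetooth scan listing other task IDs. The data layout and function name are illustrative; the chapter does not fix them.

```python
# Sketch of ILR Phase 1: count, for each photo task, the reports that are
# colocated with it (mutual Bluetooth discovery within a time window) and
# select those meeting the NodeDegree threshold. Names are illustrative.

def select_photos(reports, time_window, min_degree):
    """reports: dict task_id -> (kind, timestamp, bt_scan of task_ids).
    Returns the photo task IDs whose node degree >= min_degree."""
    selected = []
    for tid, (kind, ts, scan) in reports.items():
        if kind != "photo":
            continue
        # node degree: reports within the window that saw us or that we saw
        degree = sum(
            1 for other, (_, ots, oscan) in reports.items()
            if other != tid and abs(ots - ts) <= time_window
            and (other in scan or tid in oscan)
        )
        if degree >= min_degree:
            selected.append(tid)
    return selected
```

On data resembling Figure 3.3, a photo task seen by three sensing tasks passes a threshold of 3, while an isolated photo task (such as E or F) is never selected.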

Figure 3.4   The phases of the ILR scheme.

These selected photo tasks are validated either by humans or by applying computer vision techniques. For manual validation, McSense could rely on other users recruited from Amazon MTurk (Amazon Mechanical Turk, 2013), for example. In order to apply computer vision techniques, first we need to collect ground truth photos to train image recognition algorithms. One alternative is to have trusted people collect the ground truth photos. However, if the ground truth photos are collected through crowd sensing, then they have to be manually validated as well. Thus, reducing the number of photos that require manual validation is an important goal for both manual and automatic photo recognition. Once the validation is performed, the location of the validated photo task is now considered to be reliable because the validated photos have been verified to be taken from the physical location requested in the task. For simplicity, we will refer to the participants who contributed valid photo tasks with reliable location and time as validators.

3.6.3.2.2  Phase 2—Transitive Trust

In this phase, we rely on the transitive trust property and extend the trust established in the validator’s location to other colocated data points. In short, if the photo is valid, the trust is extended to colocated data points found in the Bluetooth scan of the validated photo task. In the current scheme, trust is extended until all colocated tasks are trusted or no other task is found; alternately, McSense can set a time to live (TTL) on extended trust. The following two steps are performed in this phase:

  • Step 1: Mark colocated data points as trusted: For each task colocated with a validated photo task, mark the task’s location as trusted.
  • Step 2: Repeat Step 1 for each newly validated task until all colocated tasks are trusted or no other task is found.
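The two steps above amount to a transitive closure over the colocation graph and can be sketched as a breadth-first traversal; the adjacency-set representation of the graph is an assumption for illustration:

```python
# Sketch of ILR Phase 2 (transitive trust): trust spreads from each
# validated photo task to every task reachable through colocation edges.
from collections import deque

def extend_trust(graph, validated_ids):
    """Return the set of task ids marked trusted after Phase 2."""
    trusted = set(validated_ids)
    queue = deque(validated_ids)
    while queue:
        current = queue.popleft()
        for neighbor in graph.get(current, ()):   # colocated tasks
            if neighbor not in trusted:           # Step 1: mark as trusted
                trusted.add(neighbor)
                queue.append(neighbor)            # Step 2: repeat
    return trusted
```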

3.6.3.3  Validation Process

After the two phases of ILR are executed, all the colocated data points have been validated. If any malicious provider falsely claims one of the validated tasks' location at the same time, the false claim will be detected in the validation step. Executing the validation process shown in Algorithm 3.1 detects wrong location claims around the already validated location data points. For instance, if we consider task 12 from Figure 3.3 as a malicious provider claiming a false location exactly at photo task A's location and time, then task 12 will be detected by validationProcess(), as it does not appear in the Bluetooth scans of photo task A. In addition to the validation process, McSense also performs a basic spatiotemporal correlation check to ensure that a provider is not claiming to be at different places at the same time.

Algorithm 3.1: ILR Validation Pseudo-Code

Notation:

  • TList: List of tasks that are not yet marked trusted after completing the first two phases of ILR.
  • T: Task submitted by a provider.
  • L: Location of the photo or sensing task (T).
  • t: Timestamp of the photo or sensing task (T).
  • hasValidator(L, t): A function that checks if any validated data points already exist at task T’s location and time.

validationProcess(): Run to validate the location of each task in TList

  1. for each task T in TList do
  2.   if hasValidator(L, t) == TRUE then
  3.     Update task T with false location claim at (L, t)
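A minimal Python transcription of Algorithm 3.1, assuming `has_validator` is supplied as a callable and each task record carries its claimed location and timestamp (the field names are illustrative):

```python
# Sketch of ILR's validationProcess(): any task still untrusted after
# Phases 1-2 whose claimed (location, time) overlaps a validator must be
# a false claim, since the validator's Bluetooth scan did not observe it.
def validation_process(t_list, has_validator):
    """Return the ids of tasks flagged with false location claims."""
    false_claims = []
    for task in t_list:                              # 1. for each task T in TList
        if has_validator(task["loc"], task["time"]): # 2. validator exists at (L, t)
            false_claims.append(task["id"])          # 3. mark T as a false claim
    return false_claims
```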

3.6.4  Security Analysis

The goal of the ILR scheme is to establish the reliability of the sensed data by validating the claimed location of the data points. In addition, ILR seeks to detect false claims made by malicious participants.

ILR is able to handle all three threat scenarios presented in Section 3.6.2. In the first threat scenario, when no location and Bluetooth scan are submitted along with the sensed data, the sensed data of that task are rejected, and the provider is not paid by McSense.

In the second threat scenario, when a provider submits her Bluetooth discovery with a false location claim, ILR detects the provider in her neighbors’ Bluetooth scans at a different location using the spatiotemporal correlation check and rejects the task’s data.

Finally, when a provider submits a fake Bluetooth discovery with a false location claim, ILR looks for any validator around the claimed location, and if it finds anyone, then the sensed data associated with the false location claim are rejected. If there is no validator around the claimed location, then the data point is categorized as unknown.

As discussed in Section 3.6.2, sensed data submitted by malicious colluding attackers could be filtered to a certain extent in McSense by setting the node degree threshold (NodeDegree) to the minimum node degree requirement requested by the client.

3.6.5  Related Work

Trusted hardware represented by the TPM (Dua et al., 2009; Gilbert et al., 2011; Trusted Platform Module, 2013; Xu et al., 2011) has been leveraged to design new architectures for trustworthy software execution on mobile phones (McCune et al., 2008; Nauman et al., 2010; Schneider et al., 2011). Recent work has also proposed architectures to ensure that the data sensed on mobile phones are trustworthy (Gilbert et al., 2010; Saroiu and Wolman, 2010). YouProve (Gilbert et al., 2011) combines a mobile device's trusted hardware with software to ensure that transformations performed on the sensed data by untrusted client applications are trustworthy and preserve the meaning of the source data. YouProve describes three alternatives for combining the trusted hardware with software: the first two extend the trusted codebase to include either the code for the transformations or the entire application, whereas the third builds trust in the code that verifies that transformations preserve the meaning of the source data.

Relying completely on TPM is insufficient to deal with attacks in which a provider is able to fool the sensors (e.g., using the flame of a lighter to create the false impression of a high temperature). Recently, there have also been reports of successful spoofing of civilian GPS signals (Humphreys et al., 2008).

Orthogonal to the work in ILR, task pricing also helps improve data quality. A recent paper (Lee and Hoh, 2010) presents pricing incentive mechanisms to achieve quality data in participatory sensing applications. In this work, participants are encouraged to participate in the sensing system through a reverse auction based on a dynamic pricing incentive mechanism in which users can sell their sensing data at their claimed bid price.

The LINK protocol (Talasila et al., 2010, 2014) was recently proposed for secure location verification without relying on location infrastructure support. LINK can provide stronger guarantees than ILR but has a number of drawbacks if used for mobile sensing. LINK requires a provider to establish Bluetooth connections with her colocated users at each sensing location, which increases latency and consumes more phone battery. In addition, LINK is executed in real time to verify the users' locations, whereas ILR is executed offline on the data collected through mobile crowd sensing. Therefore, employing ILR allows providers to submit sensed data quickly and consumes less phone battery.

3.7  Experimental Evaluation: Field Study

The providers (students shown in Table 3.1) registered with McSense and submitted data together with their user ID. Both phases of ILR and the validation process are executed on data collected from the providers. In these experiments, we acted as the clients collecting the sensed data.

Table 3.1   Demographic Information of the Students

  Total participants    58
  Males                 90%
  Females               10%
  Ages 16–20            52%
  Ages 21–25            41%
  Ages 26–35            7%

3.7.1  Evaluating the ILR Scheme

The location data are mostly collected from the university campus (a 0.5 mile radius area). The main goal of these experiments is to determine how efficiently the ILR scheme can help McSense validate the location data and detect false location claims. ILR considers the Bluetooth scans found within a 5 min interval of measuring the sensor readings for a sensing task.

Table 3.2 shows the total number of photo tasks submitted by students; only 204 photo tasks have Bluetooth scans associated with them. In this data set, we considered NodeDegree ≥ 1; therefore, we used all 204 photo tasks with Bluetooth scans in Phase 1 to perform manual validation. In Phase 2, we were then able to automatically extend the trust to 148 new location data points through the transitive closure property of ILR.

To capture the ground truth, we manually validated all the photos collected by McSense in this study and identified that we have a total of 45 fake photos submitted to McSense from malicious providers, out of which only 16 fake photo tasks have Bluetooth scans with false location claims. We then applied ILR to verify how many of these 16 fake photos can be detected.

We were able to catch four users who claimed wrong locations to make money with fake photos, as shown in Table 3.3. Since the total number of malicious users involved in the 16 fake photo tasks is 10, ILR was able to detect 40% of them. Finally, ILR is able to achieve this result by validating only 11% of the photos (i.e., 204 out of 1784).

3.7.2  Influence of the Task Price on Data Quality

In the field study performed at New Jersey Institute of Technology (NJIT), a few tasks were posted with a high price (ranging between $2 and $10) in order to observe the impact of price on the sensing tasks. We noticed a 15% increase in the task completion success rate for the high-priced sensing tasks compared with the low-priced sensing tasks (tasks priced at $1 or lower are considered low-priced). In addition, we noticed an improvement in data quality for the high-priced photo tasks, which yielded clear and focused photos compared with the low-priced photo tasks. Thus, our study confirms that task pricing influences data quality. This result also confirms that various task pricing strategies (Lee and Hoh, 2010) can be employed by McSense in parallel with the ILR scheme to ensure data quality for the sensing tasks.

Table 3.2   Photo Task Reliability

                                                               Number of Photo Tasks
  Total photos                                                 1784
  Photos with Bluetooth scans (manually validated in ILR)      204
  Trusted data points added by ILR                             148

Table 3.3   Number of False Location Claims and Cheating Participants Detected by ILR

                                    Detected by ILR Scheme   Total   Percentage Detected
  Tasks with false location claim   4                        16      25
  Cheating participants             4                        10      40

3.8  Simulations

This section presents the evaluation of ILR using the ns-2 network simulator (ns-2 Simulator, 2014). The two main goals of the evaluation are (1) to estimate the right percentage of photo tasks needed in Phase 1 to bootstrap the ILR scheme and (2) to quantify the ability of ILR to detect false location claims at various node densities.

3.8.1  Simulation Setup

The simulation setup parameters are presented in Table 3.4. Given a simulation area of 100 m × 120 m, the node degree (i.e., average number of neighbors per user) is slightly higher than 5. We varied the simulation area to achieve node degrees of 2, 3, and 4. We consider low walking speeds (i.e., 1 m/s) for collecting photos. In these simulations, we considered all tasks as photo tasks. A photo task is executed every minute by each node. Photo tasks are distributed evenly across all nodes. Photo tasks with false location claims are also distributed evenly across several malicious nodes. We assume the photo tasks in ILR’s Phase 1 are manually validated.

After executing the simulation scenarios described in the following, we collect each photo task’s time, location, and Bluetooth scan. As per the simulation settings, we will have 120 completed photo tasks per node at the end of the simulation (i.e., 24,000 total photo tasks for 200 nodes). Over these collected data, we apply the ILR validation scheme to detect false location claims.

3.8.2  Simulation Results

In this set of experiments, we vary the percentage of photo tasks with false location claims. The results are plotted in Figure 3.5, with one curve for each percentage of photo tasks submitting false locations. This graph provides insight into the right percentage of photo tasks needed in Phase 1 to bootstrap the ILR scheme. Next, we analyze Figure 3.5.

3.8.2.1  Varying Percentage of False Location Claims

3.8.2.1.1  Low Count of Malicious Tasks Submitted

When 10% of total photo tasks are submitting false location, Figure 3.5 shows that the ILR scheme can detect 55% of the false location claims just by using 10% of the total photo tasks validated in Phase 1. This figure also shows that in order to detect more false claims, more photos need to be manually validated: for example, ILR uses up to 40% of the total photo tasks in Phase 1 to detect 80% of the false location tasks. Finally, Figure 3.5 shows that increasing the percentage of validated photo tasks above 40% does not help much as the percentage of detected false tasks remains the same.

Table 3.4   Simulation Setup for ILR

  Parameter                                Value
  Number of nodes                          200
  % of tasks with false location claims    10, 15, 30, 45, 60
  Bluetooth transmission range             10 m
  Simulation time                          2 h
  User walking speed                       1 m/s
  Node density                             2, 3, 4, 5
  Bluetooth scan rate                      1/min

Figure 3.5   ILR performance as a function of the percentage of photos manually validated in Phase 1. Each curve represents a different percentage of photos with fake locations.

3.8.2.1.2  High Count of Malicious Tasks Submitted

When 60% of the total photo tasks are submitting false locations, Figure 3.5 shows that ILR can still detect 35% of the false claims by using 10% of the total photo tasks in Phase 1. But in this case, ILR requires more validated photo tasks (70%) to catch 75% of the false claims. This is because by increasing the number of malicious tasks, the colocation data are reduced, and therefore, ILR cannot extend trust to more location claims in its Phase 2. Therefore, we conclude that the right percentage of photo tasks needed to bootstrap ILR is proportional to the expected false location claims (which can be predicted using the history of the users’ participation).

3.8.2.2  Impact of Node Density on ILR

In this set of experiments, we assume that 10% of the total photo tasks are submitting false locations. In Figure 3.6, we analyze the impact of node density on the ILR scheme. We seek to estimate the minimum node density required to achieve highly connected graphs to extend the location trust transitively to more colocated nodes.

3.8.2.2.1  High Density

When simulations are run with node density of 5, Figure 3.6 shows that ILR can detect the highest percentage (85%) of the false location claims. The figure also shows similarly high results even for a node density of 4.

3.8.2.2.2  Low Density

When simulations are run with node density of 2, we can see that ILR can still detect 65% of the false location tasks using 50% of the total photo tasks in Phase 1. For this node density, even after increasing the number of validated photo tasks in Phase 1, the percentage of detected false claims does not increase. This is because there are fewer colocated users at low node densities. Therefore, we conclude that ILR can efficiently detect false claims with a low number of manual validations, even for low node densities.

Figure 3.6   ILR performance as a function of the percentage of photos manually validated in Phase 1. Each curve represents a different network density represented as average number of neighbors per node.

3.9  Field Study Insights and Improving the ILR Scheme

In this section, we present our insights from the analysis of the data collected from the field study and discuss possible improvements of the ILR scheme based on these insights. In addition, we present observations from the survey that was administered to participants at the end of the field study to understand their opinion on location privacy and usage of phone resources.

3.9.1  Correlation of User Earnings and Fake Photos

To understand the correlation between user earnings and the number of fake photos submitted, we plot the data collected from the McSense crowd sensing field study. The experimental results in Figure 3.7 show that the users who submitted most of the fake photos are among the top 20 high earners (with the exception of four low-earning users who submitted fake photos once or twice). This is an interesting observation that can be leveraged to improve the ILR scheme.

In the current ILR scheme, there are cases where the validation process cannot make a firm decision on some data points; these fall under the unknown category described in Section 3.6.3. If there are very many such unknown data points (potentially millions), validating all of them manually is impractical. Therefore, to improve the ILR scheme, we propose that these unknown data points go through an extra check to determine whether the submitting user is a high earner in the sensing system. If the user is a high earner, then there is a high probability that the submitted data point is fake, and the photo should be manually validated. If the user is a mid- or low-range earner, then there is a low probability that the data point is fake, and it should be considered valid. This method helps reduce the number of photos that require manual validation.
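This earner-based triage could be sketched as follows; the cutoff of 20 top earners mirrors the observation above, but the record fields and the exact threshold are illustrative assumptions:

```python
# Sketch of the proposed improvement: route an "unknown" data point to
# manual validation only when its contributor is among the top earners.
def triage_unknown(data_points, earnings, top_n=20):
    """Split unknown data points by whether the submitter is a high earner.

    `earnings` maps user ids to total dollars earned (assumed structure).
    Returns (points to validate manually, points accepted as valid).
    """
    high_earners = set(sorted(earnings, key=earnings.get, reverse=True)[:top_n])
    to_validate, accept = [], []
    for dp in data_points:
        (to_validate if dp["user"] in high_earners else accept).append(dp)
    return to_validate, accept
```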

Figure 3.7   Correlation of earnings and fake photos.

3.9.2  Correlation of Location and Fake Photos

We ask the question “Is there any correlation between the amount of time spent by users on campus and the number of submitted fake photos?” As suspected, the users who spent less time on campus have submitted more fake photos. This behavior can be observed in Figure 3.8.

Figure 3.8 shows the number of fake photos submitted by each user, with the users sorted by the total hours spent on the NJIT campus. The participants' total hours recorded at the NJIT campus are accumulated from the sensed data collected by the automated sensing task described in Section 3.5.3. The NJIT location is considered to be a circle with a radius of 0.5 miles; if the user is within this circle, she is considered to be at NJIT. For most of the submitted fake photos with false location claims, the users claimed to be at the campus location where the photo task was requested, but they were in fact infrequent campus visitors.

This is another interesting observation, which can be leveraged to improve the ILR scheme's validation process when a large number of data points are classified as unknown. The intuition is that users tend to fake the data mostly when they are not around the task's location. Therefore, to improve the ILR scheme, we propose to use a user's recorded location trail in the McSense system to identify whether the user is a frequent visitor of the task's location. If the user is not a frequent visitor of the claimed location, then there is a high probability that the location claim is false, and the unknown data point should be manually checked. On the other hand, if the user is a frequent visitor of the claimed location, then her claim can be trusted. By reducing the number of photos that require manual validation, McSense can improve ILR's validation process for unknown data points.
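A possible sketch of this frequent-visitor check, assuming the location trail is a list of planar coordinates scaled in miles and an arbitrary visit threshold (both are illustrative assumptions, not McSense parameters):

```python
# Sketch of the location-trail heuristic: trust an "unknown" claim only if
# the user's recorded trail shows frequent presence near the task location.
import math

def is_frequent_visitor(trail, task_loc, radius_miles=0.5, min_visits=10):
    """Count trail points within `radius_miles` of the task location and
    compare against an assumed visit threshold."""
    visits = sum(1 for point in trail
                 if math.dist(point, task_loc) <= radius_miles)
    return visits >= min_visits
```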

Figure 3.8   Correlation of user location and fake photos.

3.9.3  Malicious User: Menace or Nuisance?

The photos submitted by malicious users are plotted in Figure 3.9. The data show that malicious users submitted good photos at a much higher rate than fake photos. These malicious users are among the high earners, so they submit more data than the average user. Thus, it may not be a good idea to remove malicious users from the system as soon as they are caught cheating.

Instead, it may be a better idea to identify the validity of the individual data points (which is exactly the same process done in the current ILR scheme discussed in Section 3.6.3). We conclude that malicious users are not a significant menace but may cause some confusion in the collected data. However, this can be filtered out by McSense through correlating the data with location and earnings as discussed earlier in the section.

3.9.4  Influence of Maintaining a Reputation Score

When a fake location claim is detected by ILR, McSense would benefit if the malicious user who submitted the fake claim received a lower compensation upon completion of a task. To support this, McSense should use a reputation module such as that in Chu et al. (2010), which maintains a trust score for each user, similar to many other systems that rely on user participation. The trust score varies between 0 and 1. Initially, every user is given a default trust score, which then evolves with the user's participation: the score is reduced when the user is caught providing fake data and increased when the user submits good data.

We propose that the McSense system maintain a trust score for every user and use this score to calculate the user payment upon task completion. For example, for a completed task worth $5, a user with a trust score of 0.9 will be paid only $4.50. We envision that, by maintaining a reputation score, the users providing fake data will eventually stop making false claims. We have seen earlier in the section that malicious users also submit a significant amount of good data, but if their trust score dropped to 0, they would not earn anything and would eventually leave the system. As argued earlier in this section, it is not a good idea to entirely remove malicious users from the system. Therefore, the trust score does not decrease below a minimum threshold (e.g., 0.2); a malicious user then gets only 20% of the task dollars until she improves her trust score. Hence, the McSense system does not need to worry about discarding good data submitted by malicious users.
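A minimal sketch of this reputation-weighted payment, using the 0.2 floor and the $5/0.9 example from the text; the per-event score adjustment of 0.05 is an assumed value:

```python
# Sketch of the proposed reputation module: a trust score in [0.2, 1.0]
# that scales the payment, so malicious users are penalized but never
# fully excluded from the system.
TRUST_FLOOR = 0.2   # minimum threshold from the text

def update_trust(score, good_data, delta=0.05):
    """Raise the score for good data, lower it for fake data, and keep
    it within [TRUST_FLOOR, 1.0]. The delta value is an assumption."""
    score += delta if good_data else -delta
    return max(TRUST_FLOOR, min(1.0, score))

def payment(task_price, trust_score):
    """Pay a fraction of the task price proportional to the trust score,
    e.g., a $5.00 task at a score of 0.9 pays $4.50."""
    return round(task_price * trust_score, 2)
```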

Figure 3.9   Photo counts of 17 cheating people.

3.9.5  User’s Survey Results and Observations

At the end of the field study, we asked each user to fill in a survey in order to understand the participants' opinions on location privacy and usage of phone resources. The survey contains 16 questions with answers on a five-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Out of 58 participants, 27 filled in the survey. Based on the survey answers, we provide next a few interesting observations that are directly or indirectly relevant in the context of data reliability:

  1. One of the survey questions was “I tried to fool the system by providing photos from other locations than those specified in the tasks (the answer does not influence the payment).” By analyzing the responses to this question, we observe that only 23.5% of the malicious users (4 out of 17) admitted that they submitted fake photos. This shows that the data reliability problem stated in this chapter is real and that it is important to validate the sensed data.
  2. One survey question related to user privacy was “I was concerned about my privacy while participating in the user study.” The survey results show that 78% of the users were not concerned about their privacy, which indicates that many participants are willing to trade off their location privacy for paid tasks. The survey results are corroborated by the collected McSense data points. We posted a few sensing tasks during weekends, which is considered private time for the participants, who are mostly not on campus at that time. We observe that 33% of the participants completed sensing and photo tasks even while spending personal time on the weekends. We conclude that the task price plays a crucial role in persuading users to trade their privacy, allowing quality sensing data to be collected from any location and time.
  3. Another two survey questions related to the usage of phone resources (e.g., battery) by sensing tasks: (1) “Executing these tasks did not consume too much battery power (I did not need to re-charge the phone more often than once a day)”; (2) “I stopped the automatic tasks (resulting in incomplete tasks) when my battery was low.” The responses to these questions are interesting. Most of the participants reported that they carried chargers to recharge their phone battery as needed while running the sensing tasks and kept their phone always ready to accept more sensing tasks. This suggests that phone resources, such as battery, are not a big concern for continuously collecting sensing data from different users and locations. We describe the battery consumption measurements in detail next.

3.9.6  Battery Consumption

We determined the amount of energy consumed by the user's phone battery for collecting the sensing data required by ILR. ILR itself is executed on the server side over the collected data, but the collected data, such as the Bluetooth scans at each location, are crucial for ILR. Next, we provide measurements of the extra battery usage caused by keeping the Bluetooth/Wi-Fi radios ON. We measured the readings using Motorola Droid 2 smartphones running Android OS 2.2:

  • With Bluetooth and Wi-Fi radios ON, the battery life of the Droid 2 phone is longer than 2 days (2 days and 11 h).
  • With Bluetooth OFF and Wi-Fi radio ON, the battery life of the Droid 2 phone is longer than 3 days (3 days and 15 h).
  • For every Bluetooth discovery, the energy consumed is 5.428 J. The total capacity of the Droid 2 phone battery is 18.5 kJ. Hence, over 3000 Bluetooth discoveries can be collected from different locations using a fully charged phone.
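The last figure can be checked with simple arithmetic over the reported measurements:

```python
# Back-of-the-envelope check of the Bluetooth discovery count reported above.
ENERGY_PER_SCAN_J = 5.428      # measured energy per Bluetooth discovery
BATTERY_CAPACITY_J = 18_500    # Droid 2 battery capacity (18.5 kJ)

# Number of discoveries a fully charged battery could power in isolation;
# consistent with the "over 3000 Bluetooth discoveries" figure in the text.
scans_per_charge = int(BATTERY_CAPACITY_J / ENERGY_PER_SCAN_J)
```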

3.10  Summary

This chapter presented the concept of mobile crowd sensing and its applications to everyday life. We described the design and implementation of McSense, our mobile crowd sensing platform, which was used to run a user study with over 50 users at the NJIT campus over a period of 2 months. We also discussed the data reliability issues in mobile crowd sensing by presenting several scenarios involving malicious behavior. We presented ILR, a protocol for location reliability, as a step toward achieving overall reliability of the sensed data; ILR also detects false location claims associated with the sensed data. Based on our security analysis and simulation results, we argue that ILR works well at various node densities. The analysis of the sensed data collected from the users in our field study demonstrates that ILR can efficiently achieve location data reliability and detect a significant percentage of false location claims. We conclude with our belief that mobile crowd sensing will become a widespread method for collecting sensing data from the physical world once the data reliability issues are properly addressed.

Acknowledgment

Parts of this chapter were published in the IJBDCN Journal (Talasila et al., 2013); reused with permission from the publisher.

References

Abdelzaher, T. , Anokwa, Y. , Boda, P. , Burke, J.A. , Estrin, D. , Guibas, L. , and Reich, J. (2007). Mobiscopes for human spaces. Pervasive Computing, IEEE , 6(2), 20–29.
Amazon Mechanical Turk. (2013). Retrieved from http://www.mturk.com. Accessed September 9, 2013.
Azizyan, M. , Constandache, I. , and Roy Choudhury, R. (2009). SurroundSense: Mobile phone localization via ambience fingerprinting. In Proceedings of the 15th Annual International Conference on Mobile Computing and Networking (MobiCom’09) , Beijing, China, ACM, pp. 261–272.
Campbell, A.T. , Eisenman, S.B. , Lane, N.D. , Miluzzo, E. , and Peterson, R.A. (2006). People-centric urban sensing. In Proceedings of the Second Annual International Workshop on Wireless Internet (WICON’06) , Boston, USA, ACM, pp. 18.
Capkun, S. , Cagalj, M. , and Srivastava, M. (2006). Secure localization with hidden and mobile base stations. In Proceedings of IEEE INFOCOM, (INFOCOM’06) , Barcelona, Spain.
Capkun, S. and Hubaux, J. (2005). Secure positioning of wireless devices with application to sensor networks. In 24th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM’05) , Miami, FL, pp. 1917–1928.
Cardone, G. , Foschini, L. , Bellavista, P. , Corradi, A. , Borcea, C. , Talasila, M. , and Curtmola, R. (2013). Fostering participaction in smart cities: A geo-social crowdsensing platform. IEEE Communications Magazine , 51(6), 112–119.
Cha Cha . (2013). Your mobile bff. Retrieved from http://www.chacha.com. Accessed September 9, 2013.
Choudhury, T. , Consolvo, S. , Harrison, B. , Hightower, J. , LaMarca, A., LeGrand, L., and Haehnel, D. (2008). The mobile sensing platform: An embedded activity recognition system. Pervasive Computing, IEEE , 7(2), 32–41.
Chu, X. , Chen, X. , Zhao, K. , and Liu, J. (2010). Reputation and trust management in heterogeneous peer-to-peer networks. Springer Telecommunication Systems , 44, 191–203.
Demirbas, M. , Bayir, M.A. , Akcora, C.G. , Yilmaz, Y.S. , and Ferhatosmanoglu, H. (2010). Crowd-sourced sensing and collaboration using twitter. In 2010 IEEE International Symposium on a World of Wireless Mobile and Multimedia Networks (WoWMoM) Montreal, QC, Canada, IEEE, pp. 1–9.
Dua, A. , Bulusu, N. , Feng, W. , and Hu, W. (2009). Towards trustworthy participatory sensing. Proceedings of the Usenix Workshop on Hot Topics in Security (HotSec’09) , Montreal, Canada.
Eriksson, J. , Girod, L. , Hull, B. , Newton, R. , Madden, S. , and Balakrishnan, H. (2008). The pothole patrol: Using a mobile sensor network for road surface monitoring. In Proceedings of the Sixth International Conference on Mobile Systems, Applications, and Services (MobiSys’08) , Breckenridge, Colorado, USA, ACM, pp. 29–39.
Garmin . (2014). Edge 305. Retrieved from www.garmin.com/products/edge305/. Accessed May 28, 2014.
Gilbert, P. , Cox, L. , Jung, J. , and Wetherall, D. (2010). Toward trustworthy mobile sensing. In Proceedings of the 11th Workshop on Mobile Computing Systems & Applications (HotMobile’10) Annapolis, Maryland, USA, ACM, pp. 31–36.
Gilbert, P. , Jung, J. , Lee, K. , Qin, H. , Sharkey, D. , Sheth, A. , and Cox, L. (2011). Youprove: Authenticity and fidelity in mobile sensing. In Proceedings of the Ninth ACM Conference on Embedded Networked Sensor Systems (SenSys’11) , Seattle, WA, USA, pp. 176–189.
Google Mobile Ads. (2014). Retrieved from http://www.google.com/ads/mobile/. Accessed May 28, 2014.
Google Play. (2014). Android app store. Retrieved from https://play.google.com/. Accessed May 28, 2014.
Gupta, A. , Thies, W. , Cutrell, E. , and Balakrishnan, R. (2012). mClerk: Enabling mobile crowdsourcing in developing regions. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI’12) , Austin, Texas, USA, ACM, pp. 1843–1852.
Herrera, J. , Work, D. , Herring, R. , Ban, X. , Jacobson, Q. , and Bayen, A. (2010). Evaluation of traffic data obtained via GPS-enabled mobile phones: The mobile century field experiment. Transportation Research Part C: Emerging Technologies , 18(4), 568–583.
Honicky, R. , Brewer, E.A. , Paulos, E. , and White, R. (2008). N-smarts: Networked suite of mobile atmospheric real-time sensors. In Proceedings of the Second ACM SIGCOMM Workshop on Networked Systems for Developing Regions, (NSDR’08) , Seattle, WA, USA, ACM, pp. 25–30.
Humphreys, T. , Ledvina, B. , Psiaki, M. , O’Hanlon, B. , and Kintner, Jr. P. (2008). Assessing the spoofing threat: Development of a portable GPS civilian spoofer. In Proceedings of the ION GNSS International Technical Meeting of the Satellite Division, (ION GNSS’08) , Savannah, Georgia.
IBM Smarter Planet. (2014). Retrieved from http://www.ibm.com/smarterplanet/us/en/overview/ideas/. Accessed May 28, 2014.
Intel Labs. (2010). The mobile phone that breathes. Retrieved from http://scitech.blogs.cnn.com/2010/04/22/the-mobilephone-that-breathes/.
Kansal, A. , Goraczko, M. , and Zhao, F. (2007). Building a sensor network of mobile phones. In Proceedings of the Sixth International Conference on Information Processing in Sensor Networks, (IPSN’07) , Cambridge, MA, USA, ACM, pp. 547–548.
Lee, J. and Hoh, B. (2010). Dynamic pricing incentive for participatory sensing. Pervasive and Mobile Computing, Elsevier , 6(6), 693–708.
London’s water supply monitoring. (2012). Retrieved from http://www.ucl.ac.uk/news/news-articles/May2012/240512-.
Lu, H. , Pan, W. , Lane, N.D. , Choudhury, T. , and Campbell, A.T. (2009). SoundSense: Scalable sound sensing for people-centric applications on mobile phones. In Proceedings of the Seventh International Conference on Mobile Systems, Applications, and Services, (MobiSys’09) , Kraków, Poland, ACM, pp. 165–178.
Mardenfeld, S. , Boston, D. , Pan, S.J. , Jones, Q. , Iamntichi, A. , and Borcea, C. (2010). GDC: Group discovery using co-location traces. In Proceedings of Second International Conference on Social Computing (SocialCom’10) , Minneapolis, MN, USA, IEEE, pp. 641–648.
Mathur, S. , Jin, T. , Kasturirangan, N. , Chandrasekaran, J. , Xue, W. , Gruteser, M. , and Trappe, W. (2010). Parknet: Drive-by sensing of road-side parking statistics. In Proceedings of the Eighth International Conference on Mobile Systems, Applications, and Services (MobiSys’10) , San Francisco, CA, USA, ACM, pp. 123–136.
McCune, J. , Parno, B. , Perrig, A. , Reiter, M. , and Isozaki, H. (2008). Flicker: An execution infrastructure for TCB minimization. SIGOPS Operating Systems Review , 42(4), 315–328.
McSense. (2013). Android smart phone application. Retrieved from https://play.google.com/store/apps/details?id=com.mcsense.app. Accessed September 9, 2013.
McSense Project. (2013). Retrieved from http://web.njit.edu/~mt57/mcsense/.
Miluzzo, E. , Lane, N.D. , Fodor, K. , Peterson, R. , Lu, H. , Musolesi, M. , and Campbell, A.T. (2008). Sensing meets mobile social networks: The design, implementation and evaluation of the CenceMe application. In Proceedings of the Sixth ACM Conference on Embedded Network Sensor Systems (SenSys’08) , Raleigh, NC, USA, ACM, pp. 337–350.
MIT News. (2014). Retrieved from http://web.mit.edu/newsoffice/2009/blood-pressure-tt0408.html. Accessed May 28, 2014.
Mobads. (2014). Retrieved from http://www.mobads.com/. Accessed May 28, 2014.
Mobile Millennium Project. (2014). Retrieved from http://traffic.berkeley.edu/. Accessed May 28, 2014.
Mohan, P. , Padmanabhan, V.N. , and Ramjee, R. (2008). Nericell: Rich monitoring of road and traffic conditions using mobile smart phones. In Proceedings of the Sixth ACM Conference on Embedded Network Sensor Systems (SenSys’08) , Raleigh, NC, USA, ACM, pp. 323–336.
Mun, M. , Reddy, S. , Shilton, K. , Yau, N. , Burke, J. , Estrin, D. , Hansen, M. , Howard, E. , West, R. , and Boda, P. (2009). PEIR, the Personal Environmental Impact Report, as a platform for participatory sensing systems research. In Proceedings of the Seventh International Conference on Mobile Systems, Applications, and Services (MobiSys’09) , Kraków, Poland, ACM, pp. 55–68.
Narula, P. , Gutheim, P. , Rolnitzky, D. , Kulkarni, A. , and Hartmann, B. (2011). MobileWorks: A mobile crowdsourcing platform for workers at the bottom of the pyramid. In Human Computation (2011) AAAI Workshop, (HCOMP’11) , San Francisco, CA, USA.
Nauman, M. , Khan, S. , Zhang, X. , and Seifert, J. (2010). Beyond kernel-level integrity measurement: Enabling remote attestation for the android platform. Trust and Trustworthy Computing , 6101, 1–15.
ns-2 Simulator. (2014). Retrieved from http://nsnam.isi.edu/nsnam/index.php/Main_Page. Accessed May 28, 2014.
Photo Journalism. (2013). Retrieved from http://www.flickr.com/groups/photojournalism. Accessed September 9, 2013.
Ra, M.R. , Liu, B. , La Porta, T. , and Govindan, R. (2012). Medusa: A programming framework for crowd-sensing applications. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services (MobiSys’12) , Low Wood Bay, Lake District, United Kingdom, ACM, pp. 337–350.
Reality Mining Project. (2014). Retrieved from http://reality.media.mit.edu/. Accessed May 28, 2014.
Riva, O. and Borcea, C. (2007). The Urbanet revolution: Sensor power to the people! IEEE Pervasive Computing , 6(2), 41–49.
Saroiu, S. and Wolman, A. (2010). I am a sensor, and I approve this message. In Proceedings of the 11th Workshop on Mobile Computing Systems & Applications (HotMobile’10) , Annapolis, Maryland, USA, ACM, pp. 37–42.
Sastry, N. , Shankar, U. , and Wagner, D. (2003). Secure verification of location claims. In Proceedings of the Second ACM Workshop on Wireless Security (WiSe’03) , San Diego, California, USA, ACM, pp. 1–10.
Schneider, F.B. , Walsh, K. , and Sirer, E.G. (2011). Nexus authorization logic (NAL): Design rationale and applications. ACM Transactions on Information and System Security , 14(1), 1:1–8:28.
Sensor Lab at Dartmouth. (2013). Smart phone sensing research. Retrieved from http://sensorlab.cs.dartmouth.edu/research.html. Accessed September 9, 2013.
Sensordrone. (2013). The 6th sense of your smart phone. Retrieved from http://www.sensorcon.com/sensordrone. Accessed September 9, 2013.
Siewiorek, D. , Krause, A. , Moraveji, N. , Smailagic, A. , Furukawa, J. , Reiger, K. , and Shaffer, J. (2003). SenSay: A context-aware mobile phone. In Proceedings of the Seventh IEEE International Symposium on Wearable Computers (ISWC’03) , IEEE Computer Society, p. 248.
Songdo Smart City. (2014). Retrieved from http://www.songdo.com. Accessed May 28, 2014.
Statista. (2014). Global smart phone shipments forecast 2010–2018. Retrieved from http://www.statista.com/statistics/263441/global-smartphone-shipments-forecast/. Accessed May 28, 2014.
Talasila, M. , Curtmola, R. , and Borcea, C. (2010). Link: Location verification through immediate neighbors knowledge. In Proceedings of the Seventh International ICST Conference on Mobile and Ubiquitous Systems (MobiQuitous’10) , Sydney, Australia, Springer, pp. 210–223.
Talasila, M. , Curtmola, R. , and Borcea, C. (2013). ILR: Improving location reliability in mobile crowd sensing. International Journal of Business Data Communications and Networking (IJBDCN) , 9(4), 65–85.
Talasila, M. , Curtmola, R. , and Borcea, C. (2014). Collaborative bluetooth-based location authentication on smart phones. Pervasive and Mobile Computing . doi: http://dx.doi.org/10.1016/j.pmcj.2014.02.004.
Toyama, K. , Logan, R. , and Roseway, A. (2003). Geographic location tags on digital images. In Proceedings of the 11th ACM International Conference on Multimedia, (MULTIMEDIA’03) , Berkeley, CA, USA, pp. 156–166.
Trusted Platform Module. (2013). Retrieved from http://www.trustedcomputinggroup.org/developers/trustedplatformmodule. Accessed May 28, 2014.
Twitter. (2014). Retrieved from http://twitter.com/. Accessed May 28, 2014.
Urban Sensing at UCLA. (2013). University of California Los Angeles urban sensing research. Retrieved from http://research.cens.ucla.edu/urbansensing/. Accessed September 9, 2013.
White, J. , Thompson, C. , Turner, H. , Dougherty, B. , and Schmidt, D. (2011). Wreckwatch: automatic traffic accident detection and notification with smart phones. Mobile Networks and Applications , 16(3), 285–303.
Xu, G. , Borcea, C. , and Iftode, L. (2011). A policy enforcing mechanism for trusted ad hoc networks. IEEE Transactions on Dependable and Secure Computing , 8(3), 321–336.
Yan, T. , Kumar, V. , and Ganesan, D. (2010). CrowdSearch: Exploiting crowds for accurate real-time image search on mobile phones. In Proceedings of the Eighth International Conference on Mobile Systems, Applications, and Services (MobiSys’10) , San Francisco, CA, USA, ACM, pp. 77–90.
Yan, T. , Marzilli, M. , Holmes, R. , Ganesan, D. , and Corner, M. (2009). mCrowd: A platform for mobile crowdsourcing. In Proceedings of the Seventh ACM Conference on Embedded Networked Sensor Systems (SenSys’09) , Berkeley, California, USA, ACM, pp. 347–348.