This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data were created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters, including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of American Society of Civil Engineers. American Society of Civil Engineers (ASCE), Reston, VA, USA, 2016.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Public Data(source Set, Omit Val And Test Set) is a dataset for object detection tasks - it contains No Closeup Sangguo1 annotations for 12,135 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The data, source code, and scripts included in this dataset are used to generate the results presented in the manuscript "The min-max test: an objective method for discriminating mass spectra" by Moorthy and Sisco. The manuscript explores a new method for objectively discriminating electron ionization mass spectra, a task that is commonplace when compounds elute closely in gas chromatography mass spectrometry. The C++ source code and R analysis scripts can be extended to other application areas.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Developing software test code can be as expensive as, or more expensive than, developing production code. Commonly, developers use automated unit test generators to speed up software testing. The purpose of such tools is to shorten production time without decreasing code quality. Nonetheless, unit tests usually do not have a quality-check layer above the testing code, which makes it hard to guarantee the quality of the generated tests. An emerging strategy to verify test quality is to analyze the presence of test smells in software test code. Test smells are characteristics of test code that possibly indicate weaknesses in test design and implementation, so their presence in unit test code can be used as an indicator of unit test quality. In this paper, we present an empirical study aimed at analyzing the quality of unit test code generated by automated test tools. We compare the tests generated by two tools (Randoop and EvoSuite) with the existing unit test suites of open-source software projects. We analyzed the unit test code of twenty-one open-source Java projects and detected the presence of nineteen types of test smells. The results indicate significant differences in unit test quality between the automated unit test generators and the existing unit test suites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the appendix of our ICSE 2018 paper "Search-Based Test Data Generation for SQL Queries".
The appendix contains:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Each file in the dataset contains machine-learning-ready data for one unique tropical cyclone (TC) from the real-time testing dataset. "Machine-learning-ready" means that all data-processing methods described in the journal paper have already been applied. This includes cropping satellite images to make them TC-centered; rotating satellite images to align them with TC motion (TC motion is always towards the +x-direction, or in the direction of increasing column number); flipping satellite images in the southern hemisphere upside-down; and normalizing data via the two-step procedure.
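The distributed files already have all of these steps applied, but to make the motion-alignment step concrete, a rough sketch is given below; the rotation function and sign convention are assumptions for illustration, not the ml4tc implementation.

```python
# Illustrative sketch only: the files in this dataset are already motion-aligned.
import numpy
from scipy.ndimage import rotate


def rotate_to_storm_motion(image_matrix, u_motion_m_s01, v_motion_m_s01):
    """Rotates a TC-centered image so that storm motion points towards +x."""
    # Heading of the motion vector, measured counterclockwise from +x (east).
    motion_angle_deg = numpy.degrees(
        numpy.arctan2(v_motion_m_s01, u_motion_m_s01)
    )

    # Rotating by the negative heading puts the motion vector along the
    # direction of increasing column number; flip the sign if the row index
    # decreases northward in the source grid.
    return rotate(image_matrix, angle=-motion_angle_deg, reshape=False)


# Southern-hemisphere images are additionally flipped upside-down, e.g.:
# image_matrix = numpy.flipud(image_matrix)
```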
The file name gives you the unique identifier of the TC -- e.g., "learning_examples_2010AL01.nc.gz" contains data for storm 2010AL01, or the first North Atlantic storm of the 2010 season. Each file can be read with the method `example_io.read_file` in the ml4tc Python library (https://zenodo.org/doi/10.5281/zenodo.10268620). However, since `example_io.read_file` is a lightweight wrapper for `xarray.open_dataset`, you can equivalently just use `xarray.open_dataset`. Variables in the table are listed below (the same printout produced by `print(xarray_table)`):
Dimensions: (
satellite_valid_time_unix_sec: 289,
satellite_grid_row: 380,
satellite_grid_column: 540,
satellite_predictor_name_gridded: 1,
satellite_predictor_name_ungridded: 16,
ships_valid_time_unix_sec: 19,
ships_storm_object_index: 19,
ships_forecast_hour: 23,
ships_intensity_threshold_m_s01: 21,
ships_lag_time_hours: 5,
ships_predictor_name_lagged: 17,
ships_predictor_name_forecast: 129)
Coordinates:
* satellite_grid_row (satellite_grid_row) int32 2kB ...
* satellite_grid_column (satellite_grid_column) int32 2kB ...
* satellite_valid_time_unix_sec (satellite_valid_time_unix_sec) int32 1kB ...
* ships_lag_time_hours (ships_lag_time_hours) float64 40B ...
* ships_intensity_threshold_m_s01 (ships_intensity_threshold_m_s01) float64 168B ...
* ships_forecast_hour (ships_forecast_hour) int32 92B ...
* satellite_predictor_name_gridded (satellite_predictor_name_gridded) object 8B ...
* satellite_predictor_name_ungridded (satellite_predictor_name_ungridded) object 128B ...
* ships_valid_time_unix_sec (ships_valid_time_unix_sec) int32 76B ...
* ships_predictor_name_lagged (ships_predictor_name_lagged) object 136B ...
* ships_predictor_name_forecast (ships_predictor_name_forecast) object 1kB ...
Dimensions without coordinates: ships_storm_object_index
Data variables:
satellite_number (satellite_valid_time_unix_sec) int32 1kB ...
satellite_band_number (satellite_valid_time_unix_sec) int32 1kB ...
satellite_band_wavelength_micrometres (satellite_valid_time_unix_sec) float64 2kB ...
satellite_longitude_deg_e (satellite_valid_time_unix_sec) float64 2kB ...
satellite_cyclone_id_string (satellite_valid_time_unix_sec) |S8 2kB ...
satellite_storm_type_string (satellite_valid_time_unix_sec) |S2 578B ...
satellite_storm_name (satellite_valid_time_unix_sec) |S10 3kB ...
satellite_storm_latitude_deg_n (satellite_valid_time_unix_sec) float64 2kB ...
satellite_storm_longitude_deg_e (satellite_valid_time_unix_sec) float64 2kB ...
satellite_storm_intensity_number (satellite_valid_time_unix_sec) float64 2kB ...
satellite_storm_u_motion_m_s01 (satellite_valid_time_unix_sec) float64 2kB ...
satellite_storm_v_motion_m_s01 (satellite_valid_time_unix_sec) float64 2kB ...
satellite_predictors_gridded (satellite_valid_time_unix_sec, satellite_grid_row, satellite_grid_column, satellite_predictor_name_gridded) float64 474MB ...
satellite_grid_latitude_deg_n (satellite_valid_time_unix_sec, satellite_grid_row, satellite_grid_column) float64 474MB ...
satellite_grid_longitude_deg_e (satellite_valid_time_unix_sec, satellite_grid_row, satellite_grid_column) float64 474MB ...
satellite_predictors_ungridded (satellite_valid_time_unix_sec, satellite_predictor_name_ungridded) float64 37kB ...
ships_storm_intensity_m_s01 (ships_valid_time_unix_sec) float64 152B ...
ships_storm_type_enum (ships_storm_object_index, ships_forecast_hour) int32 2kB ...
ships_forecast_latitude_deg_n (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_forecast_longitude_deg_e (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_v_wind_200mb_0to500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_vorticity_850mb_0to1000km_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_vortex_latitude_deg_n (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_vortex_longitude_deg_e (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_mean_tangential_wind_850mb_0to600km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_max_tangential_wind_850mb_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_mean_tangential_wind_1000mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_mean_tangential_wind_850mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_mean_tangential_wind_500mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_mean_tangential_wind_300mb_at500km_m_s01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_srh_1000to700mb_200to800km_j_kg01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_srh_1000to500mb_200to800km_j_kg01 (ships_storm_object_index, ships_forecast_hour) float64 3kB ...
ships_threshold_exceedance_num_6hour_periods (ships_storm_object_index, ships_intensity_threshold_m_s01) int32 2kB ...
ships_v_motion_observed_m_s01 (ships_storm_object_index) float64 152B ...
ships_v_motion_1000to100mb_flow_m_s01 (ships_storm_object_index) float64 152B ...
ships_v_motion_optimal_flow_m_s01 (ships_storm_object_index) float64 152B ...
ships_cyclone_id_string (ships_storm_object_index) object 152B ...
ships_storm_latitude_deg_n (ships_storm_object_index) float64 152B ...
ships_storm_longitude_deg_e (ships_storm_object_index) float64 152B ...
ships_predictors_lagged (ships_valid_time_unix_sec, ships_lag_time_hours, ships_predictor_name_lagged) float64 13kB ...
ships_predictors_forecast (ships_valid_time_unix_sec, ships_forecast_hour, ships_predictor_name_forecast) float64 451kB ...
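As a minimal sketch of how one of these files might be opened without the ml4tc library (assuming a local copy of the .nc.gz file; the decompression step is needed because NetCDF readers generally expect an uncompressed file on disk):

```python
# Decompress one file and open it with xarray directly, equivalent to the
# lightweight example_io.read_file wrapper mentioned above.
import gzip
import shutil

import xarray

gz_file_name = "learning_examples_2010AL01.nc.gz"  # one file from this dataset
nc_file_name = gz_file_name[:-3]                   # strip the ".gz" suffix

with gzip.open(gz_file_name, "rb") as gz_handle:
    with open(nc_file_name, "wb") as nc_handle:
        shutil.copyfileobj(gz_handle, nc_handle)

xarray_table = xarray.open_dataset(nc_file_name)
print(xarray_table)  # reproduces the printout shown above

# Example access: gridded satellite predictors at the first valid time.
first_image = xarray_table["satellite_predictors_gridded"].isel(
    satellite_valid_time_unix_sec=0
)
print(first_image.shape)
```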
Variable names are meant to be as self-explanatory as possible. Potentially confusing ones are listed below.
This dataset is historical only and ends at 5/7/2021. For more information, please see http://dev.cityofchicago.org/open%20data/data%20portal/2021/05/04/covid-19-testing-by-person.html. The recommended alternative dataset for similar data beyond that date is https://data.cityofchicago.org/Health-Human-Services/COVID-19-Daily-Testing-By-Test/gkdw-2tgv. This is the source data for some of the metrics available at https://www.chicago.gov/city/en/sites/covid-19/home/latest-data.html. For all datasets related to COVID-19, see https://data.cityofchicago.org/browse?limitTo=datasets&sortBy=alpha&tags=covid-19.
This dataset contains counts of people tested for COVID-19 and their results. This dataset differs from https://data.cityofchicago.org/d/gkdw-2tgv in that each person is in this dataset only once, even if tested multiple times. In the other dataset, each test is counted, even if multiple tests are performed on the same person, although a person should not appear in that dataset more than once on the same day unless he/she had both a positive and not-positive test.
Only Chicago residents are included based on the home address as provided by the medical provider. Molecular (PCR) and antigen tests are included, and only one test is counted for each individual. Tests are counted on the day the specimen was collected. A small number of tests collected prior to 3/1/2020 are not included in the table.
Not-positive lab results include negative results, invalid results, and tests not performed due to improper collection. Chicago Department of Public Health (CDPH) does not receive all not-positive results. Demographic data are more complete for those who test positive; care should be taken when calculating percentage positivity among demographic groups.
All data are provisional and subject to change. Information is updated as additional details are received.
Data Source: Illinois National Electronic Disease Surveillance System
CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
Simulated A/B Testing Data for Web User Engagement
This dataset contains synthetically generated A/B testing data that mimics user behavior on a website with two versions: Control (con) and Experimental (exp). The dataset is designed for practicing data cleaning, statistical testing (e.g., Z-test, T-test), and pipeline development.
Each row represents an individual user session, with attributes capturing click behavior, session duration, access device, referral source, and timestamp.
Features:
click — Binary (1 if clicked, 0 if not)
group — A/B group assignment (con or exp, with injected label inconsistencies)
session_time — Time spent in the session (in minutes), including outliers
click_time — Timestamp of user interaction (nullable)
device_type — Device used (mobile or desktop, mixed casing)
referral_source — Where the user came from (e.g., social, email, with some typos/whitespace)
Use Cases:
A/B testing analysis (CTR, CVR)
Hypothesis testing (Z-test, T-test; see the sketch after this list)
ETL pipeline design
Data cleaning and standardization practice
Dashboard creation and segmentation analysis
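As a usage sketch for the hypothesis-testing case, the click and group columns listed above could feed a two-proportion Z-test on the click-through rate; the file name and cleaning steps below are assumptions, not part of the dataset itself.

```python
# Sketch of a two-proportion Z-test on click-through rate (CTR), assuming the
# dataset has been loaded into a pandas DataFrame with the columns listed above.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

df = pd.read_csv("ab_test_sessions.csv")  # hypothetical file name

# Clean the injected label inconsistencies before splitting into groups.
df["group"] = df["group"].str.strip().str.lower()
df = df[df["group"].isin(["con", "exp"])].drop_duplicates()

clicks = df.groupby("group")["click"].sum()
sessions = df.groupby("group")["click"].count()

z_stat, p_value = proportions_ztest(
    count=[clicks["exp"], clicks["con"]],
    nobs=[sessions["exp"], sessions["con"]],
)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
```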
Notes: The dataset includes intentional inconsistencies (nulls, duplicates, casing issues, typos) to reflect real-world challenges.
Fully synthetic — safe for public use.
The CERT Division, in partnership with ExactData, LLC, and under sponsorship from DARPA I2O, generated a collection of synthetic insider threat test datasets. These datasets provide both synthetic background data and data from synthetic malicious actors. Datasets are organized according to the data generator release that created them. Most releases include multiple datasets (e.g., r3.1 and r3.2). Generally, later releases include a superset of the data generation functionality of earlier releases. Each dataset file contains a readme file that provides detailed notes about the features of that release. The answer key file answers.tar.bz2 contains the details of the malicious activity included in each dataset, including descriptions of the scenarios enacted and the identifiers of the synthetic users involved.
Database Contents License (DbCL) 1.0: http://opendatacommons.org/licenses/dbcl/1.0/
These data files contain information about COVID-19 testing rate and test positivity, by country and by region. They are updated weekly.
The figures are based on multiple data sources. The main source is data submitted by Member States to the European Surveillance System (TESSy). When not available, ECDC compiles data from public online sources. EU/EEA Member States report in TESSy all tests performed (i.e. both PCR and antigen tests).
Disclaimer: The data compiled from public online sources have been automatically or manually retrieved (‘web-scraped’) on a daily basis. It should be noted that there are limitations to this type of data, including that definitions vary and that the data collection process requires constant adaptation to avoid interrupted time series (e.g., due to modifications of website pages or data types).
European Centre for Disease Prevention and Control
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
N/A; just a test upload.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Due to the heterogeneity of existing European sources of observational healthcare data, data source-tailored choices are needed to execute multi-data source, multi-national epidemiological studies. This makes transparent documentation paramount. In this proof-of-concept study, a novel standard data derivation procedure was tested in a set of heterogeneous data sources. Identification of subjects with type 2 diabetes (T2DM) was the test case. We included three primary care data sources (PCDs), three record linkages of administrative and/or registry data sources (RLDs), one hospital and one biobank. Overall, data from 12 million subjects from six European countries were extracted. Based on a shared event definition, sixteen standard algorithms (components) useful to identify T2DM cases were generated through a top-down/bottom-up iterative approach. Each component was based on one single data domain among diagnoses, drugs, diagnostic test utilization and laboratory results. Diagnoses-based components were subclassified considering the healthcare setting (primary, secondary, inpatient care). The Unified Medical Language System was used for semantic harmonization within data domains. Individual components were extracted and the proportion of the population identified was compared across data sources. Drug-based components performed similarly in RLDs and PCDs, unlike diagnoses-based components. Using components as building blocks, logical combinations with AND, OR, and AND NOT were tested, and local experts recommended their preferred data source-tailored combination. The population identified per data source by the resulting algorithms varied from 3.5% to 15.7%; however, age-specific results were fairly comparable. The impact of individual components was assessed: diagnoses-based components identified the majority of cases in PCDs (93–100%), while drug-based components were the main contributors in RLDs (81–100%). The proposed data derivation procedure allowed the generation of data source-tailored case-finding algorithms in a standardized fashion, facilitated transparent documentation of the process and benchmarking of data sources, and provided bases for interpretation of possible inter-data source inconsistency of findings in future studies.
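To make the combination idea concrete, a purely illustrative sketch is shown below; the component names and the specific combination are hypothetical, not the algorithms recommended in the study.

```python
# Purely illustrative: combining per-subject component indicators with
# AND / OR / AND NOT, as in the derivation procedure described above.
# Component names are hypothetical.
import pandas as pd

subjects = pd.DataFrame(
    {
        "t2dm_diagnosis_any_setting": [True, False, True, False],
        "antidiabetic_drug_dispensing": [True, True, False, False],
        "type1_diabetes_diagnosis": [False, True, False, False],
    }
)

# Example data source-tailored combination:
# (diagnosis OR drug dispensing) AND NOT type 1 diabetes diagnosis.
subjects["t2dm_case"] = (
    subjects["t2dm_diagnosis_any_setting"]
    | subjects["antidiabetic_drug_dispensing"]
) & ~subjects["type1_diabetes_diagnosis"]

# Proportion of the population identified by this combination.
print(subjects["t2dm_case"].mean())
```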
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
DESCRIPTION
The code presented in this repository is a formal representation of the semantics of a system level test run in a keyword-driven testing framework, Robot Framework. The semantics are expressed in Haskell and can be executed using GHC(i).
CONTENTS
- RobotSemantics: the semantic definitions for a test run in Robot Framework
- TypedArgumentsExtension: the semantics of an extension written for Robot Framework
- Examples: three examples demonstrating the relation between the semantic definitions and actual Robot Framework test cases and keyword libraries
SHORT SUMMARY
System level acceptance tests should cover a large number of traces in a system under test, for which many testing paradigms exist. However, system level acceptance tests must also be understood by many stakeholders, which is not always taken into consideration when a system level testing paradigm is designed. Combining two or more paradigms might yield a system level testing approach that gets the best of both worlds. To see whether this is the case for keyword-driven testing, we consider a triad of system level testing paradigms: Behavior-Driven Testing, Model-Based Testing and Test Data Generation. In the thesis we introduce a formal semantic definition of a keyword-driven testing framework, Robot Framework, to be able to reason about what the considered paradigms would entail for a case study at Canon Production Printing. For each of the three considered paradigms, a conclusion is drawn as to whether the paradigm could benefit from a combination with keyword-driven testing and what the relation between the paradigm and keyword-driven testing would be in such a combination.
The executable semantics presented in this repository ought to provide an unambiguous starting point for reasoning about the keyword-driven testing paradigm. Moreover, the executable semantics are used to express semantic implications of additional concepts and extensions for keyword-driven testing with Robot Framework.
Laboratory diagnosis of cryptococcal disease among HIV-infected patients remains a challenge in most low- and middle-income countries (LMICs). Difficulties with sustained access to cryptococcal rapid tests are cited as a major barrier to routine screening for cryptococcus in many LMICs. Thus, clinicians in these countries often resort to empirical treatment based solely on clinical suspicion of cryptococcosis. To address this challenge, we aim to evaluate the re-introduction of India ink testing for diagnosis of cryptococcosis among HIV-infected patients in southern Mozambique. India ink testing was historically a common first-choice, low-cost laboratory diagnostic tool for cryptococcal infection. This study uses implementation science methods framed by the Dynamic Adaptation Process (DAP) and the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) conceptual frameworks to develop a multi-phase, stepped-wedge trial using mixed-methods approaches. The study will be conducted in six hospitals in southern Mozambique over a period of 15 months and will include the following phases: pre-implementation (baseline assessment), adaptation-implementation (gradual introduction of the intervention), and post-implementation (post-intervention assessment). This study aims to promote the use of India ink staining as a cheap and readily available tool for cryptococcosis diagnosis in southern Mozambique. Lessons learned in this study may be important to inform approaches to overcome the existing challenges in diagnosing cryptococcosis in many LMICs due to the unavailability of readily available diagnostic tools. Trial registration: ISRCTN11882960, registered 06 August 2024.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this research (publication included in the package), we have used author-assigned keywords as a quantitative data source for understanding the connections between keywords and research topics in software testing research, based on a large sample of studies from Scopus.
We apply co-word analysis to map the topology of testing research as a network where author-assigned keywords are connected by edges indicating co-occurrence in publications. Keywords are clustered based on edge density and frequency of connection. We examine the most popular keywords, summarize clusters into high-level research topics, examine how topics connect, and examine how the field is changing. This package contains the map and network files used to perform our analyses, as well as the publication sample.
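As a rough sketch of the co-word network construction described above (an assumed illustration, not the exact pipeline or tooling used in the study), assuming each publication contributes a list of author-assigned keywords:

```python
# Rough sketch: build a keyword co-occurrence (co-word) network with networkx.
from itertools import combinations

import networkx as nx

# Hypothetical sample: each entry holds one publication's author-assigned keywords.
publications = [
    ["software testing", "test case generation", "search-based testing"],
    ["software testing", "mutation testing"],
    ["mutation testing", "test case generation"],
]

graph = nx.Graph()
for keywords in publications:
    # Each unordered keyword pair in the same publication adds one co-occurrence.
    for kw_a, kw_b in combinations(sorted(set(keywords)), 2):
        if graph.has_edge(kw_a, kw_b):
            graph[kw_a][kw_b]["weight"] += 1
        else:
            graph.add_edge(kw_a, kw_b, weight=1)

# Edge weights give co-occurrence frequency; clustering keywords by edge density
# and connection frequency then groups them into higher-level research topics.
print(graph.edges(data=True))
```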
NOTE: This dataset has been retired and marked as historical-only.
This dataset contains counts of unique tests and results for COVID-19. This dataset differs from https://data.cityofchicago.org/d/t4hh-4ku9 in that each person is in that dataset only once, even if tested multiple times. In this dataset, each test is counted, even if multiple tests are performed on the same person, although a person should not appear in this dataset more than once on the same day unless he/she had both a positive and not-positive test.
The positivity rate displayed in this dataset uses the method most commonly used by other jurisdictions in the United States.
Only Chicago residents are included based on the home address as provided by the medical provider.
Molecular (PCR) and antigen tests received through electronic lab reporting are included. Individuals may be tested multiple times. Tests are counted on the day the specimen was collected. A small number of tests collected prior to 3/1/2020 are not included in the table.
Not-positive lab results include negative results, invalid results, and tests not performed due to improper collection. Chicago Department of Public Health (CDPH) does not receive all not-positive results.
All data are provisional and subject to change. Information is updated as additional details are received.
Data Source: Illinois Department of Public Health Electronic Lab Reports
License: https://www.nist.gov/open/license
These data are from laboratory tests of a prototype residential liquid-to-air ground-source air conditioner (GSAC) using CO2 as the refrigerant. The data collection and processing methods are described in detail in the report "Laboratory Tests of a Prototype Carbon Dioxide Ground-Source Air Conditioner" (NIST Technical Note 2068, October 2019, DOI: https://doi.org/10.6028/NIST.TN.2068) by Harrison Skye and Wei Wu. The tests were performed in an environmental chamber and followed the ISO 13256-1 standard for rating GSHPs. The CO2 GSAC operated in either a subcritical or a transcritical cycle, depending on the entering liquid temperature (ELT). The test results include the coefficient of performance (COP), capacity, sensible heat ratio (SHR), and pressures. The system incorporated a liquid-line/suction-line heat exchanger (LLSL-HX), which was estimated to cause a COP penalty of (0 to 2) % for ELTs ranging (10 to 25) °C and a benefit of (0 to 5) % for ELTs ranging (30 to 39) °C. With ELTs ranging (10 to 39) °C, the CO2 system cooling COP ranged (7.3 to 2.4). At the standard rating condition (ELT 25 °C), the CO2 GSAC cooling COP was 4.14, and at part-load conditions (ELT 20 °C) the system had a COP of 4.92.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This file contains the first set of tracer data for the EGS Collab testbed. The first set of tracer tests was conducted during October–November 2018. We have included tracer data for C-dots, chloride, fluorescein, and rhodamine-B. Details about the tracer tests can be found in "Background and Methods of Tracer Tests" (Mattson et al., 2019), also included in this package.
References: Mattson, E.D., Neupane, G., Plummer, M.A., Hawkins, A., Zhang, Y., and the EGS Collab Team (2019). Preliminary Collab fracture characterization results from flow and tracer testing efforts. In Proceedings of the 44th Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, California.
To download and view the dashboard and data source, please click the download button on the upper right corner.
A dataset with over half a million rows in the Excel file was successfully cleaned and visualized into a fully functional Excel dashboard with slicers. Comparisons can be made between Charter Schools and their NY State counterparts. Initial findings based on the visualizations show a common trend in the data for females to outperform males in both ELA and Math across Public and Charter Schools. Charter School performance beats statewide performance in all student subgroups except homeless students, where the statewide versus charter school percentage breakdowns show that homeless students in Charter Schools are underperforming relative to their statewide counterparts. This raises the question of how Charter Schools should give these students more attention in terms of State Test preparation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is designed to be used in evaluation studies of regression test prioritization techniques. It includes 20 open-source Java projects from GitHub and over 100,000 real-world build logs from TravisCI. The projects span a wide range of the open-source Java projects available on GitHub with regard to size, number of contributors, and maturity.
Further, the dataset includes the results of baseline approaches to ease the comparison of new techniques applied to the dataset.