The Health Information National Trends Survey (HINTS) is a biennial, cross-sectional survey of a nationally representative sample of American adults that is used to assess the impact of the health information environment. The survey provides updates on changing patterns, needs, and information opportunities in health; identifies changing communication trends and practices; assesses cancer information access and usage; provides information about how cancer risks are perceived; and offers a testbed for researchers to test new theories in health communication.
This dataset tracks the updates made to the dataset "Health Information National Trends Survey (HINTS)" and serves as a repository for previous versions of the data and metadata.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
HTTP client hint crawling data for the login pages of the 8M websites on the Tranco list.
This data set contains the Accept-CH HTTP header values crawled from all Tranco-list login pages between August 2022 and December 2023. You can use the data set to reproduce our study results regarding client hint usage on the Web.
We crawled the data from three different continents (North America: Johnstown, Ohio, USA; Europe: Frankfurt and Biere, Germany; Asia: Singapore) and two different Internet Service Providers (ISPs): Amazon Web Services (AWS) and Deutsche Telekom (DT).
You can find the crawling data inside the crawl_data_redacted folder of this repository. It is subdivided into our four different crawling regions, which correspond to the subfolders:

- eu_otc: Crawling data from Biere, Germany (Europe), using the DT ISP.
- eu_aws: Crawling data from Frankfurt, Germany (Europe), using the AWS ISP.
- ap_aws: Crawling data from Singapore (Asia), using the AWS ISP.
- us_aws: Crawling data from Johnstown, Ohio, USA (North America), using the AWS ISP.

Each folder includes the following files:
- crawl_data_login_urls_only.csv: Contains the responses from all crawled login URLs.
- crawl_data_clustered_third_party_urls_only.csv: Contains the responses from requests to third-party URLs that were initiated by the login URLs.
- crawl_data_trackerlist_urls_only.csv: Contains the responses from requests to third-party URLs that were identified as trackers and initiated by the login URLs.

Each data set file contains the following columns:
| Column | Data Type | Description | Example |
|---|---|---|---|
| date | Timestamp | Point in time when the URL was crawled | 2023-03-03 14:45:25.525 |
| login_url | String | URL of the login page that should be crawled | https://www.example.com/login.html |
| login_url_hostname | String | Hostname belonging to the crawled login URL | www.example.com |
| url | String | The actual URL that was crawled. If it differs from login_url, the row represents a third-party request. | https://www.example.com/index.html |
| url_hostname | String | Hostname belonging to the URL | www.example.com |
| Accept-CH Values (many columns) | Integer | Each column is named after a value that can appear in the Accept-CH HTTP header (e.g., sec-ch-ua-platform); the cell indicates whether that value was present (1) or not (0) | 1 or 0 |
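As a minimal usage sketch, the following Python snippet loads one regional login-URL file and computes how often a given client hint was requested. It assumes the pandas library and the folder layout described above; the sec-ch-ua-platform column is one of the Accept-CH value columns from the table.

```python
import pandas as pd

# Load the login-page responses for one crawling region; the path follows
# the repository layout described above.
df = pd.read_csv(
    "crawl_data_redacted/eu_aws/crawl_data_login_urls_only.csv",
    parse_dates=["date"],
)

# Each Accept-CH value column holds 1 if that header value was present and
# 0 otherwise, so the column mean is the share of responses requesting it.
share = df["sec-ch-ua-platform"].mean()
print(f"sec-ch-ua-platform requested in {share:.1%} of crawled login responses")
```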
We used the Tranco list from June 21, 2022, and visited all 8M hostnames on this list with a crawler bot to identify their login pages. We then crawled the login pages on a monthly basis and recorded the Accept-CH HTTP header sent by each website. For technical reasons, we had crawling gaps of one month (October 2022) and two months (October/November 2023); however, the impact should be minimal (see the publication).
You can find more details on our conducted study in the following journal article:
A Privacy Measure Turned Upside Down? Investigating the Use of HTTP Client Hints on the Web
Stephan Wiefling, Marian Hönscheid, and Luigi Lo Iacono.
19th International Conference on Availability, Reliability and Security (ARES '24), Vienna, Austria
...
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This artifact accompanies the SEET@ICSE article "Assessing the impact of hints in learning formal specification", which reports on a user study investigating the impact of different types of automated hints while learning a formal specification language, in terms of immediate performance and learning retention as well as the students' emotional response. This research artifact provides all the material required to replicate the study (except for the proprietary questionnaires used to assess the emotional response and user experience), as well as the collected data and the data analysis scripts used for the discussion in the paper.
Dataset
The artifact contains the resources described below.
Experiment resources
The resources needed for replicating the experiment, namely in directory experiment:
alloy_sheet_pt.pdf: the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment. The sheet was provided in Portuguese due to the population of the experiment.
alloy_sheet_en.pdf: an English translation of the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment.
docker-compose.yml: a Docker Compose configuration file to launch Alloy4Fun populated with the tasks in directory data/experiment for the 2 sessions of the experiment.
api and meteor: directories with source files for building and launching the Alloy4Fun platform for the study.
Experiment data
The task database used in our application of the experiment, namely in directory data/experiment:
Model.json, Instance.json, and Link.json: JSON files used to populate Alloy4Fun with the tasks for the 2 sessions of the experiment.
identifiers.txt: the list of all 104 available identifiers for participants in the experiment.
Collected data
Data collected from the application of the experiment, which was run as a simple one-factor randomised experiment in 2 sessions involving 85 undergraduate students majoring in CSE. The experiment was validated by the Ethics Committee for Research in Social and Human Sciences of the Ethics Council of the University of Minho, where the experiment took place. Data is shared in the shape of JSON and CSV files with a header row, namely in directory data/results:
data_sessions.json: data collected from task-solving in the 2 sessions of the experiment, used to calculate variables productivity (PROD1 and PROD2, between 0 and 12 solved tasks) and efficiency (EFF1 and EFF2, between 0 and 1).
data_socio.csv: data collected from the socio-demographic questionnaire in the 1st session of the experiment, namely:
participant identification: participant's unique identifier (ID);
socio-demographic information: participant's age (AGE), sex (SEX, 1 through 4 for female, male, prefer not to disclose, and other, respectively), and average academic grade (GRADE, from 0 to 20; NA denotes a preference not to disclose).
data_emo.csv: detailed data collected from the emotional questionnaire in the 2 sessions of the experiment, namely:
participant identification: participant's unique identifier (ID) and the assigned treatment (column HINT, either N, L, E or D);
detailed emotional response data: the differential in the 5-point Likert scale for each of the 14 measured emotions in the 2 sessions, ranging from -5 to -1 if decreased, 0 if maintained, from 1 to 5 if increased, or NA denoting failure to submit the questionnaire. Half of the emotions are positive (Admiration1 and Admiration2, Desire1 and Desire2, Hope1 and Hope2, Fascination1 and Fascination2, Joy1 and Joy2, Satisfaction1 and Satisfaction2, and Pride1 and Pride2), and half are negative (Anger1 and Anger2, Boredom1 and Boredom2, Contempt1 and Contempt2, Disgust1 and Disgust2, Fear1 and Fear2, Sadness1 and Sadness2, and Shame1 and Shame2). This detailed data was used to compute the aggregate data in data_emo_aggregate.csv and in the detailed discussion in Section 6 of the paper (see the illustrative sketch after this list).
data_umux.csv: data collected from the user experience questionnaires in the 2 sessions of the experiment, namely:
participant identification: participant's unique identifier (ID);
user experience data: summarised user experience data from the UMUX surveys (UMUX1 and UMUX2, as a usability metric ranging from 0 to 100).
participants.txt: the list of participant identifiers that have registered for the experiment.
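For illustration only, here is a minimal Python sketch of one plausible way to aggregate the detailed emotional differentials described above. The actual aggregation used to produce data_emo_aggregate.csv is implemented in normalize_emo.r and may differ; the column names follow the description of data_emo.csv.

```python
import pandas as pd

# Hypothetical aggregation: mean differential across positive and negative
# emotions per session. The artifact's own rule lives in normalize_emo.r.
POSITIVE = ["Admiration", "Desire", "Hope", "Fascination", "Joy", "Satisfaction", "Pride"]
NEGATIVE = ["Anger", "Boredom", "Contempt", "Disgust", "Fear", "Sadness", "Shame"]

emo = pd.read_csv("data/results/data_emo.csv")
for s in ("1", "2"):  # the two sessions
    emo[f"POS{s}"] = emo[[e + s for e in POSITIVE]].mean(axis=1)
    emo[f"NEG{s}"] = emo[[e + s for e in NEGATIVE]].mean(axis=1)

print(emo[["ID", "HINT", "POS1", "NEG1", "POS2", "NEG2"]].head())
```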
Analysis scripts
The analysis scripts required to replicate the analysis of the results of the experiment as reported in the paper, namely in directory analysis:
analysis.r: An R script to analyse the data in the provided CSV files; each performed analysis is documented within the file itself.
requirements.r: An R script to install the required libraries for the analysis script.
normalize_task.r: A Python script to normalize the task JSON data from file data_sessions.json into the CSV format required by the analysis script.
normalize_emo.r: A Python script to compute the aggregate emotional response in the CSV format required by the analysis script from the detailed emotional response data in the CSV format of data_emo.csv.
Dockerfile: a Docker script that automates running the analysis script on the collected data.
Setup
To replicate the experiment and the analysis of the results, only Docker is required.
If you wish to manually replicate the experiment and collect your own data, you'll need to install:
A modified version of the Alloy4Fun platform, which is built in the Meteor web framework. This version of Alloy4Fun is publicly available in branch study of its repository at https://github.com/haslab/Alloy4Fun/tree/study.
If you wish to manually replicate the analysis of the data collected in our experiment, you'll need to install:
Python to manipulate the JSON data collected in the experiment. Python is freely available for download at https://www.python.org/downloads/, with distributions for most platforms.
R software for the analysis scripts. R is freely available for download at https://cran.r-project.org/mirrors.html, with binary distributions available for Windows, Linux and Mac.
Usage
Experiment replication
This section describes how to replicate our user study experiment, and collect data about how different hints impact the performance of participants.
To launch the Alloy4Fun platform populated with tasks for each session, just run the following commands from the root directory of the artifact. The Meteor server may take a few minutes to launch; wait for the "Started your app" message to show.

```
cd experiment
docker-compose up
```
This will launch Alloy4Fun at http://localhost:3000. The tasks are accessed through permalinks assigned to each participant. The experiment allows for up to 104 participants, and the list of available identifiers is given in file identifiers.txt. The group of each participant is determined by the last character of the identifier, either N, L, E or D. The task database can be consulted in directory data/experiment, in Alloy4Fun JSON files.
In the 1st session, each participant was given one permalink that gives access to 12 sequential tasks. The permalink is simply the participant's identifier, so participant 0CAN would just access http://localhost:3000/0CAN. The next task becomes available after a correct submission to the current task or when a time-out occurs (5 mins). Each participant was assigned to a different treatment group, so different kinds of hints are provided depending on the permalink. Below are 4 example permalinks, one for each hint group:
Group N (no hints): http://localhost:3000/0CAN
Group L (error locations): http://localhost:3000/CA0L
Group E (counter-example): http://localhost:3000/350E
Group D (error description): http://localhost:3000/27AD
In the 2nd session, as in the 1st, each permalink gave access to 12 sequential tasks, and the next task becomes available after a correct submission or a time-out (5 mins). The permalink is constructed by prepending the participant's identifier with P-, so participant 0CAN would access http://localhost:3000/P-0CAN. In the 2nd session, all participants were expected to solve the tasks without any hints provided, so the permalinks from different groups are undifferentiated.
Before the 1st session the participants should answer the socio-demographic questionnaire, which should ask for the following information: unique identifier, age, sex, familiarity with the Alloy language, and average academic grade.
Before and after both sessions the participants should answer the standard PrEmo 2 questionnaire. PrEmo 2 is published under an Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons license (CC BY-NC-ND 4.0). This means that you are free to use the tool for non-commercial purposes as long as you give appropriate credit, provide a link to the license, and do not modify the original material. The original material, namely the depictions of the different emotions, can be downloaded from https://diopd.org/premo/. The questionnaire should ask for the unique user identifier, and for the attachment with each of the 14 depicted emotions, expressed on a 5-point Likert scale.
After both sessions the participants should also answer the standard UMUX questionnaire. This questionnaire can be used freely, and should ask for the user's unique identifier and answers to the standard 4 questions on a 7-point Likert scale. For information about the questions, how to implement the questionnaire, and how to compute the usability metric (a score ranging from 0 to 100) from the answers, please see the original paper:
Kraig Finstad. 2010. The usability metric for user experience. Interacting with computers 22, 5 (2010), 323–327.
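For convenience, here is a minimal Python sketch of the usual UMUX scoring from Finstad (2010), assuming the standard item polarity (items 1 and 3 positively worded, items 2 and 4 negatively worded); defer to the original paper for the authoritative procedure.

```python
def umux_score(q1: int, q2: int, q3: int, q4: int) -> float:
    """Compute a UMUX score (0-100) from four 7-point Likert answers (1-7).

    Assumes items 1 and 3 are positively worded and items 2 and 4
    negatively worded, per Finstad (2010).
    """
    odd = (q1 - 1) + (q3 - 1)    # positive items: higher answer, higher score
    even = (7 - q2) + (7 - q4)   # negative items: lower answer, higher score
    return (odd + even) / 24 * 100

# Example: full agreement with positive items, full disagreement with negative ones.
print(umux_score(7, 1, 7, 1))  # 100.0
```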
Analysis of other applications of the experiment
This section describes how to replicate the analysis of the data collected in an application of the experiment described in Experiment replication.
The analysis script expects data in 4 CSV files,
Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
Treasury Inflation-Protected Securities, also known as TIPS, are securities whose principal is tied to the Consumer Price Index. With inflation, the principal increases. With deflation, it decreases. When the security matures, the U.S. Treasury pays the original or adjusted principal, whichever is greater.
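To make the mechanics concrete, here is a small Python sketch with hypothetical numbers (the face value and CPI figures below are illustrative, not drawn from any dataset on this page):

```python
# Hypothetical TIPS principal adjustment: the principal scales with the
# ratio of the CPI at maturity to the CPI reference at issuance.
face_value = 1_000.00
cpi_at_issue = 250.0      # illustrative reference CPI at issuance
cpi_at_maturity = 275.0   # illustrative CPI at maturity

adjusted_principal = face_value * (cpi_at_maturity / cpi_at_issue)  # 1100.00

# At maturity, Treasury pays the greater of the original and adjusted principal.
payment = max(face_value, adjusted_principal)
print(f"Payment at maturity: ${payment:,.2f}")  # Payment at maturity: $1,100.00
```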
Hints Technologies Llp Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
D'Amico, Kim, and Wei use a no-arbitrage term structure model to decompose TIPS inflation compensation into three components: inflation expectation, inflation risk premium, and TIPS liquidity premium over the 1983-present period. The model is also used to decompose nominal yields or forward rates into four components: expected real short rate, expected inflation, inflation risk premium, and real term premium.
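In equation form, the additive decompositions described above can be sketched as follows; the notation is chosen here for illustration and is not the authors' own, and the liquidity premium is written with a negative sign on the assumption that it raises TIPS yields and hence lowers inflation compensation:

$$\mathrm{IC}_t = \mathbb{E}_t[\pi] + \mathrm{IRP}_t - \mathrm{LP}_t, \qquad y_t^{\mathrm{nom}} = \mathbb{E}_t[\bar r] + \mathbb{E}_t[\pi] + \mathrm{IRP}_t + \mathrm{RTP}_t,$$

where IC is TIPS inflation compensation, E[π] expected inflation, IRP the inflation risk premium, LP the TIPS liquidity premium, E[r̄] the average expected real short rate, and RTP the real term premium.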
NYTD in Practice publications provide resources to help the NYTD workforce.
Metadata-only record linking to the original dataset. Open original dataset below.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
This dataset was created by arman301001
Released under the Open Database License (for the database) and the Database Contents License (for its contents).
The yield curve, also called the term structure of interest rates, refers to the relationship between the remaining time-to-maturity of debt securities and the yield on those securities. Yield curves have many practical uses, including pricing of various fixed-income securities, and are closely watched by market participants and policymakers alike for potential clues about the market's perception of the path of the policy rate and the macroeconomic outlook. This page provides daily estimated real yield curve parameters, smoothed yields on hypothetical TIPS, and implied inflation compensation, from 1999 to the present. Because this is a staff research product and not an official statistical release, it is subject to delay, revision, or methodological changes without advance notice.
Hints Of The Past Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
This dataset is a compilation of easy tips to prevent type 2 diabetes. They were compiled from several documents produced by the National Diabetes Education Program (NDEP). NDEP is a partnership of the National Institutes of Health, the Centers for Disease Control and Prevention, and more than 200 public and private organizations.
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
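A minimal sketch of such a query in Python, assuming Splitgraph's public SQL-over-HTTP endpoint and a hypothetical table reference (both the endpoint URL and the table name are assumptions for illustration):

```python
import requests

# Sketch: POST a SQL query to Splitgraph's query endpoint. The endpoint URL
# and the table reference below are illustrative assumptions.
resp = requests.post(
    "https://data.splitgraph.com/sql/query/ddn",
    json={"sql": 'SELECT * FROM "example/diabetes-tips".tips LIMIT 5'},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```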
See the Splitgraph documentation for more information.
This dataset provides information about the number of properties, residents, and average property values for Tips Lane cross streets in Westover, MD.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
35 global export shipment records of Tynee Tips Leaf, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
Terms of use: https://whoisdatacenter.com/terms-of-use/
.TIPS Whois Database: discover comprehensive ownership details, registration dates, and more for the .TIPS TLD with Whois Data Center.
Molecular clock methodology provides the best means of establishing evolutionary timescales, the accuracy and precision of which remain reliant on calibration, traditionally based on fossil constraints on clade (node) ages. Tip calibration has been developed to obviate undesirable aspects of node calibration, including the need for maximum age constraints that are invariably very difficult to justify. Instead, tip calibration incorporates fossil species as dated tips alongside living relatives, potentially improving the accuracy and precision of divergence time estimates. We demonstrate that tip calibration yields node calibrations that violate fossil evidence, contributing to unjustifiably young and ancient age estimates, less precise and (presumably) accurate than conventional node calibration. However, we go on to show that node and tip calibrations are complementary, producing meaningful age estimates, with node minima enforcing realistic ages and fossil tips interacting with node calibrations to objectively define maximum age constraints on clade ages. Together, tip and node calibrations may yield evolutionary timescales that are better justified, more precise and accurate than either calibration strategy can achieve alone.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
United States TIPS Yield: Inflation Indexed: Long Term Average: >10 Years data was reported at 1.190 % pa in Oct 2018. This records an increase from the previous number of 0.990 % pa for Sep 2018. United States TIPS Yield: Inflation Indexed: Long Term Average: >10 Years data is updated monthly, averaging 1.620 % pa (median) from Jan 2003 to Oct 2018, with 190 observations. The data reached an all-time high of 3.090 % pa in Nov 2008 and a record low of -0.120 % pa in Dec 2012. United States TIPS Yield: Inflation Indexed: Long Term Average: >10 Years data remains in active status in CEIC and is reported by the Federal Reserve Board. The data is categorized under Global Database's United States – Table US.M008: Treasury Securities Yields.