19 datasets found
  1. TIGER/Line Shapefile, 2016, county, Newport News city, VA, Address Range-Feature Name County-based Relationship File

    • catalog.data.gov
    Updated Dec 3, 2020
    Cite
    (2020). TIGER/Line Shapefile, 2016, county, Newport News city, VA, Address Range-Feature Name County-based Relationship File [Dataset]. https://catalog.data.gov/dataset/tiger-line-shapefile-2016-county-newport-news-city-va-address-range-feature-name-county-based-r
    Dataset updated
    Dec 3, 2020
    Area covered
    Newport News, Virginia
    Description

    The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts, however, each TIGER/Line shapefile is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. The Address Range / Feature Name Relationship File (ADDRFN.dbf) contains a record for each address range / linear feature name relationship. The purpose of this relationship file is to identify all street names associated with each address range. An edge can have several feature names; an address range located on an edge can be associated with one or any combination of the available feature names (an address range can be linked to multiple feature names). The address range is identified by the address range identifier (ARID) attribute that can be used to link to the Address Ranges Relationship File (ADDR.dbf). The linear feature name is identified by the linear feature identifier (LINEARID) attribute that can be used to link to the Feature Names Relationship File (FEATNAMES.dbf).
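    To make the linkage concrete, here is a minimal Python sketch of the join described above, using toy records in place of real .dbf contents (the ARID and LINEARID keys come from the description; all values and other field names are made up for illustration):

    ```python
    # Toy stand-ins for the three related files; only ARID and LINEARID
    # are named in the description, everything else is illustrative.
    addrfn = [  # ADDRFN.dbf: one record per address range / feature name pair
        {"ARID": "400123", "LINEARID": "110456"},
        {"ARID": "400123", "LINEARID": "110789"},  # same range, a second street name
    ]
    addr = {"400123": {"range": "100-198"}}  # ADDR.dbf, keyed by ARID
    featnames = {"110456": "Main St", "110789": "State Route 5"}  # FEATNAMES.dbf, keyed by LINEARID

    # All street names associated with address range 400123:
    names = [r2 for r in addrfn if r["ARID"] == "400123" for r2 in [featnames[r["LINEARID"]]]]
    print(names)  # ['Main St', 'State Route 5']
    ```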

  2. News where news title includes R

    • workwithdata.com
    Updated Jul 9, 2024
    Cite
    Work With Data (2024). News where news title includes R [Dataset]. https://www.workwithdata.com/datasets/news?R%20Window=&f=1&fcol0=news_title_matched&fop0=includes&fval0=R
    Dataset updated
    Jul 9, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about news and is filtered where the news title includes R, featuring 10 columns including classification, entities, keywords, news link, and news title. The preview is ordered by publication date (descending).

  3. Data from: TDMentions: A Dataset of Technical Debt Mentions in Online Posts

    • zenodo.org
    • data.niaid.nih.gov
    bin, bz2
    Updated Jan 24, 2020
    Cite
    Morgan Ericsson; Anna Wingkvist (2020). TDMentions: A Dataset of Technical Debt Mentions in Online Posts [Dataset]. http://doi.org/10.5281/zenodo.2593142
    Available download formats: bin, bz2
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Morgan Ericsson; Anna Wingkvist
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    # TDMentions: A Dataset of Technical Debt Mentions in Online Posts (version 1.0)

    TDMentions is a dataset that contains mentions of technical debt from Reddit, Hacker News, and Stack Exchange. It also contains a list of blog posts on Medium that were tagged as technical debt. The dataset currently contains approximately 35,000 items.

    ## Data collection and processing

    The dataset is mainly collected from existing datasets. We used data from:

    - the archive of Reddit posts by Jason Baumgartner (available at [https://pushshift.io](https://pushshift.io)),
    - the archive of Hacker News available at Google's BigQuery (available at [https://console.cloud.google.com/marketplace/details/y-combinator/hacker-news](https://console.cloud.google.com/marketplace/details/y-combinator/hacker-news)),
    - the Stack Exchange data dump (available at [https://archive.org/details/stackexchange](https://archive.org/details/stackexchange)),
    - the [GHTorrent](http://ghtorrent.org) project
    - the [GH Archive](https://www.gharchive.org)

    The data set currently contains data from the start of each source/service until 2018-12-31. For GitHub, we currently only include data from 2015-01-01.

    We use the regular expression `tech(nical)?[\s\-_]*?debt` to find mentions in all sources except for Medium. We decided to limit our matches to variations of technical debt and tech debt. Other shorter forms, such as TD, can result in too many false positives. For Medium, we used the tag `technical-debt`.
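    A quick way to sanity-check this pattern in Python (case-insensitive matching is an assumption, implied by the capitalized "Technical Debt" examples in the data):

    ```python
    import re

    # The mention-matching pattern quoted above; IGNORECASE is an assumption.
    pattern = re.compile(r"tech(nical)?[\s\-_]*?debt", re.IGNORECASE)

    assert pattern.search("Technical Debt Explained")   # variation: "technical debt"
    assert pattern.search("our tech-debt backlog")      # variation: "tech debt"
    assert pattern.search("tech_debt")                  # underscore separator
    assert pattern.search("TD is piling up") is None    # short form intentionally excluded
    ```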

    ## Data Format

    The dataset is stored as a compressed (bzip2) JSON file with one JSON object per line. Each mention is represented as a JSON object with the following keys.

    - `id`: the id used in the original source. We use the URL path to identify Medium posts.
    - `body`: the text that contains the mention. This is either the comment or the title of the post. For Medium posts this is the title and subtitle (which might not mention technical debt, since posts are identified by the tag).
    - `created_utc`: the time the item was posted in seconds since epoch in UTC.
    - `author`: the author of the item. We use the username or userid from the source.
    - `source`: where the item was posted. Valid sources are:
      - HackerNews Comment
      - HackerNews Job
      - HackerNews Submission
      - Reddit Comment
      - Reddit Submission
      - StackExchange Answer
      - StackExchange Comment
      - StackExchange Question
      - Medium Post
    - `meta`: Additional information about the item specific to the source. This includes, e.g., the subreddit a Reddit submission or comment was posted to, the score, etc. We try to use the same names, e.g., `score` and `num_comments` for keys that have the same meaning/information across multiple sources.

    This is a sample item from Reddit:

    ```JSON
    {
      "id": "ab8auf",
      "body": "Technical Debt Explained (x-post r/Eve)",
      "created_utc": 1546271789,
      "author": "totally_100_human",
      "source": "Reddit Submission",
      "meta": {
        "title": "Technical Debt Explained (x-post r/Eve)",
        "score": 1,
        "num_comments": 0,
        "url": "http://jestertrek.com/eve/technical-debt-2.png",
        "subreddit": "RCBRedditBot"
      }
    }
    ```

    ## Sample Analyses

    We decided to use JSON to store the data, since it is easy to work with from multiple programming languages. In the following examples, we use [`jq`](https://stedolan.github.io/jq/) to process the JSON.

    ### How many items are there for each source?

    ```
    lbzip2 -cd postscomments.json.bz2 | jq '.source' | sort | uniq -c
    ```

    ### How many submissions that mentioned technical debt were posted each month?

    ```
    lbzip2 -cd postscomments.json.bz2 | jq 'select(.source == "Reddit Submission") | .created_utc | strftime("%Y-%m")' | sort | uniq -c
    ```

    ### What are the titles of items that link (`meta.url`) to PDF documents?

    ```
    lbzip2 -cd postscomments.json.bz2 | jq '. as $r | select(.meta.url?) | .meta.url | select(endswith(".pdf")) | $r.body'
    ```

    ### Please, I want CSV!

    ```
    lbzip2 -cd postscomments.json.bz2 | jq -r '[.id, .body, .author] | @csv'
    ```

    Note that you need to specify the keys you want to include for the CSV, so it is easier to either ignore the meta information or process each source.
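    The jq pipelines above also translate directly to Python; a minimal sketch of the first query (per-source totals), assuming the same `postscomments.json.bz2` file:

    ```python
    import bz2
    import collections
    import json

    # Count items per source, mirroring `jq '.source' | sort | uniq -c`.
    counts = collections.Counter()
    with bz2.open("postscomments.json.bz2", "rt", encoding="utf-8") as fh:
        for line in fh:
            counts[json.loads(line)["source"]] += 1

    for source, n in counts.most_common():
        print(f"{n:8d}  {source}")
    ```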

    Please see [https://github.com/sse-lnu/tdmentions](https://github.com/sse-lnu/tdmentions) for more analyses.

    # Limitations and Future updates

    The current version of the dataset lacks GitHub data and Medium comments. GitHub data will be added in the next update. Medium comments (responses) will be added in a future update if we find a good way to represent these.

  4. Views of ABC News Digital Content (May 2016)

    • researchdata.edu.au
    • cloud.csiss.gmu.edu
    • +3more
    Updated Jul 21, 2016
    Cite
    Australian Broadcasting Corporation (2016). Views of ABC News Digital Content (May 2016) [Dataset]. https://researchdata.edu.au/views-abc-news-may-2016/2985934
    Dataset updated
    Jul 21, 2016
    Dataset provided by
    data.gov.au
    Authors
    Australian Broadcasting Corporation
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    The Views of ABC News Digital Content dataset provides both the number of page/screen views per hour for individual pieces of ABC News content and metadata related to each piece of content. The data is taken from across different ABC digital platforms during the month of May 2016. These platforms include the ABC News desktop and mobile websites and the ABC app (both iOS and Android versions). Each piece of content is represented by its ID, which is consistent for the same piece of content across platforms. The URL of the content can be recreated using the platform and this ID. For example, for the “News” platform and id “7373616”, the URL is “http://www.abc.net.au/news/7373616”. The content ID is the key which joins the traffic data with the content metadata. The dataset covers the period from 2016-05-01 00:00:00 to 2016-05-31 23:59:59.
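    A minimal Python sketch of that URL reconstruction (the platform-to-base-URL mapping is an assumption; only the "News" example is given above):

    ```python
    # Rebuild a content URL from platform and content ID, per the example above.
    # This mapping is assumed; only the "News" platform is documented.
    PLATFORM_BASE = {"News": "http://www.abc.net.au/news"}

    def content_url(platform: str, content_id: str) -> str:
        return f"{PLATFORM_BASE[platform]}/{content_id}"

    print(content_url("News", "7373616"))  # http://www.abc.net.au/news/7373616
    ```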

    Rights information

    This data includes metadata about existing publicly available stories. In general terms, developers are free to use this data to explore ABC's content. But original stories and images should always be linked to. Stories and photos should not be reproduced in whole on another service. The stories themselves and their associated media items still remain the property of ABC and other rights holders where noted. Full details of copyright and more are listed on abc.net.au: http://www.abc.net.au/conditions.htm

  5. News where news includes R. Chu

    • workwithdata.com
    Updated May 21, 2024
    Cite
    Work With Data (2024). News where news includes R. Chu [Dataset]. https://www.workwithdata.com/datasets/news?f=1&fcol0=news_title_matched&fop0=includes&fval0=R.+Chu
    Dataset updated
    May 21, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about news and is filtered where the news includes R. Chu. It has 8 columns such as classification, entities, news, news link, and publication date. The data is ordered by publication date (descending).

  6. useNews

    • osf.io
    Updated Sep 26, 2022
    Cite
    Cornelius Puschmann; Mario Haim (2022). useNews [Dataset]. http://doi.org/10.17605/OSF.IO/UZCA3
    Dataset updated
    Sep 26, 2022
    Dataset provided by
    Center For Open Science
    Authors
    Cornelius Puschmann; Mario Haim
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The useNews dataset has been compiled to enable the study of online news engagement. It relies on the MediaCloud and CrowdTangle APIs as well as on data from the Reuters Digital News Report. The entire dataset builds on data from 2019 and 2020 as well as a total of 12 countries. It is free to use (subject to citing/referencing it).

    The data originates from three sources: the 2019 and 2020 Reuters Digital News Reports (http://www.digitalnewsreport.org/); media content from MediaCloud (https://mediacloud.org/) for 2019 and 2020, from all news outlets that were used most frequently in the respective year according to the survey data; and engagement metrics for all available news-article URLs through CrowdTangle (https://www.crowdtangle.com/).

    To start using the data, a total of eight data objects exist, namely one each for 2019 and 2020 for the survey, news-article meta information, news-article DFMs, and engagement metrics. To make your life easy, we've provided several packaged download options:

    • survey data for 2019, 2020, or both (also available in CSV format)
    • news-article metadata for 2019, 2020, or both (also available in CSV format)
    • news-article DFMs for 2019, 2020, or both
    • engagement data for 2019, 2020, or both (also available in CSV format)
    • all of 2019 or 2020

    Also, if you are working with R, we have prepared a simple file to automatically download all necessary data (~1.5 GByte) at once: https://osf.io/fxmgq/

    Note that all .rds files are .xz-compressed, which shouldn't bother you when you are in R. You can import each of the .rds files through variable_name <- readRDS('filename.rds'); .RData files (also .xz-compressed) can be imported by simply using load('filename.RData'), which will load several already-named objects into your R environment. To import the data in other programming languages, we also provide all data as CSV files. These files are rather large, however, which is why we have .xz-compressed them as well. DFMs, unfortunately, are not available as CSVs due to their sparsity and size.
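    For other languages, the .xz-compressed CSV exports can be read directly; a minimal Python sketch (the file name survey-2019.csv.xz is hypothetical; substitute the file you downloaded):

    ```python
    import csv
    import lzma

    # Read an .xz-compressed CSV export; the file name is a placeholder.
    with lzma.open("survey-2019.csv.xz", "rt", encoding="utf-8") as fh:
        rows = list(csv.DictReader(fh))

    print(len(rows), "rows; columns:", list(rows[0].keys()))
    ```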

    Find out more about the data variables and dig into plenty of examples in the useNews-examples workbook: https://osf.io/snuk2/

  7. Reddit r/news Posts

    • kaggle.com
    zip
    Updated May 24, 2021
    Cite
    lowerlight (2021). Reddit r/news Posts [Dataset]. https://www.kaggle.com/datasets/lowerlight/reddit-rnews-posts
    Available download formats: zip (140902379 bytes)
    Dataset updated
    May 24, 2021
    Authors
    lowerlight
    Description

    This dataset was created by lowerlight.

  8. SEN - Sentiment analysis of Entities in News headlines

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 15, 2023
    Cite
    Marcin Sydow (2023). SEN - Sentiment analysis of Entities in News headlines [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5211931
    Dataset updated
    Oct 15, 2023
    Dataset provided by
    Marcin Sydow
    Katarzyna Baraniak
    Description

    If you wish to use this data please cite:

    Katarzyna Baraniak, Marcin Sydow, A dataset for Sentiment analysis of Entities in News headlines (SEN), Procedia Computer Science, Volume 192, 2021, Pages 3627-3636, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2021.09.136. (https://www.sciencedirect.com/science/article/pii/S1877050921018755)

    bibtex: users.pja.edu.pl/~msyd/bibtex/sydow-baraniak-SENdataset-kes21.bib

    SEN is a novel publicly available human-labelled dataset for training and testing machine learning algorithms for the problem of entity level sentiment analysis of political news headlines.

    On-line news portals play a very important role in the information society. Fair media should present reliable and objective information. In practice there is an observable positive or negative bias concerning named entities (e.g. politicians) mentioned in the on-line news headlines. Our dataset consists of 3819 human-labelled political news headlines coming from several major on-line media outlets in English and Polish.

    Each record contains a news headline, a named entity mentioned in the headline, and a human-annotated label (one of “positive”, “neutral”, “negative”). Our SEN dataset package consists of 2 parts: SEN-en (English headlines, split into SEN-en-R and SEN-en-AMT) and SEN-pl (Polish headlines). Each headline-entity pair was annotated either by a team of volunteer researchers (the whole SEN-pl dataset and a subset of 1271 English records: the SEN-en-R subset, “R” for “researchers”) or via the Amazon Mechanical Turk service (a subset of 1360 English records: the SEN-en-AMT subset).

    During analysis of the annotations, outlying annotations were identified and removed. A separate version of the dataset without outliers is marked by "noutliers" in the data file name.

    Details of the process of preparing the dataset and an analysis of it are presented in the paper.

    In case of any questions, please contact one of the authors. Email addresses are in the paper.

  9. News where news includes R

    • workwithdata.com
    Updated Jul 9, 2024
    Cite
    Work With Data (2024). News where news includes R [Dataset]. https://www.workwithdata.com/datasets/news?f=1&fcol0=news_title_matched&fop0=includes&fval0=R
    Dataset updated
    Jul 9, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about news and is filtered where the news includes R. It has 8 columns such as classification, entities, news, news link, and publication date. The data is ordered by publication date (descending).

  10. News title of news where news title includes R. Chu

    • workwithdata.com
    Updated Aug 2, 2024
    Cite
    Work With Data (2024). News title of news where news title includes R. Chu [Dataset]. https://www.workwithdata.com/datasets/news?col=news_link%2Cnews_title_matched&f=1&fcol0=news_title_matched&fop0=includes&fval0=R.+Chu
    Dataset updated
    Aug 2, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about news and is filtered where the news title includes R. Chu, featuring 2 columns: news link, and news title. The preview is ordered by publication date (descending).

  11. Replication Data for Estimation of Media Slants in South Korean News Agencies Using News Reports on the Sewol Ferry Disaster

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Park, Jong Hee (2023). Replication Data for Estimation of Media Slants in South Korean News Agencies Using News Reports on the Sewol Ferry Disaster [Dataset]. http://doi.org/10.7910/DVN/6RMWBD
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Park, Jong Hee
    Area covered
    South Korea
    Description

    This dataset contains replication data for "Estimation of Media Slants in South Korean News Agencies Using News Reports on the Sewol Ferry Disaster." It contains two Rdata files, one BUGS code file, and one R script. The two Rdata files include phrase frequencies computed from preprocessed text documents of news reports on the Sewol Ferry Disaster from 28 news agencies in South Korea. The R script implements the main results of the analysis in the paper. The computation of the model is done with JAGS.

  12. Data from: Unpacking the Nuances of Agenda-Setting in the Online Media...

    • figshare.com
    zip
    Updated Apr 27, 2024
    Cite
    Yuzhou Tao; Mark Boukes; Andreas Schuck (2024). Unpacking the Nuances of Agenda-Setting in the Online Media Environment: An Hourly-Event Approach in the Context of Chinese Economic News [Dataset]. http://doi.org/10.6084/m9.figshare.25497556.v1
    Available download formats: zip
    Dataset updated
    Apr 27, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Yuzhou Tao; Mark Boukes; Andreas Schuck
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the appendix, the dataset, and the analysis files for the study "Unpacking the Nuances of Agenda-Setting in the Online Media Environment: An Hourly-Event Approach in the Context of Chinese Economic News."

    Except for the appendix, the "Data" folder contains 36 CSV files, one per news event. In each file, the first column "hour" denotes the hourly intervals of the data, and columns 2–6 denote the endogenous variables included in the VAR models (i.e., the raw volume of coverage or discussion in the different groups of interest: media, netizens, and other institutions). The datasets have been aggregated by 19-hour lags each day, resulting in 266 lags for the 14-day time window.

    The "AnalysisFiles" folder contains the R code and copies of the results for the analysis:

    - "TimeSeriesAnalysis" contains the R code for the time-series analysis of this study, along with copies of the results for the VAR models.
    - "t-test & ANOVA" contains the results of the 36 separate VAR models and the R code for the t-test and ANOVA on the effect of event features on agenda-setting influence, along with copies of the t-test and ANOVA results.
    - "Figure" contains the R code for creating Figure 1 and Figure 2 in the main text of this study, along with copies of these two figures.

  13. A Weakly-Labeled Stance Dataset during the 2019 South American Protests

    • explore.openaire.eu
    Updated Apr 5, 2021
    Cite
    Ramon Villa-Cox; Helen (Shuxuan) Zeng; Ashiqur R. KhudaBukhsh; Kathleen M. Carley (2021). A Weakly-Labeled Stance Dataset during the 2019 South American Protests [Dataset]. http://doi.org/10.5281/zenodo.6213031
    Dataset updated
    Apr 5, 2021
    Authors
    Ramon Villa-Cox; Helen (Shuxuan) Zeng; Ashiqur R. KhudaBukhsh; Kathleen M. Carley
    Description

    Research across different disciplines has documented the expanding polarization in social media. However, much of it has focused on the US political system or its culturally controversial topics. In this work, we explore polarization on Twitter in a different context, namely the protests that paralyzed several countries in the South American region in 2019. By leveraging users' endorsement of politicians' tweets and hashtag campaigns with defined stances towards the government of each country (for or against), we construct a weakly labeled stance dataset with hundreds of thousands of users. Moreover, through the synergistic usage of network-focused methods applied to news sharing patterns and language-focused methods, we validate our labeling methodology by showing that these stances partition the users into meaningful communities. That is, we show that polarization in users' news sharing patterns was consistent with their stances towards the government, and that polarization in their language mainly manifested along ideological, political, or protest-related lines.

  14. KEANE dataset instance structure.

    • figshare.com
    xls
    Updated Jul 8, 2024
    Cite
    Juan R. Martinez-Rico; Lourdes Araujo; Juan Martinez-Romo (2024). KEANE dataset instance structure. [Dataset]. http://doi.org/10.1371/journal.pone.0305362.t004
    Available download formats: xls
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Juan R. Martinez-Rico; Lourdes Araujo; Juan Martinez-Romo
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Disinformation in the medical field is a growing problem that carries a significant risk. Therefore, it is crucial to detect and combat it effectively. In this article, we provide three elements to aid in this fight: 1) a new framework that collects health-related articles from verification entities and facilitates their check-worthiness and fact-checking annotation at the sentence level; 2) a corpus generated using this framework, composed of 10335 sentences annotated with these two concepts and grouped into 327 articles, which we call KEANE (faKe nEws At seNtence lEvel); and 3) a new model for verifying fake news that combines specific identifiers of the medical domain with subject-predicate-object triplets, using Transformers and feedforward neural networks at the sentence level. This model predicts the fact-checking of sentences and evaluates the veracity of the entire article. After training this model on our corpus, we achieved remarkable results in the binary classification of sentences (check-worthiness F1: 0.749, fact-checking F1: 0.698) and in the final classification of complete articles (F1: 0.703). We also tested its performance against another public dataset and found that it performed better than most systems evaluated on that dataset. Moreover, the corpus we provide differs from other existing corpora in its duality of sentence-article annotation, which can provide an additional level of justification for the model's prediction of truth or untruth.

  15. RSS feed of Events - City of Greater Geelong

    • researchdata.edu.au
    Updated Jun 19, 2015
    Cite
    City of Greater Geelong (2015). RSS feed of Events - City of Greater Geelong [Dataset]. https://researchdata.edu.au/rss-feed-events-greater-geelong/2984128
    Dataset updated
    Jun 19, 2015
    Dataset provided by
    data.gov.au
    Authors
    City of Greater Geelong
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    An RSS feed of Geelong events. The dataset contains fields such as title, link, description, published date, and lat/long. Although all due care has been taken to ensure that these data are correct, no warranty is expressed or implied by the City of Greater Geelong in their use.

  16. Replication Data for: "Automated news: Better than expected?"

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Apr 25, 2017
    Cite
    Mario Haim; Andreas Graefe (2017). Replication Data for: "Automated news: Better than expected?" [Dataset]. http://doi.org/10.7910/DVN/OU552V
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Apr 25, 2017
    Dataset provided by
    Harvard Dataverse
    Authors
    Mario Haim; Andreas Graefe
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Online appendix to a manuscript by Andreas Graefe and myself on the expectation-perception confirmation link in the perception of automated news. This dataset includes the data, R script, and stimuli, as well as additional calculations.

  17. An RSS feed of Council News - City of Greater Geelong

    • researchdata.edu.au
    Updated Jun 19, 2015
    Cite
    City of Greater Geelong (2015). An RSS feed of Council News - City of Greater Geelong [Dataset]. https://researchdata.edu.au/an-rss-feed-greater-geelong/2984092
    Dataset updated
    Jun 19, 2015
    Dataset provided by
    data.gov.au
    Authors
    City of Greater Geelong
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    An RSS feed of council news. The dataset contains fields such as title, link, description, copyright, and published date. Although all due care has been taken to ensure that these data are correct, no warranty is expressed or implied by the City of Greater Geelong in their use.

  18. Overall description of the datasets.

    • plos.figshare.com
    xls
    Updated Jun 10, 2023
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Keshab Raj Dahal; Nawa Raj Pokhrel; Santosh Gaire; Sharad Mahatara; Rajendra P. Joshi; Ankrit Gupta; Huta R. Banjade; Jeorge Joshi (2023). Overall description of the datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0284695.t001
    Available download formats: xls
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Keshab Raj Dahal; Nawa Raj Pokhrel; Santosh Gaire; Sharad Mahatara; Rajendra P. Joshi; Ankrit Gupta; Huta R. Banjade; Jeorge Joshi
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The accelerated progress in artificial intelligence encourages sophisticated deep learning methods in predicting stock prices. In the meantime, easy accessibility of the stock market in the palm of one’s hand has made its behavior more fuzzy, volatile, and complex than ever. The world is looking for an accurate and reliable model that uses text and numerical data which better represents the market’s highly volatile and non-linear behavior in a broader spectrum. A research gap exists in accurately predicting a target stock’s closing price utilizing the combined numerical and text data. This study uses long short-term memory (LSTM) and gated recurrent unit (GRU) to predict the stock price using stock features alone and incorporating financial news data in conjunction with stock features. The comparative study carried out under identical conditions dispassionately evaluates the importance of incorporating financial news in stock price prediction. Our experiment concludes that incorporating financial news data produces better prediction accuracy than using the stock fundamental features alone. The performances of the model architectures are compared using the standard assessment metrics: Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Correlation Coefficient (R). Furthermore, statistical tests are conducted to further verify the models’ robustness and reliability.
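    As a reference for the metrics named above, a minimal Python sketch on toy values (the numbers are illustrative, not from the study):

    ```python
    import math

    # Toy actual/predicted series; values are illustrative only.
    actual = [10.0, 12.0, 11.5, 13.0]
    predicted = [9.5, 12.5, 11.0, 13.5]
    n = len(actual)

    # Root Mean Square Error and Mean Absolute Percentage Error
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mape = 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

    # Pearson correlation coefficient (R)
    mean_a = sum(actual) / n
    mean_p = sum(predicted) / n
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(actual, predicted))
    var_a = sum((a - mean_a) ** 2 for a in actual)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / math.sqrt(var_a * var_p)

    print(f"RMSE={rmse:.3f}  MAPE={mape:.2f}%  R={r:.3f}")
    ```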

  19. Subject participation by school in years 11 and 12

    • researchdata.edu.au
    Updated Oct 9, 2014
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    data.qld.gov.au (2014). Subject participation by school in years 11 and 12 [Dataset]. https://researchdata.edu.au/subject-participation-school-11-12/660447
    Dataset updated
    Oct 9, 2014
    Dataset provided by
    Queensland Government (http://qld.gov.au/)
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    A CSV of the subject participation by school in years 11 and 12. This dataset is no longer being updated. Information on subject participation by school is now being updated directly on the QCAA's website at https://www.qcaa.qld.edu.au/news-data/statistics

