100+ datasets found
  1. Ecommerce Data - Product data, Seller data, Market data, Pricing data|...

    • datarade.ai
    Updated Dec 1, 2023
    + more versions
    Cite
    APISCRAPY (2023). Ecommerce Data - Product data, Seller data, Market data, Pricing data| Scrape all publicly available eCommerce data| 50% Cost Saving | Free Sample [Dataset]. https://datarade.ai/data-products/apiscrapy-mobile-app-data-api-scraping-service-app-intel-apiscrapy
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt (available download formats)
    Dataset updated
    Dec 1, 2023
    Dataset authored and provided by
    APISCRAPY
    Area covered
    Ukraine, China, Åland Islands, Malta, Spain, Norway, Bosnia and Herzegovina, United States of America, Switzerland, Isle of Man
    Description

    Note: only publicly available data can be worked on.

    In today's ever-evolving Ecommerce landscape, success hinges on the ability to harness the power of data. APISCRAPY is your strategic ally, dedicated to providing a comprehensive solution for extracting critical Ecommerce data, including Ecommerce market data, Ecommerce product data, and Ecommerce datasets. With the Ecommerce arena being more competitive than ever, having a data-driven approach is no longer a luxury but a necessity.

    APISCRAPY's forte lies in its ability to unearth valuable Ecommerce market data. We recognize that understanding the market dynamics, trends, and fluctuations is essential for making informed decisions.

    APISCRAPY's AI-driven ecommerce data scraping service presents several advantages for individuals and businesses seeking comprehensive insights into the ecommerce market. Here are key benefits associated with their advanced data extraction technology:

    1. Ecommerce Product Data: APISCRAPY's AI-driven approach ensures the extraction of detailed Ecommerce Product Data, including product specifications, images, and pricing information. This comprehensive data is valuable for market analysis and strategic decision-making.

    2. Data Customization: APISCRAPY enables users to customize the data extraction process, ensuring that the extracted ecommerce data aligns precisely with their informational needs. This customization option adds versatility to the service.

    3. Efficient Data Extraction: APISCRAPY's technology streamlines the data extraction process, saving users time and effort. The efficiency of the extraction workflow ensures that users can obtain relevant ecommerce data swiftly and consistently.

    4. Real-time Insights: Businesses can gain real-time insights into the dynamic Ecommerce market by accessing rapidly extracted data. This real-time information is crucial for staying ahead of market trends and making timely adjustments to business strategies.

    5. Scalability: The technology behind APISCRAPY allows scalable extraction of ecommerce data from various sources, accommodating evolving data needs and handling increased volumes effortlessly.

    Beyond the broader market, a deeper dive into specific products can provide invaluable insights. APISCRAPY excels in collecting Ecommerce product data, enabling businesses to analyze product performance, pricing strategies, and customer reviews.

    To navigate the complexities of the Ecommerce world, you need access to robust datasets. APISCRAPY's commitment to providing comprehensive Ecommerce datasets ensures businesses have the raw materials required for effective decision-making.

    Our primary focus is on Amazon data, offering businesses a wealth of information to optimize their Amazon presence. By doing so, we empower our clients to refine their strategies, enhance their products, and make data-backed decisions.

    [Tags: Ecommerce data, Ecommerce Data Sample, Ecommerce Product Data, Ecommerce Datasets, Ecommerce market data, Ecommerce Market Datasets, Ecommerce Sales data, Ecommerce Data API, Amazon Ecommerce API, Ecommerce scraper, Ecommerce Web Scraping, Ecommerce Data Extraction, Ecommerce Crawler, Ecommerce data scraping, Amazon Data, Ecommerce web data]

  2. Note Taking App Market Analysis, Size, and Forecast 2024-2028: North America...

    • technavio.com
    Updated Dec 19, 2024
    Cite
    Technavio (2024). Note Taking App Market Analysis, Size, and Forecast 2024-2028: North America (US and Canada), Europe (France, Germany, Italy, The Netherlands, and UK), APAC (China, India, and Japan), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/note-taking-app-market-industry-analysis
    Explore at:
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Canada, France, United Kingdom, Japan, Germany, United States, Global
    Description


    Note Taking App Market Size 2024-2028

    The note taking app market size is forecast to increase by USD 9.74 billion, at a CAGR of 17% between 2023 and 2028.
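
    As a quick illustration of the arithmetic behind these figures (an assumption-laden sketch, not a number from the report: it treats the USD 9.74 billion increment as accruing over the five years 2023-2028 at the stated 17% CAGR):

    # Illustrative arithmetic only: infer the implied 2023 base market size
    # from the stated USD 9.74 billion increment at a 17% CAGR over 5 years.
    increment = 9.74          # USD billion, 2023 -> 2028
    cagr, years = 0.17, 5

    base = increment / ((1 + cagr) ** years - 1)
    print(f"implied 2023 size: ${base:.2f}B, 2028 size: ${base + increment:.2f}B")
    # implied 2023 size: $8.17B, 2028 size: $17.91B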

    The market is experiencing significant growth, driven by increasing digitization and internet penetration. The integration of Artificial Intelligence (AI) and automation in note taking apps is revolutionizing the way users capture and organize information. This trend is expected to continue as technology advances, offering new opportunities for innovation and user convenience. However, the market faces challenges related to data privacy concerns. With the growing use of note taking apps, the sensitive information they store becomes a potential target for cyber threats.
    Addressing these concerns through robust security measures and transparent data handling practices is essential for companies seeking to build trust and maintain user loyalty. Effective navigation of these challenges will be crucial for businesses looking to capitalize on the market's potential and stay competitive in the evolving digital landscape.
    

    What will be the Size of the Note Taking App Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2018-2022 and forecasts 2024-2028 - in the full report.

    The note-taking app market continues to evolve, with dynamic market activities unfolding across various sectors. Backup and restore, cloud synchronization, and waterfall methodology are integral components of these applications, ensuring seamless data management. Handwriting recognition and user analytics offer enhanced functionality, while advertising revenue and in-app purchases generate monetization opportunities. Data security, compliance regulations, and performance optimization address growing concerns, ensuring user trust and retention. Version control, audio recording, and cost optimization are essential for efficient note-taking, while organization features, user experience (UX), and desktop app development cater to diverse user needs. Subscription models, search functionality, and collaboration tools enable effective teamwork, and product roadmaps facilitate prioritization and feature development.

    How is this Note Taking App Industry segmented?

    The note taking app industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2024-2028, as well as historical data from 2018-2022 for the following segments.

    Application
    
      Private users
      Commercial users
    
    
    Type
    
      Windows system
      Android system
      iOS system
    
    
    Platform
    
      Mobile
      Desktop
      Web-based
    
    
    End-User
    
      Student
      Professional
      Casual User
    
    
    Geography
    
      North America
    
        US
    
    
      Europe
    
        Germany
    
    
      APAC
    
        China
        India
        Japan
    
    
      Rest of World (ROW)
    

    By Application Insights

    The private users segment is estimated to witness significant growth during the forecast period.

    Note taking apps have gained popularity in both business and personal sectors, with the Private Users segment primarily consisting of individuals utilizing these tools for organizing thoughts, managing tasks, capturing ideas, journaling, and studying. Notable apps catering to this demographic include Microsoft OneNote, Evernote, Google Keep, and Apple Notes. These platforms offer features such as cloud synchronization, multimedia support, handwriting recognition, and cross-device accessibility. The growth of this segment can be attributed to the increasing prevalence of smartphones and tablets, particularly among students and knowledge workers. Many apps provide free versions with fundamental features, making them an attractive option for budget-conscious users.

    Additionally, educational tools integration is a common feature for student users. Agile development methodologies, like Scrum, facilitate frequent updates and beta testing, ensuring continuous improvement. API integrations enable seamless data exchange with other applications, while tagging systems and search functionality enhance productivity. Subscription models offer advanced features, and collaboration tools foster teamwork. User interface design prioritizes user experience (UX), ensuring ease of use. Backup and restore, data encryption, and data security ensure data protection. Compliance regulations, performance optimization, and retention rate are crucial considerations for businesses. Version control, audio recording, cost optimization, organization features, and user feedback further enhance functionality.

    Desktop app development and web app development cater to diverse user preferences. Software testing, security features, customer service, and data analytics ensure app reliability and user satisfaction. Mobile app development and agile development methodologies ensure app accessibility and adaptabili

  3. Data_Sheet_8_Saving Time for Patient Care by Optimizing Physician Note...

    • frontiersin.figshare.com
    pdf
    Updated Jun 16, 2023
    + more versions
    Cite
    Rana Alissa; Jennifer A. Hipp; Kendall Webb (2023). Data_Sheet_8_Saving Time for Patient Care by Optimizing Physician Note Templates: A Pilot Study.PDF [Dataset]. http://doi.org/10.3389/fdgth.2021.772356.s008
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jun 16, 2023
    Dataset provided by
    Frontiers
    Authors
    Rana Alissa; Jennifer A. Hipp; Kendall Webb
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: At times, electronic medical records (EMRs) have proven to be less than optimal, causing longer hours behind computers, shorter time with patients, suboptimal patient safety, provider dissatisfaction, and physician burnout. These concerning healthcare issues can be positively affected by optimizing EMR usability, which in turn would yield substantial benefits for healthcare professionals in productivity, efficiency, quality, and accuracy. Documentation issues in our mother-baby unit (MBU), such as non-standardized physician note templates and tedious, time-consuming notes, were discussed during meetings with MBU stakeholders and our hospital's EMR analysts.

    Objective: The objective of this study was to assess whether optimizing physician notes saves time for patient care and improves provider satisfaction.

    Methods: This quality improvement pilot investigation was conducted in our MBU, where four note templates were optimized: History and Physical (H and P), Progress Note (PN), Discharge Summary (DCS), and Hand-Off List (HOL). Free-text elements documented elsewhere in the EMR (e.g., delivery information, maternal data, lab results) were identified and replaced with dynamic links that automatically populate the note with these data. Discrete data pick lists replaced necessary elements that were previously free text. The new note templates were given new names for ease of accessibility. Ten randomly chosen pediatric residents completed both the old and new note templates for the same control newborn encounter during a period of one year. Time spent and the number of actions taken (clicks, keystrokes, transitions, and mouse-keyboard switches) to complete these notes were recorded. Surveys were sent to MBU providers regarding overall satisfaction with the new note templates.

    Results: The ten residents saved an average of 23 minutes per infant. Applying this saving to the 9,373 infants admitted to our MBU between January 2016 and September 2019 yields 2.6 hours saved per day, given that each infant averages a two-day length of stay. The new note templates required 69 fewer actions than the old ones (H and P: 11, PN: 8, DCS: 18, HOL: 32). The provider surveys were consistent with improved provider satisfaction.

    Conclusion: Optimizing physician notes saved time for patient care and improved physician satisfaction.
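
    A quick arithmetic check of the time-savings claim above (illustrative only; it assumes the study window runs January 1, 2016 through September 30, 2019):

    from datetime import date

    minutes_per_infant = 23
    infants = 9373
    study_days = (date(2019, 9, 30) - date(2016, 1, 1)).days + 1  # 1369 days

    hours_per_day = minutes_per_infant * infants / 60 / study_days
    print(f"{hours_per_day:.1f} hours saved per day")  # -> 2.6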

  4. Raw Data for ConfLab: A Data Collection Concept, Dataset, and Benchmark for...

    • data.4tu.nl
    Updated Jun 7, 2022
    Cite
    Chirag Raman; Jose Vargas Quiros; Stephanie Tan; Ashraful Islam; Ekin Gedik; Hayley Hung (2022). Raw Data for ConfLab: A Data Collection Concept, Dataset, and Benchmark for Machine Analysis of Free-Standing Social Interactions in the Wild [Dataset]. http://doi.org/10.4121/20017748.v2
    Explore at:
    Dataset updated
    Jun 7, 2022
    Dataset provided by
    4TU.ResearchData
    Authors
    Chirag Raman; Jose Vargas Quiros; Stephanie Tan; Ashraful Islam; Ekin Gedik; Hayley Hung
    License

    https://data.4tu.nl/info/fileadmin/user_upload/Documenten/4TU.ResearchData_Restricted_Data_2022.pdf

    Description

    This file contains raw data for cameras and wearables of the ConfLab dataset.


    ./cameras

    contains the overhead video recordings for 9 cameras (cam2-10) in MP4 files. These cameras cover the whole interaction floor, with camera 2 capturing the bottom of the scene layout and camera 10 capturing the top. Note that cam5 ran out of battery before the other cameras, so its recordings are cut short; however, cam4 and cam6 contain significant overlap with cam5, allowing any needed information to be reconstructed.


    Note that the annotations are made and provided in 2-minute segments. The annotated portions of the video include the last 3 min 38 sec of the x2xxx.MP4 video files and the first 12 min of the x3xxx.MP4 files for cameras 2, 4, 6, 8, and 10, with "x" being the placeholder character in the MP4 file names. If one wishes to separate the video into 2-minute segments as we did, the "video-splitting.sh" script is provided.


    ./camera-calibration contains the camera intrinsic files obtained from https://github.com/idiap/multicamera-calibration. Camera extrinsic parameters can be calculated using the existing intrinsic parameters and the instructions in the multicamera-calibration repo. The coordinates in the image are provided by the crosses marked on the floor, which are visible in the video recordings. The crosses are 1 m (100 cm) apart.


    ./wearables

    This subdirectory includes the IMU, proximity and audio data from each participant at the ConfLab event (48 in total). In the directory numbered by participant ID, the following data are included:

    1. raw audio file
    2. proximity (Bluetooth) pings (RSSI) file (raw and csv) and a visualization
    3. tri-axial accelerometer data (raw and csv) and a visualization
    4. tri-axial gyroscope data (raw and csv) and a visualization
    5. tri-axial magnetometer data (raw and csv) and a visualization
    6. game rotation vector (raw and csv), recorded in quaternions

    All files are timestamped. The sampling frequencies are:

    - audio: 1250 Hz
    - rest: around 50 Hz; however, the sample rate is not fixed, so the timestamps should be used instead.


    For rotation, the game rotation vector's output frequency is limited by the actual sampling frequency of the magnetometer. For more information, please refer to https://invensense.tdk.com/wp-content/uploads/2016/06/DS-000189-ICM-20948-v1.3.pdf

    Audio files in this folder are in raw binary form. The following can be used to convert them to WAV files (1250 Hz):

    ffmpeg -f s16le -ar 1250 -ac 1 -i /path/to/audio/file /path/to/output.wav
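
    A minimal batch-conversion sketch in Python, assuming the per-participant layout described above and that each raw audio file is named audio.raw (the file and directory names here are illustrative, not part of the dataset documentation):

    import subprocess
    from pathlib import Path

    WEARABLES = Path("./wearables")  # root of the per-participant directories

    # Convert each participant's raw signed 16-bit little-endian audio
    # (1250 Hz, mono) to WAV, mirroring the ffmpeg invocation above.
    for participant_dir in sorted(WEARABLES.iterdir()):
        raw = participant_dir / "audio.raw"  # assumed file name
        if not raw.is_file():
            continue
        subprocess.run(
            ["ffmpeg", "-y", "-f", "s16le", "-ar", "1250", "-ac", "1",
             "-i", str(raw), str(raw.with_suffix(".wav"))],
            check=True,
        )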


    Synchronization of cameras and wearables data

    Raw videos contain timecode information which matches the timestamps of the data in the "wearables" folder. The starting timecode of a video can be read as:

    ffprobe -hide_banner -show_streams -i /path/to/video
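
    A small sketch of reading that timecode programmatically, assuming the video carries a timecode stream tag in HH:MM:SS:FF form (the helper names and the frame-rate argument are illustrative, not part of the dataset's tooling):

    import re
    import subprocess

    def starting_timecode(video_path: str) -> str:
        """Return the first timecode tag (HH:MM:SS:FF) that ffprobe reports."""
        out = subprocess.run(
            ["ffprobe", "-hide_banner", "-show_streams", "-i", video_path],
            capture_output=True, text=True, check=True,
        ).stdout
        match = re.search(r"TAG:timecode=(\d{2}:\d{2}:\d{2}[:;]\d{2})", out)
        if match is None:
            raise ValueError("no timecode tag found")
        return match.group(1)

    def timecode_to_seconds(tc: str, fps: float) -> float:
        # Convert HH:MM:SS:FF to seconds for aligning with wearable timestamps.
        hh, mm, ss, ff = re.split(r"[:;]", tc)
        return int(hh) * 3600 + int(mm) * 60 + int(ss) + int(ff) / fps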


    ./audio

    ./sync: contains wav files for each subject
    ./sync_files: auxiliary csv files used to sync the audio; these can be used to improve the synchronization.

    The code used for syncing the audio can be found here: https://github.com/TUDelft-SPC-Lab/conflab/tree/master/preprocessing/audio

  5. COVID-19 Case Surveillance Public Use Data

    • data.cdc.gov
    • paperswithcode.com
    • +5 more
    application/rdfxml +5
    Updated Jul 9, 2024
    Cite
    CDC Data, Analytics and Visualization Task Force (2024). COVID-19 Case Surveillance Public Use Data [Dataset]. https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data/vbim-akqf
    Explore at:
    application/rdfxml, tsv, csv, json, xml, application/rssxml (available download formats)
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Centers for Disease Control and Prevention (http://www.cdc.gov/)
    Authors
    CDC Data, Analytics and Visualization Task Force
    License

    https://www.usa.gov/government-works

    Description

    Note: Reporting of new COVID-19 Case Surveillance data will be discontinued July 1, 2024, to align with the process of removing SARS-CoV-2 infections (COVID-19 cases) from the list of nationally notifiable diseases. Although these data will continue to be publicly available, the dataset will no longer be updated.

    Authorizations to collect certain public health data expired at the end of the U.S. public health emergency declaration on May 11, 2023. The following jurisdictions discontinued COVID-19 case notifications to CDC: Iowa (11/8/21), Kansas (5/12/23), Kentucky (1/1/24), Louisiana (10/31/23), New Hampshire (5/23/23), and Oklahoma (5/2/23). Please note that these jurisdictions will not routinely send new case data after the dates indicated. As of 7/13/23, case notifications from Oregon will only include pediatric cases resulting in death.

    This case surveillance public use dataset has 12 elements for all COVID-19 cases shared with CDC and includes demographics, any exposure history, disease severity indicators and outcomes, and presence of any underlying medical conditions and risk behaviors; it contains no geographic data.

    CDC has three COVID-19 case surveillance datasets: this 12 data element public use dataset, a 19 data element public use dataset with geography, and a 33 data element restricted access dataset.

    The following apply to all three datasets:

    Overview

    The COVID-19 case surveillance database includes individual-level data reported to U.S. states and autonomous reporting entities, including New York City and the District of Columbia (D.C.), as well as U.S. territories and affiliates. On April 5, 2020, COVID-19 was added to the Nationally Notifiable Condition List and classified as “immediately notifiable, urgent (within 24 hours)” by a Council of State and Territorial Epidemiologists (CSTE) Interim Position Statement (Interim-20-ID-01). CSTE updated the position statement on August 5, 2020, to clarify the interpretation of antigen detection tests and serologic test results within the case classification (Interim-20-ID-02). The statement also recommended that all states and territories enact laws to make COVID-19 reportable in their jurisdiction, and that jurisdictions conducting surveillance should submit case notifications to CDC. COVID-19 case surveillance data are collected by jurisdictions and reported voluntarily to CDC.

    For more information: NNDSS Supports the COVID-19 Response | CDC.

    The deidentified data in the “COVID-19 Case Surveillance Public Use Data” include demographic characteristics, any exposure history, disease severity indicators and outcomes, clinical data, laboratory diagnostic test results, and presence of any underlying medical conditions and risk behaviors. All data elements can be found on the COVID-19 case report form located at www.cdc.gov/coronavirus/2019-ncov/downloads/pui-form.pdf.

    COVID-19 Case Reports

    COVID-19 case reports have been routinely submitted using nationally standardized case reporting forms. On April 5, 2020, CSTE released an Interim Position Statement with national surveillance case definitions for COVID-19 included. Current versions of these case definitions are available here: https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2021/.

    All cases reported on or after were requested to be shared by public health departments to CDC using the standardized case definitions for laboratory-confirmed or probable cases. On May 5, 2020, the standardized case reporting form was revised. Case reporting using this new form is ongoing among U.S. states and territories.

    Data are Considered Provisional

    • The COVID-19 case surveillance data are dynamic; case reports can be modified at any time by the jurisdictions sharing COVID-19 data with CDC. CDC may update prior cases shared with CDC based on any updated information from jurisdictions. For instance, as new information is gathered about previously reported cases, health departments provide updated data to CDC. As more information and data become available, analyses might find changes in surveillance data and trends during a previously reported time window. Data may also be shared late with CDC due to the volume of COVID-19 cases.
    • Annual finalized data: To create the final NNDSS data used in the annual tables, CDC works carefully with the reporting jurisdictions to reconcile the data received during the year until each state or territorial epidemiologist confirms that the data from their area are correct.
    • Access Addressing Gaps in Public Health Reporting of Race and Ethnicity for COVID-19, a report from the Council of State and Territorial Epidemiologists, to better understand the challenges in completing race and ethnicity data for COVID-19 and recommendations for improvement.

    Data Limitations

    To learn more about the limitations in using case surveillance data, visit FAQ: COVID-19 Data and Surveillance.

    Data Quality Assurance Procedures

    CDC’s Case Surveillance Section routinely performs data quality assurance procedures (i.e., ongoing corrections and logic checks to address data errors). To date, the following data cleaning steps have been implemented (a brief sketch follows the list):

    • Questions that have been left unanswered (blank) on the case report form are reclassified to a Missing value, if applicable to the question. For example, in the question “Was the individual hospitalized?” where the possible answer choices include “Yes,” “No,” or “Unknown,” the blank value is recoded to Missing because the case report form did not include a response to the question.
    • Logic checks are performed for date data. If an illogical date has been provided, CDC reviews the data with the reporting jurisdiction. For example, if a symptom onset date in the future is reported to CDC, this value is set to null until the reporting jurisdiction updates the date appropriately.
    • Additional data quality processing to recode free text data is ongoing. Data on symptoms, race and ethnicity, and healthcare worker status have been prioritized.
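
    A minimal pandas sketch of the first two cleaning rules, assuming hypothetical column names (hospitalized, onset_date); it illustrates the recoding described above, not CDC's actual pipeline:

    import pandas as pd

    df = pd.read_csv("covid_case_surveillance.csv", dtype=str)  # illustrative path

    # Rule 1: blank (unanswered) responses are reclassified to "Missing".
    df["hospitalized"] = df["hospitalized"].fillna("Missing").replace("", "Missing")

    # Rule 2: illogical dates (e.g., a symptom onset date in the future) are
    # set to null until the reporting jurisdiction updates them.
    onset = pd.to_datetime(df["onset_date"], errors="coerce")
    df["onset_date"] = onset.mask(onset > pd.Timestamp.today())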

    Data Suppression

    To prevent release of data that could be used to identify people, data cells are suppressed for low frequency (<5) records and indirect identifiers (e.g., date of first positive specimen). Suppression includes rare combinations of demographic characteristics (sex, age group, race/ethnicity). Suppressed values are re-coded to the NA answer option; records with data suppression are never removed.

    For questions, please contact Ask SRRG (eocevent394@cdc.gov).

    Additional COVID-19 Data

    COVID-19 data are available to the public as summary or aggregate count files, including total counts of cases and deaths by state and by county. These

  6. 18 excel spreadsheets by species and year giving reproduction and growth...

    • catalog.data.gov
    • data.wu.ac.at
    Updated Aug 17, 2024
    + more versions
    Cite
    U.S. EPA Office of Research and Development (ORD) (2024). 18 excel spreadsheets by species and year giving reproduction and growth data. One excel spreadsheet of herbicide treatment chemistry. [Dataset]. https://catalog.data.gov/dataset/18-excel-spreadsheets-by-species-and-year-giving-reproduction-and-growth-data-one-excel-sp
    Explore at:
    Dataset updated
    Aug 17, 2024
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study; the year, 2010 or 2011, is the year the data were collected; SH indicates data for Science Hub; the date is the date of file preparation). The data in each file are described in a read-me file, which is the first worksheet in the file. Each row in a species spreadsheet is for one plot (plant); the data themselves are in the data worksheet. One file includes a read-me description of the columns in the dataset for chemical analysis; in that file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. Environmental Toxicology and Chemistry, Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813 (2017).

  7. Legal Data | Litigation Data | Legal Parties Data | Easy to Integrate |...

    • datarade.ai
    Updated Oct 20, 2022
    Cite
    APISCRAPY (2022). Legal Data | Litigation Data | Legal Parties Data | Easy to Integrate | Pre-built AI & Automation | 50% Cost Saving | Free Sample [Dataset]. https://datarade.ai/data-products/legal-data-litigation-data-legal-parties-data-easy-to-i-apiscrapy
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt (available download formats)
    Dataset updated
    Oct 20, 2022
    Dataset authored and provided by
    APISCRAPY
    Area covered
    China, Canada, Åland Islands, British Indian Ocean Territory, United States of America, Australia, Greenland, Japan, New Zealand, Singapore
    Description

    Note: only publicly available data can be worked on.

    Unlock the Power of Legal Data with APISCRAPY: In a fast-paced world where information is critical, our Legal Data service provides your gateway to comprehensive and up-to-date legal information, including Intellectual Property Data, Patent Data, Court Data, Litigation Data, Royalty Rates Data, Trademark Data, Attorney Data, Legal Parties Data, and Copyright Data. Stay informed and make data-driven decisions with ease.

    APISCRAPY's AI-driven legal data scraping service provides numerous advantages for legal professionals and organizations seeking comprehensive insights and efficiency in managing legal information. Here are key benefits associated with their advanced legal data scraping technology:

    1. Litigation Data: The service excels in gathering Litigation Data, providing valuable insights into ongoing and historical legal proceedings, case outcomes, and key litigation details.

    2. Legal Parties Data: APISCRAPY's AI-driven approach extends to collecting Legal Parties Data, offering information on involved parties, their legal representation, and other crucial details relevant to legal proceedings.

    3. Easy to Integrate: APISCRAPY's legal data scraping service is designed to be Easy to Integrate into existing legal research and case management systems, ensuring a seamless and efficient workflow for legal professionals.

    4. Prebuilt AI & Automation: The service incorporates Prebuilt AI and Automation, streamlining the data extraction process and enhancing the accuracy and efficiency of legal data retrieval.

    5. 50% Cost Saving: APISCRAPY's AI-driven legal data scraping service offers a substantial 50% cost saving, providing a cost-effective solution for legal professionals and organizations looking to optimize their data acquisition processes.

    Today, having reliable legal data is crucial. APISCRAPY's Legal Data service offers a wide range of legal information, covering various topics, regions, and sources. Whether you're a legal professional, researcher, or a business looking to stay ahead, our service equips you with the tools you need.

    With real-time updates and an easy-to-use platform, you can keep track of legal developments, from court cases to intellectual property trends. APISCRAPY's Legal Data, enriched with Intellectual Property Data, Patent Data, Court Data, Litigation Data, Royalty Rates Data, Trademark Data, Attorney Data, Legal Parties Data, and Copyright Data, helps you make informed, data-driven decisions in the ever-changing world of legal data. Elevate your decision-making, stay current, and excel in your field.

  8. Data from: COVID-19 Case Surveillance Public Use Data with Geography

    • data.cdc.gov
    • data.virginia.gov
    • +4 more
    application/rdfxml +5
    Updated Jul 9, 2024
    Cite
    CDC Data, Analytics and Visualization Task Force (2024). COVID-19 Case Surveillance Public Use Data with Geography [Dataset]. https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data-with-Ge/n8mc-b4w4
    Explore at:
    application/rssxml, csv, tsv, application/rdfxml, xml, json (available download formats)
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Centers for Disease Control and Prevention (http://www.cdc.gov/)
    Authors
    CDC Data, Analytics and Visualization Task Force
    License

    https://www.usa.gov/government-works

    Description

    Note: Reporting of new COVID-19 Case Surveillance data will be discontinued July 1, 2024, to align with the process of removing SARS-CoV-2 infections (COVID-19 cases) from the list of nationally notifiable diseases. Although these data will continue to be publicly available, the dataset will no longer be updated.

    Authorizations to collect certain public health data expired at the end of the U.S. public health emergency declaration on May 11, 2023. The following jurisdictions discontinued COVID-19 case notifications to CDC: Iowa (11/8/21), Kansas (5/12/23), Kentucky (1/1/24), Louisiana (10/31/23), New Hampshire (5/23/23), and Oklahoma (5/2/23). Please note that these jurisdictions will not routinely send new case data after the dates indicated. As of 7/13/23, case notifications from Oregon will only include pediatric cases resulting in death.

    This case surveillance public use dataset has 19 elements for all COVID-19 cases shared with CDC and includes demographics, geography (county and state of residence), any exposure history, disease severity indicators and outcomes, and presence of any underlying medical conditions and risk behaviors.

    Currently, CDC provides the public with three versions of COVID-19 case surveillance line-listed data: this 19 data element dataset with geography, a 12 data element public use dataset, and a 33 data element restricted access dataset.

    The following apply to the public use datasets and the restricted access dataset:

    Overview

    The COVID-19 case surveillance database includes individual-level data reported to U.S. states and autonomous reporting entities, including New York City and the District of Columbia (D.C.), as well as U.S. territories and affiliates. On April 5, 2020, COVID-19 was added to the Nationally Notifiable Condition List and classified as “immediately notifiable, urgent (within 24 hours)” by a Council of State and Territorial Epidemiologists (CSTE) Interim Position Statement (Interim-20-ID-01). CSTE updated the position statement on August 5, 2020, to clarify the interpretation of antigen detection tests and serologic test results within the case classification (Interim-20-ID-02). The statement also recommended that all states and territories enact laws to make COVID-19 reportable in their jurisdiction, and that jurisdictions conducting surveillance should submit case notifications to CDC. COVID-19 case surveillance data are collected by jurisdictions and reported voluntarily to CDC.

    For more information: NNDSS Supports the COVID-19 Response | CDC.

    COVID-19 Case Reports

    COVID-19 case reports are routinely submitted to CDC by public health jurisdictions using nationally standardized case reporting forms. On April 5, 2020, CSTE released an Interim Position Statement with national surveillance case definitions for COVID-19. Current versions of these case definitions are available at: https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2021/. All cases reported on or after were requested to be shared by public health departments to CDC using the standardized case definitions for lab-confirmed or probable cases. On May 5, 2020, the standardized case reporting form was revised. States and territories continue to use this form.

    Data are Considered Provisional

    • The COVID-19 case surveillance data are dynamic; case reports can be modified at any time by the jurisdictions sharing COVID-19 data with CDC. CDC may update prior cases shared with CDC based on any updated information from jurisdictions. For instance, as new information is gathered about previously reported cases, health departments provide updated data to CDC. As more information and data become available, analyses might find changes in surveillance data and trends during a previously reported time window. Data may also be shared late with CDC due to the volume of COVID-19 cases.
    • Annual finalized data: To create the final NNDSS data used in the annual tables, CDC works carefully with the reporting jurisdictions to reconcile the data received during the year until each state or territorial epidemiologist confirms that the data from their area are correct.

    Access Addressing Gaps in Public Health Reporting of Race and Ethnicity for COVID-19, a report from the Council of State and Territorial Epidemiologists, to better understand the challenges in completing race and ethnicity data for COVID-19 and recommendations for improvement.

    Data Limitations

    To learn more about the limitations in using case surveillance data, visit FAQ: COVID-19 Data and Surveillance.

    Data Quality Assurance Procedures

    CDC’s Case Surveillance Section routinely performs data quality assurance procedures (i.e., ongoing corrections and logic checks to address data errors). To date, the following data cleaning steps have been implemented:

    • Questions that have been left unanswered (blank) on the case report form are reclassified to a Missing value, if applicable to the question. For example, in the question "Was the individual hospitalized?" where the possible answer choices include "Yes," "No," or "Unknown," the blank value is recoded to "Missing" because the case report form did not include a response to the question.
    • Logic checks are performed for date data. If an illogical date has been provided, CDC reviews the data with the reporting jurisdiction. For example, if a symptom onset date in the future is reported to CDC, this value is set to null until the reporting jurisdiction updates the date appropriately.
    • Additional data quality processing to recode free text data is ongoing. Data on symptoms, race, ethnicity, and healthcare worker status have been prioritized.

    Data Suppression

    To prevent release of data that could be used to identify people, data cells are suppressed for low frequency records (fewer than 11 COVID-19 case records with a given combination of values). Suppression includes low frequency combinations of case month, geographic characteristics (county and state of residence), and demographic characteristics (sex, age group, race, and ethnicity). Suppressed values are re-coded to the NA answer option; records with data suppression are never removed.
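
    A sketch of this kind of threshold suppression in pandas, assuming illustrative column names; the fewer-than-11 threshold and the grouping follow the description above, not CDC's actual code:

    import pandas as pd

    df = pd.read_csv("covid_cases_with_geography.csv", dtype=str)  # illustrative

    # Combinations of case month, geography, and demographics that occur in
    # fewer than 11 records are re-coded to "NA"; records are never removed.
    keys = ["case_month", "res_state", "res_county",
            "sex", "age_group", "race", "ethnicity"]
    counts = df.groupby(keys)[keys[0]].transform("size")
    df.loc[counts < 11, keys] = "NA"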

    Additional COVID-19 Data

    COVID-19 data are available to the public as summary or aggregate count files, including total counts of cases and deaths by state and by county. These and other COVID-19 data are available from multiple public locations: COVID Data Tracker; United States COVID-19 Cases and Deaths by State; COVID-19 Vaccination Reporting Data Systems; and COVID-19 Death Data and Resources.

    Notes:

    March 1, 2022: The "COVID-19 Case Surveillance Public Use Data with Geography" will be updated on a monthly basis.

    April 7, 2022: An adjustment was made to CDC’s cleaning algorithm for COVID-19 line level case notification data. An assumption in CDC's algorithm led to misclassifying deaths that were not COVID-19 related. The algorithm has since been revised, and this dataset update reflects corrected individual level information about death status for all cases collected to date.

    June 25, 2024: An adjustment

  9. Uptake of open access to scientific peer reviewed publications in Horizon...

    • data.europa.eu
    excel xls, pdf
    Updated Feb 17, 2017
    Cite
    Directorate-General for Research and Innovation (2017). Uptake of open access to scientific peer reviewed publications in Horizon 2020 [Dataset]. https://data.europa.eu/data/datasets/open-access-to-scientific-publications-horizon2020?locale=en
    Explore at:
    pdf, excel xls (available download formats)
    Dataset updated
    Feb 17, 2017
    Dataset authored and provided by
    Directorate-General for Research and Innovation
    License

    http://data.europa.eu/eli/dec/2011/833/oj

    Description

    Open access (OA) can be defined as the practice of providing on-line access to scientific information that is free of charge to the user and that is re-usable. A distinction is usually made between OA to scientific peer reviewed publications and research data. In Horizon 2020 open access to peer-reviewed scientific publications (primarily articles) is mandatory; however, researchers can choose between the open access route most appropriate to them.

    For open access publishing (gold open access), researchers can publish in open access journals, or in journals that sell subscriptions and also offer the possibility of making individual articles openly accessible (hybrid journals). In that case, publishers often charge an article processing charge (APC). These costs are eligible for reimbursement during the duration of the Horizon 2020 grant. For APCs incurred after the end of the grant agreement, a mechanism for reimbursing some of these costs is being piloted and implemented through the OpenAIRE project. Note that in case of gold open access publishing, a copy must also be deposited in an open access repository.

    For self-archiving (green open access), researchers deposit the final peer-reviewed manuscript in a repository of their choice. In this case, they must ensure open access to the publication within six months of publication (12 months in case of the social sciences and humanities).

    This page provides an overview of the state of play as regards the uptake of open access to scientific publications in Horizon 2020 from 2014 to 2017, updating information from 2016.

    Two datasets have been used for the analysis presented in this note: one dataset from the EU funded OpenAIRE project for FP7 and H2020 and one dataset from CORDA for H2020, which also provides supplementary information on article processing charges and embargo periods. The datasets are from September and August 2017 respectively.

    The OpenAIRE sample includes primarily peer-reviewed scientific articles but also some other forms of publications such as conference papers, book chapters and reports or pre-prints. It is based on information obtained from Open Access repositories, pre-print servers, OA journals and project reports and contains some underreporting since OpenAIRE has difficulties tracking hybrid publications and publications in repositories which are not OpenAIRE compliant. The CORDA sample contains only peer-reviewed scientific articles and is based on project self-reporting. The figures in this note measure open access in a broad sense and not the compliance with the specifics of article 29.2. of the Model Grant Agreement.

    The 2017 analysis of open access during the entirety of Horizon 2020 so far shows an overall open access rate of 63.2% from OpenAIRE data (+2.4% compared with the sample from 2016). Internal project reporting through SYGMA shows a total of 80.6% open access for Horizon 2020 scientific peer reviewed articles and 75% for all peer-reviewed publications (including conference proceedings, book chapters, monographs and the like); however, since this data is based on beneficiary self-reporting, it may contain some over-reporting.

    According to the OpenAIRE sample 75% of publications are green open access and 25% gold open access. Internal figures are similar although they show a slightly higher amount of gold OA with a split of 70% green and 30% gold.

    For gold OA, internal project reporting suggests that an average of €1,500 is spent per article (median: €1,200), an increase from the average of €1,006 in the previous sample. A more detailed analysis reveals that 27% of articles have a price tag of between €1,000 and €1,999. It is also important to note that 26% of all publications are in gold OA but without any APC charges. Very high APCs of €4,000 or more concern only a tiny fraction of Horizon 2020 publications (3%).

    The average embargo period of green OA publications is 10 months, a decrease of 1 month from the 2016 sample. 40% of articles have an embargo period of 11-12 months, followed by 575 articles (33%) with no embargo period at all; 302 articles (17%) have an embargo period of 12.1-24 months, and 162 articles (9%) one of 0.1 to 6 months. Finally, 12 articles (1%) have an embargo period longer than 36 months.
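
    As a quick consistency check on these counts (an illustration, not part of the report), the stated shares imply a green OA sample of roughly 1,740 articles:

    # Back-of-the-envelope check of the embargo distribution above.
    total = 575 / 0.33  # 575 articles are stated to be 33% -> ~1742 articles
    for count, label in [(575, "no embargo"), (302, "12.1-24 months"),
                         (162, "0.1-6 months"), (12, ">36 months")]:
        print(f"{label}: {count / total:.0%}")
    # no embargo: 33%, 12.1-24 months: 17%, 0.1-6 months: 9%, >36 months: 1%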

    This 2017 analysis thus broadly confirms the earlier findings from summer 2016, but is based on a larger and more robust sample. In the 2017 sample, overall open access rates have gone up in all the datasets and cohorts. The distribution between gold and green open access remains similar to the 2016 dataset; for gold OA, average APCs have increased, and for green OA, embargo periods have slightly decreased.

    Please consult the background note for a more detailed analysis. Note also that these files only refer to open access to publications. Information on open access to research data is made available on the open data portal on a diffe

  10. Commercial Real Estate Data | 52M+ POI | SafeGraph Property Dataset

    • datarade.ai
    .csv
    Updated Aug 22, 2024
    + more versions
    Cite
    SafeGraph (2024). Commercial Real Estate Data | 52M+ POI | SafeGraph Property Dataset [Dataset]. https://datarade.ai/data-products/commercial-real-estate-data-52m-poi-safegraph-property-d-safegraph
    Explore at:
    .csv (available download formats)
    Dataset updated
    Aug 22, 2024
    Dataset authored and provided by
    SafeGraph
    Area covered
    El Salvador, Saint Martin (French part), Kyrgyzstan, Curaçao, Latvia, Gibraltar, Holy See, Finland, Ukraine, Yemen
    Description

    SafeGraph Places provides baseline location information for every record in the SafeGraph product suite via the Places schema and polygon information when applicable via the Geometry schema. The current scope of a place is defined as any location humans can visit with the exception of single-family homes. This definition encompasses a diverse set of places ranging from restaurants, grocery stores, and malls; to parks, hospitals, museums, offices, and industrial parks. Premium sets of Places include apartment buildings, Parking Lots, and Point POIs (such as ATMs or transit stations).

    SafeGraph Places is a point of interest (POI) data offering with varying coverage and properties depending on the country. Note that address conventions and formatting vary across countries. SafeGraph has coalesced these fields into the Places schema.

    SafeGraph provides clean and accurate geospatial datasets on 51M+ physical places/points of interest (POI) globally. Hundreds of industry leaders like Mapbox, Verizon, Clear Channel, and Esri already rely on SafeGraph POI data to unlock business insights and drive innovation.

  11. COVID Impact Survey - Public Data

    • data.world
    csv, zip
    Updated Oct 16, 2024
    Cite
    The Associated Press (2024). COVID Impact Survey - Public Data [Dataset]. https://data.world/associatedpress/covid-impact-survey-public-data
    Explore at:
    csv, zip (available download formats)
    Dataset updated
    Oct 16, 2024
    Authors
    The Associated Press
    Description

    Overview

    The Associated Press is sharing data from the COVID Impact Survey, which provides statistics about physical health, mental health, economic security and social dynamics related to the coronavirus pandemic in the United States.

    Conducted by NORC at the University of Chicago for the Data Foundation, the probability-based survey provides estimates for the United States as a whole, as well as in 10 states (California, Colorado, Florida, Louisiana, Minnesota, Missouri, Montana, New York, Oregon and Texas) and eight metropolitan areas (Atlanta, Baltimore, Birmingham, Chicago, Cleveland, Columbus, Phoenix and Pittsburgh).

    The survey is designed to allow for an ongoing gauge of public perception, health and economic status to see what is shifting during the pandemic. When multiple sets of data are available, it will allow for the tracking of how issues ranging from COVID-19 symptoms to economic status change over time.

    The survey is focused on three core areas of research:

    • Physical Health: Symptoms related to COVID-19, relevant existing conditions and health insurance coverage.
    • Economic and Financial Health: Employment, food security, and government cash assistance.
    • Social and Mental Health: Communication with friends and family, anxiety and volunteerism. (Questions based on those used on the U.S. Census Bureau’s Current Population Survey.)

    Using this Data - IMPORTANT

    This is survey data and must be properly weighted during analysis: DO NOT REPORT THIS DATA AS RAW OR AGGREGATE NUMBERS!

    Instead, use our queries linked below or statistical software such as R or SPSS to weight the data.
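
    A minimal example of producing a weighted national estimate with pandas instead of the hosted queries, assuming hypothetical column names soc5c and nat_wgt (check the questionnaire and codebook for the real variable and weight names):

    import pandas as pd

    df = pd.read_csv("01_April_30_covid_impact_survey.csv")  # illustrative file

    # Weighted share of each response to soc5c ("How often have you felt
    # lonely in the past 7 days?"); never report the raw counts.
    weighted = df.groupby("soc5c")["nat_wgt"].sum() / df["nat_wgt"].sum()
    print(weighted.sort_index())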

    Queries

    If you'd like to create a table to see how people nationally or in your state or city feel about a topic in the survey, use the survey questionnaire and codebook to match a question (the variable label) to a variable name. For instance, "How often have you felt lonely in the past 7 days?" is variable "soc5c".

    Nationally: Go to this query and enter soc5c as the variable. Hit the blue Run Query button in the upper right hand corner.

    Local or State: To find figures for that response in a specific state, go to this query and type in a state name and soc5c as the variable, and then hit the blue Run Query button in the upper right hand corner.

    The resulting sentence you could write out of these queries is: "People in some states are less likely to report loneliness than others. For example, 66% of Louisianans report feeling lonely on none of the last seven days, compared with 52% of Californians. Nationally, 60% of people said they hadn't felt lonely."

    Margin of Error

    The margin of error for the national and regional surveys is found in the attached methods statement. You will need the margin of error to determine if the comparisons are statistically significant. If the difference is:

    • At least twice the margin of error, you can report there is a clear difference.
    • At least as large as the margin of error, you can report there is a slight or apparent difference.
    • Less than or equal to the margin of error, you can report that the respondents are divided or there is no difference.

    A Note on Timing

    Survey results will generally be posted under embargo on Tuesday evenings. The data is available for release at 1 p.m. ET Thursdays.

    About the Data

    The survey data will be provided under embargo in both comma-delimited and statistical formats.

    Each set of survey data will be numbered and have the date the embargo lifts in front of it, in the format: 01_April_30_covid_impact_survey. The survey has been organized by the Data Foundation, a non-profit, non-partisan think tank, and is sponsored by the Federal Reserve Bank of Minneapolis and the Packard Foundation. It is conducted by NORC at the University of Chicago, a non-partisan research organization. (NORC is not an abbreviation; it is part of the organization's formal name.)

    Data for the national estimates are collected using the AmeriSpeak Panel, NORC’s probability-based panel designed to be representative of the U.S. household population. Interviews are conducted with adults age 18 and over representing the 50 states and the District of Columbia. Panel members are randomly drawn from AmeriSpeak with a target of achieving 2,000 interviews in each survey. Invited panel members may complete the survey online or by telephone with an NORC telephone interviewer.

    Once all the study data have been made final, an iterative raking process is used to adjust for any survey nonresponse as well as any noncoverage or under and oversampling resulting from the study specific sample design. Raking variables include age, gender, census division, race/ethnicity, education, and county groupings based on county level counts of the number of COVID-19 deaths. Demographic weighting variables were obtained from the 2020 Current Population Survey. The count of COVID-19 deaths by county was obtained from USA Facts. The weighted data reflect the U.S. population of adults age 18 and over.

    Data for the regional estimates are collected using a multi-mode address-based sampling (ABS) approach that allows residents of each area to complete the interview via web or with an NORC telephone interviewer. All sampled households are mailed a postcard inviting them to complete the survey either online using a unique PIN or via telephone by calling a toll-free number. Interviews are conducted with adults age 18 and over, with a target of achieving 400 interviews in each region in each survey. Additional details on the survey methodology and the survey questionnaire are attached below or can be found at https://www.covid-impact.org.

    Attribution

    Results should be credited to the COVID Impact Survey, conducted by NORC at the University of Chicago for the Data Foundation.

    AP Data Distributions

    ​To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.

  12. Free Soil, MI Age Group Population Dataset: A Complete Breakdown of Free...

    • neilsberg.com
    csv, json
    Updated Jul 24, 2024
    + more versions
    Cite
    Neilsberg Research (2024). Free Soil, MI Age Group Population Dataset: A Complete Breakdown of Free Soil Age Demographics from 0 to 85 Years and Over, Distributed Across 18 Age Groups // 2024 Edition [Dataset]. https://www.neilsberg.com/research/datasets/aa8f1215-4983-11ef-ae5d-3860777c1fe6/
    Explore at:
    json, csv (available download formats)
    Dataset updated
    Jul 24, 2024
    Dataset authored and provided by
    Neilsberg Research
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Michigan, Free Soil
    Variables measured
    Population Under 5 Years, Population over 85 years, Population Between 5 and 9 years, Population Between 10 and 14 years, Population Between 15 and 19 years, Population Between 20 and 24 years, Population Between 25 and 29 years, Population Between 30 and 34 years, Population Between 35 and 39 years, Population Between 40 and 44 years, and 9 more
    Measurement technique
    The data presented in this dataset is derived from the latest U.S. Census Bureau American Community Survey (ACS) 2018-2022 5-Year Estimates. To measure the two variables, namely (a) population and (b) population as a percentage of the total population, we initially analyzed and categorized the data for each of the age groups. For ages between 0 and 85, we used roughly five-year buckets; for ages over 85, we aggregated the data into a single group. For further information regarding these estimates, please feel free to reach out to us via email at research@neilsberg.com.
    Dataset funded by
    Neilsberg Research
    Description
    About this dataset

    Context

    The dataset tabulates the Free Soil population distribution across 18 age groups. It lists the population in each age group along with that group's share of the total population of Free Soil. The dataset can be utilized to understand the population distribution of Free Soil by age. For example, using this dataset, we can identify the largest age group in Free Soil.

    Key observations

    The largest age group in Free Soil, MI was the 50 to 54 years group, with a population of 13 (12.62%), according to the ACS 2018-2022 5-Year Estimates. At the same time, the smallest age group in Free Soil, MI was the 40 to 44 years group, with a population of 1 (0.97%). Source: U.S. Census Bureau American Community Survey (ACS) 2018-2022 5-Year Estimates.

    Content

    When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2018-2022 5-Year Estimates

    Age groups:

    • Under 5 years
    • 5 to 9 years
    • 10 to 14 years
    • 15 to 19 years
    • 20 to 24 years
    • 25 to 29 years
    • 30 to 34 years
    • 35 to 39 years
    • 40 to 44 years
    • 45 to 49 years
    • 50 to 54 years
    • 55 to 59 years
    • 60 to 64 years
    • 65 to 69 years
    • 70 to 74 years
    • 75 to 79 years
    • 80 to 84 years
    • 85 years and over

    Variables / Data Columns

    • Age Group: This column displays the age group in consideration
    • Population: The population for the specific age group in the Free Soil is shown in this column.
    • % of Total Population: This column displays the population of each age group as a proportion of the Free Soil total population. Please note that the percentages may not sum to 100% due to rounding.
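
    The five-year bucketing described under "Measurement technique" can be reproduced as follows; a pandas sketch assuming a person-level table with an age column (the column and file names are illustrative):

    import pandas as pd

    df = pd.read_csv("free_soil_acs_persons.csv")  # illustrative input

    # 18 buckets: Under 5, 5-9, ..., 80-84, then a single 85+ group.
    edges = list(range(0, 86, 5)) + [200]
    labels = (["Under 5 years"]
              + [f"{lo} to {lo + 4} years" for lo in range(5, 85, 5)]
              + ["85 years and over"])
    df["age_group"] = pd.cut(df["age"], bins=edges, right=False, labels=labels)

    pop = df["age_group"].value_counts().sort_index()
    pct = (pop / pop.sum() * 100).round(2)  # % of total population
    print(pd.DataFrame({"Population": pop, "% of Total Population": pct}))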

    Good to know

    Margin of Error

    Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.

    Custom data

If you need custom data for a research project, report, or presentation, you can contact our research staff at research@neilsberg.com to assess the feasibility of a custom tabulation on a fee-for-service basis.

    Inspiration

The Neilsberg Research team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.

    Recommended for further research

This dataset is part of the main Free Soil Population by Age dataset, which can be found here

  13. e

    Excel Mapping Template for London Boroughs and Wards

    • data.europa.eu
    Updated Oct 16, 2014
    Cite
    Greater London Authority (2014). Excel Mapping Template for London Boroughs and Wards [Dataset]. https://data.europa.eu/88u/dataset/excel-mapping-template-for-london-boroughs-and-wards1
    Explore at:
    Dataset updated
    Oct 16, 2014
    Dataset authored and provided by
    Greater London Authority
    Area covered
    London
    Description

    A free mapping tool that allows you to create a thematic map of London without any specialist GIS skills or software - all you need is Microsoft Excel. Templates are available for London’s Boroughs and Wards. Full instructions are contained within the spreadsheets.

    Macros

The tool works in any version of Excel, but the user MUST ENABLE MACROS for the features to work. There are some restrictions on the functionality of the ward maps in Excel 2003 and earlier; full instructions are included in the spreadsheet.

To check whether macros are enabled in Excel 2003, click Tools, Macro, Security and change the setting to Medium. Then restart Excel for the changes to take effect. When Excel starts up, a prompt will ask whether you want to enable macros; click Yes.

In Excel 2007 and later, the correct setting should be in place by default. If it has been changed, click the Office button in the top corner, then Excel Options (at the bottom), Trust Centre, Trust Centre Settings, and make sure the setting is 'Disable all macros with notification'. When you then open the spreadsheet, a prompt labelled 'Options' will appear at the top, allowing you to enable macros.

    To create your own thematic borough maps in Excel using the ward map tool as a starting point, read these instructions. You will need to be a confident Excel user, and have access to your boundaries as a picture file from elsewhere. The mapping tools created here are all fully open access with no passwords.

    Copyright notice: If you publish these maps, a copyright notice must be included within the report saying: "Contains Ordnance Survey data © Crown copyright and database rights."

    NOTE: Excel 2003 users must 'ungroup' the map for it to work.

  14. Data from: Annotation-free Audio-Visual Segmentation

    • zenodo.org
    bin, csv, zip
    Updated Aug 22, 2023
    Cite
    Jinxiang Liu; Yu Wang; Chen Ju; Chaofan Ma; Ya Zhang; Weidi Xie; Jinxiang Liu; Yu Wang; Chen Ju; Chaofan Ma; Ya Zhang; Weidi Xie (2023). Annotation-free Audio-Visual Segmentation [Dataset]. http://doi.org/10.48550/arxiv.2305.11019
    Explore at:
Available download formats: zip, bin, csv
    Dataset updated
    Aug 22, 2023
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Jinxiang Liu; Yu Wang; Chen Ju; Chaofan Ma; Ya Zhang; Weidi Xie; Jinxiang Liu; Yu Wang; Chen Ju; Chaofan Ma; Ya Zhang; Weidi Xie
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ## AVS-Synthetic Dataset
    **********
    ### Updated 2023-08-22
1. The paper [`Annotation-free Audio-Visual Segmentation`](https://arxiv.org/abs/2305.11019v3) accompanying the dataset has been accepted by WACV 2024. The project page is [https://jinxiang-liu.github.io/anno-free-AVS/](https://jinxiang-liu.github.io/anno-free-AVS/).

    2. We release the codes at [https://github.com/jinxiang-liu/anno-free-AVS](https://github.com/jinxiang-liu/anno-free-AVS).

3. For technical reasons, some audio clips (for training) were missing from the original `audios.zip` file. If you downloaded the dataset before August 22nd, please re-download `audios.zip` to replace the original one; otherwise, just ignore this message and download the dataset.

    4. If you have any problems, feel free to contact `jinxliu#sjtu.edu.cn` (replace `#` with `@`).

    **********
- Note: the dataset corresponds to the arXiv paper https://arxiv.org/abs/2305.11019v3.



    - The `images` and `masks` folders provide the image-mask pairs from LVIS and OpenImages.

- The `audios` folder contains the 3-second audio clips from VGGSound; please use the center 1-second sub-clip for training and evaluation. The pickle file `category_for_vggsound_audios.pkl` describes the labels of the audios; the labels correspond to the `cls_id` column in the `annotations.csv` file used for model training.

- The `annotations.csv` file provides the annotations for each training, validation, and testing sample. For the training samples, we do not specify the audios; in practice, just randomly sample VGGSound audios with the matching `cls_id` in each epoch to compose the (image, mask, audio) triplets (see the sketch below). For the validation and test sets, we designate the audio sample from VGGSound for each image-mask sample.
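As a hedged illustration of that sampling scheme (not code from the authors' release; the sample rate, the `split`, `image`, and `mask` column names, and the audio index are assumptions), a training triplet could be composed like this:

import random
import numpy as np
import pandas as pd

SR = 16000  # assumed sample rate of the released clips

def center_second(wave: np.ndarray, sr: int = SR) -> np.ndarray:
    # Crop the center 1-second window out of a 3-second clip.
    mid = len(wave) // 2
    return wave[mid - sr // 2 : mid + sr // 2]

# annotations.csv and cls_id are named in the README; the "split",
# "image", and "mask" columns are assumptions about its layout.
ann = pd.read_csv("annotations.csv")
audio_index = {}  # cls_id -> list of VGGSound clip paths, e.g. built from
                  # category_for_vggsound_audios.pkl in a real pipeline

def sample_training_triplet():
    row = ann[ann["split"] == "train"].sample(1).iloc[0]
    audio_path = random.choice(audio_index[row["cls_id"]])
    return row["image"], row["mask"], audio_path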

  15. Z

    Nikolai Medtner – Tales (A corpus of annotated scores)

    • data.niaid.nih.gov
    Updated Mar 10, 2025
    Cite
    Johannes Hentschel (2025). Nikolai Medtner – Tales (A corpus of annotated scores) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7473528
    Explore at:
    Dataset updated
    Mar 10, 2025
    Dataset provided by
    Martin Rohrmeier
    Markus Neuwirth
    Yannis Rammos
    Johannes Hentschel
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

This is a README file for a data repository originating from the DCML corpus initiative and serves as the welcome page for both

    the GitHub repo https://github.com/DCMLab/medtner_tales and the corresponding

    documentation page https://dcmlab.github.io/medtner_tales

    For information on how to obtain and use the dataset, please refer to this documentation page.

    Nikolai Medtner – Tales (A corpus of annotated scores)

    Getting the data

download the repository as a ZIP file

    download a Frictionless Datapackage that includes concatenations of the TSV files in the four folders (measures, notes, chords, and harmonies) and a JSON descriptor:

    medtner_tales.zip

    medtner_tales.datapackage.json

    clone the repo: git clone https://github.com/DCMLab/medtner_tales.git

    Data Formats

    Each piece in this corpus is represented by five files with identical name prefixes, each in its own folder. For example, the first tale has the following files:

    MS3/op08n01.mscx: Uncompressed MuseScore 3.6.2 file including the music and annotation labels.

notes/op08n01.notes.tsv: A table of all note heads contained in the score and their relevant features (not every note head represents an onset; some are tied together)

    measures/op08n01.measures.tsv: A table with relevant information about the measures in the score.

    chords/op08n01.chords.tsv: A table containing layer-wise unique onset positions with the musical markup (such as dynamics, articulation, lyrics, figured bass, etc.).

    harmonies/op08n01.harmonies.tsv: A table of the included harmony labels (including cadences and phrases) with their positions in the score.

    Each TSV file comes with its own JSON descriptor that describes the meanings and datatypes of the columns ("fields") it contains, follows the Frictionless specification, and can be used to validate and correctly load the described file.
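As a minimal sketch (not from the corpus documentation), the frictionless Python package can validate and load such a descriptor; the package file name comes from the "Getting the data" section above, while the resource access shown below is an assumption about how you might use it:

from frictionless import Package, validate

# Validate the datapackage descriptor against the Frictionless specification
report = validate("medtner_tales.datapackage.json")
print(report.valid)

# Load the package and read rows from one of its resources
package = Package("medtner_tales.datapackage.json")
print(package.resource_names)
rows = package.get_resource(package.resource_names[0]).read_rows()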

    Opening Scores

After navigating to your local copy, you can open the scores in the folder MS3 with the free and open-source score editor MuseScore. Please note that the scores have been edited, annotated, and tested with MuseScore 3.6.2. MuseScore 4 has since been released; it renders them correctly but cannot store them back in the same format.

    Opening TSV files in a spreadsheet

Tab-separated value (TSV) files are like comma-separated value (CSV) files and can be opened with most modern text editors. However, to display the columns correctly, you may want to use a spreadsheet or an add-on for your favourite text editor. When you use a spreadsheet such as Excel, it may annoy you by interpreting fractions as dates. This can be circumvented by using Data --> From Text/CSV, or the free alternative LibreOffice Calc. Other than that, TSV data can be loaded with every modern programming language.

    Loading TSV files in Python

    Since the TSV files contain null values, lists, fractions, and numbers that are to be treated as strings, you may want to use this code to load any TSV files related to this repository (provided you're doing it in Python). After a quick pip install -U ms3 (requires Python 3.10 or later) you'll be able to load any TSV like this:

    import ms3

labels = ms3.load_tsv("harmonies/op08n01.harmonies.tsv")
notes = ms3.load_tsv("notes/op08n01.notes.tsv")

    Version history

    See the GitHub releases.

    Questions, Suggestions, Corrections, Bug Reports

    Please create an issue and/or feel free to fork and submit pull requests.

    Cite as

    Hentschel, J., Rammos, Y., Neuwirth, M., Moss, F. C., & Rohrmeier, M. (2024). An annotated corpus of tonal piano music from the long 19th century. Empirical Musicology Review, 18(1), 84–95. https://doi.org/10.18061/emr.v18i1.8903

    License

    Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

  16. P

    MIMIC-IV-Note Dataset

    • paperswithcode.com
    Updated Feb 24, 2025
    + more versions
    Cite
    (2025). MIMIC-IV-Note Dataset [Dataset]. https://paperswithcode.com/dataset/mimic-iv-note
    Explore at:
    Dataset updated
    Feb 24, 2025
    Description

    The advent of large, open access text databases has driven advances in state-of-the-art model performance in natural language processing (NLP). The relatively limited amount of clinical data available for NLP has been cited as a significant barrier to the field's progress. Here we describe MIMIC-IV-Note: a collection of deidentified free-text clinical notes for patients included in the MIMIC-IV clinical database. MIMIC-IV-Note contains 331,794 deidentified discharge summaries from 145,915 patients admitted to the hospital and emergency department at the Beth Israel Deaconess Medical Center in Boston, MA, USA. The database also contains 2,321,355 deidentified radiology reports for 237,427 patients. All notes have had protected health information removed in accordance with the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor provision. All notes are linkable to MIMIC-IV providing important context to the clinical data therein. The database is intended to stimulate research in clinical natural language processing and associated areas.

  17. Z

    Data from: Computational 3D resolution enhancement for optical coherence...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Jeroen Kalkman (2024). Computational 3D resolution enhancement for optical coherence tomography with a narrowband visible light source [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7870794
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Jeroen Kalkman
    George-Othon Glentis
    Jos de Wit
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the code and data underlying the publication "Computational 3D resolution enhancement for optical coherence tomography with a narrowband visible light source" in Biomedical Optics Express 14, 3532-3554 (2023) (doi.org/10.1364/BOE.487345).

The reader is free to use the scripts and data in this repository, as long as the manuscript is correctly cited in their work. For further questions, please contact the corresponding author.

    Description of the code and datasets

Table 1 describes all the Matlab and Python scripts in this repository. Table 2 describes the datasets. The input datasets are the phase-corrected datasets, as the raw data is large in size and phase correction using a coverslip as reference is rather straightforward. Processed datasets are also added to the repository to allow running only a limited number of scripts, or to obtain, for example, the aberration-corrected data without the need to use Python. Note that the simulation input data (input_simulations_pointscatters_SLDshape_98zf_noise75.mat) is generated with random noise, so if this file is overwritten the results may vary slightly. The aberration correction is likewise done with random apertures, so the processed aberration-corrected data (exp_pointscat_image_MIAA_ISAM_CAO.mat and exp_leaf_image_MIAA_ISAM_CAO.mat) will also change slightly if the aberration correction script is run anew. The current processed datasets are used as the basis for the figures in the publication. For details on the implementation we refer to the publication.

    Table 1: The Matlab and Python scripts with their description
    
    
        Script name
        Description
    
    
        MIAA_ISAM_processing.m
This script performs the DFT, RFIAA and MIAA processing of the phase-corrected data that can be loaded from the datasets. Afterwards it also applies ISAM on the DFT and MIAA data and plots the results in a figure (via the scripts plot_figure3, plot_figure5 and plot_simulationdatafigure).
    
    
        resolution_analysis_figure4.m
This script loads the point-scatterer data (absolute amplitude data), locates the point scatterers, and fits them to obtain the resolution data. Finally, it plots figure 4 of the publication.
    
    
        fiaa_oct_c1.m, oct_iaa_c1.m, rec_fiaa_oct_c1.m, rfiaa_oct_c1.m 
        These four functions are used to apply fast IAA and MIAA. See script MIAA_ISAM_processing.m for their usage.
    
    
        viridis.m, morgenstemning.m
        These scripts define the colormaps for the figures.
    
    
        plot_figure3.m, plot_figure5.m, plot_simulationdatafigure.m
        These scripts are used to plot the figures 3 and 5 and a figure with simulation data. These scripts are executed at the end of script MIAA_ISAM_processing.m.
    
    
        Python script: computational_adaptive_optics_script.py
Python script that applies computational adaptive optics to obtain the data for figure 6 of the manuscript.
    
    
        Python script: zernike_functions2.py
Python script that gives the values and Cartesian derivatives of the Zernike polynomials.
    
    
        figure6_ComputationalAdaptiveOptics.m
        Script that loads the CAO data that was saved in Python, analyzes the resolution, and plots figure 6.
    
    
        Python script: OCTsimulations_3D_script2.py
Python script that simulates OCT data, adds noise, and saves it as a .mat file for use in the Matlab scripts above.
    
    
        Python script: OCTsimulations2.py
Module containing a Python class that can be used to simulate 3D OCT datasets based on a Gaussian beam.
    
    
        Matlab toolbox DIPimage 2.9.zip
DIPimage is used in the scripts. The toolbox can be downloaded online, or this zip can be used.
    
    
    
    
    
    
Table 2: The datasets in this Zenodo repository
    
    
        Name
        Description
    
    
        input_leafdisc_phasecorrected.mat
        Phase corrected input image of the leaf disc (used in figure 5).
    
    
        input_TiO2gelatin_004_phasecorrected.mat
        Phase corrected input image of the TiO2 in gelatin sample.
    
    
        input_simulations_pointscatters_SLDshape_98zf_noise75
        Input simulation data that, once processed, is used in figure 4.
    

    exp_pointscat_image_DFT.mat

    exp_pointscat_image_DFT_ISAM.mat

    exp_pointscat_image_RFIAA.mat

    exp_pointscat_image_MIAA_ISAM.mat

    exp_pointscat_image_MIAA_ISAM_CAO.mat

        Processed experimental amplitude data for the TiO2 point scattering sample with respectively DFT, DFT+ISAM, RFIAA, MIAA+ISAM and MIAA+ISAM+CAO. These datasets are used for fitting in figure 4 (except for CAO), and MIAA_ISAM and MIAA_ISAM_CAO are used for figure 6.
    

    simu_pointscat_image_DFT.mat

    simu_pointscat_image_RFIAA.mat

    simu_pointscat_image_DFT_ISAM.mat

    simu_pointscat_image_MIAA_ISAM.mat

        Processed amplitude data from the simulation dataset, which is used in the script for figure 4 for the resolution analysis.
    

    exp_leaf_image_MIAA_ISAM.mat

    exp_leaf_image_MIAA_ISAM_CAO.mat

        Processed amplitude data from the leaf sample, with and without aberration correction which is used to produce figure 6.
    

    exp_leaf_zernike_coefficients_CAO_normal_wmaf.mat

    exp_pointscat_zernike_coefficients_CAO_normal_wmaf.mat

        Estimated Zernike coefficients and the weighted moving average of them that is used for the computational aberration correction. Some of this data is plotted in Figure 6 of the manuscript.
    
    
        input_zernike_modes.mat
        The reference Zernike modes corresponding to the data that is loaded to give the modes the proper name.
    

    exp_pointscat_MIAA_ISAM_complex.mat

    exp_leaf_MIAA_ISAM_complex

        Complex MIAA+ISAM processed data that is used as input for the computational aberration correction.
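For orientation only (not part of the repository), here is a minimal Python sketch for peeking into one of the .mat datasets, assuming they are pre-v7.3 MATLAB files (v7.3 files would require h5py instead); the variable names stored inside each file are dataset-specific:

from scipy.io import loadmat

# Load one of the processed amplitude datasets
data = loadmat("exp_pointscat_image_MIAA_ISAM.mat")

# List the variables stored in the file, skipping MATLAB header keys
print([key for key in data if not key.startswith("__")])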
    
  18. USA Name Data

    • kaggle.com
    zip
    Updated Feb 12, 2019
    Cite
    Data.gov (2019). USA Name Data [Dataset]. https://www.kaggle.com/datasets/datagov/usa-names
    Explore at:
Available download formats: zip (0 bytes)
    Dataset updated
    Feb 12, 2019
    Dataset provided by
Data.gov (https://data.gov/)
    License

https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    United States
    Description

    Context

Cultural diversity in the U.S. has led to great variations in names and naming traditions, and names have been used to express creativity, personality, cultural identity, and values. Source: https://en.wikipedia.org/wiki/Naming_in_the_United_States

    Content

    This public dataset was created by the Social Security Administration and contains all names from Social Security card applications for births that occurred in the United States after 1879. Note that many people born before 1937 never applied for a Social Security card, so their names are not included in this data. For others who did apply, records may not show the place of birth, and again their names are not included in the data.

    All data are from a 100% sample of records on Social Security card applications as of the end of February 2015. To safeguard privacy, the Social Security Administration restricts names to those with at least 5 occurrences.

    Fork this kernel to get started with this dataset.

    Acknowledgements

    https://bigquery.cloud.google.com/dataset/bigquery-public-data:usa_names

    https://cloud.google.com/bigquery/public-data/usa-names

    Dataset Source: Data.gov. This dataset is publicly available for anyone to use under the following terms provided by the Dataset Source — http://www.data.gov/privacy-policy#data_policy — and is provided "AS IS" without any warranty, express or implied, from Google. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.

Banner Photo by @dcp from Unsplash.

    Inspiration

    What are the most common names?

    What are the most common female names?

Are there more female or male names, and if so, by how wide a margin? (See the sketch below.)
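A minimal sketch for exploring those questions, assuming a local CSV export whose columns follow the public BigQuery usa_names schema (state, gender, year, name, number); the file name is hypothetical:

import pandas as pd

# Hypothetical local export of the table bigquery-public-data:usa_names
df = pd.read_csv("usa_names.csv")

# Most common names overall
print(df.groupby("name")["number"].sum().nlargest(10))

# Most common female names
print(df[df["gender"] == "F"].groupby("name")["number"].sum().nlargest(10))

# Number of distinct names per gender
print(df.groupby("gender")["name"].nunique())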

  19. o

    Data from: Workplace Charging Data

    • openenergyhub.ornl.gov
    • ornl.opendatasoft.com
    csv, excel, json
    Updated Apr 2, 2025
    + more versions
    Cite
    (2025). Workplace Charging Data [Dataset]. https://openenergyhub.ornl.gov/explore/dataset/workplace-charging-data/
    Explore at:
Available download formats: excel, json, csv
    Dataset updated
    Apr 2, 2025
    Description

Note: Sample data provided. A dataset gathered and maintained by NREL that tracks over 300 vehicles over the course of a 4-year period and how they behave in a workplace charging capacity. The data is further enriched by examining the effect of free charging versus paid charging. There is also a distinction in the data marked by the onset of COVID-19. Vehicles are owned and operated by employees and range from smaller-pack PHEVs to larger-pack BEVs. https://data.nrel.gov/submissions/182

  20. d

    Data from: National Longitudinal Study of Adolescent Health (Add Health)

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Harvard Dataverse (2023). National Longitudinal Study of Adolescent Health (Add Health) [Dataset]. http://doi.org/10.7910/DVN/TM2WCE
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Description

Users can download or order data regarding adolescent health and well-being and the factors that influence the adolescent transition into adulthood.

Background

The Add Health Study, conducted by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, began during the 1994-1995 school year with a nationally representative sample of students in grades 7-12. The cohort has been followed into adulthood. Participants' social, physical, economic, and psychological information is ascertained within the contexts of their family, neighborhood, school, peer groups, friendships, and romantic relationships. The original purpose of the study was to understand factors that may influence adolescent behaviors, but as the study has continued, it has evolved to gather information on the factors related to the transition into adulthood.

User Functionality

Users can download or order the CD-ROM of the public use data sets (which include only a subset of the sample). To do so, users must create a free login with Data Sharing for Demographic Research, which is part of the Inter-University Consortium for Political and Social Research, or contact Sociometrics. Links to both data warehouses are provided.

Data Notes

The study began in 1994; respondents were followed up in 1996, 2001-2002, and 2007-2008. In addition to the cohort members, parents, siblings, fellow students, school administrators, and romantic partners are also interviewed.
