58 datasets found
  1. Average time spent on other billable work per week by attorneys in the U.S....

    • statista.com
    Updated Jul 9, 2025
    Cite
    Statista (2025). Average time spent on other billable work per week by attorneys in the U.S. 2020 [Dataset]. https://www.statista.com/statistics/869589/us-legal-services-time-spent-on-other-billable-work/
    Explore at:
    Dataset updated
    Jul 9, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Feb 14, 2020 - Apr 17, 2020
    Area covered
    United States
    Description

    This statistic depicts the average amount of time per week that attorneys in the United States spent in 2020 on billable work other than meeting clients or representing clients in court. During the survey, ** percent of respondents reported spending ** hours or more per week on non-client-facing billable work such as legal research, court filings, and administrative/managerial work.

  2. Change point estimation in monitoring survival time: average of posterior...

    • researchdatafinder.qut.edu.au
    Updated Aug 15, 2019
    + more versions
    Cite
    Distinguished Professor Kerrie Mengersen (2019). Change point estimation in monitoring survival time: average of posterior estimates of step change point model parameters [Dataset]. https://researchdatafinder.qut.edu.au/display/n4669
    Explore at:
    Dataset updated
    Aug 15, 2019
    Dataset provided by
    Queensland University of Technology (QUT)
    Authors
    Distinguished Professor Kerrie Mengersen
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was collected to model change point estimation in time-to-event data for a clinical process with dichotomous outcomes, death and survival, where patient mix was present. Modelling was completed using a Bayesian framework. The performance of the Bayesian estimators was investigated through simulation in conjunction with RAST CUSUM control charts for monitoring right censored survival time of patients who underwent cardiac surgery procedures within a follow-up period of 30 days.

    The dataset presents the average of posterior estimates (mode, s.d.) of the step change point model parameters for a change in the mean survival time following signals (run lengths, RL) from the RAST CUSUM chart.
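As a generic illustration of the CUSUM signalling idea referenced above (this is a plain one-sided CUSUM sketch, not the risk-adjusted RAST CUSUM itself; the scores and threshold are made up):

```python
def cusum_run_length(scores, threshold):
    """Return the run length (RL): the index of the first sample at which a
    one-sided CUSUM of the scores crosses the threshold, or None if it never does."""
    c = 0.0
    for i, w in enumerate(scores, start=1):
        c = max(0.0, c + w)  # accumulate evidence of a change, resetting at zero
        if c > threshold:
            return i
    return None

# Made-up per-observation scores: small negatives while in control,
# positives after a change in the monitored process.
scores = [-0.2, -0.1, -0.3, 0.8, 0.9, 1.1, 0.7]
print(cusum_run_length(scores, threshold=2.0))  # signals at observation 6
```

The run length at signal is the quantity the posterior change point estimates in this dataset are conditioned on.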

  3. Average daily time spent on social media worldwide 2012-2025

    • statista.com
    Updated Jun 19, 2025
    + more versions
    Cite
    Statista (2025). Average daily time spent on social media worldwide 2012-2025 [Dataset]. https://www.statista.com/statistics/433871/daily-social-media-usage-worldwide/
    Explore at:
    Dataset updated
    Jun 19, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    How much time do people spend on social media? As of 2025, the average daily social media usage of internet users worldwide amounted to 141 minutes per day, down from 143 minutes in the previous year. Currently, the country with the most time spent on social media per day is Brazil, where online users spend an average of 3 hours and 49 minutes on social media each day. In comparison, the daily time spent on social media in the U.S. was just 2 hours and 16 minutes.

    Global social media usage

    Currently, the global social network penetration rate is 62.3 percent. Northern Europe had an 81.7 percent social media penetration rate, topping the ranking of global social media usage by region. Eastern and Middle Africa closed the ranking with 10.1 and 9.6 percent usage reach, respectively. People access social media for a variety of reasons. Users like to find funny or entertaining content and enjoy sharing photos and videos with friends, but mainly use social media to stay in touch with friends and keep up with current events.

    Global impact of social media

    Social media has a wide-reaching and significant impact on not only online activities but also offline behavior and life in general. During a global online user survey in February 2019, a significant share of respondents stated that social media had increased their access to information, ease of communication, and freedom of expression. On the flip side, respondents also felt that social media had worsened their personal privacy, increased political polarization, and heightened everyday distractions.

  4. Data from: Enhancing research informatics core user satisfaction through...

    • data.niaid.nih.gov
    • search.dataone.org
    • +2more
    zip
    Updated Nov 22, 2021
    Cite
    Andrew Post; Jared Luther; J Loveless; Melanie Ward; Shirleen Hewitt (2021). Enhancing research informatics core user satisfaction through agile practices [Dataset]. http://doi.org/10.5061/dryad.00000004v
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 22, 2021
    Dataset provided by
    University of Utah
    Authors
    Andrew Post; Jared Luther; J Loveless; Melanie Ward; Shirleen Hewitt
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: The Huntsman Cancer Institute (HCI) Research Informatics Shared Resource (RISR), a software and database development core facility, sought to address the lack of published operational best practices for research informatics cores, and to use those insights to enhance effectiveness after an increase in team size from 20 to 31 full-time equivalents coincided with a reduction in user satisfaction.

    Materials and Methods: RISR migrated from a water-scrum-fall model of software development to agile software development practices, which emphasize iteration and collaboration. RISR's agile implementation emphasizes the product owner role, which is responsible for user engagement and may be particularly valuable in software development that requires close engagement with users, as in science.

    Results: All of RISR's software development teams implemented agile practices in early 2020. All project teams are led by a product owner who serves as the voice of the user on the development team. Annual user survey scores for service quality and turnaround time recorded nine months after implementation increased by 17% and 11%, respectively.

    Discussion: RISR is illustrative of the increasing size of research informatics cores and the need to identify best practices for maintaining high effectiveness. Agile practices may address concerns about the fit of software engineering practices in science. The study had one time point after implementing agile practices and one site, limiting its generalizability.

    Conclusion: Agile software development may substantially increase a research informatics core facility's effectiveness and should be studied further as a potential best practice for how such cores are operated.

    Methods: We used HCI's annual user survey of its shared resources to evaluate the impact of RISR's new structure in its first year. The survey is administered by the HCI Research Administration office and distributed through SurveyMonkey to cancer center members and recent users of at least one HCI shared resource. While the survey asks many questions that applied to RISR, the questions that are the focus of this analysis are:

    1. Overall, how would you rate the quality of the service/product you received from the Research Informatics Shared Resource? (Answers: exceptional, high, average, poor, unacceptable)
    2. Overall, how would you rate the turnaround time for receiving data, products, or other services from the Research Informatics Shared Resource? (Answers: exceptional, high, average, poor, unacceptable)

    The user survey was open between September 11 and September 24, providing feedback nine months after RISR introduced agile practices into its operations. A total of 17 respondents answered the questions above out of 52 identified RISR users (33% response rate).

  5. Living Standards Measurement Survey 2003 (Wave 3 Panel) - Bosnia-Herzegovina...

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +1more
    Updated Jan 30, 2020
    + more versions
    Cite
    State Agency for Statistics (BHAS) (2020). Living Standards Measurement Survey 2003 (Wave 3 Panel) - Bosnia-Herzegovina [Dataset]. https://microdata.worldbank.org/index.php/catalog/67
    Explore at:
    Dataset updated
    Jan 30, 2020
    Dataset provided by
    State Agency for Statistics (BHAS)
    Federation of BiH Institute of Statistics (FIS)
    Republika Srpska Institute of Statistics (RSIS)
    Time period covered
    2003
    Area covered
    Bosnia and Herzegovina
    Description

    Abstract

    In 2001, the World Bank, in co-operation with the Republika Srpska Institute of Statistics (RSIS), the Federal Institute of Statistics (FOS), and the Agency for Statistics of BiH (BHAS), carried out a Living Standards Measurement Survey (LSMS). In addition to collecting the information necessary to obtain as comprehensive a measure as possible of the basic dimensions of household living standards, the LSMS has three basic objectives:

    1. To provide the public sector, government, the business community, scientific institutions, international donor organizations and social organizations with information on different indicators of the population's living conditions, as well as on available resources for satisfying basic needs.

    2. To provide information for the evaluation of the results of different forms of government policy and programs developed with the aim to improve the population's living standard. The survey will enable the analysis of the relations between and among different aspects of living standards (housing, consumption, education, health, labor) at a given time, as well as within a household.

    3. To provide key contributions for development of government's Poverty Reduction Strategy Paper, based on analyzed data.

    The Department for International Development, UK (DFID) contributed funding to the LSMS and provided funding for a further two years of data collection for a panel survey, known as the Household Survey Panel Series (HSPS). Birks Sinclair & Associates Ltd. were responsible for the management of the HSPS, with technical advice and support provided by the Institute for Social and Economic Research (ISER), University of Essex, UK. The panel survey provides longitudinal data through re-interviewing approximately half the LSMS respondents for two years following the LSMS, in the autumn of 2002 and 2003. The LSMS constitutes Wave 1 of the panel survey, so there are three years of panel data available for analysis. For the purposes of this documentation we use the following convention to describe the different rounds of the panel survey:
    - Wave 1: LSMS conducted in 2001, forming the baseline survey for the panel
    - Wave 2: Second interview of 50% of LSMS respondents in Autumn/Winter 2002
    - Wave 3: Third interview with sub-sample respondents in Autumn/Winter 2003

    The panel data allows the analysis of key transitions and events over this period, such as labour market or geographical mobility, and the observation of the consequent outcomes for the well-being of individuals and households in the survey. The panel data provides information on income and labour market dynamics within FBiH and RS. A key policy area is developing strategies for the reduction of poverty within FBiH and RS. The panel will provide information on the extent to which continuous poverty is experienced by different types of households and individuals over the three-year period. Most importantly, the co-variates associated with moves into and out of poverty and the relative risks of poverty for different people can be assessed. As such, the panel aims to provide data which will inform the policy debates within FBiH and RS at a time of social reform and rapid change.

    Geographic coverage

    National coverage. Domains: Urban/rural/mixed; Federation; Republic

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    The Wave 3 sample consisted of the 2,878 households interviewed at Wave 2, plus a further 73 households that were interviewed at Wave 1 but not contacted at Wave 2. A total of 2,951 households (1,301 in the RS and 1,650 in FBiH) were issued for Wave 3. As at Wave 2, the sample could not be replaced with any other households.

    Panel design

    Eligibility for inclusion

    The household and household membership definitions are the same standard definitions as at Wave 2. Sample membership status and eligibility for interview are as follows:

    i) All members of households interviewed at Wave 2 have been designated as original sample members (OSMs). OSMs include children within households even if they are too young for interview.
    ii) Any new members joining a household containing at least one OSM are eligible for inclusion and are designated as new sample members (NSMs).
    iii) At each wave, all OSMs and NSMs are eligible for inclusion, apart from those who move out-of-scope (see discussion below).
    iv) All household members aged 15 or over are eligible for interview, including OSMs and NSMs.

    Following rules

    The panel design means that sample members who move from their previous wave address must be traced and followed to their new address for interview. In some cases the whole household will move together but in others an individual member may move away from their previous wave household and form a new split-off household of their own. All sample members, OSMs and NSMs, are followed at each wave and an interview attempted. This method has the benefit of maintaining the maximum number of respondents within the panel and being relatively straightforward to implement in the field.

    Definition of 'out-of-scope'

    It is important to maintain movers within the sample to maintain sample sizes and reduce attrition and also for substantive research on patterns of geographical mobility and migration. The rules for determining when a respondent is 'out-of-scope' are as follows:

    i. Movers out of the country altogether, i.e. outside FBiH and RS. This category of mover is clear: sample members moving to another country outside FBiH and RS are out-of-scope for that year of the survey and not eligible for interview.

    ii. Movers between entities. Respondents moving between entities are followed for interview. The personal details of the respondent are passed between the statistical institutes and a new interviewer is assigned in that entity.

    iii. Movers into institutions. Although institutional addresses were not included in the original LSMS sample, Wave 3 individuals who have subsequently moved into some institutions are followed. The definitions of which institutions are included are found in the Supervisor Instructions.

    iv. Movers into the district of Brcko. These movers are followed for interview. When coding entity, Brcko is treated as the entity from which the household that moved into Brcko originated.

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    Questionnaire design

    Approximately 90% of the questionnaire (Annex B) is based on the Wave 2 questionnaire, carrying forward core measures that are needed to measure change over time. The questionnaire was widely circulated and changes were made as a result of comments received.

    Pretesting

    In order to undertake a longitudinal test, the Wave 2 pretest sample was used. The Control Forms and Advance Letters were generated from an Access database containing details of ten households in Sarajevo and fourteen in Banja Luka. The pretest was undertaken from March 24 to April 4 and resulted in 24 households (51 individuals) successfully interviewed. One mover household was successfully traced and interviewed.
    In order to test the questionnaire under the hardest circumstances, a briefing was not held. A list of the main questionnaire changes was given to experienced interviewers.

    Issues arising from the pretest

    Interviewers were asked to complete a Debriefing and Rating form. The debriefing form captured opinions on the following three issues:

    1. General reaction to being re-interviewed. In some cases there was a wariness of being asked to participate again, with some individuals asking "Why me?" Interviewers did a good job of persuading people to take part; only one household refused, and another asked to be removed from the sample next year. Having the same interviewer return to the same households was considered an advantage. Most respondents asked what the benefit to them of taking part in the survey was. This aspect was reemphasised in the Advance Letter, the Respondent Report, and the training of the Wave 3 interviewers.

    2. Length of the questionnaire. The average time of interview was 30 minutes. No problems were mentioned in relation to the timing, though interviewers noted that some respondents, particularly the elderly, tended to wander off the point and that control was needed to bring them back to the questions in the questionnaire. One interviewer noted that the economic situation of many respondents seemed to have worsened since the previous year and that it was necessary to listen to respondents' "stories" during the interview.

    3. Confidentiality. No problems were mentioned in relation to confidentiality, though interviewers suggested it might be worth referring to the new Statistics Law in the Advance Letter. The Rating Form asked for details of specific questions that were unclear. These are described below with a description of the changes made.

    • Module 3. Q29-31 have been added to capture funds received for education, scholarships etc.

    • Module 4. Pretest respondents complained that the 6 questions on "Has your health limited you..." and the 16 on "in the last 7 days have you felt depressed” etc were too many. These were reduced by half (Q38-Q48). The LSMS data was examined and those questions where variability between the answers was widest were chosen.

    • Module 5. The new employment questions (Q42-Q44) worked well and have been kept in the main questionnaire.

    • Module 7. There were no problems reported with adding the credit questions (Q28-Q36)

    • Module 9. SIG recommended that some of Questions 1-12 were relevant only to those aged over 18, so additional skips have been added. Some respondents complained the questionnaire was boring. To try and overcome

  6. Near-Real Time Year Average Surface Ocean Velocity, U.S. West Coast, 2km...

    • catalog.data.gov
    • data.ioos.us
    Updated Aug 27, 2024
    + more versions
    Cite
    Coastal Observing Research and Development Center, Scripps Institution of Oceanography (Point of Contact) (2024). Near-Real Time Year Average Surface Ocean Velocity, U.S. West Coast, 2km Resolution [Dataset]. https://catalog.data.gov/dataset/near-real-time-year-average-surface-ocean-velocity-u-s-west-coast-2km-resolution
    Explore at:
    Dataset updated
    Aug 27, 2024
    Dataset provided by
    Coastal Observing Research and Development Center, Scripps Institution of Oceanography (Point of Contact)
    Area covered
    West Coast of the United States, United States
    Description

    Surface ocean velocities estimated from HF-Radar are representative of the upper 1.0 meters of the ocean. The main objective of near-real time processing is to produce the best product from available data at the time of processing. Radial velocity measurements are obtained from individual radar sites through the U.S. HF-Radar Network. Hourly radial data are processed by unweighted least squares on a 2km resolution grid of the U.S. West Coast to produce hourly near real-time surface current maps. The year average is computed from all available hourly near real-time surface current maps for the given year.
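The year-averaging step described above can be sketched as a gap-aware mean over the stack of hourly current maps (a minimal illustration of the averaging, not the actual HF-Radar processing code; the array values and names are made up):

```python
import numpy as np

# Hypothetical stack of hourly surface-current maps with shape (hours, lat, lon),
# using NaN where a grid cell had no radar coverage that hour.
hourly_u = np.array([
    [[0.10, np.nan], [0.30, 0.40]],
    [[0.20, 0.60],   [np.nan, 0.40]],
])

# Year average = mean over all available hourly maps, ignoring gaps cell by cell.
year_avg_u = np.nanmean(hourly_u, axis=0)
print(year_avg_u)  # [[0.15 0.6 ] [0.3  0.4 ]]
```

Each output cell is the average of only the hours in which that cell had data, matching the idea of computing the year average "from all available hourly near real-time surface current maps".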

  7. A test case data set with requirements

    • kaggle.com
    Updated Jun 11, 2021
    Cite
    Zumar Khalid (2021). A test case data set with requirements [Dataset]. https://www.kaggle.com/zumarkhalid/a-test-case-data-set-with-requirements/activity
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 11, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Zumar Khalid
    Description

    Context

    Since I started research in the field of data science, I have noticed that there are many datasets available for NLP, medicine, images, and other subjects, but I could not find a single adequate dataset for the domain of software testing. The few datasets that are available are extracted from some piece of code or from historical data that is not publicly available to analyze. The combination of software testing and data science, especially machine learning, has a lot of potential. While conducting research on test case prioritization, especially in the initial stages of the software test cycle, I found no black-box dataset reflecting the way companies in the software industry set priorities. This was the reason I wanted such a dataset to exist, so I collected the necessary attributes, arranged them against their values, and made one.

    Content

    This data was gathered in [Aug, 2020] from a software company that worked on a car financing/lease company's whole software package, from the web front end to their management system. The dataset is in .csv format, with 2000 rows and 6 columns. The six attributes are as follows:
    B_Req --> Business requirement
    R_Prioirty --> Requirement priority of a particular business requirement
    FP --> Function point of each testing task; in our case, the test cases against each requirement cover a particular FP
    Complexity --> Complexity of a particular function point or related modules (the criteria for assigning complexity are listed in the .txt file attached with the new version)
    Time --> Estimated maximum time assigned to each function point of a particular testing task by the QA team lead or senior SQA analyst
    Cost --> Calculated cost for each function point, derived from complexity and time with the function point estimation technique using the formula: Cost = (Complexity * Time) * average amount set per task or per function point. Note: in this case the rate is set at $5 per FP.
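The cost formula stated above can be checked in a few lines (a sketch under the dataset's stated assumption of a $5 rate per function point; the function name is illustrative, not part of the dataset):

```python
# Cost per function point = (Complexity * Time) * rate, with rate = $5 per FP
# as stated in the dataset description.
RATE_PER_FP = 5  # dollars per function point

def fp_cost(complexity: float, time_hours: float, rate: float = RATE_PER_FP) -> float:
    """Estimated cost for one function point / testing task."""
    return complexity * time_hours * rate

print(fp_cost(complexity=2, time_hours=3))  # (2 * 3) * 5 = 30
```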

    Acknowledgements

    I would like to thank the people from the QA departments of different software companies, especially the team at the company that provided me with this estimation data and traceability matrix, from which I extracted and compiled the dataset. I got great help from websites like www.softwaretestinghelp.com, www.coderus.com, and many other sources, which helped me understand the whole testing process and the phases in which priorities are usually assigned.

    Inspiration

    My inspiration for collecting this data was the shortage of datasets showing the priority of test cases alongside their requirements and estimated metrics, for research into automating test case prioritization using machine learning.
    --> The dataset can be used to analyze and apply classification or any other machine learning algorithm to prioritize test cases.
    --> It can be used to reduce, select, or automate testing based on priority, on cost and time, or on complexity and requirements.
    --> It can be used to build a recommendation system for software testing, helping testing teams with task-based estimation and recommendation.

  8. South Africa Manufacturing Survey: Average Hours Worked per Factory Worker:...

    • ceicdata.com
    + more versions
    Cite
    CEICdata.com, South Africa Manufacturing Survey: Average Hours Worked per Factory Worker: Expected [Dataset]. https://www.ceicdata.com/en/south-africa/business-survey-manufacturing-turnover-weighted/manufacturing-survey-average-hours-worked-per-factory-worker-expected
    Explore at:
    Dataset provided by
    CEIC Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2016 - Dec 1, 2018
    Area covered
    South Africa
    Description

    South Africa Manufacturing Survey: Average Hours Worked per Factory Worker: Expected data was reported at -15.000 % Point in Dec 2018. This records a decrease from the previous number of -12.000 % Point for Sep 2018. South Africa Manufacturing Survey: Average Hours Worked per Factory Worker: Expected data is updated quarterly, averaging -11.000 % Point from Mar 1992 (Median) to Dec 2018, with 108 observations. The data reached an all-time high of 22.000 % Point in Jun 1994 and a record low of -48.000 % Point in Mar 2009. South Africa Manufacturing Survey: Average Hours Worked per Factory Worker: Expected data remains active status in CEIC and is reported by Bureau for Economic Research. The data is categorized under Global Database’s South Africa – Table ZA.S011: Business Survey: Manufacturing: Turnover Weighted.

  9. South Africa Manufacturing Survey: FB: Average Hours Worked per Factory...

    • ceicdata.com
    Updated Jan 15, 2025
    Cite
    CEICdata.com (2025). South Africa Manufacturing Survey: FB: Average Hours Worked per Factory Worker: Realized [Dataset]. https://www.ceicdata.com/en/south-africa/business-survey-manufacturing-turnover-weighted-food-and-beverages/manufacturing-survey-fb-average-hours-worked-per-factory-worker-realized
    Explore at:
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    CEIC Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2016 - Dec 1, 2018
    Area covered
    South Africa
    Description

    South Africa Manufacturing Survey: FB: Average Hours Worked per Factory Worker: Realized data was reported at -9.000 % Point in Dec 2018. This records a decrease from the previous number of 16.000 % Point for Sep 2018. South Africa Manufacturing Survey: FB: Average Hours Worked per Factory Worker: Realized data is updated quarterly, averaging -4.000 % Point from Mar 1975 (Median) to Dec 2018, with 176 observations. The data reached an all-time high of 31.000 % Point in Mar 1981 and a record low of -43.000 % Point in Mar 2009. South Africa Manufacturing Survey: FB: Average Hours Worked per Factory Worker: Realized data remains active status in CEIC and is reported by Bureau for Economic Research. The data is categorized under Global Database’s South Africa – Table ZA.S012: Business Survey: Manufacturing: Turnover Weighted: Food and Beverages.

  10. UK Higher Education Institution Research Data Management Policies, 2009-2016...

    • datacatalogue.cessda.eu
    Updated Jun 4, 2025
    Cite
    Horton, L (2025). UK Higher Education Institution Research Data Management Policies, 2009-2016 [Dataset]. http://doi.org/10.5255/UKDA-SN-851566
    Explore at:
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    London School of Economics and Political Science
    Authors
    Horton, L
    Time period covered
    Mar 1, 2014 - Oct 1, 2016
    Area covered
    United Kingdom
    Variables measured
    Organization
    Measurement technique
    Data collection was based on a list of UK Higher Education Institutions with data policies, provided by the Digital Curation Centre. I also conducted a Google search for UK university data policies to discover additional institutions that had adopted Research Data Management requirements. The data does not include 'Roadmaps' to EPSRC compliance.
    Description

    This dataset compares existing research data policies at UK higher education institutions. It consists of 83 cases. Policies were compared on a range of variables, including: policy length in words; whether the policy offers definitions; the length of its definition of "data"; whether it defines institutional support; requires data management plans; states the scope of staff and student coverage; specifies ownership of research outputs; details where external funder rights take precedence; gives guidance on what data and documentation must be retained and for how long; reinforces where research ethics prevent open data; states where data can be accessed; speaks to open data requirements; includes a statement on funding the costs of Research Data Management; and specifies a review period for the policy. The data also includes the institution's year of foundation and a categorical variable grouping institutions by year of foundation, allowing comparison across cohort groups of universities. A further two variables allow for the identification of research-based universities. Data on total research funding and research council funding for the year 2014/2015 was added, along with the number of research staff eligible for the 2014 UK Research Excellence Framework (REF). Also included is the institution's grade point average based on its REF score, using a Times Higher Education (THE) calculated score.

  11. Writing_vs_Tapping(Arabic_English)

    • narcis.nl
    • data.mendeley.com
    Updated Dec 8, 2020
    Cite
    Lee, B (via Mendeley Data) (2020). Writing_vs_Tapping(Arabic_English) [Dataset]. http://doi.org/10.17632/j4mvtjmp5j.1
    Explore at:
    Dataset updated
    Dec 8, 2020
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Lee, B (via Mendeley Data)
    Description

    This dataset reflects the recorded times it took 72 participants to transcribe an Arabic text, and 78 participants to transcribe an English text, both on paper and on a smartphone. (Note that Participant 48 in the English subgroup was identified as an outlier, as times for smartphone entry were over 5 SD from the mean.) All data points are times in seconds.

    It was hypothesized, based on precursor research, that handwriting would be faster than smartphone entry for participants writing in their second language. This hypothesis was supported by the data. Also, the non-normal distributions of the English subgroups (English being the participants' second language) are typical of research based on self-paced actions (in this case, self-paced writing). Both subgroups of the English data were positively skewed.

  12. Data from: Agricultural Science and Technology Indicators: 2018 Global Food...

    • dataverse.harvard.edu
    Updated Mar 19, 2018
    Cite
    Agricultural Science and Technology Indicators (ASTI) (2018). Agricultural Science and Technology Indicators: 2018 Global Food Policy Report Annex Table 1 [Dataset]. http://doi.org/10.7910/DVN/LXRF8B
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 19, 2018
    Dataset provided by
    Harvard Dataverse
    Authors
    Agricultural Science and Technology Indicators (ASTI)
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/LXRF8B

    Time period covered
    2014
    Description

    Policy makers recognize that increased investment in agricultural research is key to increasing agricultural productivity. Despite this, many low- and middle-income countries struggle with capacity and funding constraints in their agricultural research systems. Agricultural Science and Technology Indicators (ASTI), led by the International Food Policy Research Institute (IFPRI) within the portfolio of the CGIAR Research Program on Policies, Institutions, and Markets, works with national, regional, and international partners to collect time series data on the funding, human resource capacity, and outputs of agricultural research in low- and middle-income countries. Based on this information, ASTI produces analysis, capacity-building tools, and outreach products to help facilitate policies for effective and efficient agricultural research.

    Indicators: "Agricultural research" includes government, higher education, and nonprofit agencies, but excludes the private for-profit sector. Total agricultural research spending includes salaries, operating and program costs, and capital investments for all agencies (excluding the private for-profit sector) involved in agricultural research in a country. Expenditures are adjusted for inflation and expressed in 2011 prices. Purchasing power parities (PPPs) measure the relative purchasing power of currencies across countries by eliminating national differences in pricing levels for a wide range of goods; PPPs are relatively stable over time, whereas exchange rates fluctuate considerably. In addition to looking at absolute levels of agricultural research investment and capacity, another way of comparing commitment to agricultural research is to measure research intensity: total agricultural research spending as a percentage of agricultural output (AgGDP). "Total agricultural researchers" includes all research agencies (excluding the private for-profit sector) in a country. Totals are reported in full-time equivalents (FTEs) to account for the proportion of time researchers actually spend on research activities. A critical mass of qualified agricultural researchers is crucial for implementing a viable research agenda, for effectively communicating with stakeholders, and for securing external funding; it is therefore important to look at the share of PhD-qualified researchers. Gender balance in agricultural research is important, given that women researchers offer different insights and perspectives that can help research agencies more effectively address the unique and pressing challenges of female farmers. Age imbalances among research staff should be minimized: having too many PhD-qualified researchers approaching retirement age can jeopardize the continuity of future research. Research involves unavoidable time lags from the point when investments are made until tangible benefits are attained; in the interim, long-term, stable funding is required. The volatility coefficient measures the volatility of agricultural research spending by applying the standard deviation formula to average one-year logarithmic growth of agricultural research spending over a certain period. A value of 0 indicates "no volatility"; countries with values between 0 and 0.1 are classified as having "low volatility"; countries with values between 0.1 and 0.2 are considered to have "moderate volatility"; and countries with values above 0.2 fall into the "high volatility" category.
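The volatility coefficient described above can be sketched as follows; this is an illustrative implementation of the stated definition (standard deviation of one-year logarithmic growth rates), not ASTI's own code, and the spending series is hypothetical:

```python
import math

def volatility(spending):
    """Std. deviation of one-year log growth rates log(S_t / S_{t-1})."""
    growth = [math.log(b / a) for a, b in zip(spending, spending[1:])]
    n = len(growth)
    mean = sum(growth) / n
    return math.sqrt(sum((g - mean) ** 2 for g in growth) / n)

def classify(v):
    """Bands quoted in the description: 0 / (0, 0.1] / (0.1, 0.2] / > 0.2."""
    if v == 0:
        return "no volatility"
    if v <= 0.1:
        return "low"
    if v <= 0.2:
        return "moderate"
    return "high"

# A hypothetical, erratically funded research system:
spending = [100, 104, 98, 150, 90, 160]
print(classify(volatility(spending)))  # swings of +-50% land well above 0.2
```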

  13. South Africa Manufacturing Survey: CG: Average Hours Worked per Factory...

    • ceicdata.com
    Updated Jan 15, 2025
    Cite
    CEICdata.com (2025). South Africa Manufacturing Survey: CG: Average Hours Worked per Factory Worker: Expected [Dataset]. https://www.ceicdata.com/en/south-africa/business-survey-manufacturing-turnover-weighted/manufacturing-survey-cg-average-hours-worked-per-factory-worker-expected
    Explore at:
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    CEIC Data
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2016 - Dec 1, 2018
    Area covered
    South Africa
    Description

    South Africa Manufacturing Survey: CG: Average Hours Worked per Factory Worker: Expected data was reported at -3.000 % Point in Dec 2018. This records a decrease from the previous number of 2.000 % Point for Sep 2018. South Africa Manufacturing Survey: CG: Average Hours Worked per Factory Worker: Expected data is updated quarterly, averaging -8.000 % Point from Mar 1992 (Median) to Dec 2018, with 108 observations. The data reached an all-time high of 19.000 % Point in Jun 1994 and a record low of -43.000 % Point in Mar 2009. South Africa Manufacturing Survey: CG: Average Hours Worked per Factory Worker: Expected data remains active status in CEIC and is reported by Bureau for Economic Research. The data is categorized under Global Database’s South Africa – Table ZA.S011: Business Survey: Manufacturing: Turnover Weighted.

  14. South Africa Manufacturing Survey: GP: Average Hours Worked per Factory...

    • ceicdata.com
    Updated Jan 15, 2025
    Cite
    CEICdata.com (2025). South Africa Manufacturing Survey: GP: Average Hours Worked per Factory Worker: Realized [Dataset]. https://www.ceicdata.com/en/south-africa/business-survey-manufacturing-turnover-weighted-glass-and-non-metallic-minerals/manufacturing-survey-gp-average-hours-worked-per-factory-worker-realized
    Explore at:
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2016 - Dec 1, 2018
    Area covered
    South Africa
    Description

    South Africa Manufacturing Survey: GP: Average Hours Worked per Factory Worker: Realized data was reported at -9.000 % Point in Dec 2018. This records an increase from the previous number of -26.000 % Point for Sep 2018. South Africa Manufacturing Survey: GP: Average Hours Worked per Factory Worker: Realized data is updated quarterly, averaging -7.500 % Point from Mar 1975 (Median) to Dec 2018, with 176 observations. The data reached an all-time high of 47.000 % Point in Jun 1981 and a record low of -61.000 % Point in Mar 2009. South Africa Manufacturing Survey: GP: Average Hours Worked per Factory Worker: Realized data remains active status in CEIC and is reported by Bureau for Economic Research. The data is categorized under Global Database’s South Africa – Table ZA.S016: Business Survey: Manufacturing: Turnover Weighted: Glass and Non Metallic Minerals.

  15. Overall mean values for each variable at each time point.

    • figshare.com
    xls
    Updated Jun 5, 2023
    Cite
    Danica Hendry; Amity Campbell; Anne Smith; Luke Hopper; Leon Straker; Peter O’Sullivan (2023). Overall mean values for each variable at each time point. [Dataset]. http://doi.org/10.1371/journal.pone.0268444.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Danica Hendry; Amity Campbell; Anne Smith; Luke Hopper; Leon Straker; Peter O’Sullivan
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overall mean values for each variable at each time point.

  16. Earthquake Early Warning Dataset

    • figshare.com
    txt
    Updated Nov 20, 2019
    Cite
    Kevin Fauvel; Daniel Balouek-Thomert; Diego Melgar; Pedro Silva; Anthony Simonet; Gabriel Antoniu; Alexandru Costan; Véronique Masson; Manish Parashar; Ivan Rodero; Alexandre Termier (2019). Earthquake Early Warning Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.9758555.v3
    Explore at:
    Available download formats: txt
    Dataset updated
    Nov 20, 2019
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kevin Fauvel; Daniel Balouek-Thomert; Diego Melgar; Pedro Silva; Anthony Simonet; Gabriel Antoniu; Alexandru Costan; Véronique Masson; Manish Parashar; Ivan Rodero; Alexandre Termier
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is composed of GPS station (1 file) and seismometer (1 file) multivariate time series (MTS) data associated with three types of events: normal activity, medium earthquakes, and large earthquakes.

    Files Format: plain text
    Files Creation Date: 02/09/2019
    Data Type: multivariate time series
    Number of Dimensions: 3 (east-west, north-south and up-down)
    Time Series Length: 60 (one data point per second)
    Period: 2001-2018
    Geographic Location: -62 ≤ latitude ≤ 73, -179 ≤ longitude ≤ 25

    Data Collection
    - Large Earthquakes: GPS station and seismometer data are obtained from the archive [1], which includes 29 large earthquakes. To allow a homogeneous labeling method, the dataset is limited to the data available from the American Incorporated Research Institutions for Seismology (IRIS), leaving 14 of the 29 large earthquakes.
      > GPS stations (14 events): High Rate Global Navigation Satellite System (HR-GNSS) displacement data (1-5 Hz). Raw observations have been processed with a precise point positioning algorithm [2] to obtain displacement time series in geodetic coordinates. Undifferenced GNSS ambiguities were fixed to integers to improve accuracy, especially over the low-frequency band of tens of seconds [3]. Coordinates were then rotated to a local east-west, north-south and up-down system.
      > Seismometers (14 events): strong-motion data (1-10 Hz). Channel files specify the units, sample rates, and gains of each channel.
    - Normal Activity / Medium Earthquakes:
      > GPS stations (255 events: 255 normal activity): HR-GNSS normal-activity displacement data (1 Hz). GPS data outside of large-earthquake periods can be considered normal activity (noise). Data are downloaded from [4], an archive maintained by the University of Oregon that stores a representative extract of GPS noise: real-time three-component positions for 240 stations in the western U.S., from California to Alaska, spanning from October 2018 to the present day. The raw GPS data (observations of phase and range to visible satellites) are processed with an algorithm called FastLane [5] and converted to 1 Hz sampled positions. Normal-activity MTS are randomly sampled from the archive to match the number of seismometer events and to keep the ratio of large-earthquake MTS to normal-activity MTS above 30%, avoiding a class imbalance issue.
      > Seismometers (255 events: 170 normal activity, 85 medium earthquakes): strong-motion data (1-10 Hz). Time series data were collected with the International Federation of Digital Seismograph Networks (FDSN) client available in the Python package ObsPy [6]. Channel information specifies the units, sample rates, and gains of each channel. The number of medium earthquakes is set by the ratio of medium to large earthquakes during the past 10 years in the region. A ratio above 30% is kept between the number of 60-second MTS corresponding to earthquakes (medium + large) and the total number of MTS (earthquakes + normal activity) to prevent a class imbalance issue.

    The number of GPS stations and seismometers for each event varies (tens to thousands).

    Preprocessing:
    - Conversion (seismometers): data are available as a digital signal specific to each sensor; each instrument's digital signal is therefore converted to its physical signal (acceleration) to obtain comparable seismometer data.
    - Aggregation (GPS stations and seismometers): data aggregation by second (mean).

    Variables:
    - event_id: unique ID of an event. The dataset is composed of 269 events.
    - event_time: timestamp of the event occurrence
    - event_magnitude: magnitude of the earthquake (Richter scale)
    - event_latitude: latitude of the event recorded (degrees)
    - event_longitude: longitude of the event recorded (degrees)
    - event_depth: distance below Earth's surface where the earthquake happened (km)
    - mts_id: unique multivariate time series ID. The dataset is composed of 2,072 MTS from GPS stations and 13,265 MTS from seismometers.
    - station: sensor name (GPS station or seismometer)
    - station_latitude: sensor latitude (degrees)
    - station_longitude: sensor longitude (degrees)
    - timestamp: timestamp of the multivariate time series
    - dimension_E: east-west component of the sensor signal (cm/s/s)
    - dimension_N: north-south component of the sensor signal (cm/s/s)
    - dimension_Z: up-down component of the sensor signal (cm/s/s)
    - label: label associated with the event. There are 3 labels: normal activity (GPS stations: 255 events, seismometers: 170 events) / medium earthquake (GPS stations: 0 events, seismometers: 85 events) / large earthquake (GPS stations: 14 events, seismometers: 14 events).

    EEW relies on detecting the primary wave (P-wave) before the damaging secondary wave arrives. P-waves follow a propagation model (IASP91 [7]); each MTS is therefore labeled based on the P-wave arrival time at each sensor (seismometer or GPS station) calculated with the propagation model.

    [1] Ruhl, C. J., Melgar, D., Chung, A. I., Grapenthin, R. and Allen, R. M. 2019. Quantifying the value of real-time geodetic constraints for earthquake early warning using a global seismic and geodetic data set. Journal of Geophysical Research: Solid Earth 124:3819-3837.
    [2] Geng, J., Bock, Y., Melgar, D., Crowell, B. W., and Haase, J. S. 2013. A new seismogeodetic approach applied to GPS and accelerometer observations of the 2012 Brawley seismic swarm: Implications for earthquake early warning. Geochemistry, Geophysics, Geosystems 14:2124-2142.
    [3] Geng, J., Jiang, P., and Liu, J. 2017. Integrating GPS with GLONASS for high-rate seismogeodesy. Geophysical Research Letters 44:3139-3146.
    [4] http://tunguska.uoregon.edu/rtgnss/data/cwu/mseed/
    [5] Melgar, D., Melbourne, T., Crowell, B., Geng, J., Szeliga, W., Scrivner, C., Santillan, M. and Goldberg, D. 2019. Real-Time High-Rate GNSS Displacements: Performance Demonstration During the 2019 Ridgecrest, CA Earthquakes (Version 1.0) [Data set]. Zenodo.
    [6] https://docs.obspy.org/packages/obspy.clients.fdsn.html
    [7] Kennett, B. L. N. 1991. IASPEI 1991 Seismological Tables. Terra Nova 3:122-122.
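The per-second mean aggregation described under Preprocessing can be sketched in plain Python; the sample layout and readings below are hypothetical, not the dataset's actual schema:

```python
from collections import defaultdict

# Hedged sketch of the "aggregation by second (mean)" preprocessing step.
# Input: (timestamp_in_seconds, value) pairs; output: one mean per whole second.

def aggregate_by_second(samples):
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[int(t)].append(v)  # group readings by their whole second
    return {sec: sum(vs) / len(vs) for sec, vs in sorted(buckets.items())}

# A hypothetical 5 Hz sensor down-aggregated to 1 Hz:
readings = [(0.0, 1.0), (0.2, 3.0), (0.4, 2.0), (1.1, 4.0), (1.6, 6.0)]
print(aggregate_by_second(readings))  # {0: 2.0, 1: 5.0}
```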

  17. CYGNSS Level 2 Ocean Surface Heat Flux Science Data Record Version 1.0 -...

    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    • data.nasa.gov
    Updated Mar 20, 2025
    + more versions
    Cite
    nasa.gov (2025). CYGNSS Level 2 Ocean Surface Heat Flux Science Data Record Version 1.0 - Dataset - NASA Open Data Portal [Dataset]. https://data.staging.idas-ds1.appdat.jsc.nasa.gov/dataset/cygnss-level-2-ocean-surface-heat-flux-science-data-record-version-1-0-f739e
    Explore at:
    Dataset updated
    Mar 20, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    This dataset contains the Version 1.0 CYGNSS Level 2 Ocean Surface Heat Flux Science Data Record, which provides the time-tagged and geolocated ocean surface heat flux parameters with 25x25 kilometer footprint resolution from the Delay Doppler Mapping Instrument (DDMI) aboard the CYGNSS satellite constellation. The reported sample locations are determined by the specular points corresponding to the Delay Doppler Maps (DDMs). One netCDF-4 data file is produced each day (each file containing data from a combination of up to 8 unique CYGNSS spacecraft) with a latency of approximately 1 to 2 months from the last recorded measurement time. Version 1.0 represents the first release. The Cyclone Global Navigation Satellite System (CYGNSS), launched on 15 December 2016, is a NASA Earth System Science Pathfinder Mission whose purpose is to collect the first frequent space-based measurements of surface wind speeds in the inner core of tropical cyclones. Made up of a constellation of eight micro-satellites, the CYGNSS observatories provide nearly gap-free Earth coverage with a mean revisit time of seven hours and a median revisit time of three hours. The 35 degree orbital inclination allows CYGNSS to measure ocean surface winds between approximately 38 degrees North and 38 degrees South latitude using all-weather Global Positioning System (GPS) L-band ocean surface reflectometry, which penetrates clouds and heavy precipitation. This dataset uses the Coupled Ocean-Atmosphere Response Experiment (COARE) algorithm to estimate the latent and sensible heat fluxes and their respective transfer coefficients. Although COARE was originally intended for low to moderate wind speeds, the version used for this product, COARE 3.5, has been verified against direct in situ flux measurements for wind speeds up to 25 m/s.
    As CYGNSS does not provide air/sea temperature, humidity, surface pressure or density, the producer of this dataset obtains these values from the NASA Modern-Era Retrospective Analysis for Research and Applications Version 2 (MERRA-2), which uses data assimilation to combine all available in situ and satellite observations with an initial estimate of the atmospheric state provided by a global atmospheric model. Since MERRA-2 is only updated at monthly intervals, this heat flux dataset is likewise updated monthly to reflect the latest MERRA-2 data, giving a measurement latency, with respect to the CYGNSS observables, of 1 to 2 months. The data from this release compare well with in situ buoy data, including: Kuroshio Extension Observatory (KEO), National Data Buoy Center (NDBC), Ocean Sustained Interdisciplinary Time-series Environment observation System (OceanSITES), Prediction and Research Moored Array in the Tropical Atlantic (PIRATA), Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA), and the Tropical Atmosphere Ocean (TAO) array. As this marks only the first data release, future work is expected to provide comparisons and validation with various field campaigns (e.g., PISTON, CAMP2Ex) as well as more buoy data, especially at higher flux estimates.
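As a rough illustration of the bulk aerodynamic approach that flux algorithms such as COARE build on, here is a greatly simplified sketch: constant, assumed transfer coefficients, no stability correction, and made-up input values. It is not COARE 3.5 itself.

```python
# Illustrative bulk aerodynamic heat flux estimates (W/m^2). All constants
# and inputs below are assumptions for illustration only.

RHO = 1.22      # air density, kg/m^3
CP = 1004.0     # specific heat of air, J/(kg K)
LV = 2.5e6      # latent heat of vaporization, J/kg
CH = 1.1e-3     # sensible heat transfer coefficient (assumed constant)
CE = 1.1e-3     # latent heat transfer coefficient (assumed constant)

def sensible_heat_flux(wind, t_sea, t_air):
    """H = rho * cp * C_H * U * (Ts - Ta)."""
    return RHO * CP * CH * wind * (t_sea - t_air)

def latent_heat_flux(wind, q_sea, q_air):
    """LE = rho * Lv * C_E * U * (qs - qa), q in kg water / kg air."""
    return RHO * LV * CE * wind * (q_sea - q_air)

# Warm sea surface under a 10 m/s wind:
print(sensible_heat_flux(10.0, 28.0, 26.5))
print(latent_heat_flux(10.0, 0.022, 0.016))
```

The full COARE 3.5 algorithm replaces the constant coefficients with wind- and stability-dependent ones, which is what the MERRA-2 inputs above feed into.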

  18. Chapter 10 of the Working Group I Contribution to the IPCC Sixth Assessment...

    • data-search.nerc.ac.uk
    Updated Sep 9, 2023
    Cite
    (2023). Chapter 10 of the Working Group I Contribution to the IPCC Sixth Assessment Report - data for Figure 10.10 (v20220622) [Dataset]. https://data-search.nerc.ac.uk/geonetwork/srv/search?keyword=Chapter%2010
    Explore at:
    Dataset updated
    Sep 9, 2023
    Description

    Data for Figure 10.10 from Chapter 10 of the Working Group I (WGI) Contribution to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6). Figure 10.10 shows observed and projected changes in austral summer (December to February) mean precipitation in Global Precipitation Climatology Centre (GPCC), Climatic Research Unit Time-Series (CRU TS) and 100 members of the Max-Planck-Institut für Meteorologie Earth-System Model (MPI-ESM). --------------------------------------------------- How to cite this dataset --------------------------------------------------- When citing this dataset, please include both the data citation below (under 'Citable as') and the following citation for the report component from which the figure originates: Doblas-Reyes, F.J., A.A. Sörensson, M. Almazroui, A. Dosio, W.J. Gutowski, R. Haarsma, R. Hamdi, B. Hewitson, W.-T. Kwon, B.L. Lamptey, D. Maraun, T.S. Stephenson, I. Takayabu, L. Terray, A. Turner, and Z. Zuo, 2021: Linking Global to Regional Climate Change. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 1363–1512, doi:10.1017/9781009157896.012. --------------------------------------------------- Figure subpanels --------------------------------------------------- The figure has two panels, with data provided for both panels. Panel (a) consists of two maps, panel (b) shows multiple timeseries and boxplots.
--------------------------------------------------- List of data provided --------------------------------------------------- The dataset contains data of relative precipitation anomalies over 1950-2100 with respect to 1995-2014 average for global, S.E.South-America, Sao Paulo and Buenos Aires for: - Observational data (GPCC and CRU TS) - Model data (100 runs of MPI-ESM) --------------------------------------------------- Data provided in relation to figure --------------------------------------------------- Panel (a): - Data files: Modelled precipitation rate OLS linear trends between 2015-2070 with respect to 1995-2014 average over S.E. South America region, from left to right (MPI-ESM member with min (driest) and max (wettest) trends): Fig_10_10_panel-a_mapplot_trend_SES_DJF_MPI-GE_min_single-MultiModelMean_trend-min-median-max.nc, Fig_10_10_panel-a_mapplot_trend_SES_DJF_MPI-GE_max_single-MultiModelMean_trend-min-median-max.nc Panel (b): - Data files: Precipitation rate anomalies 1950-2100 with respect to 1995-2014 average for the global mean, S.E.South-America mean, Sao Paulo mean and Buenos Aires mean of GPCC (dark blue), CRU (dark brown), members of the MPI-ESM (grey), the MPI-ESM member with the driest (brown) and wettest (green) trend: Fig_10_10_panel-b_timeseries_global.csv, Fig_10_10_panel-b_timeseries_SES.csv, Fig_10_10_panel-b_timeseries_SaoPaulo.csv, Fig_10_10_panel-b_timeseries_BuenosAires.csv - Data files: Underlying data points of the boxplot showing MPI-ESM modelled precipitation rate OLS linear trends over all members between 2015-2070 with respect to 1995-2014 average for the global mean, S.E.South-America mean, Sao Paulo mean and Buenos Aires mean: Fig_10_10_panel-b_boxplot_BuenosAires.csv, Fig_10_10_panel-b_boxplot_global.csv, Fig_10_10_panel-b_boxplot_SaoPaulo.csv, Fig_10_10_panel-b_boxplot_SES.csv; OLS - ordinary least squares regression. 
--------------------------------------------------- Notes on reproducing the figure from the provided data --------------------------------------------------- The code for ESMValTool is provided. --------------------------------------------------- Sources of additional information --------------------------------------------------- The following weblinks are provided in the Related Documents section of this catalogue record: - Link to the figure on the IPCC AR6 website - Link to the report component containing the figure (Chapter 10) - Link to the Supplementary Material for Chapter 10, which contains details on the input data used in Table 10.SM.11 - Link to the code for the figure, archived on Zenodo.
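The OLS linear trends used in both panels can be sketched as follows; the anomaly series below is synthetic, not the MPI-ESM data:

```python
# Minimal sketch of an OLS linear trend over 2015-2070 anomalies
# (slope = cov(x, y) / var(x)); the values are made up for illustration.

def ols_slope(years, values):
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(years, values))
    var = sum((x - mx) ** 2 for x in years)
    return cov / var

years = list(range(2015, 2071))
drying = [-0.02 * (y - 2015) for y in years]  # a steadily drying member
print(ols_slope(years, drying))  # recovers the built-in slope of -0.02/yr
```

Applying this per ensemble member and ranking the slopes identifies the "driest" and "wettest" members shown in panel (a).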

  19. POWER Annual Meteorology

    • ai-climate-hackathon-global-community.hub.arcgis.com
    • climat.esri.ca
    • +2more
    Updated Dec 1, 2021
    + more versions
    Cite
    NASA ArcGIS Online (2021). POWER Annual Meteorology [Dataset]. https://ai-climate-hackathon-global-community.hub.arcgis.com/datasets/0974a33b537f46f495e328b85a229fec
    Explore at:
    Dataset updated
    Dec 1, 2021
    Dataset authored and provided by
    NASA ArcGIS Online
    Area covered
    Description

    The Prediction Of Worldwide Energy Resource (POWER) Project gathers NASA Earth Observation data and parameters related to the fields of surface solar irradiance and meteorology to serve the public in several free, easy-to-access, and easy-to-use methods. POWER helps communities become resilient amid observed climate variability by improving data accessibility, aiding research in renewable energy development, building energy efficiency, and agriculture sustainability. POWER is funded through the NASA Earth Action Program within the Earth Science Mission Directorate at NASA Langley Research Center (LaRC).

    This annual meteorology service provides time-enabled global Analysis Ready Data (ARD) parameters from 1981 to 2023 for POWER's communities.

    Time Interval: Annual
    Time Extent: 1981/01/01 to 2023/12/31
    Time Standard: Local Sidereal Time (LST)
    Grid Size: 0.5 x 0.5 Degree
    Projection: GCS WGS84
    Extent: Global
    Source: NASA Prediction Of Worldwide Energy Resources (POWER)
    For questions or issues please email: larc-power-project@mail.nasa.gov

    Meteorology Data Sources: NASA's GMAO MERRA-2 archive (Jan. 1, 1981 - Dec. 31, 2023)

    Meteorology Data Parameters:
    - CDD10 (Cooling Degree Days Above 10 C): The daily accumulation of degrees when the daily mean temperature is above 10 degrees Celsius.
    - CDD18_3 (Cooling Degree Days Above 18.3 C): The daily accumulation of degrees when the daily mean temperature is above 18.3 degrees Celsius.
    - DISPH (Zero Plane Displacement Height): The height at which the mean velocity is zero due to large obstacles such as buildings/canopy.
    - EVLAND (Evaporation Land): The evaporation over land at the surface of the earth.
    - EVPTRNS (Evapotranspiration Energy Flux): The evapotranspiration energy flux at the surface of the earth.
    - FROST_DAYS (Frost Days): A frost day occurs when the 2m temperature cools to the dew point temperature and both are less than 0 C or 32 F.
    - GWETTOP (Surface Soil Wetness): The percent of soil moisture; a value of 0 indicates a completely water-free soil and a value of 1 indicates a completely saturated soil, where surface is the layer from the surface 0 cm to 5 cm below grade.
    - HDD10 (Heating Degree Days Below 10 C): The daily accumulation of degrees when the daily mean temperature is below 10 degrees Celsius.
    - HDD18_3 (Heating Degree Days Below 18.3 C): The daily accumulation of degrees when the daily mean temperature is below 18.3 degrees Celsius.
    - PBLTOP (Planetary Boundary Layer Top Pressure): The pressure at the top of the planetary boundary layer.
    - PRECSNOLAND_SUM (Snow Precipitation Land Sum): The snow precipitation sum over land at the surface of the earth.
    - PRECTOTCORR_SUM (Precipitation Corrected Sum): The bias-corrected sum of total precipitation at the surface of the earth.
    - PS (Surface Pressure): The average of surface pressure at the surface of the earth.
    - QV10M (Specific Humidity at 10 Meters): The ratio of the mass of water vapor to the total mass of air at 10 meters (kg water/kg total air).
    - QV2M (Specific Humidity at 2 Meters): The ratio of the mass of water vapor to the total mass of air at 2 meters (kg water/kg total air).
    - RH2M (Relative Humidity at 2 Meters): The ratio of actual partial pressure of water vapor to the partial pressure at saturation, expressed in percent.
    - T10M (Temperature at 10 Meters): The air (dry bulb) temperature at 10 meters above the surface of the earth.
    - T2M (Temperature at 2 Meters): The average air (dry bulb) temperature at 2 meters above the surface of the earth.
    - T2MDEW (Dew/Frost Point at 2 Meters): The dew/frost point temperature at 2 meters above the surface of the earth.
    - T2MWET (Wet Bulb Temperature at 2 Meters): The adiabatic saturation temperature, which can be measured by a thermometer covered in a water-soaked cloth over which air is passed, at 2 meters above the surface of the earth.
    - TO3 (Total Column Ozone): The total amount of ozone in a column extending vertically from the earth's surface to the top of the atmosphere.
    - TQV (Total Column Precipitable Water): The total atmospheric water vapor contained in a vertical column of unit cross-sectional area extending from the surface to the top of the atmosphere.
    - TS (Earth Skin Temperature): The average temperature at the earth's surface.
    - WD10M (Wind Direction at 10 Meters): The average of the wind direction at 10 meters above the surface of the earth.
    - WD2M (Wind Direction at 2 Meters): The average of the wind direction at 2 meters above the surface of the earth.
    - WD50M (Wind Direction at 50 Meters): The average of the wind direction at 50 meters above the surface of the earth.
    - WS10M (Wind Speed at 10 Meters): The average of wind speed at 10 meters above the surface of the earth.
    - WS2M (Wind Speed at 2 Meters): The average of wind speed at 2 meters above the surface of the earth.
    - WS50M (Wind Speed at 50 Meters): The average of wind speed at 50 meters above the surface of the earth.
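The degree-day parameters (CDD10, HDD10, and so on) follow the accumulation rule quoted above; a minimal sketch with hypothetical daily mean temperatures:

```python
# Illustrative degree-day accumulation (not POWER's production code):
# for each day, add the amount by which the daily mean exceeds (cooling)
# or falls below (heating) the base temperature.

def cooling_degree_days(daily_means, base=10.0):
    return sum(max(t - base, 0.0) for t in daily_means)

def heating_degree_days(daily_means, base=10.0):
    return sum(max(base - t, 0.0) for t in daily_means)

# A hypothetical week of daily mean temperatures (deg C):
week = [8.0, 12.5, 15.0, 9.5, 10.0, 18.0, 6.0]
print(cooling_degree_days(week))  # 2.5 + 5.0 + 8.0 = 15.5
print(heating_degree_days(week))  # 2.0 + 0.5 + 4.0 = 6.5
```

Passing `base=18.3` gives the CDD18_3/HDD18_3 variants of the same calculation.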

  20. Data from: Learning of probabilistic punishment as a model of anxiety...

    • data.niaid.nih.gov
    • zenodo.org
    • +1more
    zip
    Updated Sep 28, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    David Jacobs; Madeleine Allen; Junchol Park; Bita Moghaddam (2022). Learning of probabilistic punishment as a model of anxiety produces changes in action but not punisher encoding in the dmPFC and VTA [Dataset]. http://doi.org/10.5061/dryad.9s4mw6mkn
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 28, 2022
    Dataset provided by
    Oregon Health & Science University
    Janelia Research Campus
    Authors
    David Jacobs; Madeleine Allen; Junchol Park; Bita Moghaddam
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Previously, we developed a novel model for anxiety during motivated behavior by training rats to perform a task where actions executed to obtain a reward were probabilistically punished and observed that after learning, neuronal activity in the ventral tegmental area (VTA) and dorsomedial prefrontal cortex (dmPFC) represent the relationship between action and punishment risk (Park & Moghaddam, 2017). Here we used male and female rats to expand on the previous work by focusing on neural changes in the dmPFC and VTA that were associated with the learning of probabilistic punishment, and anxiolytic treatment with diazepam after learning. We find that adaptive neural responses of dmPFC and VTA during the learning of anxiogenic contingencies are independent from the punisher experience and occur primarily during the peri-action and reward period. Our results also identify peri-action ramping of VTA neural calcium activity, and VTA-dmPFC correlated activity, as potential markers for the anxiolytic properties of diazepam. Methods Subjects Male and female Long-Evans (bred in house n=8) and Sprague-Dawley (Charles River n=5) rats were used. Animals were pair-housed on a reverse 12 h:12 h light/dark cycle. All experimental procedures and behavioral testing were performed during the dark (active) cycle. All studies included both strains of male (n=7) and female (n=6) rats. All experimental procedures were approved by the OHSU Institutional Animal Use and Care Committee and were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Initial Training & Punishment Risk Task (PRT) The PRT follows previously published methods (Park & Moghaddam, 2017; Chowdhury et al., 2019). Rats were trained to make an instrumental response to receive a 45-mg sugar pellet (BioServe) under fixed ratio one schedule of reinforcement (FR1). The availability of the nosepoke for reinforcement was signaled by a 5-s tone. 
    After at least three FR1 training sessions, PRT sessions began. Each PRT session consisted of three blocks of 30 trials. The action-reward contingency remained constant, with one nosepoke resulting in one sugar pellet. However, there was a probability of receiving a footshock (300 ms electrical footshock of 0.3 mA) after the FR1 action, which increased across blocks (0%, 6%, and 10% in blocks 1, 2, and 3, respectively). To minimize generalization of the action-punishment contingency, blocks were organized in ascending footshock probability with 2-min timeouts between blocks. Punishment trials were pseudo-randomly assigned, with the first footshock occurring within the first five trials. Sessions were terminated if not completed within 180 min.

    Fiber Photometry Analysis

    Peri-event analysis: Signals from the 465 (GCaMP6s) and 560 (tdTomato) streams were processed in Python (version 3.7.4) using custom-written scripts similar to previously published methods (Jacobs & Moghaddam, 2020). Briefly, the 465 and 560 streams were low-pass filtered at 3 Hz using a Butterworth filter and subsequently segmented based on the start and end of a given trial. The 560 signal was fitted to the 465 signal using a least-squares first-order polynomial and subtracted from the 465 signal to yield the change in fluorescence: ΔF/F = (465 signal − fitted 560 signal) / fitted 560 signal. Peri-event z-scores were computed by comparing the ΔF/F after the behavioral event to the baseline ΔF/F 4-2 s prior to the given epoch. To investigate potentially different neural calcium responses to receiving the footshock versus anticipating it, punished (i.e., shock) trials and unpunished trials were separated. Trials with a z-score value > 40 were excluded; of approximately 3,000 trials analyzed, this occurred on < 1% of trials.

    Area under the curve (AUC) analyses: To represent individual data, we calculated AUCs for each subject.
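The peri-event processing described above (3 Hz low-pass Butterworth filter, least-squares fit of the tdTomato control channel to the GCaMP channel, ΔF/F, and baseline z-scoring) can be sketched as follows. This is an illustrative reconstruction, not the authors' custom scripts; the function name, sampling rate, and baseline window handling are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dff_zscore(sig465, sig560, fs, event_idx):
    """Sketch of motion-corrected dF/F and a peri-event z-score.

    sig465: GCaMP6s stream; sig560: tdTomato control stream;
    fs: sampling rate (Hz); event_idx: sample index of the behavioral event.
    """
    # 3 Hz low-pass Butterworth filter on both streams
    b, a = butter(2, 3 / (fs / 2), btype="low")
    s465 = filtfilt(b, a, sig465)
    s560 = filtfilt(b, a, sig560)
    # Least-squares first-order polynomial fit of the control (560) stream
    # to the signal (465) stream, then subtraction to remove shared artifacts
    slope, intercept = np.polyfit(s560, s465, 1)
    fitted = slope * s560 + intercept
    dff = (s465 - fitted) / fitted
    # z-score the post-event trace against the 4-2 s pre-event baseline
    base = dff[event_idx - int(4 * fs): event_idx - int(2 * fs)]
    z = (dff[event_idx:] - base.mean()) / base.std()
    return dff, z
```

Trials whose peri-event z-score exceeded the stated cutoff (> 40) would then simply be dropped before averaging.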
    To quantify peri-cue and peri-action changes, we calculated a change or summation score between the 1 s before (pre-event) and 1 s after (post-event) cue onset or action execution. For the reward period, we calculated a change score by comparing the 2 s after reward delivery to the 1 s prior to reward delivery. For punished trials, the response to footshock was calculated as the change from the 1 s before footshock delivery to the 1 s following footshock. Outliers were removed using GraphPad Prism's ROUT method (Q = 1%; Motulsky & Brown, 2006), which removed only three data points from the analysis.

    Time-lagged cross-correlation analysis: Cross-correlation analysis has been used to identify networks from simultaneously measured fiber photometry signals (Sych et al., 2019). For rats with properly placed fibers in the dmPFC and VTA, correlations between photometry signals arising in the VTA and dmPFC were calculated for the peri-action, peri-footshock, and peri-reward periods using the z-score normalized data. The following equation was used to normalize the covariance score at each time lag to yield a correlation coefficient between -1 and 1:

    Coef = Cov / (s1 * s2 * n)

    where Cov is the covariance from the dot product of the two signals at a given time lag, s1 and s2 are the standard deviations of the dmPFC and VTA streams, respectively, and n is the number of samples. An entire cross-correlation function was derived for each trial and epoch.

    Comparison to electrophysiology results: Fiber photometry data for the third PRT session were compared to the average of the 50-ms binned single-unit data (see Figure 4 of Park & Moghaddam, 2017); this is the session from which the electrophysiology data were collected. To overlay data from the two techniques, data were low-pass filtered at 3 Hz and photometry data were downsampled to 20 Hz (to match the 50-ms binning).
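The normalized time-lagged cross-correlation described above, Coef = Cov / (s1 * s2 * n), can be sketched as below. This is an illustrative implementation under stated assumptions, not the authors' code; the function name and lag range are hypothetical.

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Sketch of a normalized time-lagged cross-correlation.

    x, y: z-score normalized dmPFC and VTA traces of equal length.
    Returns coefficients for lags -max_lag..+max_lag.
    """
    n = len(x)
    s1, s2 = x.std(), y.std()
    coefs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            # negative lag: x(t) paired with y(t + |lag|)
            cov = np.dot(x[:lag], y[-lag:])
        elif lag > 0:
            # positive lag: x(t + lag) paired with y(t)
            cov = np.dot(x[lag:], y[:-lag])
        else:
            cov = np.dot(x, y)
        # Coef = Cov / (s1 * s2 * n) bounds the coefficient within [-1, 1]
        coefs.append(cov / (s1 * s2 * n))
    return np.array(coefs)
```

The peak of this function and its lag are the quantities later compared across risk blocks and sessions.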
    Data from both streams were then min-max normalized between 0 and 1 at the corresponding cue and action+reward epochs. To assess the similarity of the two signals, we performed a Pearson correlation analysis between the normalized single-unit and fiber photometry data for the cue or action+reward epochs at each risk block, as well as between randomly shuffled photometry signals and the single-unit response as a control. For significant Pearson correlations, we performed cross-correlation analysis (see above) to investigate whether the photometry signal lagged behind the electrophysiology signal, given the slower kinetics of GCaMP6 relative to single-unit recordings (Chen et al., 2013).

    Statistical Analysis

    For FR1 training, trial completion was measured as the number of food pellets earned. Data were assessed for the first 3-4 training sessions. Action and reward latencies were defined as the time from cue onset to action execution and from food delivery until retrieval, respectively. Values were assessed using a mixed-effects model with training session as a factor, and post-hoc tests were performed using the Bonferroni correction where appropriate.

    For the PRT, trial completion was measured as the percentage of completed trials (of the 30 possible) in each block. Action latencies were defined as the time from cue onset to action execution. Data were analyzed using a two-way repeated-measures ANOVA or a mixed-effects model. Because some data were missing for non-random reasons (e.g., failure to complete trials in response to punishment risk), we took the average of the risk blocks (blocks 2 and 3) and compared it to the no-risk block (block 1) to permit repeated-measures analysis; a mixed-effects model was used when data were missing at random. Risk and session were used as factors, and post-hoc tests were performed using the Bonferroni correction where appropriate. When only two groups were compared, a paired t-test or Wilcoxon test was performed after checking the normality assumption with the Shapiro-Wilk test.
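The overlay-and-correlate step (downsampling photometry to the 20 Hz single-unit binning, min-max normalizing both traces, and computing a Pearson correlation with a shuffled control) might look like the following sketch. The function names and the photometry sampling rate are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def minmax(x):
    # scale a trace into [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def compare_signals(photometry, single_unit, fs_photo, rng=None):
    """Sketch: correlate a photometry trace with 20 Hz binned unit data."""
    if rng is None:
        rng = np.random.default_rng(0)
    step = int(fs_photo / 20)              # decimation factor down to 20 Hz
    photo_ds = minmax(photometry[::step])  # downsample, then normalize
    unit = minmax(single_unit[: len(photo_ds)])
    r, p = pearsonr(photo_ds, unit)
    # shuffled-photometry control correlation
    r_shuf, _ = pearsonr(rng.permutation(photo_ds), unit)
    return r, p, r_shuf
```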
    To assess changes in neural calcium activity, we used a permutation-based approach as outlined in Jean-Richard-dit-Bressel et al. (2020), implemented in Python (version 3). The average response of each subject at a given timepoint in the cue, action, or reward delivery period was compared to either the first PRT session or the saline session. For each timepoint, a null distribution was generated by shuffling the data, randomly splitting them into two groups, and calculating the mean difference between groups. This was done 1,000 times per timepoint, and a p-value was obtained as the proportion of values in the null distribution of mean differences that were greater than or equal to the observed difference in the unshuffled data (one-tailed for comparisons to 0% risk and FR1 data, two-tailed for all other comparisons). To control for multiple comparisons, we used a consecutive-threshold approach based on the 3 Hz low-pass filter window (Jean-Richard-dit-Bressel et al., 2020; Pascoli et al., 2018), in which a p-value < 0.05 was required for 14 consecutive samples to be considered significant.

    To assess AUC changes in photometry data, we compared all risk blocks and all sessions using an ANOVA with risk block and session as factors. Because not all subjects completed both the learning and diazepam experiments, we used an ordinary two-way ANOVA. Significant main effects and interactions were assessed with post-hoc Bonferroni multiple-comparison tests. To assess changes in correlated activity as a function of risk or session, we took the peak and 95% confidence interval of the overall cross-correlation function; these values were compared by a two-way ANOVA with risk and session as factors, with post-hoc Bonferroni correction. Other than the permutation tests, all statistical tests were performed in GraphPad Prism (version 8) with an α of 0.05. Results for all statistical tests and corresponding figures can be found in Table 1 or the supplemental figures.
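The permutation test with the consecutive-threshold correction described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions (one-tailed test, subjects x timepoints arrays, hypothetical function names), not the authors' scripts.

```python
import numpy as np

def perm_pvals(a, b, n_perm=1000, rng=None):
    """Sketch: per-timepoint one-tailed permutation test.

    a, b: (subjects x timepoints) arrays for the two conditions.
    Returns a p-value per timepoint.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    obs = a.mean(axis=0) - b.mean(axis=0)      # observed mean difference
    pooled = np.vstack([a, b])
    count = np.zeros(a.shape[1])
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # shuffle subjects across groups
        null = perm[: len(a)].mean(axis=0) - perm[len(a):].mean(axis=0)
        count += null >= obs                    # null at least as large as observed
    return count / n_perm

def consecutive_sig(p, alpha=0.05, run=14):
    """Require `run` consecutive samples with p < alpha to call significance."""
    sig = p < alpha
    streak = 0
    out = np.zeros_like(sig)
    for i, s in enumerate(sig):
        streak = streak + 1 if s else 0
        if streak >= run:
            out[i - run + 1: i + 1] = True
    return out
```

The 14-sample requirement corresponds to the consecutive-threshold window stated in the text; a two-tailed variant would compare |null| against |obs| instead.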
    Excluded Data

    Outliers from the latency analysis were removed when a data point was > 5 SDs above the mean across all blocks. This removed one data point from the analysis. In FR1 studies, data from one rat's third and fourth sessions were excluded because
