Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Citation metrics are widely used and misused. We have created a publicly available database of top-cited scientists that provides standardized information on citations, h-index, co-authorship-adjusted hm-index, citations to papers in different authorship positions, and a composite indicator (c-score). Separate data are shown for career-long and, separately, for single recent year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given, and data on retracted papers (based on the Retraction Watch database), as well as citations to/from retracted papers, have been added in the most recent iteration. Scientists are classified into 22 scientific fields and 174 sub-fields according to the standard Science-Metrix classification. Field- and subfield-specific percentiles are also provided for all scientists with at least 5 papers. Career-long data are updated to end-of-2023, and single recent year data pertain to citations received during calendar year 2023. The selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field. This version (7) is based on the August 1, 2024 snapshot from Scopus, updated to the end of citation year 2023. This work uses Scopus data. Calculations were performed using all Scopus author profiles as of August 1, 2024. If an author is not on the list, it is simply because the composite indicator value was not high enough to appear on the list; it does not mean that the author does not do good work.
PLEASE ALSO NOTE THAT THE DATABASE HAS BEEN PUBLISHED IN AN ARCHIVAL FORM AND WILL NOT BE CHANGED. The published version reflects Scopus author profiles at the time of calculation. We thus advise authors to ensure that their Scopus profiles are accurate. REQUESTS FOR CORRECTIONS OF THE SCOPUS DATA (INCLUDING CORRECTIONS IN AFFILIATIONS) SHOULD NOT BE SENT TO US. They should be sent directly to Scopus, preferably using the Scopus to ORCID feedback wizard (https://orcid.scopusfeedback.com/), so that the correct data can be used in any future annual updates of the citation indicator databases.
The c-score focuses on impact (citations) rather than productivity (number of publications), and it also incorporates information on co-authorship and author positions (single, first, last author). If you have additional questions, see the attached file of FREQUENTLY ASKED QUESTIONS. Finally, we alert users that all citation metrics have limitations and their use should be tempered and judicious. For more reading, we refer to the Leiden Manifesto: https://www.nature.com/articles/520429a
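As a concrete illustration of two of the metrics named above, here is a minimal Python sketch of the h-index and of Schreiber's co-authorship-adjusted hm-index. The (citations, n_authors) input format is an assumption made for illustration, not the database's actual schema, and the composite c-score itself is not reproduced here.

```python
# Sketch of the h-index and the co-authorship-adjusted hm-index.
# Input format (citations, n_authors) is assumed for illustration only.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def hm_index(papers: list[tuple[int, int]]) -> float:
    """Co-authorship-adjusted h-index (Schreiber): each paper contributes
    1/n_authors to the effective rank."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    hm, r_eff = 0.0, 0.0
    for citations, n_authors in ranked:
        r_eff += 1.0 / n_authors
        if citations >= r_eff:
            hm = r_eff
    return hm

papers = [(100, 2), (40, 5), (12, 1), (3, 3)]  # (citations, n_authors)
print(h_index([c for c, _ in papers]))   # 3
print(round(hm_index(papers), 2))        # 1.7
```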
A. SUMMARY Mechanical street sweeping and street cleaning schedule managed by San Francisco Public Works.
B. HOW THE DATASET IS CREATED This dataset is created by extracting all street sweeping schedule data from a Department of Public Works database. It is then geocoded to add common identifiers such as the Centerline Network Number ("CNN") and published to the open data portal.
C. UPDATE PROCESS This dataset will be updated on an 'as needed' basis, when sweeping schedules change.
D. HOW TO USE THIS DATASET Use this dataset to understand, track, or analyze street sweeping in San Francisco.
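For programmatic access, a minimal sketch using the Socrata SODA API that backs the open data portal is shown below. The dataset resource ID ("ABCD-1234") and the weekday field name are placeholders, not the dataset's documented schema; check the dataset's landing page for the real values.

```python
# Query the street sweeping schedule via the Socrata SODA API (sketch).
import requests

url = "https://data.sfgov.org/resource/ABCD-1234.json"  # resource ID is a placeholder
params = {"$limit": 100, "$where": "weekday = 'Mon'"}   # field name assumed
rows = requests.get(url, params=params, timeout=30).json()
for row in rows[:5]:
    print(row)
```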
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
A. SUMMARY Transactions from FPPC Forms 460, 461, 496, 497, and 450. This dataset combines all schedules and pages and includes unitemized totals. Only transactions from the "most recent" version of a filing (original/amendment) appear here.
B. HOW THE DATASET IS CREATED Committees file campaign statements with the Ethics Commission on a periodic basis. Those statements are stored with the Commission's data provider. Data is generally presented as-filed by committees.
If a committee files an amendment, the data from that filing completely replaces the original and any prior amendments in the filing sequence.
C. UPDATE PROCESS Each night starting at midnight Pacific time, a script checks the Commission's database for new filings and updates this dataset with transactions from those filings. The update process can take a variable amount of time to complete. Viewing or downloading this dataset while the update is running may result in incomplete data, so it is highly recommended to view or download this data before midnight or after 8am.
During the update, some fields are copied from the Filings dataset into this dataset for viewing convenience. The copy process may occasionally fail for some transactions due to timing issues but should self-correct the following day. Transactions with a blank 'Filing Id Number' or 'Filing Date' field are affected by this; they can be joined with the appropriate record using the 'Filing Activity Nid' field shared between the Filing and Transaction datasets.
D. HOW TO USE THIS DATASET
Transactions from rejected filings are not included in this dataset. Transactions from many different FPPC forms and schedules are combined in this dataset; refer to the "Form Type" column to differentiate transaction types.
Properties suffixed with "-nid" can be used to join the data between Filers, Filings, and Transaction datasets.
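As a sketch of the "-nid" join described above (and of recovering the blank 'Filing Id Number'/'Filing Date' fields), assuming CSV exports of the two datasets (file names are placeholders):

```python
# Join Transactions to Filings on the shared 'Filing Activity Nid' key (sketch).
import pandas as pd

transactions = pd.read_csv("transactions.csv")  # placeholder export
filings = pd.read_csv("filings.csv")            # placeholder export

# Recover 'Filing Id Number' / 'Filing Date' where the nightly copy left them blank.
merged = transactions.merge(
    filings[["Filing Activity Nid", "Filing Id Number", "Filing Date"]],
    on="Filing Activity Nid",
    how="left",
    suffixes=("", "_from_filing"),
)
print(merged.head())
```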
Refer to the Ethics Commission's webpage for more information.
FPPC Form 460 is organized into schedules as follows:
E. RELATED DATASETS
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
A. SUMMARY The dataset inventory provides a list of data maintained by departments that are candidates for open data publishing or that have already been published; it is collected in accordance with Chapter 22D of the Administrative Code. The inventory will be used in conjunction with department publishing plans to track progress toward meeting plan goals for each department.
B. HOW THE DATASET IS CREATED This dataset is collated in two ways:
1. Ongoing updates are made throughout the year to reflect new datasets; DataSF staff reconcile publishing records after datasets are published.
2. Annual bulk updates: departments review their inventories, identify changes and updates, and submit those to DataSF for a once-a-year bulk update. Not all departments will have changes, or their changes may already have been captured as ongoing updates over the course of the prior year.
C. UPDATE PROCESS The dataset is synced automatically daily, but the underlying data changes manually throughout the year as needed.
D. HOW TO USE THIS DATASET Interpreting dates in this dataset: this dataset has two dates:
1. Date Added: when the dataset was added to the inventory itself.
2. First Published: the date the dataset was first created on the open data portal; this is a system-generated date.
Note that in certain cases we may have published a dataset before it was added to the inventory. We do our best to keep an accurate accounting of when something was added to this inventory and when it was published. In most cases the inventory addition will happen before publishing, but in certain cases a dataset will be published and we will have missed updating the inventory, as this is a manual process.
First Published records when the dataset actually became available on the open data catalog; Date Added records when it was added to this list.
E. RELATED DATASETS
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
A. SUMMARY The San Francisco Controller's Office maintains a database of the salary and benefits paid to City employees since fiscal year 2013.
B. HOW THE DATASET IS CREATED This data is summarized and presented on the Employee Compensation report hosted at http://openbook.sfgov.org, and is also available in this dataset in CSV format.
C. UPDATE PROCESS New data is added twice a year, as it becomes available for each fiscal and calendar year.
D. HOW TO USE THIS DATASET Before using, please first review the following two resources:
1. Data Dictionary: found in the 'About this dataset' section after clicking 'Show More'.
2. Employee Compensation FAQ: https://support.datasf.org/help/employee-compensation-faq
This is a dataset hosted by the City of San Francisco on its open data platform, and the organization updates its information as new data is brought in. Explore San Francisco's data using Kaggle and all of the data sources available through the San Francisco organization page!
This dataset is maintained using Socrata's API and Kaggle's API. Socrata has assisted countless organizations with hosting their open data and has been an integral part of the process of bringing more data to the public.
Cover photo by rawpixel on Unsplash
Unsplash Images are distributed under a unique Unsplash License.
A. SUMMARY This dataset is used to report on public dataset access and usage within the open data portal. Each row counts the users who accessed a dataset on a given day, grouped by access type (API Read, Download, Page View, etc.).
B. HOW THE DATASET IS CREATED This dataset is created by joining two internal analytics datasets generated by the SF Open Data Portal. We remove non-public information during the process.
C. UPDATE PROCESS This dataset is scheduled to update every 7 days via ETL.
D. HOW TO USE THIS DATASET This dataset can help you identify stale datasets, highlight the most popular datasets, and calculate other metrics around performance and usage in the open data portal. Please note a special call-out for two fields:
- "derived": shows whether an asset is an original source (derived = "False") or was made from another asset through filtering (derived = "True"); in essence, whether or not it is derived from another source.
- "provenance": shows whether an asset is "official" (created by someone in the City of San Francisco) or "community" (created by a member of the community). All community assets are derived, as members of the community cannot add data to the open data portal.
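As an illustration of that usage, here is a minimal pandas sketch that keeps official, non-derived assets and ranks them by total users. The dataset_name and user_count column names (and the file name) are assumptions for illustration; check the data dictionary for the real field names.

```python
# Rank official, non-derived assets by total users (sketch).
import pandas as pd

usage = pd.read_csv("dataset_usage.csv")  # placeholder export of this dataset

# 'derived' is stored as the string "False"/"True" per the field description above.
official = usage[(usage["provenance"] == "official") & (usage["derived"] == "False")]

top = (
    official.groupby("dataset_name")["user_count"]  # column names assumed
    .sum()
    .sort_values(ascending=False)
    .head(10)
)
print(top)
```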
A. SUMMARY This dataset contains a list of active and terminated campaign committees and non-committee campaign finance filers in the Ethics Commission's records database.
B. HOW THE DATASET IS CREATED This dataset comes from an export of the Ethics Commission's filer database.
C. UPDATE PROCESS Each night starting at midnight Pacific time, a script runs that replaces this dataset with a complete list of all filers within the Ethics Commission's filing database. This process can take a variable amount of time to complete. Viewing or downloading this dataset while the update is running may result in incomplete data, so it is highly recommended to view or download this data before midnight or after 8am.
D. HOW TO USE THIS DATASET The "Filer Name" is the most current version of a committee name as registered with the Ethics Commission. Properties suffixed with "-nid" can be used to join the data between the Filers, Filings, and Transaction datasets. The California Secretary of State issues FPPC IDs to committees when they register. After the committee registers with the SOS, the committee can file a courtesy copy of its registration with the Ethics Commission; until that filing is received, a committee's FPPC ID may appear as "pending". Committees are (generally) only required to register with the Ethics Commission if they are participating in local elections. Refer to the Ethics Commission's webpage for more information.
E. RELATED DATASETS
San Francisco Campaign Filers
Filings Received by SFEC
Summary Totals
Transactions
For any bz2 file, the parallel bzip2 decompressor indexed_bzip2 (https://github.com/mxmlnkn/indexed_bzip2) is recommended for speed; a sketch follows below.
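A minimal sketch of that usage, following the library's documented open() interface; exact keyword names may differ between versions, and the file name is a placeholder.

```python
# Parallel bz2 decompression with indexed_bzip2 (sketch).
import os
import indexed_bzip2 as ibz2

# Open with roughly one decoder thread per core.
with ibz2.open("train.bz2", parallelization=os.cpu_count()) as f:
    chunk = f.read(1 << 20)  # read the first MiB of decompressed data
print(len(chunk))
```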
In summary:
See forum discussion for details of [1],[2]: https://www.kaggle.com/competitions/leash-BELKA/discussion/492846
This is somewhat obsolete as the competition progresses: ECFP6 gives better results and can be extracted quickly with scikit-fingerprints.
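As a sketch of that extraction, assuming scikit-fingerprints' scikit-learn-style transformer API; the SMILES strings are toy examples, and radius 3 corresponds to ECFP6 (diameter 6).

```python
# Extract ECFP6 fingerprints with scikit-fingerprints (sketch).
from skfp.fingerprints import ECFPFingerprint

smiles = ["CCO", "c1ccccc1", "CC(=O)Nc1ccc(O)cc1"]  # toy molecules

fp = ECFPFingerprint(radius=3, fp_size=2048)  # radius 3 => ECFP6
X = fp.transform(smiles)
print(X.shape)  # (3, 2048)
```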
See forum discussion for details of [3]: https://www.kaggle.com/competitions/leash-BELKA/discussion/498858 https://www.kaggle.com/code/hengck23/lb6-02-graph-nn-example
See forum discussion for details of [4]: https://www.kaggle.com/competitions/leash-BELKA/discussion/505985 https://www.kaggle.com/code/hengck23/conforge-open-source-conformer-generator
UPDATE 1/7/2025: On June 28th, 2023, the San Francisco Police Department (SFPD) changed its Stops Data Collection System (SDCS). As a result of this change, record identifiers have changed from the Department of Justice (DOJ) identifier to an internal record numbering system (referred to as "LEA Record ID"). The data that SFPD uploads to the DOJ system will contain the internal record number, which can be used for joins with the data available on DataSF.
A. SUMMARY The San Francisco Police Department (SFPD) Stop Data was designed to capture information to comply with the Racial and Identity Profiling Act (RIPA), or California Assembly Bill (AB) 953. SFPD officers collect specific information on each stop, including elements of the stop, circumstances, and the perceived identity characteristics of the individual(s) stopped. The information obtained by officers is reported to the California Department of Justice. This dataset includes data on stops starting on July 1st, 2018, which is when the data collection program went into effect. Read the detailed overview for this dataset here.
B. HOW THE DATASET IS CREATED By the end of each shift, officers enter all stop data into the Stop Data Collection System, which is automatically submitted to the California Department of Justice (CA DOJ). Once a quarter the Department receives a stops data file from CA DOJ. The SFPD conducts several transformations of this data to ensure privacy, accuracy, and compliance with State law and regulation. For increased usability, text descriptions have also been added for several data fields which include numeric codes (including traffic, suspicion, citation, and custodial arrest offense codes, and actions taken as a result of a stop). See the data dictionaries below for explanations of all coded data fields. Read more about the data collection and transformation, including geocoding and PII cleaning processes, in the detailed overview of this dataset.
C. UPDATE PROCESS Information is updated on a quarterly basis.
D. HOW TO USE THIS DATASET This dataset includes information about police stops that occurred, including some details about the person(s) stopped and what happened during the stop. Each row is a person stopped, with a record identifier for the stop and a unique identifier for the person. A single stop may involve multiple people and may produce more than one associated unique identifier for the same record identifier. A certain percentage of stops have stop information that can't be geocoded. This may be due to errors in data input at the officer level (typos in entry or providing an address that doesn't exist). More often, it is due to officers providing a level of detail that isn't codable to a geographic coordinate, most often at the Airport (e.g., Terminal 3, door 22). In these cases, the location of the stops is coded as unknown.
E. DATA DICTIONARIES CJIS Offense Codes data look-up table; look-up table for other coded data fields
A. SUMMARY This data is an annual snapshot of existing land use as of March of the indicated year for every parcel in the City and County of San Francisco. This year's 2023 data was produced from Land Use 2020, updated for residential properties using the Planning Department's permit database and current 2023 Assessor-Recorder data. The commercial data was not updated; it will be updated in next year's 2024 release. Each row of data corresponds to a parcel, with 16 columns (fields or attributes) of information about each parcel, as described below.
B. HOW THE DATASET IS CREATED The dataset is assembled from a range of City and commercial databases, including the Assessor's office and Dun & Bradstreet for commercial land uses.
C. UPDATE PROCESS A new dataset will be added annually without updating previous years' data.
D. HOW TO USE THIS DATASET Review this document to understand the data (fields and their categories): Land Use Database 2023 Summary. Limitations: although every attempt is made to provide accurate data, the volume of data and parcels does not allow the Department to guarantee accuracy. Should errors be found, or questions arise, please email rebecca.latto@sfgov.org.
E. RELATED DATASETS San Francisco Land Use - 2020
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data from the Council's Annual Budget. The budget is comprised of Tables A to F and Appendix 1. Each table is represented by a separate data file.
Table C is the Calculation of the Annual Rate on Valuation for the Financial Year for Balbriggan Town Council. It contains:
- Estimate of 'Money Demanded'
- Adopted 'Money Demanded'
- Estimated 'Irrecoverable rates and cost of collection'
- Adopted 'Irrecoverable rates and cost of collection'
- Total Sum to be Raised: the sum of 'Money Demanded' and 'Irrecoverable rates and cost of collection'
- 'Annual Rate on Valuation to meet Total Sum to be Raised'
This dataset is used to create Table C in the published Annual Budget document, which can be found at www.fingal.ie. The data is best understood by comparing it to Table C.
Data fields for Table C are as follows:
- Doc: Table Reference
- Heading: Indicates sections in the Table. Table C is comprised of one section, therefore the Heading value for all records = 1
- Ref: Town Reference
- Desc: Town Description
- MD_Est: Money Demanded Estimated
- MD_Adopt: Money Demanded Adopted
- IR_Est: Irrecoverable rates and cost of collection Estimated
- IR_Adopt: Irrecoverable rates and cost of collection Adopted
- NEV: Annual Rate on Valuation to meet Total Sum to be Raised
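As a worked illustration of the arithmetic described above, with hypothetical adopted figures:

```python
# Total Sum to be Raised = Money Demanded + irrecoverable rates and cost
# of collection (sketch; figures are hypothetical, not from the dataset).
md_adopt = 1_250_000.00  # MD_Adopt: Money Demanded Adopted (hypothetical)
ir_adopt = 75_000.00     # IR_Adopt: Irrecoverable rates and cost of collection Adopted (hypothetical)

total_sum_to_be_raised = md_adopt + ir_adopt
print(f"Total Sum to be Raised: {total_sum_to_be_raised:,.2f}")  # 1,325,000.00
```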
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
A. SUMMARY This dataset contains data from financial statements of campaign committees that file with the San Francisco Ethics Commission and (1) contribute to or (2) receive funds from a San Francisco committee which was Primarily Formed for a local election, or (3) filed a Late Reporting Period statement with the SFEC. Financial statements are included for a committee if they meet any of the three criteria for each election included in the search parameters and are not primarily formed for the election. The search period for financial statements begins two years before an election and runs through the next semi-annual filing deadline. The dataset currently filters by the elections of 2024-03-05 and 2024-11-05. B. HOW THE DATASET IS CREATED During an election period an automated script runs nightly to examine filings by Primarily Formed San Francisco committees. If a primarily formed committee reports accepting money from or giving money to a second committee, that second committee's ID number is added to a filter list. If a committee electronically files a late reporting period form with the San Francisco Ethics Commission, the committee's ID number is also included in the filter list. The filter list is used in a second step that looks for filings by committees that file with the San Francisco Ethics Commission or the California Secretary of State. This dataset shows the output of the second step for committees that file with the San Francisco Ethics Commission. The data comes from a nightly search of the Ethics Commission campaign database. A second dataset includes committees that file with the Secretary of State. C. UPDATE PROCESS This dataset is rewritten nightly and is based on data derived from campaign filings. The update script runs automatically on a timer during the 90 days before an election. Refer to the "Data Last Updated" date in the section "About This Dataset" on the landing page to see when the script last ran successfully. D. HOW TO USE THIS DATASET Transactions from all FPPC Form 460 schedules are presented together, refer to the Form Type to differentiate. Transactions from FPPC Form 461 and Form 465 filings are presented together, refer to the Form Type to differentiate. Transactions with a Form Type of D, E, F, G, H, F461P5, F465P3, F496, or F497P2 represent expenditures, or money spent by the committee. Transactions with Form Type A, B1, C, I, F496P3, and F497P1 represent receipts, or money taken in by the committee. Refer to the instructions for Forms 460, 496, and 497 for more details. Transactions on Form 460 Schedules D, F, G, and H are also reported on Schedule E. When doing summary statistics use care not to double count expenditures. Transactions from FPPC Form 496 and Form 497 filings are presented in this dataset. Transactions that were reported on these forms are also reported on the Form 460 at the next filing deadline. If a 460 filing deadline has passed and the committee has filed a campaign statement, transactions on 496/497 filings from the late reporting period should be disregarded. This dataset only shows transactions from the most recent filing version. Committee amendments overwrite filings which come before in sequence. Campaign Committees are required to file statements according to a schedule set out by the C
A. SUMMARY
San Francisco's local Emergency Medical Service Agency (EMSA) publishes information on the time it takes emergency response vehicles to arrive at the scene of a medical incident after they are dispatched. This dataset is used to calculate the response times published in the EMSA dashboards, which are available here.
This dataset is derived from the Fire Department Calls for Service dataset and includes responses to 911 calls for service from the city's Computer-Aided Dispatch (CAD) system. Please refer to that dataset for details on the underlying data.
B. HOW THE DATASET IS CREATED
This dataset applies additional validation steps to the Fire Department Calls for Service data. This data excludes any responses that meet the following criteria:
C. UPDATE PROCESS
This dataset updates daily via automated data pipeline.
D. HOW TO USE THIS DATASET
This dataset is used to populate the Response Times dashboard. Please note that the incident_number column is not a unique identifier, as several emergency medical service units may respond to the same call or incident.
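Because incident_number is not unique, one reasonable aggregation is the first unit on scene per incident. A minimal pandas sketch follows; the timestamp column names and file name are assumptions, not the dataset's documented schema.

```python
# First-unit-on-scene response time per incident (sketch).
import pandas as pd

df = pd.read_csv(
    "ems_response_times.csv",                      # placeholder export
    parse_dates=["dispatch_dttm", "on_scene_dttm"],  # column names assumed
)
df["response_minutes"] = (
    df["on_scene_dttm"] - df["dispatch_dttm"]
).dt.total_seconds() / 60

# Several units may respond to one incident; keep the fastest arrival.
first_on_scene = df.groupby("incident_number")["response_minutes"].min()
print(first_on_scene.describe())
```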
SPECIAL NOTE: C-MAPSS and C-MAPSS40K ARE CURRENTLY UNAVAILABLE FOR DOWNLOAD. Glenn Research Center management is reviewing the availability requirements for these software packages. We are working with Center management to get the review completed and issues resolved in a timely manner. We will post updates on this website when the issues are resolved. We apologize for any inconvenience. Please contact Jonathan Litt, jonathan.s.litt@nasa.gov, if you have any questions in the meantime.
Subject Area: Engine Health
Description: This data set was generated with the C-MAPSS simulator. C-MAPSS stands for 'Commercial Modular Aero-Propulsion System Simulation' and it is a tool for the simulation of realistic large commercial turbofan engine data. Each flight is a combination of a series of flight conditions with a reasonable linear transition period to allow the engine to change from one flight condition to the next. The flight conditions are arranged to cover a typical ascent from sea level to 35K ft and descent back down to sea level. The fault was injected at a given time in one of the flights and persists throughout the remaining flights, effectively increasing the age of the engine. The intent is to identify which flight and when in the flight the fault occurred.
How Data Was Acquired: The data provided is from a high-fidelity system-level engine simulation designed to simulate nominal and faulted engine degradation over a series of flights. The simulated data was created with a MATLAB Simulink tool called C-MAPSS.
Sample Rates and Parameter Description: The flights are full flight recordings sampled at 1 Hz and consist of 30 engine and flight condition parameters. Each flight contains 7 unique flight conditions for an approximately 90 min flight, including ascent to cruise at 35K ft and descent back to sea level. The parameters for each flight are the flight conditions, health indicators, measurement temperatures, and pressure measurements.
Faults/Anomalies: Faults arose from the inlet engine fan, the low pressure compressor, the high pressure compressor, the high pressure turbine and the low pressure turbine.
ARISE_Merge_Data_1 is the Arctic Radiation - IceBridge Sea & Ice Experiment (ARISE) 2014 pre-generated aircraft (C-130) merge data files. This product is a result of a joint effort of the Radiation Sciences, Cryospheric Sciences and Airborne Sciences programs of the Earth Science Division in NASA's Science Mission Directorate in Washington. Data collection is complete.
ARISE was NASA's first Arctic airborne campaign designed to take simultaneous measurements of ice, clouds and the levels of incoming and outgoing radiation, the balance of which determines the degree of climate warming. Over the past few decades, an increase in global temperatures led to decreased Arctic summer sea ice. Typically, Arctic sea ice reflects sunlight from the Earth. However, a loss of sea ice means there is more open water to absorb heat from the sun, enhancing warming in the region. More open water can also cause the release of more moisture into the atmosphere. This additional moisture could affect cloud formation and the exchange of heat from Earth's surface to space. Conducted during the peak of summer ice melt (August 28, 2014-October 1, 2014), ARISE was designed to study and collect data on thinning sea ice, measure cloud and atmospheric properties in the Arctic, and to address questions about the relationship between retreating sea ice and the Arctic climate. During the campaign, instruments on NASA's C-130 aircraft conducted measurements of spectral and broadband radiative flux profiles, quantified surface characteristics, cloud properties, and atmospheric state parameters under a variety of Arctic atmospheric and surface conditions (e.g. open water, sea ice, and land ice). When possible, C-130 flights were coordinated to fly under satellite overpasses. The primary aerial focus of ARISE was over Arctic sea ice and open water, with minor coverage over Greenland land ice. Through these efforts, the ARISE field campaign helped improve cloud and sea ice computer modeling in the Arctic.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
The effect of microgravity on gene expression in C. elegans was comprehensively analysed by DNA microarray. This is the first DNA microarray analysis of C. elegans grown under microgravity. Hypergravity and clinorotation experiments were performed as references for the flight experiment.
INTEX-NA is a two-phase experiment that aims to understand the transport and transformation of gases and aerosols on transcontinental/intercontinental scales and assess their impact on air quality and climate. The primary constituents of interest are ozone and precursors, aerosols and precursors, and the long-lived greenhouse gases. The first phase (INTEX-A) was completed in the summer of 2004 and the second phase (INTEX-B) is to be performed in the spring of 2006. This document is intended to provide an update on the goals of INTEX-B and define its implementation strategy. The scientific goals envisioned here are based on the joint implementation of INTEX-B, MIRAGE-Mex and DLR/IMPACT studies and their coordination with satellite observations. In collaboration with these partners, the main goals of INTEX-B are to:
- Quantify the transpacific transport and evolution of Asian pollution to North America and assess its implications for regional air quality and climate;
- Quantify the outflow and evolution of gases and aerosols from the Mexico City Megaplex;
- Investigate the transport of Asian and North American pollution to the eastern Atlantic and assess its implications for European air quality;
- Validate and refine satellite observations of tropospheric composition;
- Map emissions of trace gases and aerosols and relate atmospheric composition to sources and sinks.
The INTEX-B field study is to be performed during an approximate 8-week period from March 1 to April 30, 2006.
TwitterThe data in the csv and text files provided in this release are an update to the data tables originally published in USGS Open-File Report (OFR) 83-250 (https://doi.org/10.3133/cir892). Those data were published as paper tables and have until now only been available as pdf image documents that were not machine readable. USGS OFR 83-250 presented data for 2071 geothermal sites which are representative of 1168 low-temperature geothermal systems identified in 26 states. The low-temperature geothermal systems consist of 978 isolated hydrothermal-convection systems, 148 delineated-area hydrothermal-convection systems, and 42 delineated-area conduction-dominated systems. The basic data and estimates of reservoir conditions are presented for each geothermal system, and energy estimates are given for the accessible resource base, resource, and beneficial heat for each isolated system. This electronic version of USGS OFR 83-250 tables includes several changes. Typographical errors were corrected. The location accuracy of many wells and springs was improved by comparing the original locations with other databases and with USGS topographic maps. Charge balance and additional geothermometer calculations made by the original authors that have become available since the original publication were also included.
As of 9/12/2024, we have resumed reporting on COVID-19 hospitalization data using a San Francisco-specific dataset. These new data differ slightly from previous hospitalization data sources, but the overall patterns and trends in hospitalizations remain consistent. You can access the previous data here.
A. SUMMARY This dataset includes information on COVID+ hospital admissions for San Francisco residents into San Francisco hospitals. Specifically, the dataset includes the count and rate of COVID+ hospital admissions per 100,000. The data are reported by week.
B. HOW THE DATASET IS CREATED Hospital admission data is reported to the San Francisco Department of Public Health (SFDPH) via the COVID Hospital Data Repository (CHDR), a system created via health officer order C19-16. The data includes all San Francisco hospitals except for the San Francisco VA Medical Center.
San Francisco population estimates are pulled from a view based on the San Francisco Population and Demographic Census dataset. These population estimates are from the 2018-2022 5-year American Community Survey (ACS).
C. UPDATE PROCESS Data updates weekly on Wednesday with data for the past Wednesday-Tuesday (one week lag). Data may change as more current information becomes available.
D. HOW TO USE THIS DATASET New admissions are the count of COVID+ hospital admissions among San Francisco residents to San Francisco hospitals by week.
The admission rate per 100,000 is calculated by multiplying the count of admissions each week by 100,000 and dividing by the population estimate.
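As a worked illustration of that calculation, with hypothetical weekly figures:

```python
# Weekly admission rate per 100,000 (sketch; figures are hypothetical).
admissions = 20        # COVID+ admissions in one week (hypothetical)
population = 850_000   # ACS population estimate (hypothetical)

rate_per_100k = admissions * 100_000 / population
print(round(rate_per_100k, 1))  # 2.4
```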
E. CHANGE LOG
This dataset covers vocational qualifications in England from 2012 to the present.
The dataset is updated every quarter. Data for previous quarters may be revised to insert late data or to correct an error. Updates also reflect where qualifications were re-categorised to a different type, level, sector subject area or awarding organisation. Where a quarterly update includes revisions to data for previous quarters, a table of revisions is published in the vocational and other qualifications quarterly release.
In the dataset, the number of certificates issued is rounded to the nearest 5, and values less than 5 appear as 'Fewer than 5' to preserve confidentiality (a 0 represents no certificates). A sketch of this rule appears below.
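A minimal sketch of that rounding and suppression rule as described:

```python
# Disclosure-safe publication of certificate counts (sketch).
def published_count(count: int) -> str:
    if count == 0:
        return "0"              # zero is published as 0
    if count < 5:
        return "Fewer than 5"   # small non-zero counts are suppressed
    return str(5 * round(count / 5))  # otherwise round to the nearest 5

print([published_count(c) for c in (0, 3, 7, 23)])
# ['0', 'Fewer than 5', '5', '25']
```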
Where a qualification has been owned by more than one awarding organisation at different points in time, a separate row is given for each organisation.
Background information and key headlines for every quarter are published in the vocational and other qualifications quarterly release.
For any queries contact us at data.analytics@ofqual.gov.uk.