This dataset reflects the current (updated weekly) set of EDGAR filings available on the Yens at /zfs/data/NODR/EDGAR_HTTPS/edgar/.
A script is run on a weekly basis that pulls the most recent indices of EDGAR filings from this link, downloads new filings to /zfs/data/NODR/EDGAR_HTTPS/edgar/ on the Yens, and then updates the table in this dataset with those filings. You can use the filepath column to access a specific filing on the Yens.
Note that in order to use filings on the Yens, you will need to have access to the Yens either as a member of the Stanford GSB research community or as a sponsored collaborator.
You may use this dataset to filter through the universe of EDGAR filings by CIK, company name, filing date, etc. and then compile a list of filings that you would like to use on the Yens.
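To get a feel for that workflow, here is a minimal pandas sketch of the filter-then-collect step. The table here is a toy stand-in, and the column names (`cik`, `form_type`, `filepath`) are assumptions to verify against the actual table schema on Redivis.

```python
import pandas as pd

# Toy stand-in for the filings table; real column names may differ.
filings = pd.DataFrame({
    "cik": ["320193", "320193", "789019"],
    "form_type": ["10-K", "8-K", "10-K"],
    "filepath": [
        "/zfs/data/NODR/EDGAR_HTTPS/edgar/data/320193/a.txt",
        "/zfs/data/NODR/EDGAR_HTTPS/edgar/data/320193/b.txt",
        "/zfs/data/NODR/EDGAR_HTTPS/edgar/data/789019/c.txt",
    ],
})

# Filter to one CIK and form type, then collect the paths on the Yens.
subset = filings[(filings["cik"] == "320193") & (filings["form_type"] == "10-K")]
paths = subset["filepath"].tolist()
print(paths)
```

The resulting list of filepaths can then be used to open the corresponding filings directly on the Yens.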
This dataset was added at the request of Lingyu Gu, a PhD student of Suzie Noh, to be used to aggregate the Yodlee dataset at different geographical levels. It contains county FIPS, census block, and latitude/longitude data points.
The data was provided to the GSB library by Lingyu Gu and added at her request to Redivis by Matt Hutchinson.
The data can be freely used by anyone.
This dataset is a mirror of the Financial Statement and Notes Data Set (https://www.sec.gov/dera/data/financial-statement-and-notes-data-set.html) hosted by the SEC and is updated monthly.
From this page:
> The Financial Statement and Notes Data Sets provide the text and detailed numeric information from all financial statements and their notes. This data is extracted from exhibits to corporate financial reports filed with the Commission using eXtensible Business Reporting Language (XBRL). As compared to the more compact Financial Statement Data Sets which provide only the numeric information from face financials, the Financial Statement and Notes Data Sets provide significantly more disclosure data. The information is presented without change from the "as filed" financial reports submitted by each registrant. The data is presented in a flattened format to help users analyze and compare corporate disclosure information over time and across registrants. The data sets also contain additional fields such as a company's Standard Industrial Classification to facilitate the data's use.
> DISCLAIMER: The Financial Statement and Notes Data Sets contain information derived from structured data filed with the Commission by individual registrants as well as Commission-generated filing identifiers. Because the data sets are derived from information provided by individual registrants, we cannot guarantee the accuracy of the data sets. In addition, it is possible inaccuracies or other errors were introduced into the data sets during the process of extracting the data and compiling the data sets. Finally, the data sets do not reflect all available information, including certain metadata associated with Commission filings. The data sets are intended to assist the public in analyzing data contained in Commission filings; however, they are not a substitute for such filings. Investors should review the full Commission filings before making any investment decision.
Once a month, the second-to-latest dump of data (e.g., the August 2022 dump is downloaded in October 2022) is downloaded from the page, and the tables are extracted and appended to the existing ones in this Redivis dataset.
Please refer to this documentation file created by the SEC, which documents the scope, organization, file formats, and table definitions of the data sets.
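As a rough illustration of working with one of the flattened tables, here is a minimal pandas sketch. It parses a toy tab-delimited sample; the real archives' exact file names, delimiters, and columns are assumptions that should be confirmed against the SEC documentation file.

```python
import io
import pandas as pd

# Toy stand-in for one of the tab-delimited files in a monthly archive;
# the real submissions table has many more columns (see the SEC docs).
sample = "adsh\tcik\tname\tform\n0000320193-22-000108\t320193\tAPPLE INC\t10-K\n"

# Read CIKs as strings to preserve any leading zeros.
sub = pd.read_csv(io.StringIO(sample), sep="\t", dtype={"cik": str})
print(sub.loc[0, "name"])
```

Appending each month's extract to the existing Redivis tables, as described above, amounts to concatenating frames like this one over time.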
In 2023, the top-ranked full-time business school in the United States was the Stanford Graduate School of Business in Stanford, California, where tuition costs students a total of 80,613 U.S. dollars.
The Wikipedia Change Metadata is a curation of article changes, updates, and edits over time.
This dataset includes the history (2001 to 2019) of Wikipedia edits and collaboration elements (e.g. administrative decisions, elections, communication between
This dataset is freely available to any Stanford researcher and does not require a DUA.
**Source for details below:** https://zenodo.org/record/3605388#.YWitsdnML0o
Dataset details
Part 1: HTML revision history
The data is split into 558 directories, named *enwiki-20190301-pages-meta-history$1.xml-p$2p$3*, where *$1* ranges from 1 to 27, and *p$2p$3* indicates that the directory contains revisions for pages with ids between $2 and $3. (This naming scheme directly mirrors that of the wikitext revision history from which WikiHist.html was derived.) Each directory contains a collection of gzip-compressed JSON files, each containing 1,000 HTML article revisions. Each row in the gzipped JSON files represents one article revision. Rows are sorted by page id, and revisions of the same page are sorted by revision id. We include all revision information from the original wikitext dump, the only difference being that we replace the revision's wikitext content with its parsed HTML version (and that we store the data in JSON rather than XML).
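The row-oriented description above suggests the revision files can be streamed one record at a time. Below is a minimal sketch under the assumption of one JSON object per line; the field names (`page_id`, `revision_id`, `html`) are illustrative guesses, not the documented schema, so verify them against a real file.

```python
import gzip
import json

# Write a tiny stand-in revision file (one JSON object per line is an
# assumption; field names here are illustrative, not the real schema).
record = {"page_id": 12, "revision_id": 34, "html": "<p>Example</p>"}
with gzip.open("revisions_sample.json.gz", "wt", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# Stream revisions back without decompressing the whole file into memory.
with gzip.open("revisions_sample.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        rev = json.loads(line)
        print(rev["page_id"], rev["revision_id"])
```

Streaming with `gzip.open` in text mode keeps memory use flat even for the large per-directory files described above.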
Part 2: Page creation times (page_creation_times.json.gz)
This JSON file specifies the creation time of each English Wikipedia page. It can, e.g., be used to determine if a wiki link was blue or red at a specific time in the past.
Part 3: Redirect history (redirect_history.json.gz)
This JSON file specifies all revisions corresponding to redirects, as well as the target page to which the respective page redirected at the time of the revision. This information is useful for reconstructing Wikipedia's link network at any time in the past.