The provided Python code extracts data from the Federal Reserve Economic Data (FRED) service on Bachelor's-or-higher educational attainment in the United States, specifically at the state and county levels. The code pulls data as of the current date; series values are available through 2021.
This code is useful for research purposes, particularly for conducting comparative analyses involving educational and economic indicators. There are two distinct CSV files associated with this code. One file contains information on the percentage of Bachelor's or Higher degree holders among residents of all USA states, while the other file provides data on states, counties, and municipalities throughout the entire USA.
The extraction process involves applying different criteria, including content filtering (such as title, frequency, seasonal adjustment, and unit) and collaborative filtering based on item similarity. For the first CSV file, the algorithm extracts data for each state in the USA and assigns corresponding state names to the respective FRED codes using a loop. Similarly, for the second CSV file, data is extracted based on a given query, encompassing USA states, counties, and municipalities.
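The per-state loop described above can be sketched as follows. This is a minimal illustration, assuming the `fredapi` package and a series-ID pattern of `GCT1502` plus the state abbreviation; both the pattern and the exact IDs should be verified against FRED before use.

```python
# Sketch of the per-state extraction loop (assumed series-ID pattern
# "GCT1502" + state abbreviation; verify each ID on fred.stlouisfed.org).
import pandas as pd

# Abbreviated state map for illustration; the full loop covers all 50 states.
STATES = {"AL": "Alabama", "CA": "California", "NY": "New York", "TX": "Texas"}

def state_series_ids(states):
    """Map each assumed FRED series ID to its state name."""
    return {f"GCT1502{abbr}": name for abbr, name in states.items()}

def fetch_state_data(fred, states):
    """Fetch each state's series and label columns with state names.

    `fred` is any object with a get_series(series_id) method, e.g.
    fredapi.Fred(api_key=...).
    """
    frames = {}
    for series_id, name in state_series_ids(states).items():
        frames[name] = fred.get_series(series_id)
    return pd.DataFrame(frames)
```

With `fredapi` installed, `fetch_state_data(Fred(api_key=...), STATES)` would return one column per state, which can then be written out with `to_csv` to produce the first file described above.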
The dataset was created to predict market recessions, inspired by an assignment notebook in the Coursera course Python and Machine Learning for Asset Management by EDHEC Business School. I originally aimed to do this exercise for the Indian economy, but due to a lack of monthly data for most indicators I used the FRED database, as the course does.
The time period chosen is 1996-2020, based on the availability of most series.
As in the course assignment, the goal is to predict market recessions for portfolio management.
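Assembling such a panel can be sketched with `fredapi` as below. The indicator IDs `UNRATE` and `INDPRO` are examples of monthly FRED series, and `USREC` is the NBER recession indicator on FRED; the specific indicators used in the dataset may differ.

```python
# Sketch: assemble a monthly indicator panel with a recession label
# (series IDs are examples; USREC is the NBER US recession indicator).
import pandas as pd

INDICATORS = ["UNRATE", "INDPRO"]   # example monthly indicators
START, END = "1996-01-01", "2020-12-31"

def build_panel(fred, indicators=INDICATORS, start=START, end=END):
    """`fred` is any object with get_series(series_id, observation_start=...,
    observation_end=...), e.g. fredapi.Fred(api_key=...)."""
    data = {sid: fred.get_series(sid, observation_start=start,
                                 observation_end=end)
            for sid in indicators}
    # USREC is 1 in NBER recession months, 0 otherwise: the target label.
    data["USREC"] = fred.get_series("USREC", observation_start=start,
                                    observation_end=end)
    return pd.DataFrame(data).dropna()
```

The resulting frame has one column per indicator plus the `USREC` label, ready for a classification exercise.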
https://choosealicense.com/licenses/afl-3.0/
Textual time-series dataset collected from FRED.gov via the FRED API, in CSV format, for fine-tuning / pretraining as part of Humanity Unleashed Research.
http://opendatacommons.org/licenses/dbcl/1.0/
This dataset was downloaded via the Python library fredapi.
This data is part of my project with KaggleX. I hope this dataset may be of use to you.
| Field Name | Description |
|---|---|
| DGS1 | Market Yield on U.S. Treasury Securities at 1-Year Constant Maturity, Quoted on an Investment Basis |
| DGS10 | Market Yield on U.S. Treasury Securities at 10-Year Constant Maturity, Quoted on an Investment Basis |
| DGS1MO | Market Yield on U.S. Treasury Securities at 1-Month Constant Maturity, Quoted on an Investment Basis |
| DGS2 | Market Yield on U.S. Treasury Securities at 2-Year Constant Maturity, Quoted on an Investment Basis |
| DGS20 | Market Yield on U.S. Treasury Securities at 20-Year Constant Maturity, Quoted on an Investment Basis |
| DGS3 | Market Yield on U.S. Treasury Securities at 3-Year Constant Maturity, Quoted on an Investment Basis |
| DGS30 | Market Yield on U.S. Treasury Securities at 30-Year Constant Maturity, Quoted on an Investment Basis |
| DGS3MO | Market Yield on U.S. Treasury Securities at 3-Month Constant Maturity, Quoted on an Investment Basis |
| DGS5 | Market Yield on U.S. Treasury Securities at 5-Year Constant Maturity, Quoted on an Investment Basis |
| DGS6MO | Market Yield on U.S. Treasury Securities at 6-Month Constant Maturity, Quoted on an Investment Basis |
| DGS7 | Market Yield on U.S. Treasury Securities at 7-Year Constant Maturity, Quoted on an Investment Basis |
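Since the fields above span the Treasury constant-maturity curve, a common derived feature is the term spread (e.g. 10-year minus 2-year yield). A small sketch, assuming a DataFrame with a date index and the column names listed above:

```python
# Compute the 10y-2y term spread from a frame with the columns listed above.
import pandas as pd

def term_spread(df, long="DGS10", short="DGS2"):
    """Return the long-minus-short yield spread; negative values indicate
    an inverted curve, a classic recession signal."""
    return df[long] - df[short]

# Tiny illustrative frame (values are made up, in percent).
df = pd.DataFrame({"DGS10": [4.5, 4.2], "DGS2": [4.0, 4.4]},
                  index=pd.to_datetime(["2023-01-02", "2023-01-03"]))
spread = term_spread(df)
```

The same function works for any pair of maturities in the table, e.g. `term_spread(df, "DGS10", "DGS3MO")` for the 10-year/3-month spread.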
https://www.usa.gov/government-works/
This dataset represents a snapshot of the FRED catalog, captured on 2025-03-24.
What is FRED? As per the FRED website,
Short for Federal Reserve Economic Data, FRED is an online database consisting of hundreds of thousands of economic data time series from scores of national, international, public, and private sources. FRED, created and maintained by the Research Department at the Federal Reserve Bank of St. Louis, goes far beyond simply providing data: It combines data with a powerful mix of tools that help the user understand, interact with, display, and disseminate the data. In essence, FRED helps users tell their data stories. The purpose of this article is to guide the potential (or current) FRED user through the various aspects and tools of the database.
The FRED database is an absolute gold mine of economic time series. Thousands of such series are published on the FRED website, organized by category and available for viewing and downloading. In fact, a number of these economic datasets have been uploaded to Kaggle. In the current notebook, however, we are not interested in the individual time series; rather, we are focused on the catalog itself.
The FRED API was used to gain access to the catalog. The catalog consists of two files: one listing categories and one listing series.
A given category is identified by a category_id and, in similar fashion, a given series is identified by a series_id. A given category may contain both a group of series and a set of sub-categories. As such, every series record contains a category_id identifying the immediate category under which it is found, and every category record contains a parent_id indicating where it resides in the category hierarchy.
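The category_id/parent_id structure described above can be walked with a short loop. A sketch with toy data; the column names here are assumptions about the CSV layout:

```python
# Sketch: resolve a category's full path by walking parent_id links
# (column names are assumptions about the catalog CSV layout).
import pandas as pd

categories = pd.DataFrame({
    "category_id": [1, 2, 3],
    "parent_id":   [0, 1, 2],          # 0 = root, not a real category
    "name":        ["Money", "Rates", "Treasury"],
})
series = pd.DataFrame({
    "series_id":   ["DGS10"],
    "category_id": [3],                # immediate category of the series
    "title":       ["10-Year Treasury Yield"],
})

def category_path(cat_id, cats):
    """Walk parent_id links up to the root and return the name path."""
    by_id = cats.set_index("category_id")
    path = []
    while cat_id in by_id.index:
        path.append(by_id.loc[cat_id, "name"])
        cat_id = by_id.loc[cat_id, "parent_id"]
    return list(reversed(path))
```

For the toy series above, `category_path(3, categories)` yields the path from the root category down to the series' immediate category.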
https://creativecommons.org/publicdomain/zero/1.0/
Reference: https://www.zillow.com/research/zhvi-methodology/
In setting out to create a new home price index, a major problem Zillow sought to overcome in existing indices was their inability to deal with the changing composition of properties sold in one time period versus another time period. Both a median sale price index and a repeat sales index are vulnerable to such biases (see the analysis here for an example of how influential the bias can be). For example, if expensive homes sell at a disproportionately higher rate than less expensive homes in one time period, a median sale price index will characterize this market as experiencing price appreciation relative to the prior period of time even if the true value of homes is unchanged between the two periods.
The ideal home price index would be based off sale prices for the same set of homes in each time period so there was never an issue of the sales mix being different across periods. This approach of using a constant basket of goods is widely used, common examples being a commodity price index and a consumer price index. Unfortunately, unlike commodities and consumer goods, for which we can observe prices in all time periods, we can’t observe prices on the same set of homes in all time periods because not all homes are sold in every time period.
The innovation that Zillow developed in 2005 was a way of approximating this ideal home price index by leveraging the valuations Zillow creates on all homes (called Zestimates). Instead of actual sale prices on every home, the index is created from estimated sale prices on every home. While there is some estimation error associated with each estimated sale price (which we report here), this error is just as likely to be above the actual sale price of a home as below (in statistical terms, the estimates have minimal systematic error; they are approximately unbiased). Because of this fact, the distribution of actual sale prices for homes sold in a given time period looks very similar to the distribution of estimated sale prices for this same set of homes. But, importantly, Zillow has estimated sale prices not just for the homes that sold, but for all homes even if they didn't sell in that time period. From this data, a comprehensive and robust benchmark of home value trends can be computed which is immune to the changing mix of properties that sell in different periods of time (see Dorsey et al. (2010) for another recent discussion of this approach).
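The composition-bias argument can be made concrete with a toy example: home values are unchanged between two periods, but expensive homes oversell in the second period, so a median sale price index shows spurious appreciation while a median over estimated values for all homes stays flat. All numbers are invented for illustration:

```python
import statistics

# All homes' (estimated) values, unchanged across both periods.
all_home_values = [100, 150, 200, 600, 800, 900]

# Period 1: mostly cheap homes sell; period 2: mostly expensive homes sell.
sold_p1 = [100, 150, 200, 600]
sold_p2 = [200, 600, 800, 900]

# A median-sale-price index jumps even though no home changed in value.
median_sale_p1 = statistics.median(sold_p1)    # 175
median_sale_p2 = statistics.median(sold_p2)    # 700

# A ZHVI-style index over estimated values of ALL homes is stable.
zhvi_p1 = statistics.median(all_home_values)   # 400
zhvi_p2 = statistics.median(all_home_values)   # 400
```

The sale-price median more than triples purely because of the sales mix, while the all-homes median is unchanged, which is exactly the immunity to composition shifts described above.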
For an in-depth comparison of the Zillow Home Value Index to the Case Shiller Home Price Index, please refer to the Zillow Home Value Index Comparison to Case-Shiller
Each Zillow Home Value Index (ZHVI) is a time series tracking the monthly median home value in a particular geographical region. In general, each ZHVI time series begins in April 1996. We generate the ZHVI at eight geographic levels: neighborhood, ZIP code, city, congressional district, county, metropolitan area, state, and the nation.
Estimated sale prices (Zestimates) are computed based on proprietary statistical and machine learning models. These models begin the estimation process by subdividing all of the homes in United States into micro-regions, or subsets of homes either near one another or similar in physical attributes to one another. Within each micro-region, the models observe recent sale transactions and learn the relative contribution of various home attributes in predicting the sale price. These home attributes include physical facts about the home and land, prior sale transactions, tax assessment information and geographic location. Based on the patterns learned, these models can then estimate sale prices on homes that have not yet sold.
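As a purely illustrative sketch of the within-micro-region step (not Zillow's proprietary model), attribute weights can be learned from observed sales by ordinary least squares and then applied to homes that have not sold. The attributes and prices below are invented:

```python
import numpy as np

# Toy sales within one micro-region: [square_feet, bedrooms] -> sale price.
# Prices follow price = 200*sqft + 10000*beds, so the fit is exact here.
X_sold = np.array([[1000., 2.], [1500., 3.], [2000., 3.], [1200., 2.]])
y_sold = 200.0 * X_sold[:, 0] + 10000.0 * X_sold[:, 1]

# Learn the "relative contribution" of each attribute by least squares.
weights, *_ = np.linalg.lstsq(X_sold, y_sold, rcond=None)

# Estimate a sale price for a home that has not sold.
unsold = np.array([1800., 3.])
estimate = unsold @ weights
```

Real valuation models use many more attributes and nonlinear learners, but the pattern is the same: learn from recent transactions, then price every home in the micro-region.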
The sale transactions from which the models learn patterns include all full-value, arms-length sales that are not foreclosure resales. The purpose of the Zestimate is to give consumers an indication of the fair value of a home under the assumption that it is sold as a conventional, non-foreclosure sale. Similarly, the purpose of the Zillow Home Value Index is to give consumers insight into the home value trends for homes that are not being sold out of foreclosure status. Zillow research indicates that homes sold as foreclosures have typical discounts relative to non-foreclosure sales of between 20 and 40 percent, depending on the foreclosure saturation of the market. This is not to say that the Zestimate is not influenced by foreclosure resales. Zestimates are, in fact, influenced by foreclosure sales, but the pathway of this influence is through the downward pressure foreclosure sales put on non-foreclosure sale prices. It is the price signal observed in the latter that we are attempting to measure and, in turn, predict with the Zestimate.
Market Segments Within each region, we calculate the ZHVI for various subsets of homes (or mar...