https://fred.stlouisfed.org/legal/#copyright-pre-approval
View data of the S&P 500, an index of the stocks of 500 leading companies in the US economy, which provides a gauge of the U.S. equity market.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains historical stock price data for Tesla Inc. (TSLA) spanning from 1995 to 2024. It provides an in-depth look at the performance of Tesla's stock over nearly three decades, covering various key financial indicators and metrics that have shaped the company's growth story.
Tesla, Inc. (TSLA) is one of the most recognized electric vehicle manufacturers in the world, and its stock has experienced substantial volatility, making it a popular asset for investors, analysts, and enthusiasts. From its IPO in 2010 to its meteoric rise in the following years, this dataset captures the evolution of its stock price and trading volume.
The dataset includes the following key columns:
Date: The date of the stock data.
Open: The opening price of Tesla's stock on a given date.
High: The highest price reached by Tesla's stock on that date.
Low: The lowest price reached by Tesla's stock on that date.
Close: The closing price of Tesla's stock on that date.
Adj Close: The adjusted closing price, which accounts for stock splits and dividends.
Volume: The total number of shares traded on that date.
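As a quick illustration of working with this schema, here is a minimal pandas sketch that loads the file and computes daily returns from the adjusted close. The filename tsla.csv is a placeholder, not something specified by the dataset:

```python
import pandas as pd

# Load the dataset; "tsla.csv" is a placeholder for wherever the file is saved.
df = pd.read_csv("tsla.csv", parse_dates=["Date"])
df = df.sort_values("Date").set_index("Date")

# Daily percentage return based on the split/dividend-adjusted close.
df["Return"] = df["Adj Close"].pct_change()

print(df[["Open", "High", "Low", "Close", "Adj Close", "Volume", "Return"]].tail())
```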
Tesla's IPO and Early Performance: The dataset starts in 1995, fifteen years before Tesla's IPO in 2010. This gives users insight into the pre-IPO trading environment for the company and the broader market trends.
Post-IPO Growth: After Tesla went public in 2010, it experienced significant volatility, with periods of rapid growth and significant dips. The stock price and volume data reflect these shifts, helping users track Tesla's journey from a niche electric vehicle startup to one of the most valuable companies globally.
Stock Splits & Adjusted Close: The data includes adjusted close values, which provide a clear view of the stock's performance over time, accounting for stock splits and dividends. Notably, Tesla has undergone stock splits in recent years, and the "Adj Close" column allows users to view a consistent series of values.
2020-2024 Surge: Tesla's stock price saw a remarkable rise between 2020 and 2024, driven by its strong earnings reports, market optimism, and the overall growth of the electric vehicle and clean energy sectors. This period saw some of the most significant increases in Tesla's stock price, reflecting investor sentiment and broader trends in the stock market.
Market Volatility and External Factors: Users can analyze how external factors, such as changes in the global economy, the electric vehicle industry, and global events (like the COVID-19 pandemic), affected Tesla’s stock price.
Stock Price Prediction Models: Data scientists and machine learning practitioners can use this dataset to build models that predict Tesla's stock price based on historical data.
Technical Analysis: The dataset provides enough detail to perform technical analysis, such as moving averages, volatility analysis, and trend recognition.
Comparative Analysis: Analysts can compare Tesla's performance with other electric vehicle manufacturers or traditional automakers to gauge the company's market position.
Financial Insights and Investment Research: Investors can analyze key financial indicators, trading volume, and stock price movement to make informed decisions or study Tesla's financial growth.
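The technical-analysis use case above can be sketched in a few lines of pandas. This is a minimal example, assuming the column layout described earlier (Date, Open, High, Low, Close, Adj Close, Volume) and the same placeholder filename tsla.csv:

```python
import numpy as np
import pandas as pd

# "tsla.csv" is a placeholder filename; columns follow the schema listed above.
df = pd.read_csv("tsla.csv", parse_dates=["Date"]).set_index("Date").sort_index()

# 50- and 200-day simple moving averages, a common technical-analysis pair.
df["SMA50"] = df["Adj Close"].rolling(50).mean()
df["SMA200"] = df["Adj Close"].rolling(200).mean()

# Annualized 30-day rolling volatility of daily returns (~252 trading days/year).
returns = df["Adj Close"].pct_change()
df["Vol30"] = returns.rolling(30).std() * np.sqrt(252)

# Flag "golden cross" days, where the 50-day average rises above the 200-day.
cross_up = (df["SMA50"] > df["SMA200"]) & (df["SMA50"].shift(1) <= df["SMA200"].shift(1))
print(df.loc[cross_up, ["Close", "SMA50", "SMA200", "Vol30"]])
```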
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is not going to be an article or op-ed about Michael Jordan. Since 2009 we've been in the longest bull market in history, that's 11 years and counting. However, a few metrics like the stock market P/E, the call-to-put ratio, and of course the Shiller P/E suggest a great crash is coming, somewhere between the levels of 1929 and the dot-com bubble. Mean reversion is historically inevitable, and the Fed's money-printing experiment could end in disaster for the stock market in late 2021 or 2022. You can read Jeremy Grantham's Last Dance article here. You are likely well aware of Michael Burry's prediction as well. It's easier for you just to skim through two related videos on this topic of a stock market crash: Michael Burry's warning, see this YouTube video; Jeremy Grantham's warning, see this YouTube video.

Typically when there is a major event in the world, there is a crash, then a bear market, and a recovery that takes many months. In March 2020 that's not what we saw, since the Fed did some astonishing things that mean a liquidity glut and the risk of a major inflation event. The pandemic represented the quickest decline of at least 30% in the history of the benchmark S&P 500, but the recovery was not correlated to anything but Fed intervention. Since the pandemic clearly isn't disappearing, and many sectors such as travel, business travel, and tourism appear significantly disrupted along with supply chains, the so-called economic recovery isn't so great.

And there's this little problem at the heart of global capitalism today: the stock market just keeps going up. Crashes and corrections typically occur frequently in a normal market, but Fed liquidity and irresponsible printing of money are creating a scenario where normal behavior isn't occurring in the markets. According to data provided by market analytics firm Yardeni Research, the benchmark index has undergone 38 declines of at least 10% since the beginning of 1950. Since March 2020 we've barely seen a down month; September 2020 was flat-ish. The S&P 500 has more than doubled since those lows. Look at the angle of the curve: the S&P 500 was 735 at the low in 2009, so in this bull market alone it has gone up 6x in valuation. That's not a normal cycle, and it could mean we are due for an epic correction.

I have to agree with the analysts who claim that the long, long bull market since 2009 has finally matured into a fully fledged epic bubble. There is a complacency, a buy-the-dip frenzy, and a general meme environment around what BigTech can do in such conditions. The combined weight of Apple, Amazon, Alphabet, Microsoft, Facebook, Nvidia, and Tesla in the S&P and Nasdaq is approaching a ridiculous level. When these stocks are seen simultaneously as growth plays, value plays, and companies with unbeatable moats, the entire dynamics of the stock market begin to break down. Check out FANG during the pandemic. BigTech is seen as bullet-proof: meme valuations and hysterical speculative behavior lead to even higher highs, even as 2020 offered many younger people an on-ramp into investing for the first time. Some analysts at JP Morgan are even saying that until retail investors stop charging into stocks, markets probably don't have too much to worry about. Hedge funds with payment for order flow can predict exactly how these retail investors are behaving and monetize them; PFOF might even have to be banned by the SEC. The risk-on market theoretically just keeps going up until the Fed raises interest rates, which could be in 2023!
For some context, we're more than 1.4 years removed from the bear-market bottom of the coronavirus crash and haven't had even a 5% correction in nine months. This is the most over-priced the market has likely ever been. At the height of the dot-com bubble the S&P 500 was only 1,400; today it is 4,500, not so many years later. Clearly something is not quite right if you look at history and the P/E ratios. A market pumped with liquidity produces higher earnings against historically low interest rates; it's an environment where dangerous things can occur. In late 1997, as the S&P 500 passed its previous 1929 peak of 21x earnings, that seemed like a lot, but nothing compared to today. For some context, the S&P 500 Shiller P/E closed last week at 38.58, which is nearly a two-decade high. It's also well over double the average Shiller P/E of 16.84, dating back 151 years (38.58 / 16.84 ≈ 2.3). So the stock market is likely around 2x over-valued. Try to think rationally about what this means for valuations today and your favorite stock prices: what should they be in historical terms? The S&P 500 is up 31% in the past year. It will likely hit 5,000 before a correction, given the amount of liquidity added to the system and the Fed's use of QE, which is like a huge abuse of MMT, or Modern Monetary Theory. This has also led to bubbles in the housing market, crypto, and even commodities like gold, with long-term global GDP meeting many headwinds in the years ahead due to a...
This table contains 25 series, with data for the years 1956 to present (not all combinations necessarily have data for all years). The data are described by the following dimensions (not all combinations are available): Geography (1 item: Canada ...), Toronto Stock Exchange Statistics (25 items: Standard and Poor's/Toronto Stock Exchange Composite Index, high; Standard and Poor's/Toronto Stock Exchange Composite Index, close; Toronto Stock Exchange, oil and gas, closing quotations; Standard and Poor's/Toronto Stock Exchange Composite Index, low ...).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The main stock market index in Hong Kong (HK50) increased 3,587 points or 17.88% since the beginning of 2025, according to trading on a contract for difference (CFD) that tracks this benchmark index from Hong Kong. Hong Kong Stock Market Index (HK50) - values, historical data, forecasts and news - updated in March 2025.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the St. Mary's population over the last 20-plus years. It lists the population for each year, along with the year-on-year change in population, in both absolute and percentage terms. The dataset can be utilized to understand the population change of St. Mary's across the last two decades. For example, using this dataset, we can identify whether the population is declining or increasing; if there is a change, when the population peaked; and whether it is still growing or has passed its peak. We can also compare the trend with the overall trend of the United States population over the same period of time.
Key observations
In 2022, the population of St. Mary's was 596, a 1.32% decrease year-over-year from 2021. Previously, in 2021, St. Mary's population was 604, an increase of 0.33% compared to a population of 602 in 2020. Over the last 20-plus years, between 2000 and 2022, the population of St. Mary's increased by 105. In this period, the peak population was 604, in the year 2021. The numbers suggest that the population has already reached its peak and is showing a trend of decline. Source: U.S. Census Bureau Population Estimates Program (PEP).
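As a sketch of how the year-over-year figures above are derived, the following snippet recomputes them from the quoted populations (the values come from the key observations; the code itself is purely illustrative):

```python
import pandas as pd

# Populations quoted in the key observations above.
population = pd.Series({2020: 602, 2021: 604, 2022: 596})

# Year-over-year change in absolute and percentage terms.
yoy_change = population.diff()
yoy_pct = (population.pct_change() * 100).round(2)

print(yoy_change)  # 2021: +2, 2022: -8
print(yoy_pct)     # 2021: +0.33%, 2022: -1.32%, matching the figures above
```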
When available, the data consists of estimates from the U.S. Census Bureau Population Estimates Program (PEP).
Data Coverage:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for a research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for St. Mary's Population by Year. You can refer to it here.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Reile's Acres population over the last 20-plus years. It lists the population for each year, along with the year-on-year change in population, in both absolute and percentage terms. The dataset can be utilized to understand the population change of Reile's Acres across the last two decades. For example, using this dataset, we can identify whether the population is declining or increasing; if there is a change, when the population peaked; and whether it is still growing or has passed its peak. We can also compare the trend with the overall trend of the United States population over the same period of time.
Key observations
In 2022, the population of Reile's Acres was 841, a 4.73% increase year-over-year from 2021. Previously, in 2021, Reile's Acres population was 803, an increase of 12.31% compared to a population of 715 in 2020. Over the last 20-plus years, between 2000 and 2022, the population of Reile's Acres increased by 587. In this period, the peak population was 841, in the year 2022. The numbers suggest that the population has not reached its peak yet and is showing a trend of further growth. Source: U.S. Census Bureau Population Estimates Program (PEP).
When available, the data consists of estimates from the U.S. Census Bureau Population Estimates Program (PEP).
Data Coverage:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for a research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Reile's Acres Population by Year. You can refer to it here.
Envestnet® | Yodlee®'s Online Purchase Data (Aggregate/Row) Panels consist of de-identified, near-real-time (T+1) USA credit/debit/ACH transaction-level data – offering a wide view of the consumer activity ecosystem. The underlying data is sourced from end users leveraging the aggregation portion of the Envestnet® | Yodlee® financial technology platform.
Envestnet | Yodlee Consumer Panels (Aggregate/Row) include data relating to millions of transactions, including ticket size and merchant location. The dataset includes de-identified credit/debit card and bank transactions (such as a payroll deposit, account transfer, or mortgage payment). Our coverage offers insights into areas such as consumer, TMT, energy, REITs, internet, utilities, ecommerce, MBS, CMBS, equities, credit, commodities, FX, and corporate activity. We apply rigorous data science practices to deliver key KPIs daily that are focused, relevant, and ready to put into production.
We offer free trials. Our team is available to provide support for loading, validation, sample scripts, or other services you may need to generate insights from our data.
Investors, corporate researchers, and corporates can use our data to answer key business questions such as:
- How much are consumers spending with specific merchants/brands and how is that changing over time?
- Is the share of consumer spend at a specific merchant increasing or decreasing?
- How are consumers reacting to new products or services launched by merchants?
- For loyal customers, how is the share of spend changing over time?
- What is the company's market share in a region for similar customers?
- Is the company's loyal user base increasing or decreasing?
- Is the lifetime customer value increasing or decreasing?
Additional Use Cases:
- Use spending data to analyze sales/revenue broadly (sector-wide) or granularly (company-specific). Historically, our tracked consumer spend has correlated above 85% with company-reported data from thousands of firms. Users can sort and filter by many metrics and KPIs, such as sales and transaction growth rates and online or offline transactions, as well as view customer behavior within a geographic market at a state or city level.
- Reveal cohort consumer behavior to decipher long-term behavioral consumer spending shifts. Measure market share, wallet share, loyalty, consumer lifetime value, retention, demographics, and more.
- Study the effects of inflation via such metrics as increased total spend, ticket size, and number of transactions.
- Seek out alpha-generating signals or manage your business strategically with essential, aggregated transaction and spending data analytics.
Use Case Categories (our data supports innumerable use cases, and we look forward to working with new ones): 1. Market Research: Company Analysis, Company Valuation, Competitive Intelligence, Competitor Analysis, Competitor Analytics, Competitor Insights, Customer Data Enrichment, Customer Data Insights, Customer Data Intelligence, Demand Forecasting, Ecommerce Intelligence, Employee Pay Strategy, Employment Analytics, Job Income Analysis, Job Market Pricing, Marketing, Marketing Data Enrichment, Marketing Intelligence, Marketing Strategy, Payment History Analytics, Price Analysis, Pricing Analytics, Retail, Retail Analytics, Retail Intelligence, Retail POS Data Analysis, and Salary Benchmarking
2. Investment Research: Financial Services, Hedge Funds, Investing, Mergers & Acquisitions (M&A), Stock Picking, Venture Capital (VC)
3. Consumer Analysis: Consumer Data Enrichment, Consumer Intelligence
4. Market Data: Analytics, B2C Data Enrichment, Bank Data Enrichment, Behavioral Analytics, Benchmarking, Customer Insights, Customer Intelligence, Data Enhancement, Data Enrichment, Data Intelligence, Data Modeling, Ecommerce Analysis, Ecommerce Data Enrichment, Economic Analysis, Financial Data Enrichment, Financial Intelligence, Local Economic Forecasting, Location-based Analytics, Market Analysis, Market Analytics, Market Intelligence, Market Potential Analysis, Market Research, Market Share Analysis, Sales, Sales Data Enrichment, Sales Enablement, Sales Insights, Sales Intelligence, Spending Analytics, Stock Market Predictions, and Trend Analysis
Welcome to the official source for Employee Payroll Costing data for the City of Chicago. This dataset offers a clean, comprehensive view of the City's payroll information by employee.

About the Dataset: This has been extracted from the City of Chicago's Financial Management and Purchasing System (FMPS). FMPS is the system used to process all financial transactions made by the City of Chicago, ensuring accuracy and transparency in fiscal operations. This dataset includes useful details like employee name, pay element, pay period, fund, appropriation, department, and job title.

Data Disclaimer: The following data disclaimer governs your use of the dataset extracted from the Payroll Costing module of the City of Chicago's Financial Management and Purchasing System (FMPS Payroll Costing).

Point-in-Time Extract: The dataset provided herein represents a point-in-time extract from the FMPS Payroll Costing module and may not reflect real-time or up-to-date data.

Financial Statement Disclaimer – Timeframe and Limitations: This dataset is provided without audit. It is essential to note that this dataset is not a component of the City's Annual Comprehensive Financial Report (ACFR). As such, it remains preliminary and is subject to the end-of-year reconciliation process inherent to the City's annual financial procedures outlined in the ACFR.

Note on Pay Elements: All pay elements available in the FMPS Payroll Costing module have been included in this dataset. Previously published datasets, such as "Employee Overtime and Supplemental Earnings," contained only a subset of these pay elements.

Payroll Period: The dataset's timeframe is organized into 24 payroll periods. It is important to understand that these periods may or may not directly correspond to specific earnings periods.

Aggregating Data: The City of Chicago often has employees with the same name (including middle initials). It is vital to use the unique employee identifier code (EMPLOYEE DATASET ID) when aggregating at the employee level to avoid duplication.

Data Subject to Change: This dataset is subject to updates and modifications due to the course of business, including activities such as canceling, adjusting, and reissuing checks.

Data Disclosure Exemptions: Information disclosed in this dataset is subject to the FOIA Exemption Act, 5 ILCS 140/7 (Link: https://www.ilga.gov/legislation/ilcs/documents/000501400K7.htm)
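To illustrate the aggregation caveat above, here is a minimal pandas sketch that sums pay by the unique employee identifier rather than by name. The filename and the AMOUNT column are assumptions made for illustration; the identifier column name follows the description:

```python
import pandas as pd

# Placeholder filename; "AMOUNT" is an assumed pay-amount column for illustration.
payroll = pd.read_csv("employee_payroll_costing.csv")

# Employees can share full names (even middle initials), so aggregate on the
# unique identifier, not the name fields, to avoid double counting.
total_by_employee = (
    payroll.groupby("EMPLOYEE DATASET ID")["AMOUNT"]
    .sum()
    .sort_values(ascending=False)
)
print(total_by_employee.head())
```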
On October 29, 1929, the U.S. experienced the most devastating stock market crash in its history. The Wall Street Crash of 1929 set in motion the Great Depression, which lasted for twelve years and affected virtually all industrialized countries. In the United States, GDP fell to its lowest recorded level of just 57 billion U.S. dollars in 1933, before rising again shortly before the Second World War. After the war, GDP fluctuated, but it increased gradually until the Great Recession in 2008.

Real GDP: Real GDP allows us to compare GDP over time by adjusting all figures for inflation. In this case, all numbers have been adjusted to the value of the US dollar in FY2012. While GDP rose every year between 1946 and 2008, when adjusted for inflation it can be seen that real GDP dropped at least once in every decade except the 1960s and 2010s.

The Great Recession: Apart from the Great Depression, and immediately after WWII, there have been two times when both GDP and real GDP dropped together. The first was during the Great Recession, which lasted from December 2007 until June 2009 in the US, although its impact was felt for years afterward. After the collapse of the financial sector in the US, the government famously bailed out some of the country's largest banking and lending institutions. Since recovery began in late 2009, US GDP grew year-on-year, reaching 21.4 trillion dollars in 2019. The coronavirus pandemic and the associated lockdowns then saw GDP fall again, for the first time in a decade. As economic recovery from the pandemic has been compounded by supply chain issues, inflation, and rising global geopolitical instability, it remains to be seen what the future holds for the U.S. economy.
Notice of data discontinuation: Since the start of the pandemic, AP has reported case and death counts from data provided by Johns Hopkins University. Johns Hopkins University has announced that they will stop their daily data collection efforts after March 10. As Johns Hopkins stops providing data, the AP will also stop collecting daily numbers for COVID cases and deaths. The HHS and CDC now collect and visualize key metrics for the pandemic. AP advises using those resources when reporting on the pandemic going forward.
Update history: April 9, 2020; April 20, 2020; April 29, 2020; September 1, 2020; February 12, 2021 (new_deaths column); February 16, 2021.
The AP is using data collected by the Johns Hopkins University Center for Systems Science and Engineering as our source for outbreak caseloads and death counts for the United States and globally.
The Hopkins data is available at the county level in the United States. The AP has paired this data with population figures and county rural/urban designations, and has calculated caseload and death rates per 100,000 people. Be aware that caseloads may reflect the availability of tests -- and the ability to turn around test results quickly -- rather than actual disease spread or true infection rates.
This data is from the Hopkins dashboard that is updated regularly throughout the day. Like all organizations dealing with data, Hopkins is constantly refining and cleaning up their feed, so there may be brief moments where data does not appear correctly. At this link, you’ll find the Hopkins daily data reports, and a clean version of their feed.
The AP is updating this dataset hourly at 45 minutes past the hour.
To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.
Use AP's queries to filter the data or to join it to other datasets we've made available to help cover the coronavirus pandemic:
Filter cases by state here
Rank states by their status as current hotspots. Calculates the 7-day rolling average of new cases per capita in each state: https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker/workspace/query?queryid=481e82a4-1b2f-41c2-9ea1-d91aa4b3b1ac
Find recent hotspots within your state by running a query to calculate the 7-day rolling average of new cases per capita in each county: https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker/workspace/query?queryid=b566f1db-3231-40fe-8099-311909b7b687&showTemplatePreview=true
Join county-level case data to an earlier dataset released by AP on local hospital capacity here. To find out more about the hospital capacity dataset, see the full details.
Pull the 100 counties with the highest per-capita confirmed cases here
Rank all the counties by the highest per-capita rate of new cases in the past 7 days here. Be aware that because this ranks per-capita caseloads, very small counties may rise to the very top, so take into account raw caseload figures as well.
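For users working outside data.world, the per-capita hotspot metric used in the queries above can be approximated locally with pandas. This is a sketch under assumed column names (fips, date, cumulative_cases, population); the actual Hopkins/AP field names may differ:

```python
import pandas as pd

# Assumed columns: fips, date, cumulative_cases, population.
df = pd.read_csv("jhu_county_cases.csv", parse_dates=["date"])
df = df.sort_values(["fips", "date"])

# Cumulative counts -> daily new cases (clipped so backward revisions don't go
# negative), then a 7-day rolling mean scaled per 100,000 residents.
df["new_cases"] = df.groupby("fips")["cumulative_cases"].diff().clip(lower=0)
rolling = df.groupby("fips")["new_cases"].transform(lambda s: s.rolling(7).mean())
df["new_cases_per_100k_7d"] = rolling / df["population"] * 100_000

latest = df[df["date"] == df["date"].max()]
print(latest.nlargest(10, "new_cases_per_100k_7d")[["fips", "new_cases_per_100k_7d"]])
```

As the note above warns, very small counties can dominate this ranking, so raw caseloads should be checked alongside the per-capita figures.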
The AP has designed an interactive map to track COVID-19 cases reported by Johns Hopkins.
Johns Hopkins timeseries data:
- Johns Hopkins pulls data regularly to update their dashboard. Once a day, around 8pm EDT, Johns Hopkins adds the counts for all areas they cover to the timeseries file. These counts are snapshots of the latest cumulative counts provided by the source on that day. This can lead to inconsistencies if a source updates their historical data for accuracy, either increasing or decreasing the latest cumulative count.
- Johns Hopkins periodically edits their historical timeseries data for accuracy. They provide a file documenting all errors in their timeseries files that they have identified and fixed here
This data should be credited to Johns Hopkins University COVID-19 tracking project
Single and multi-family (less than 7 units) property characteristics collected and maintained by the Assessor's Office for all of Cook County, from 1999 to present. The office uses this data primarily for valuation and reporting. When working with Parcel Index Numbers (PINs), make sure to zero-pad them to 14 digits; some datasets may lose leading zeros for PINs when downloaded. Current property class codes, their levels of assessment, and descriptions can be found on the Assessor's website. Note that class code details can change across time. This data is improvement-level: "improvements" are individual buildings on a parcel. Each row in a given year corresponds to a building, e.g. two rows for the same parcel in one year means the parcel has more than one building. Data will be updated bi-weekly. Rowcount and characteristics for the current year are only final once the Assessor has certified the assessment roll for all townships. Depending on the time of year, some third-party and internal data will be missing for the most recent year. Assessments mailed this year represent values from last year, so this isn't an issue; by the time the Data Department models values for this year, those data will have populated. NOTE: The Assessor's Office has recently changed the way Home Improvement Exemptions (HIEs) are tracked in its data. HIEs "freeze" a property's characteristics for a period of time with the intention of encouraging owners to improve their property without fear of assessment increases. Historically, the updated, "improved" characteristics were saved in a separate file. However, in more recent years, the improved characteristics are saved in the main characteristics file. As such, the records in this data set from before 2021 do NOT include HIE characteristic updates, while those after and including 2021 DO include those updates. For more information on HIEs, see the Assessor's Data Department wiki. For more information on how this data is used to estimate property values, see the Assessor's residential modeling code on GitHub. Township codes can be found in the legend of this map. For more information on the sourcing of attached data and the preparation of this dataset, see the Assessor's Standard Operating Procedures for Open Data on GitHub. Read about the Assessor's 2023 Open Data Refresh.
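Because spreadsheet tools and numeric parsers strip leading zeros, the PIN zero-padding advice above is worth automating on load. A minimal sketch, assuming a CSV export with a pin column (the filename and column name are illustrative):

```python
import pandas as pd

# Read PINs as strings; a numeric dtype silently drops the leading zeros.
parcels = pd.read_csv("cook_county_characteristics.csv", dtype={"pin": str})

# Restore the canonical 14-digit form.
parcels["pin"] = parcels["pin"].str.zfill(14)
assert parcels["pin"].str.len().eq(14).all()
```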
The American Community Survey (ACS) is an ongoing survey that provides data every year -- giving communities the current information they need to plan investments and services. The ACS covers a broad range of topics about social, economic, demographic, and housing characteristics of the U.S. population. Much of the ACS data provided on the Census Bureau's Web site are available separately by age group, race, Hispanic origin, and sex. Summary files, Subject tables, Data profiles, and Comparison profiles are available for the nation, all 50 states, the District of Columbia, Puerto Rico, every congressional district, every metropolitan area, and all counties and places with populations of 65,000 or more. Subject tables provide an overview of the estimates available in a particular topic. The data are presented as population counts and percentages. There are over 16,000 variables in this dataset.
https://www.datainsightsmarket.com/privacy-policy
The size of the US Data Center Industry market was valued at USD XX Million in 2023 and is projected to reach USD XXX Million by 2032, with an expected CAGR of 6.00% during the forecast period. A data center is a facility that houses computer systems and networking equipment for storing, processing, and transmitting data. It is the infrastructure on which organizations carry out their IT operations and host websites, email servers, and database servers. Data centers are therefore essential to businesses of any size, from small start-ups to large enterprises, since they enable digital transformation and keep business applications available.

The US data center industry is one of the largest and most developed in the world. The country boasts robust digital infrastructure, abundant energy resources, and a highly skilled workforce, making it an attractive destination for data center operators. Drivers of the US data center market include the growing adoption of cloud computing, the Internet of Things (IoT), and high-performance computing requirements. Leading technology companies and cloud service providers have established major data center footprints in the US, mostly in key regions such as Silicon Valley, Northern Virginia, and Dallas. These data centers support applications ranging from e-commerce and streaming services to artificial intelligence and financial services. As demand for data center capacity increases, the US data center industry will continue to prosper as the world's hub for reliable and scalable solutions.

Recent developments include: February 2023: H5 Data Centers, a colocation and wholesale data center operator, announced the expansion of Southern Telecom into its data center at 345 Courtland Street in Atlanta, Georgia. Southern Telecom is one of the top communication service providers in the Southeast. Customers in Alabama, Georgia, Florida, and Mississippi will receive better service due to the expansion of this low-latency fiber optic network. December 2022: DigitalBridge Group, Inc. and IFM Investors announced completing their previously announced transaction in which funds affiliated with the investment management platform of DigitalBridge and an affiliate of IFM Investors acquired all outstanding common shares of Switch, Inc. for approximately USD 11 billion, including the repayment of outstanding debt. October 2022: Flexential, a provider of data center colocation, cloud computing, and connectivity, made three additional data centers in Charlotte, Nashville, and Louisville available to its cloud customers. By the end of the year, clients will have access to more than 220MW of hybrid IT capacity spread across 40 data centers in 19 markets, well aligned with Flexential's 2022 ambition to add 33MW of new, sustainable data center development projects.

Key drivers for this market are: High Mobile penetration, Low Tariff, and Mature Regulatory Authority; Successful Privatization and Liberalization Initiatives. Potential restraints include: Difficulties in Customization According to Business Needs. Notable trends are: Other key industry trends covered in the report.
This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that have occurred in the City of Chicago over the past year, minus the most recent seven days of data. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at 312.745.6071 or RandD@chicagopolice.org. Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited.
The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user. The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use. Data is updated daily Tuesday through Sunday. The dataset contains more than 65,000 records/rows of data and cannot be viewed in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as Wordpad, to view and search. To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://bit.ly/rk5Tpc.
https://creativecommons.org/publicdomain/zero/1.0/
Data is collected because of public interest in how the City's budget is being spent on salary and overtime pay for all municipal employees. Data is input into the City's Personnel Management System ("PMS") by the respective user Agencies. Each record represents the following statistics for every city employee: Agency, Last Name, First Name, Middle Initial, Agency Start Date, Work Location Borough, Job Title Description, Leave Status as of the close of the FY (June 30th), Base Salary, Pay Basis, Regular Hours Paid, Regular Gross Paid, Overtime Hours worked, Total Overtime Paid, and Total Other Compensation (i.e. lump sum and/or retro payments). This data can be used to analyze how the City's financial resources are allocated and how much of the City's budget is being devoted to overtime. The reader of this data should be aware that increments of salary increases received over the course of any one fiscal year will not be reflected. All that is captured is the employee's final base and gross salary at the end of the fiscal year.
NOTE: As a part of FISA-OPA’s routine process for reviewing and releasing Citywide Payroll Data, data for some agencies (specifically NYC Police Department (NYPD) and the District Attorneys’ Offices (Manhattan, Kings, Queens, Richmond, Bronx, and Special Narcotics)) have been redacted since they are exempt from disclosure pursuant to the Freedom of Information Law, POL § 87(2)(f), on the ground that disclosure of the information could endanger the life and safety of the public servants listed thereon. They are further exempt from disclosure pursuant to POL § 87(2)(e)(iii), on the ground that any release of the information would identify confidential sources or disclose confidential information relating to a criminal investigation, and POL § 87(2)(e)(iv), on the ground that disclosure would reveal non-routine criminal investigative techniques or procedures.
This is a dataset hosted by the City of New York. The city has an open data platform found here, and they update their information according to the amount of data that is brought in. Explore New York City using Kaggle and all of the data sources available through the City of New York organization page!
This dataset is maintained using Socrata's API and Kaggle's API. Socrata has assisted countless organizations with hosting their open data and has been an integral part of the process of bringing more data to the public.
Cover photo by Dean Rose on Unsplash
Unsplash Images are distributed under a unique Unsplash License.
This dataset contains aggregate data on violent index victimizations at the quarter level of each year (i.e., January – March, April – June, July – September, October – December), from 2001 to the present (1991 to present for homicides), with a focus on those related to gun violence. Index crimes are 10 crime types selected by the FBI (codes 1-4) for special focus due to their seriousness and frequency. This dataset includes only those index crimes that involve bodily harm or the threat of bodily harm and are reported to the Chicago Police Department (CPD). Each row is aggregated up to victimization type, age group, sex, race, and whether the victimization was domestic-related. Aggregating at the quarter level provides large enough blocks of incidents to protect anonymity while allowing the end user to observe inter-year and intra-year variation. Any row where there were fewer than three incidents during a given quarter has been deleted to help prevent re-identification of victims. For example, if there were two domestic criminal sexual assaults during January to March 2020, all victims associated with those incidents have been removed from this dataset. Human trafficking victimizations have been aggregated separately due to the extremely small number of victimizations.

This dataset includes a "GUNSHOT_INJURY_I" column to indicate whether the victimization involved a shooting, showing either Yes ("Y"), No ("N"), or Unknown ("UKNOWN"). For homicides, injury descriptions are available dating back to 1991, so the shooting column will read either "Y" or "N" to indicate whether the homicide was a fatal shooting or not. For non-fatal shootings, data is only available as of 2010. As a result, for any non-fatal shootings that occurred from 2010 to the present, the shooting column will read "Y". Non-fatal shooting victims will not be included in this dataset prior to 2010; they will be included in the authorized dataset, but with "UNKNOWN" in the shooting column.

The dataset is refreshed daily, but excludes the most recent complete day to allow CPD time to gather the best available information. Each time the dataset is refreshed, records can change as CPD learns more about each victimization, especially the most recent ones. The data on the Mayor's Office Violence Reduction Dashboard is updated daily with an approximately 48-hour lag. As cases are passed from the initial reporting officer to the investigating detectives, some recorded data about incidents and victimizations may change once additional information arises. Regularly updated datasets on the City's public portal may change to reflect new or corrected information.

How does this dataset classify victims? The methodology by which this dataset classifies victims of violent crime differs by victimization type. Homicide and non-fatal shooting victims: a victimization is considered a homicide victimization or non-fatal shooting victimization depending on its presence in CPD's homicide victims data table or its shooting victims data table. A victimization is considered a homicide only if it is present in CPD's homicide data table, while a victimization is considered a non-fatal shooting only if it is present in CPD's shooting data tables and absent from CPD's homicide data table. To determine the IUCR code of homicide and non-fatal shooting victimizations, we defer to the incident IUCR code available in CPD's Crimes, 2001-present dataset (available on the City's open data portal).
If the IUCR code in CPD's Crimes dataset is inconsistent with the homicide/non-fatal shooting categorization, we defer to CPD's Victims dataset. For a criminal homicide, the only sensible IUCR codes are 0110 (first-degree murder) or 0130 (second-degree murder). For a non-fatal shooting, a sensible IUCR code must signify a criminal sexual assault, a robbery, or, most commonly, an aggravated battery. In rare instances, the IUCR code in CPD's Crimes and Vi
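The small-cell suppression rule described above (dropping any quarter-level cell with fewer than three incidents) can be expressed concisely. A hypothetical sketch, with the filename and column names invented for illustration:

```python
import pandas as pd

# Invented column names for illustration.
victims = pd.read_csv("victimizations.csv", parse_dates=["quarter_start"])
group_cols = ["quarter_start", "victimization_type", "age_group", "sex", "race", "domestic"]

# Count incidents per quarter-level cell, then drop any cell with fewer than
# three incidents, mirroring the re-identification safeguard described above.
cells = victims.groupby(group_cols).size().reset_index(name="n_incidents")
publishable = cells[cells["n_incidents"] >= 3]
print(publishable.head())
```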
https://www.icpsr.umich.edu/web/ICPSR/studies/38544/terms
The Check-In Dataset is the second public-use dataset in the Dunham's Data series, a unique data collection created by Kate Elswit (Royal Central School of Speech and Drama, University of London) and Harmony Bench (The Ohio State University) to explore questions and problems that make the analysis and visualization of data meaningful for dance history, through the case study of choreographer Katherine Dunham. The Check-In Dataset accounts for the comings and goings of Dunham's nearly 200 dancers, drummers, and singers and discerns who among them were working in the studio and theatre together over the fourteen years from 1947 to 1960. As with the Everyday Itinerary Dataset, the first public-use dataset from Dunham's Data, data on check-ins come from scattered sources. Given the information available, it has a greater level of ambiguity, as many dates are approximated in order to achieve an accurate chronological sequence. By showing who shared time and space together, the Check-In Dataset can be used to trace potential lines of transmission of embodied knowledge within and beyond the Dunham Company. Dunham's Data: Digital Methods for Dance Historical Inquiry is funded by the United Kingdom Arts and Humanities Research Council (AHRC AH/R012989/1, 2018-2022) and is part of Movement on the Move, a larger suite of ongoing digital collaborations by Bench and Elswit. The Dunham's Data team also includes digital humanities postdoctoral research assistant Antonio Jiménez-Mavillard and dance history postdoctoral research assistants Takiyah Nur Amin and Tia-Monique Uzor. For more information about Dunham's Data, please see the Dunham's Data website. Also, visit the Dunham's Data research blog to view the interactive visualizations based on Dunham's Data.
A complete, historic universe of Cook County parcels with attached geographic, governmental, and spatial data. When working with Parcel Index Numbers (PINs), make sure to zero-pad them to 14 digits; some datasets may lose leading zeros for PINs when downloaded. Additional notes: Data is attached via spatial join (st_contains) to each parcel's centroid. Centroids are based on Cook County parcel shapefiles. Older properties may be missing coordinates and thus also missing attached spatial data (usually they are missing a parcel boundary in the shapefile). Newer properties may be missing a mailing or property address, as they need to be assigned one by the postal service. Attached spatial data does NOT go all the way back to 1999; it is only available for more recent years, primarily those after 2012. The universe contains data for the current tax year, which may not be complete or final; PINs can still be added and removed up until the Board of Review closes appeals. Data will be updated monthly. Rowcount and characteristics for a given year are final once the Assessor has certified the assessment roll for all townships. Depending on the time of year, some third-party and internal data will be missing for the most recent year. Assessments mailed this year represent values from last year, so this isn't an issue; by the time the Data Department models values for this year, those data will have populated. Current property class codes, their levels of assessment, and descriptions can be found on the Assessor's website. Note that class code details can change across time. Due to discrepancies between the systems used by the Assessor's and Clerk's offices, tax_district_code is not currently up-to-date in this table. For more information on the sourcing of attached data and the preparation of this dataset, see the Assessor's data architecture repo on GitLab. Read about the Assessor's 2022 Open Data Refresh.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This dataset contains the Town's Year-to-Date Budget and Actuals for Fiscal Years 2016 through 2019. Fiscal years run from July 1 to June 30. The data comes from the Town's Enterprise Resource Planning (ERP) software and is subject to change until the year's final audit is complete, which typically occurs by October of the following fiscal year. For example, revenues received may be posted back to a previous month, or expenditures may be reclassified from one expense category to another throughout the year. This data is maintained in a flexible way to produce a variety of financial reports as required by law, including the Town's annual Adopted Budget and Comprehensive Annual Financial Report (CAFR). These reports can be found on the Town's website through the following links: Town of Chapel Hill Adopted Budget; Town of Chapel Hill CAFR.