Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Charts, Histograms, and Time Series
• Create a histogram graph from band values of an image collection
• Create a time series graph from band values of an image collection
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
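The paper's central point can be illustrated with a small sketch (hypothetical data, and matplotlib rather than the authors' Excel templates): two samples with identical means produce indistinguishable bar graphs, while univariate scatterplots reveal their very different distributions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Two hypothetical small samples (n = 8) with the same mean (5.0)
# but very different distributions: roughly symmetric vs. bimodal.
symmetric = np.array([4.1, 4.6, 4.9, 5.0, 5.1, 5.2, 5.5, 5.6])
bimodal = np.array([2.0, 2.1, 2.2, 2.3, 7.7, 7.8, 7.9, 8.0])

fig, (ax_bar, ax_scatter) = plt.subplots(1, 2, figsize=(8, 3))

# A bar graph collapses each group to its mean: the two bars are identical.
ax_bar.bar(["symmetric", "bimodal"], [symmetric.mean(), bimodal.mean()])
ax_bar.set_title("Bar graph (means only)")

# A univariate scatterplot shows every observation, exposing the difference.
for i, sample in enumerate([symmetric, bimodal]):
    ax_scatter.scatter(np.full(sample.size, i), sample)
ax_scatter.set_xticks([0, 1])
ax_scatter.set_xticklabels(["symmetric", "bimodal"])
ax_scatter.set_title("Univariate scatterplot (all points)")
fig.savefig("scatter_vs_bar.png")
```

Both groups here have a mean of exactly 5.0, so the bar graph alone would suggest no difference at all.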
By Center for Municipal Finance [source]
The project that led to the creation of this dataset received funding from the Center for Corporate and Securities Law at the University of San Diego School of Law. The dataset itself can be accessed through a GitHub repository or on its dedicated website.
The dataset's columns cover a range of variables relevant to analyzing credit ratings, but the available description does not specify them. To understand the column labels and their corresponding attributes or measurements, further exploration of the data or additional resources may be required.
Understanding the Data
The dataset consists of several columns that provide essential information about credit ratings and fixed income securities. Familiarize yourself with the column names and their meanings to better understand the data:
- Column 1: [Credit Agency]
- Column 2: [Issuer Name]
- Column 3: [CUSIP/ISIN]
- Column 4: [Rating Type]
- Column 5: [Rating Source]
- Column 6: [Rating Date]
Exploratory Data Analysis (EDA)
Before diving into detailed analysis, start by performing exploratory data analysis to get an overview of the dataset.
Identify Unique Values: Explore each column's unique values to understand rating agencies, issuers, rating types, sources, etc.
Frequency Distribution: Analyze the frequency distribution of various attributes like credit agencies or rating types to identify any imbalances or biases in the data.
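The two EDA steps above can be sketched in pandas. The frame and column names here are hypothetical stand-ins, since the dataset's real headers are not documented:

```python
import pandas as pd

# Hypothetical frame standing in for the ratings file; check the real
# column names against the downloaded dataset.
df = pd.DataFrame({
    "rating_agency": ["Moody's", "S&P", "Fitch", "S&P", "Moody's"],
    "rating": ["Aa2", "AA", "AA", "AA-", "Aa3"],
})

# Unique values: which agencies and rating grades appear at all?
agencies = df["rating_agency"].unique()
print(agencies)

# Frequency distribution: is one agency over-represented in the data?
counts = df["rating_agency"].value_counts()
print(counts)
```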
Data Visualization
Visualizing your data can provide insights that are difficult to derive from tabular representation alone. Utilize various visualization techniques such as bar charts, pie charts, histograms, or line graphs based on your specific objectives.
For example:
- Plotting a histogram of each credit agency's ratings can help you understand their distribution across different categories.
- A time-series line graph can show how ratings have evolved over time for specific issuers or industries.
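A minimal sketch of both chart types, assuming letter ratings have already been mapped to a hypothetical numeric scale (1 = highest grade); the column names are placeholders:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Hypothetical ratings for one issuer on an assumed numeric scale.
ratings = pd.DataFrame({
    "rating_date": pd.to_datetime(
        ["2018-03-01", "2019-01-15", "2019-11-20", "2020-06-05", "2021-02-10"]),
    "numeric_rating": [2, 2, 3, 3, 4],
})

fig, (ax_hist, ax_line) = plt.subplots(1, 2, figsize=(8, 3))

# Histogram: how ratings are distributed across categories.
ax_hist.hist(ratings["numeric_rating"], bins=range(1, 6))
ax_hist.set_xlabel("numeric rating")

# Time series: how this issuer's rating evolved over time.
ts = ratings.sort_values("rating_date")
ax_line.plot(ts["rating_date"], ts["numeric_rating"], marker="o")
ax_line.set_xlabel("rating date")
fig.savefig("ratings_overview.png")
```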
Analyzing Ratings Performance
One of the main objectives of using credit rating datasets is to assess the performance and accuracy of different credit agencies. Conducting a thorough analysis can help you understand how ratings have changed over time and evaluate the consistency of each agency's ratings.
Rating Changes Over Time: Analyze how ratings for specific issuers or industries have changed over different periods.
Comparing Rating Agencies: Compare ratings from different agencies to identify any discrepancies or trends. Are there consistent differences in their assessments?
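Agency comparison can be sketched by pivoting a long table of ratings so each agency becomes a column (hypothetical issuers and a made-up numeric scale):

```python
import pandas as pd

# Hypothetical long-format ratings on an assumed numeric scale (1 = best);
# pivoting puts agencies side by side so discrepancies are easy to spot.
long = pd.DataFrame({
    "issuer": ["City A", "City A", "City B", "City B"],
    "agency": ["Moody's", "S&P", "Moody's", "S&P"],
    "numeric_rating": [2, 2, 3, 5],
})
wide = long.pivot_table(index="issuer", columns="agency", values="numeric_rating")

# Absolute gap between the two agencies for the same issuer.
wide["disagreement"] = (wide["Moody's"] - wide["S&P"]).abs()
print(wide)
```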
Detecting Rating Trends
The dataset allows you to detect trends and correlations between various factors related to credit ratings.
- Credit Rating Analysis: This dataset can be used for analyzing credit ratings and trends of various fixed income securities. It provides historical credit rating data from different rating agencies, allowing researchers to study the performance, accuracy, and consistency of these ratings over time.
- Comparative Analysis: The dataset allows for comparative analysis between different agencies' credit ratings for a specific security or issuer. Researchers can compare the ratings assigned by different agencies and identify any discrepancies or differences in their assessments. This analysis can help in understanding variations in methodologies and improving the transparency of credit rating processes.
If you use this dataset in your research, please credit the original authors. Data Source
License: Dataset copyright by authors.
You are free to:
- Share - copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt - remix, transform, and build upon the material for any purpose, even commercially.
You must:
- Give appropriate credit - provide a link to the license, and indicate if changes were made.
- ShareAlike - distribute your contributions under the same license as the original.
- Keep intact - all ...
This dataset contains information from Turkey's largest online real estate and car sales platform. The dataset covers a 3-month period from January 1, 2023, to March 31, 2023, and focuses solely on Volkswagen brand cars. The dataset consists of 13 variables, including customer_id, advertisement_number, brand, model, variant, year, kilometer, color, transmission, fuel, city, ad_date, and price.
The dataset provides valuable insights into the sales and advertising trends for Volkswagen cars in Turkey during the first quarter of 2023. The data can be used to identify patterns and trends in consumer behavior, such as which models are most popular, the most common transmission type, and the most common fuel type. The data can also be used to evaluate the effectiveness of advertising campaigns and to identify which cities have the highest demand for Volkswagen cars.
Overall, this dataset provides a rich source of information for anyone interested in the automotive industry in Turkey or for those who want to explore the trends in Volkswagen car sales during the first quarter of 2023.
Here are the descriptions of the variables in the dataset:
customer_id: Unique identifier for the customer who placed the advertisement
advertisement_number: Unique identifier number for the advertisement
brand: The brand of the car (in this dataset, it is always Volkswagen)
model: The model of the car (e.g., Golf, Polo, Passat, etc.)
variant: The variant of the car (e.g., 1.6 FSI Midline, 2.0 TDI Comfortline, etc.)
year: The year that the car was manufactured
kilometer: The distance that the car has been driven (in kilometers)
color: The color of the car
transmission: The type of transmission (manual or automatic)
fuel: The type of fuel used by the car (e.g., petrol, diesel, hybrid, etc.)
city: The city where the advertisement was placed
ad_date: The date when the advertisement was placed
price: The asking price for the car
Here are some possible analyses and insights that can be derived from this dataset:
Trend analysis: It is possible to analyze the trend of Volkswagen car sales over the three-month period covered by the dataset. This can be done by plotting the number of advertisements placed over time.
Model popularity analysis: It is possible to determine which Volkswagen car models are the most popular based on the number of advertisements placed for each model. This can be done by grouping the data by model and counting the number of advertisements for each model.
Price analysis: It is possible to analyze the distribution of prices for Volkswagen cars. This can be done by creating a histogram of the prices.
Kilometer analysis: It is possible to analyze the distribution of kilometers driven for Volkswagen cars. This can be done by creating a histogram of the kilometer values.
Geographic analysis: It is possible to analyze the distribution of Volkswagen car sales across different cities. This can be done by grouping the data by city and counting the number of advertisements for each city.
Correlation analysis: It is possible to analyze the correlations between different variables, such as the year and price of the car or the kilometer and price of the car. This can be done by creating scatterplots of the variables and calculating correlation coefficients.
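Several of the analyses above can be sketched with pandas groupby, value_counts, and corr on toy rows shaped like the variables described (all values invented for illustration):

```python
import pandas as pd

# Toy rows in the shape of the variables described above (values invented).
ads = pd.DataFrame({
    "model": ["Golf", "Polo", "Golf", "Passat", "Golf"],
    "city": ["Istanbul", "Ankara", "Istanbul", "Izmir", "Ankara"],
    "year": [2015, 2018, 2012, 2020, 2016],
    "kilometer": [120_000, 60_000, 180_000, 25_000, 95_000],
    "price": [450_000, 520_000, 380_000, 780_000, 470_000],
})

# Model popularity: number of advertisements per model.
popularity = ads["model"].value_counts()

# Geographic analysis: number of advertisements per city.
per_city = ads.groupby("city").size()

# Correlation analysis: newer and lower-mileage cars should fetch more.
corr = ads[["year", "kilometer", "price"]].corr()
print(popularity, per_city, corr, sep="\n\n")
```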
Data Cleaning: Some data cleaning processes can be performed on the dataset. Firstly, the missing values can be checked, and the missing values may need to be filled or removed from the dataset. Additionally, the date formats in the dataset and the data types of the variables can be checked and adjusted accordingly. Outliers in the dataset may also need to be checked and corrected or removed.
These cleaning processes in the dataset will help obtain healthier results for data analysis and machine learning algorithms.
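The cleaning steps described can be sketched as follows (hypothetical rows; the 1.5x-IQR outlier rule is one common choice, not a prescription from the dataset authors):

```python
import pandas as pd

# Hypothetical raw rows with the issues mentioned above: missing values,
# string dates, and one implausible outlier price.
raw = pd.DataFrame({
    "ad_date": ["2023-01-05", "2023-01-18", "2023-02-11", "2023-02-25",
                None, "2023-03-02", "2023-03-20"],
    "price": [450_000.0, 470_000.0, None, 510_000.0,
              500_000.0, 480_000.0, 99_000_000.0],
})

# 1. Check for missing values and drop those rows (filling is the alternative).
clean = raw.dropna().copy()

# 2. Fix data types: parse the date strings into proper datetimes.
clean["ad_date"] = pd.to_datetime(clean["ad_date"])

# 3. Remove outliers, here with a simple 1.5x-IQR rule on price.
q1, q3 = clean["price"].quantile([0.25, 0.75])
iqr = q3 - q1
clean = clean[clean["price"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]
print(clean)
```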
As a result, this dataset is a workable dataset for data cleaning and a valuable resource that interested parties can use in their data analysis and machine learning projects.
All of these analyses can be visualized using various graphs and charts, such as line charts, histograms, and scatterplots.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset compares FIXED-line broadband internet speeds in five cities:
- Melbourne, AU
- Bangkok, TH
- Shanghai, CN
- Los Angeles, US
- Alice Springs, AU
ERRATA: Data is for Q3 2020, but some files were incorrectly labelled 02-20 or June 20. They should all read Sept 20, or 09-20 (Q3 20), rather than Q2. Will rename and reload. Amended in v7.
*Lines of data for each geojson file; a line equates to a 600m^2 location, including total tests, devices used, and average upload and download speed:
- MEL: 16,181 locations/lines => 0.85M speedtests (16.7 tests per 100 people)
- SHG: 31,745 lines => 0.65M speedtests (2.5/100pp)
- BKK: 29,296 lines => 1.5M speedtests (14.3/100pp)
- LAX: 15,899 lines => 1.3M speedtests (10.4/100pp)
- ALC: 76 lines => 500 speedtests (2/100pp)
Geojsons of these 2* by 2* extracts for MEL, BKK, SHG now added, and LAX added v6. Alice Springs added v15.
This dataset unpacks, geospatially, data summaries provided in Speedtest Global Index (linked below). See Jupyter Notebook (*.ipynb) to interrogate geo data. See link to install Jupyter.
** To Do: Will add Google Map versions so everyone can see without installing Jupyter.
- Link to Google Map (BKK) added below. Key: Green > 100Mbps (Superfast); Black > 500Mbps (Ultrafast). CSV provided. Code in Speedtestv1.1.ipynb Jupyter Notebook.
- Community (Whirlpool) surprised [Link: https://whrl.pl/RgAPTl] that Melbourne has 20% of locations at or above 100Mbps. Suggest plotting the top 20% on a map for the community. Google Map link now added (and tweet).
** Python (bounding-box extracts with the geopandas .cx indexer; note .cx expects min:max slice order, so the reversed LAX and ALC slices have been corrected here)
melb = au_tiles.cx[144:146, -39:-37]  # Melbourne Lat/Lon extract
shg = tiles.cx[120:122, 30:32]        # Shanghai Lat/Lon extract
bkk = tiles.cx[100:102, 13:15]        # Bangkok Lat/Lon extract
lax = tiles.cx[-120:-118, 33:35]      # Los Angeles Lat/Lon extract
ALC = tiles.cx[132:134, -24:-22]      # Alice Springs Lat/Lon extract
Histograms (v9) and data visualisations (v3, 5, 9, 11) will be provided. Data sourced from: this is an extract of Speedtest Open Data available on Amazon AWS (link below - opendata.aws).
** VERSIONS
v24. Add tweet and Google Map of Top 20% (over 100Mbps locations) in MEL Q3 22. Add v1.5 MEL-Superfast notebook, and CSV of results (now on Google Map; link below).
v23. Add graph of 2022 broadband distribution, and compare 2020 - 2022. Updated v1.4 Jupyter notebook.
v22. Add Import ipynb; workflow-import-4cities.
v21. Add Q3 2022 data; five cities inc ALC. Geojson files. (2020: 4.3M tests; 2022: 2.9M tests.)
v20. Speedtest - Five Cities inc ALC.
v19. Add ALC2.ipynb.
v18. Add ALC line graph.
v17. Added ipynb for ALC. Added ALC to title.
v16. Load Alice Springs data Q2 21 - csv. Added Google Map link of ALC.
v15. Load Melb Q1 2021 data - csv.
v14. Added Melb Q1 2021 data - geojson.
v13. Added Twitter link to pics.
v12. Add Line-Compare pic (fastest 1000 locations) inc Jupyter (nbn-intl-v1.2.ipynb).
v11. Add Line-Compare pic, plotting four cities on one graph.
v10. Add four histograms in one pic.
v9. Add histogram for four cities. Add NBN-Intl.v1.1.ipynb (Jupyter Notebook).
v8. Renamed LAX file to Q3, rather than 03.
v7. Amended file names of BKK files to correctly label as Q3, not Q2 or 06.
v6. Added LAX file.
v5. Add screenshot of BKK Google Map.
v4. Add BKK Google Map (link below), and BKK csv mapping files.
v3. Replaced MEL map with big-key version. Previous key was very tiny in top right corner.
v2. Uploaded MEL, SHG, BKK data and Jupyter Notebook.
v1. Metadata record.
** LICENCE: The AWS data licence on the Speedtest data is "CC BY-NC-SA 4.0", so use of this data must be:
- non-commercial (NC)
- share-alike (SA) (reuse must carry the same licence)
This restricts the standard CC-BY Figshare licence.
** Other uses of Speedtest Open Data; - see link at Speedtest below.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Carefully extracted subset from the ImageNet dataset for Night Vision colorization task.
The night vision images are created by preprocessing the actual images. Preprocessing steps included:
1. Converting to grayscale
2. Equalizing histogram
3. Adding noise
4. Making the image darker
5. Adding a vignette effect
6. Resizing the image to 224x224
The colorful images are provided as ground truth alongside the night vision images.
The only preprocessing needed while training a model is loading the ground truths correctly as RGB; normalizing the night vision images may also be useful.
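The six preprocessing steps can be approximated in NumPy as a sketch. This is not the dataset authors' actual pipeline; parameters such as the noise level and vignette strength are guesses:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_night_vision(img, size=224):
    """Approximate night-vision preprocessing; a sketch, not the authors' code."""
    # 1. Grayscale: weighted sum of the RGB channels.
    gray = img @ np.array([0.299, 0.587, 0.114])
    # 2. Histogram equalization via the normalized cumulative histogram.
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    cdf = hist.cumsum() / hist.sum()
    eq = cdf[gray.astype(np.uint8)] * 255
    # 3. Additive Gaussian noise (noise level is a guess).
    noisy = eq + rng.normal(0, 8, eq.shape)
    # 4. Darken the image.
    dark = noisy * 0.5
    # 5. Vignette: radial falloff toward the corners (strength is a guess).
    h, w = dark.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    vignetted = dark * (1 - 0.6 * r)
    # 6. Nearest-neighbour resize to size x size.
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    out = vignetted[np.ix_(ys, xs)]
    return np.clip(out, 0, 255).astype(np.uint8)

# Apply to a random stand-in for an RGB ImageNet image.
night = to_night_vision(rng.integers(0, 256, (300, 400, 3)).astype(float))
print(night.shape)
```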