Number and value of loans by category
This dataset is designed for beginners to practice regression problems, particularly in the context of predicting house prices. It contains 1000 rows, with each row representing a house and various attributes that influence its price. The dataset is well-suited for learning basic to intermediate-level regression modeling techniques.
Beginner Regression Projects: This dataset can be used to practice building regression models such as Linear Regression, Decision Trees, or Random Forests. The target variable (house price) is continuous, making this an ideal problem for supervised learning techniques.
Feature Engineering Practice: Learners can create new features by combining existing ones, such as the price per square foot or age of the house, providing an opportunity to experiment with feature transformations.
Exploratory Data Analysis (EDA): You can explore how different features (e.g., square footage, number of bedrooms) correlate with the target variable, making it a great dataset for learning about data visualization and summary statistics.
Model Evaluation: The dataset allows for various model evaluation techniques such as cross-validation, R-squared, and Mean Absolute Error (MAE). These metrics can be used to compare the effectiveness of different models.
The dataset is highly versatile for a range of machine learning tasks. You can apply simple linear models to predict house prices based on one or two features, or use more complex models like Random Forest or Gradient Boosting Machines to understand interactions between variables.
It can also be used for dimensionality reduction techniques like PCA or to practice handling categorical variables (e.g., neighborhood quality) through encoding techniques like one-hot encoding.
This dataset is ideal for anyone wanting to gain practical experience in building regression models while working with real-world features.
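A minimal baseline sketch of the workflow described above, in Python with scikit-learn. The file name and the column names (square_footage, num_bedrooms, house_age, neighborhood_quality, price) are assumptions for illustration; substitute the actual headers from the CSV.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

df = pd.read_csv("house_prices.csv")  # hypothetical file name

# Hypothetical feature and target columns; adjust to the actual dataset.
X = df[["square_footage", "num_bedrooms", "house_age", "neighborhood_quality"]]
y = df["price"]

# One-hot encode the categorical neighborhood feature, pass numeric columns through.
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), ["neighborhood_quality"])],
    remainder="passthrough",
)

model = Pipeline([("prep", preprocess), ("rf", RandomForestRegressor(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))
```

The same pipeline can be swapped to LinearRegression or GradientBoostingRegressor to compare models with the metrics mentioned above.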
Fiscal Year 2010 life insurance payments, face value of insurance, and total number of policies, by state. Data were derived from actuarial reports, including the FY 2010 Statement of Cash Flows, FY 2010 Policy Exhibit, and FY 2010 State of Residency Report.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset provides both the number and value of insurance policies issued in Qatar, disaggregated by type of insurance (e.g., cars, property) and the nationality of the insurance company. It allows analysis of insurance activity from both volume and financial perspectives.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The methodology is the core component of any research work; it describes the methods used to obtain the results. Here, the whole research implementation is done in Python. The work involves the following steps:
1. Acquire Personality Dataset
Kaggle hosts a collection of machine learning datasets and data generators used by the machine learning community for analysis. The personality prediction dataset was acquired from the Kaggle website. It was collected (2016-2018) through an interactive online personality test constructed from the IPIP, and can be downloaded as a zip file by clicking on the link provided. The download consists of two CSV files (test.csv and train.csv). The test.csv file has no missing values, 7 attributes, and a final label output, and the dataset has multivariate characteristics. Data preprocessing is then performed to check for inconsistent behaviors or trends.
2. Data preprocessing
After data acquisition, the next step is to clean and preprocess the data. The available dataset has numerical features. The target value is a five-level personality label consisting of serious, lively, responsible, dependable and extraverted. The preprocessed dataset is then split into training and testing sets by passing the feature values, target values, and test size to the train_test_split method of the scikit-learn package. After splitting, the training data is used to fit the Logistic Regression and SVM models, and the test data is used to estimate the accuracy of the trained models.
3. Feature Extraction
The following items were presented on one page, and each was rated on a five-point scale using radio buttons. The order on the page was EXT1, AGR1, CSN1, EST1, OPN1, EXT2, etc. The scale was labeled 1 = Disagree, 3 = Neutral, 5 = Agree. A short sketch of aggregating these items into per-trait scores follows the list.
EXT1 I am the life of the party.
EXT2 I don't talk a lot.
EXT3 I feel comfortable around people.
EXT4 I am quiet around strangers.
EST1 I get stressed out easily.
EST2 I get irritated easily.
EST3 I worry about things.
EST4 I change my mood a lot.
AGR1 I have a soft heart.
AGR2 I am interested in people.
AGR3 I insult people.
AGR4 I am not really interested in others.
CSN1 I am always prepared.
CSN2 I leave my belongings around.
CSN3 I follow a schedule.
CSN4 I make a mess of things.
OPN1 I have a rich vocabulary.
OPN2 I have difficulty understanding abstract ideas.
OPN3 I do not have a good imagination.
OPN4 I use difficult words.
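A small sketch, under stated assumptions, of how the items above could be aggregated into per-trait scores. The item-to-trait mapping follows the item codes listed; train.csv is the file named in the dataset description, and no reverse-keying of negatively worded items is applied here, so the proper scoring key should be checked before interpretation.

```python
import pandas as pd

# Item-to-trait mapping taken from the question codes listed above.
TRAITS = {
    "EXT": ["EXT1", "EXT2", "EXT3", "EXT4"],
    "EST": ["EST1", "EST2", "EST3", "EST4"],
    "AGR": ["AGR1", "AGR2", "AGR3", "AGR4"],
    "CSN": ["CSN1", "CSN2", "CSN3", "CSN4"],
    "OPN": ["OPN1", "OPN2", "OPN3", "OPN4"],
}

df = pd.read_csv("train.csv")  # file name as given in the dataset description

# Raw per-trait mean of the 1-5 ratings. Note: negatively worded items
# (e.g., EXT2 "I don't talk a lot") are NOT reverse-keyed here; apply the
# actual scoring key before treating these as trait scores.
for trait, items in TRAITS.items():
    df[f"{trait}_mean"] = df[items].mean(axis=1)

print(df[[f"{t}_mean" for t in TRAITS]].head())
```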
4. Training the Model
Train/Test is a method to measure the accuracy of your model. It is called Train/Test because you split the data set into two sets, a training set and a testing set: 80% for training and 20% for testing. You train the model using the training set. In this model we trained our dataset using linear_model.LogisticRegression() and svm.SVC() from the sklearn package.
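A minimal sketch of this split-and-train step. The label column name "Personality" is hypothetical; replace it with the actual label column in train.csv.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import linear_model, svm

df = pd.read_csv("train.csv")            # training file named in the dataset description
X = df.drop(columns=["Personality"])     # "Personality" is a hypothetical label column name
y = df["Personality"]

# 80/20 split, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

log_reg = linear_model.LogisticRegression(max_iter=1000).fit(X_train, y_train)
svc = svm.SVC().fit(X_train, y_train)
```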
5. Personality Prediction Output
After training, the Logistic Regression and SVM models are tested on the held-out data and evaluated using cohen_kappa_score and accuracy_score.
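Continuing the sketch from step 4, a possible evaluation step using accuracy_score and cohen_kappa_score from sklearn.metrics:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Evaluate both trained models on the held-out test set.
for name, model in [("Logistic Regression", log_reg), ("SVM", svc)]:
    y_pred = model.predict(X_test)
    print(name,
          "accuracy:", accuracy_score(y_test, y_pred),
          "kappa:", cohen_kappa_score(y_test, y_pred))
```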
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset is a collection of 12,478 social media comments found on the official Facebook pages of ten Philippine newspapers: The Philippine Daily Inquirer, Manila Bulletin, The Philippine Star, The Manila Times, Sunstar Cebu, Sunstar Davao, Cebu Daily News, The Freeman, MindaNews, and The Mindanao Times, spanning the years 2015, 2017 and 2019. The comments contain terms related to the Moro identity and the Mamasapano Clash, the Marawi Siege and the establishment of BARMM in the southern Philippines, allowing researchers to study semantic fields with regard to Muslims and the relationship between the texts and the source newspaper, their region of origin, and political administration, among other variables. All comments in the dataset were downloaded through Facebook's Graph API via Facepager (Jünger & Keyling, 2019).
One CSV file (MMB151719SOCMED_v2.csv) is provided, along with a codebook that contains descriptions of the variables and codes used in the CSV file, and a Readme document with a changelog.
Each social media comment is annotated with the following metadata (a brief loading sketch follows the list):
object_id: identifier associated with the comment;
message: the textual string of the comment;
message_proc: the textual string of the comment after pre-processing;
lang_label: categorical value for the language of the comment (Tagalog (Filipino), Cebuano, English, Taglish, Bislog, Bislish, Trilingual or Other);
from_name: identifier of public pages (not profiles of individuals) leaving comments (NaN for profiles of individuals, 'NAME' for public pages besides the newspapers, otherwise, the page name of the newspaper);
created_time: date and time the comment was posted, as the string generated by the Facebook Graph API;
month_year: categorical value in the form string+YY (e.g. Jun-15) of the month and year when the comment was posted;
year: numerical value in the form YY;
newspaper: categorical value for the newspaper Facebook page under which the comment was found;
corpus: categorical value for comments from the main corpus or the side (control) corpus;
administration: categorical value for political administration (pbsa = President Benigno Aquino III, prrd = President Rodrigo Roa Duterte);
count: numerical value referring to the number of string sequences without spaces;
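A minimal loading sketch in Python, assuming only the file name and column names given above; the coded category values should be checked against the codebook before filtering.

```python
import pandas as pd

# Load the comment file named above; column names follow the codebook list.
df = pd.read_csv("MMB151719SOCMED_v2.csv")

# Example: number of comments per newspaper page and year.
counts = df.groupby(["newspaper", "year"])["object_id"].count()
print(counts)

# Example: distribution of language labels across the collection.
print(df["lang_label"].value_counts())
```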
The dataset may only be used for non-commercial purposes and is licensed under the CC BY-NC-SA 4.0 DEED.
V2 - 05/06/2024
Corrections
Corrections made to region to include Luzon, Visayas and Mindanao (as opposed to Mindanao, non-Mindanao);
Corrections made to administration coding.
This dataset is described by:
Cruz, F. A. (2024). A Multilingual Collection of Facebook Comments on the Moro Identity and Armed Conflict in the Southern Philippines. Journal of Open Humanities Data, 10(1), 41. DOI: https://doi.org/10.5334/johd.219
Bibliography
Jünger, J., & Keyling, T. (2019). Facepager: An application for automated data retrieval on the web (4.5.3) [Computer software]. https://github.com/strohne/Facepager/
This dataset (located by latitude and longitude) is a subset of the geochemical dataset found in Chap. C, Appendix 8, Disc 1, and used in this study of the Pittsburgh coal bed. That dataset is a compilation of data from the U.S. Geological Survey's (USGS) National Coal Resources Data System (NCRDS) USCHEM (U.S. geoCHEMical), The Pennsylvania State University (PSU), the West Virginia Economic and Geological Survey (WVGES), and the Ohio Division of Geological Survey (OHGS) coal quality databases as well as published U.S. Bureau of Mines (USBM) data. The metadata file for the complete dataset is found in Chap. C, Appendix 9, Disc 1 (please see it for more detailed information on this geochemical dataset). This subset of the geochemical data for the Pittsburgh coal bed includes ash yield, sulfur content, SO2 value, gross calorific value, arsenic content and mercury content for these records, as well as the ranking of these values, which is described later under the attributes in this metadata file. Analytical techniques are described in the references in Chap. C, Appendix 10, Disc 1. The analytical data are stored as text fields because many of the parameters contain letter qualifiers appearing after the numerical data values. The following is a list of the possible qualifier values: L - less than, G - greater than, N - not detected, or H - interference that cannot be easily resolved. Not all of these codes may be in this database.
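Because the analytical values are stored as text with optional trailing qualifier letters, a small parsing sketch may help; the exact formatting (e.g. "0.24L", with the letter appended directly to the number) is an assumption and should be checked against the actual records.

```python
import re

# Split an analytical text value such as "0.24L" into its numeric part and
# trailing qualifier (L = less than, G = greater than, N = not detected,
# H = unresolved interference), per the description above.
_QUALIFIED = re.compile(r"^\s*([0-9.]+)\s*([LGNH]?)\s*$")

def parse_value(text):
    m = _QUALIFIED.match(text)
    if not m:
        return None, None          # blank or unparseable entry
    value = float(m.group(1))
    qualifier = m.group(2) or None
    return value, qualifier

print(parse_value("0.24L"))  # -> (0.24, 'L')
```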
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of asymptotic and exact p-values.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset reflects incidents of crime in the City of Los Angeles dating back to 2020. This data is transcribed from original crime reports that are typed on paper and therefore there may be some inaccuracies within the data.
Variable definitions (variable: description (type)):
- DR_NO: Division of Records Number; official file number made up of a 2-digit year, area ID, and 5 digits. (Alphanumeric/String)
- Date Rptd: Date the crime was reported, MM/DD/YYYY. (Date and Time)
- DATE OCC: Date the crime occurred, MM/DD/YYYY. (Date and Time)
- TIME OCC: Time the crime occurred, in 24-hour military time. (Numeric/String)
- AREA: The LAPD has 21 Community Police Stations referred to as Geographic Areas within the department. These Geographic Areas are sequentially numbered from 1 to 21. (Numeric/String)
- AREA NAME: The 21 Geographic Areas or Patrol Divisions are also given a name designation that references a landmark or the surrounding community it is responsible for. For example, 77th Street Division is located at the intersection of South Broadway and 77th Street, serving neighborhoods in South Los Angeles. (String)
- Rpt Dist No: A four-digit code that represents a sub-area within a Geographic Area. All crime records reference the "RD" in which the crime occurred for statistical comparisons. Find LAPD Reporting Districts on the LA City GeoHub at http://geohub.lacity.org/datasets/c4f83909b81d4786aa8ba8a74a4b4db1_4 (Numeric/String)
- Part 1-2: Part 1 refers to serious felonies and Part 2 to less serious crimes. (Numeric)
- Crm Cd: Indicates the crime committed (same as Crime Code 1). (Numeric/String)
- Crm Cd Desc: Defines the Crime Code provided. (String)
- Mocodes: Modus operandi; activities associated with the suspect in commission of the crime. See the attached PDF for the list of MO codes in numerical order: https://data.lacity.org/api/views/y8tr-7khq/files/3a967fbd-f210-4857-bc52-60230efe256c?download=true&filename=MO%20CODES%20(numerical%20order).pdf (Numeric/String)
- Vict Age: Age of the victim. (Numeric/String)
- Vict Sex: F - Female, M - Male, X - Unknown. (String)
- Vict Descent: Descent code: A - Other Asian, B - Black, C - Chinese, D - Cambodian, F - Filipino, G - Guamanian, H - Hispanic/Latin/Mexican, I - American Indian/Alaskan Native, J - Japanese, K - Korean, L - Laotian, O - Other, P - Pacific Islander, S - Samoan, U - Hawaiian, V - Vietnamese, W - White, X - Unknown, Z - Asian Indian. (String)
- Premis Cd: The type of structure, vehicle, or location where the crime took place. (Numeric)
- Premis Desc: Defines the Premise Code provided. (String)
- Weapon Used Cd: The type of weapon used in the crime. (Numeric/String)
- Weapon Desc: Defines the Weapon Used Code provided. (String)
- Status: Status of the case (IC is the default). (String)
- Status Desc: Defines the Status Code provided. (String)
- Crm Cd 1: Indicates the crime committed. Crime Code 1 is the primary and most serious one; Crime Codes 2, 3, and 4 are respectively less serious offenses. Lower crime class numbers are more serious. (Numeric/String)
- Crm Cd 2: May contain a code for an additional crime, less serious than Crime Code 1. (Numeric/String)
- Crm Cd 3: May contain a code for an additional crime, less serious than Crime Code 1. (Numeric/String)
- Crm Cd 4: May contain a code for an additional crime, less serious than Crime Code 1. (Numeric/String)
- LOCATION: Street address of the crime incident, rounded to the nearest hundred block to maintain anonymity. (String)
- Cross Street: Cross street of the rounded address. (String)
- LAT: Latitude. (Numeric)
- LON: Longitude. (Numeric)
*NOTE: For variables whose data type is listed as "Numeric/String", the original source describes them as plain text, but the dataset contains numerical values for those variables; "Numeric/String" is used here for easier understanding.
Suggested Data Preprocessing:
Recommended Usage: This dataset can be used to study crime trends and their possible drivers, correlations between crimes and events, prediction of future crime by understanding crime patterns, visualizations of crime trends over years, places, victim sex, or weapon types, and more.
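A minimal trend-analysis sketch in Python with pandas; the file name is hypothetical and the column names follow the variable list above.

```python
import pandas as pd

df = pd.read_csv("crime_data_2020_present.csv")  # hypothetical file name

# Number of recorded incidents per year of occurrence.
df["DATE OCC"] = pd.to_datetime(df["DATE OCC"], errors="coerce")
crimes_per_year = df.groupby(df["DATE OCC"].dt.year).size()
print(crimes_per_year)

# Incident counts by area and victim sex, as one example breakdown.
by_area_sex = df.groupby(["AREA NAME", "Vict Sex"]).size().unstack(fill_value=0)
print(by_area_sex.head())
```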
https://www.nist.gov/open/license
This document details the data registration process for the previously published datasets from Additive Manufacturing Metrology Testbed (AMMT) parts, "Overhang Part X4," generated at the National Institute of Standards and Technology (NIST). The two datasets —one for process monitoring and the other for XCT inspection—covering four overhang parts, along with their descriptions were published in 2020. The published data have been well-received by the community, advancing the understanding of laser powder bed fusion additive manufacturing (AM). In the last four years, the NIST team encountered numerous questions regarding the published datasets, as the raw data were not easily interpretable for mining process-structure relationships. To support a wider range of research efforts across multiple disciplines, the NIST team conducted additional data analysis, resulting in a fully registered and well-documented dataset for publication. This document provides a detailed overview of the data processing pipeline and the multi-modal data registration techniques employed, including preprocessing, feature extraction, and data alignment. The final registered dataset consists solely of numerical values, fully aligned with the machine coordinate system. Key features of the registered data include process parameters, laser power, in-situ melt pool characteristics, in-situ layerwise optical intensity, and ex-situ XCT voxel values. Additionally, this document provides uncertainty analysis for each feature to help users better select data for their applications and evaluate their results. It can also serve as a framework for processing similar datasets collected on the same testbed in future research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the raw numerical values used to generate Extended Data Figure 2, which examines anxiety-like behavior in the open-field test (OFT) following repeated ketamine administration.
Included data:
- Quantification of time spent in the outer zone and center zone of the open field (Panel 2b)
- Number of entries into the center zone for each mouse (Panel 2b)
Structure:
- The worksheet includes per-mouse measurements for outer-zone time, center-zone time, and center-zone entries.
- All values represent unprocessed raw data directly used to generate the plots in Extended Data Figure 2.
This dataset provides the numerical values underlying the analysis shown in Extended Data Figure 2.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset and scripts used for manuscript: High consistency and repeatability in the breeding migrations of a benthic shark.
Project title: High consistency and repeatability in the breeding migrations of a benthic shark
Date: 23/04/2024
Folders:
- 1_Raw_data: Perpendicular_Point_068151, Sanctuary_Point_068088, SST raw data, sst_nc_files, IMOS_animal_measurements, IMOS_detections, PS&Syd&JB tags, rainfall_raw, sample_size, Point_Perpendicular_2013_2019, Sanctuary_Point_2013_2019, EAC_transport
- 2_Processed_data: SST (anomaly, historic_sst, mean_sst_31_years, week_1992_sst:week_2022_sst including week_2019_complete_sst); Rain (weekly_rain, weekly_rainfall_completed); Clean (clean, cleaned_data, cleaned_gam, cleaned_pj_data)
- 3_Script_processing_data: Plots (dual_axis_plot (Fig. 1 & Fig. 4).R, period_plot (Fig. 2).R, sd_plot (Fig. 5).R, sex_plot (Fig. 3).R); cleaned_data.R, cleaned_data_gam.R, weekly_rainfall_completed.R, descriptive_stats.R, sst.R, sst_2019b.R, sst_anomaly.R
- 4_Script_analyses: gam.R, gam_eac.R, glm.R, lme.R, Repeatability.R
- 5_Output_doc: Plots (arrival_dual_plot_with_anomaly (Fig. 1).png, period_plot (Fig. 2).png, sex_arrival_departure (Fig. 3).png, departure_dual_plot_with_anomaly (Fig. 4).png, standard deviation plot (Fig. 5).png); Tables (gam_arrival_eac_selection_table.csv (Table S2), gam_departure_eac_selection_table (Table S5), gam_arrival_selection_table (Table S3), gam_departure_selection_table (Table S6), glm_arrival_selection_table, glm_departure_selection_table, lme_arrival_anova_table, lme_arrival_selection_table (Table S4), lme_departure_anova_table, lme_departure_selection_table (Table S8))
Descriptions of scripts and files used:
- cleaned_data.R: script to extract detections of sharks at Jervis Bay, calculate arrival and departure dates over the seven breeding seasons, add sex and length for each individual, and extract moon phase (numerical value) and period of the day from arrival and departure times.
- IMOS_detections.csv: raw data file with detections of Port Jackson sharks over different sites in Australia.
- IMOS_animal_measurements.csv: raw data file with morphological data of Port Jackson sharks.
- PS&Syd&JB tags: file with measurements and sex identification of sharks (different from IMOS; used to complete missing sex and length).
- cleaned_data.csv: file with arrival and departure dates of the final sample size of sharks (N=49), with missing sex and length for some individuals.
- clean.csv: completed file using PS&Syd&JB tags. Note: tag ID 117393679 was wrongly identified as a male in IMOS and correctly identified as a female in the PS&Syd&JB tags file, as indicated by its large size.
- cleaned_pj_data: final data file with arrival and departure dates, sex, length, moon phase (numerical) and period of the day.
weekly_rainfall_completed.R: script to calculate average weekly rainfall and correlation between the two weather stations used (Point perpendicular and Sanctuary point). - weekly_rain.csv: file with the corresponding week number (1-28) for each date (01-06-2013 to 13-12-2019) - weekly_rainfall_completed.csv: file with week number (1-28), year (2013-2019) and weekly rainfall average completed with Sanctuary Point for week 2 of 2017 - Point_Perpendicular_2013_2019: Rainfall (mm) from 01-01-2013 to 31-12-2020 at the Point Perpendicular weather station - Sanctuary_Point_2013_2019: Rainfall (mm) from 01-01-2013 to 31-12-2020 at the Sanctuary Point weather station - IDCJAC0009_068088_2017_Data.csv: Rainfall (mm) from 01-01-2017 to 31-12-2017 at the Sanctuary Point weather station (to fill in missing value for average rainfall of week 2 of 2017)
cleaned_data_gam.R: script to calculate weekly counts of sharks to run gam models and add weekly averages of rainfall and sst anomaly - cleaned_pj_data.csv - anomaly.csv: weekly (1-28) average sst anomalies for Jervis Bay (2013-2019) - weekly_rainfall_completed.csv: weekly (1-28) average rainfall for Jervis Bay (2013-2019) - sample_size.csv: file with the number of sharks tagged (13-49) for each year (2013-2019)
sst.R: script to extract daily and weekly sst from IMOS nc files from 01-05 until 31-12 for the following years: 1992:2022 for Jervis Bay - sst_raw_data: folder with all the raw weekly (1:28) csv files for each year (1992:2022) to fill in with sst data using the sst script - sst_nc_files: folder with all the nc files downloaded from IMOS from the last 31 years (1992-2022) at the sensor (IMOS - SRS - SST - L3S-Single Sensor - 1 day - night time – Australia). - SST: folder with the average weekly (1-28) sst data extracted from the nc files using the sst script for each of the 31 years (to calculate temperature anomaly).
sst_2019b.R: script to extract daily and weekly sst from IMOS nc file for 2019 (missing value for week 19) for Jervis Bay - week_2019_sst: weekly average sst 2019 with a missing value for week 19 - week_2019b_sst: sst data from 2019 with another sensor (IMOS – SRS – MODIS - 01 day - Ocean Colour-SST) to fill in the gap of week 19 - week_2019_complete_sst: completed average weekly sst data from the year 2019 for weeks 1-28.
sst_anomaly.R: script to calculate mean weekly sst anomaly for the study period (2013-2019) using mean historic weekly sst (1992-2022) - historic_sst.csv: mean weekly (1-28) and yearly (1992-2022) sst for Jervis Bay - mean_sst_31_years.csv: mean weekly (1-28) sst across all years (1992-2022) for Jervis Bay - anomaly.csv: mean weekly and yearly sst anomalies for the study period (2013-2019)
Descriptive_stats.R: script to calculate minimum and maximum length of sharks, mean Julian arrival and departure dates per individual per year, mean Julian arrival and departure dates per year for all sharks (Table. S10), summary of standard deviation of julian arrival dates (Table. S9) - cleaned_pj_data.csv
gam.R: script used to run the Generalized additive model for rainfall and sea surface temperature - cleaned_gam.csv
glm.R: script used to run the Generalized linear mixed models for the period of the day and moon phase - cleaned_pj_data.csv - sample_size.csv
lme.R: script used to run the Linear mixed model for sex and size - cleaned_pj_data.csv
Repeatability.R: script used to run the Repeatability for Julian arrival and Julian departure dates - cleaned_pj_data.csv
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic School age population, secondary education, male (number) for the country Haiti. Indicator definition: male population of the age group theoretically corresponding to secondary education, as indicated by theoretical entrance age and duration. The indicator stands at 821.74 thousand as of 12/31/2020, the highest value at least since 12/31/1971, the period currently displayed. The current value constitutes a one-year increase of 0.6006 percent compared to the value the year prior. The 1-year change is 0.6006 percent, the 3-year change 1.60 percent, the 5-year change 2.85 percent, and the 10-year change 6.82 percent. The series' long-term average value is 572.19 thousand; the latest available value, on 12/31/2020, is 43.62 percent higher than that average. The change from the series' minimum value, on 12/31/1970, to its latest available value, on 12/31/2020, is +145.21 percent; the change from its maximum value, on 12/31/2020, to its latest value is 0.0 percent.
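For reference, a short sketch of how the reported year-over-year percentage change relates the latest value to the implied prior-year value; the 821.74 thousand figure and the 0.6006 percent change are taken from the text, and the prior-year value is back-derived.

```python
# Back out the implied 12/31/2019 value from the latest value and the 1-year change.
latest = 821.74                  # thousand, as of 12/31/2020 (from the text)
one_year_change_pct = 0.6006     # reported 1-year change, in percent
prior = latest / (1 + one_year_change_pct / 100)
print(round(prior, 2))           # ~816.83 thousand implied for 12/31/2019
```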
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper explores a unique dataset of all the SET ratings provided by students of one university in Poland at the end of the winter semester of the 2020/2021 academic year. The SET questionnaire used by this university is presented in Appendix 1. The dataset is unique for several reasons. It covers all SET surveys filled in by students in all fields and levels of study offered by the university. In the period analysed, the university operated entirely online amid the Covid-19 pandemic. While the expected learning outcomes formally were not changed, the online mode of study could have affected the grading policy and could have implications for some of the studied SET biases. This Covid-19 effect is captured by econometric models and discussed in the paper. The average SET scores were matched with the characteristics of the teacher (degree, seniority, gender, and SET scores in the past six semesters); the course characteristics (time of day, day of the week, course type, course breadth, class duration, and class size); the attributes of the SET survey responses (the percentage of students providing SET feedback); and the grades of the course (mean, standard deviation, and percentage failed). Data on course grades are also available for the previous six semesters. This rich dataset allows many of the biases reported in the literature to be tested for and new hypotheses to be formulated, as presented in the introduction section.
The unit of observation, or a single row in the data set, is identified by three parameters: teacher unique id (j), course unique id (k) and the question number in the SET questionnaire (n ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}). This means that for each pair (j, k) we have nine rows, one for each SET survey question, or sometimes fewer when students did not answer one of the SET questions at all. For example, the dependent variable SET_score_avg(j,k,n) for the triplet (j = John Smith, k = Calculus, n = 2) is calculated as the average of all Likert-scale answers to question no. 2 in the SET survey distributed to all students who took the Calculus course taught by John Smith. The data set has 8,015 such observations or rows. The full list of variables or columns in the data set included in the analysis is presented in the attached file section. Their description refers to the triplet (teacher id = j, course id = k, question number = n). When the last value of the triplet (n) is dropped, the variable takes the same values for all n ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}.
Two attachments:
- Word file with variable descriptions
- Rdata file with the data set (for the R language)
Appendix 1. The SET questionnaire used for this paper.
Evaluation survey of the teaching staff of [university name]. Please complete the following evaluation form, which aims to assess the lecturer's performance. Only one answer should be indicated for each question. The answers are coded in the following way: 5 - I strongly agree; 4 - I agree; 3 - Neutral; 2 - I don't agree; 1 - I strongly don't agree.
Questions (each rated on the 1-5 scale above):
1. I learnt a lot during the course.
2. I think that the knowledge acquired during the course is very useful.
3. The professor used activities to make the class more engaging.
4. If it was possible, I would enroll for the course conducted by this lecturer again.
5. The classes started on time.
6. The lecturer always used time efficiently.
7. The lecturer delivered the class content in an understandable and efficient way.
8. The lecturer was available when we had doubts.
9. The lecturer treated all students equally regardless of their race, background and ethnicity.
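A short sketch, under stated assumptions, of how SET_score_avg(j,k,n) is formed from answer-level data. The published file already contains these averages; the answer-level layout shown here (columns teacher_id, course_id, question_no, answer) is purely illustrative.

```python
import pandas as pd

# Toy answer-level data: individual Likert-scale answers per teacher, course, question.
answers = pd.DataFrame({
    "teacher_id":  ["T1", "T1", "T1", "T2"],
    "course_id":   ["C1", "C1", "C1", "C9"],
    "question_no": [2, 2, 2, 2],
    "answer":      [5, 4, 3, 5],
})

# SET_score_avg(j, k, n): mean of all answers to question n for teacher j's course k.
set_score_avg = (answers
                 .groupby(["teacher_id", "course_id", "question_no"])["answer"]
                 .mean()
                 .rename("SET_score_avg"))
print(set_score_avg)
```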
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic School age population, primary education, female (number) for the country Marshall Islands. Indicator definition: female population of the age group theoretically corresponding to primary education, as indicated by theoretical entrance age and duration. The indicator stands at 4.40 thousand as of 12/31/2020, the lowest value since 12/31/2011. The current value constitutes a one-year decrease of 1.19 percent compared to the value the year prior. The 1-year change is -1.19 percent, the 3-year change -3.13 percent, the 5-year change -4.64 percent, and the 10-year change 2.45 percent. The series' long-term average value is 3.65 thousand; the latest available value, on 12/31/2020, is 20.40 percent higher than that average. The change from the series' minimum value, on 12/31/1970, to its latest available value, on 12/31/2020, is +141.41 percent; the change from its maximum value, on 12/31/1995, to its latest value is -9.86 percent.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic School age population, lower secondary education, male (number) for the country Montenegro. Indicator definition: male population of the age group theoretically corresponding to lower secondary education, as indicated by theoretical entrance age and duration. The indicator stands at 16.58 thousand as of 12/31/2020, the highest value since 12/31/2016. The current value constitutes a one-year increase of 0.503 percent compared to the value the year prior. The 1-year change is 0.503 percent, the 3-year change 0.9988 percent, the 5-year change -0.2886 percent, and the 10-year change -5.09 percent. The series' long-term average value is 20.43 thousand; the latest available value, on 12/31/2020, is 18.82 percent lower than that average. The change from the series' minimum value, on 12/31/2017, to its latest available value, on 12/31/2020, is +0.999 percent; the change from its maximum value, on 12/31/1976, to its latest value is -33.04 percent.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic School age population, upper secondary education, both sexes (number) for the country Bermuda. Indicator definition: population of the age group theoretically corresponding to upper secondary education, as indicated by theoretical entrance age and duration. The indicator stands at 2.82 thousand as of 12/31/2020, the lowest value since 12/31/1998. The current value constitutes a one-year decrease of 1.40 percent compared to the value the year prior. The 1-year change is -1.40 percent, the 3-year change -4.57 percent, the 5-year change -7.54 percent, and the 10-year change -14.72 percent. The series' long-term average value is 2.53 thousand; the latest available value, on 12/31/2020, is 11.39 percent higher than that average. The change from the series' minimum value, on 12/31/1995, to its latest available value, on 12/31/2020, is +81.95 percent; the change from its maximum value, on 12/31/2005, to its latest value is -18.74 percent.
This dataset (located by latitude and longitude) is a subset of the geochemical dataset found in Chap. F, Appendix 7, Disc 1, and used in this study of the Fire Clay coal zone. That dataset is a compilation of data from the U.S. Geological Survey's (USGS) National Coal Resources Data System (NCRDS) USCHEM (U.S. geoCHEMical), and the Kentucky Geological Survey (KGS) Kentucky Coal Resources Information System (KCRIS) databases. The metadata file for the complete dataset is found in Chap. F, Appendix 8, Disc 1 (please see it for more detailed information on this geochemical dataset). This subset of the geochemical data for the Fire Clay coal zone includes ash yield, sulfur content, SO2 value, gross calorific value, arsenic content and mercury content for these records, as well as the ranking of these values, which is described later under the attributes in this metadata file. Analytical techniques are described in the references in Chap. F, Appendix 9, Disc 1. The analytical data are stored as text fields because many of the parameters contain letter qualifiers appearing after the numerical data values. The following is a list of the possible qualifier values: L - less than, G - greater than, N - not detected, or H - interference that cannot be easily resolved. Not all of these codes may be in this database.
This dataset (located by latitude and longitude) is a subset of the geochemical dataset found in Chap. D, Appendix 8, Disc 1, and used in this study of the Upper Freeport coal bed. That dataset is a compilation of data from the U.S. Geological Survey's (USGS) National Coal Resources Data System (NCRDS) USCHEM (U.S. geoCHEMical), The Pennsylvania State University (PSU), the West Virginia Economic and Geological Survey (WVGES), and the Ohio Division of Geological Survey (OHGS) coal quality databases as well as published U.S. Bureau of Mines (USBM) data. The metadata file for the complete dataset is found in Chap. D, Appendix 9, Disc 1 (please see it for more detailed information on this geochemical dataset). This subset of the geochemical data for the Upper Freeport coal bed includes ash yield, sulfur content, SO2 value, gross calorific value, arsenic content and mercury content for these records, as well as the ranking of these values, which is described later under the attributes in this metadata file. Analytical techniques are described in the references in Chap. D, Appendix 10, Disc 1. The analytical data are stored as text fields because many of the parameters contain letter qualifiers appearing after the numerical data values. The following is a list of the possible qualifier values: L - less than, G - greater than, N - not detected, or H - interference that cannot be easily resolved. Not all of these codes may be in this database.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic School age population, secondary education, male (number) for the country Syrian Arab Republic. Indicator definition: male population of the age group theoretically corresponding to secondary education, as indicated by theoretical entrance age and duration. The indicator stands at 0.9958 million as of 12/31/2020, the lowest value since 12/31/1992. The current value constitutes a one-year decrease of 0.018 percent compared to the value the year prior. The 1-year change is -0.018 percent, the 3-year change -6.17 percent, the 5-year change -17.02 percent, and the 10-year change -48.79 percent. The series' long-term average value is 1.08 million; the latest available value, on 12/31/2020, is 8.12 percent lower than that average. The change from the series' minimum value, on 12/31/1970, to its latest available value, on 12/31/2020, is +118.03 percent; the change from its maximum value, on 12/31/2010, to its latest value is -48.79 percent.