According to a survey conducted by KuRunData among 1,043 Chinese consumers, around 61.1 percent of respondents gathered information about beverages from online shopping platforms. Approximately 23 percent of respondents learned about beverages on WeChat.
This dataset contains the metadata of the datasets published in 77 Dataverse installations, information about each installation's metadata blocks, and the list of standard licenses that dataset depositors can apply to the datasets they publish in the 36 installations running more recent versions of the Dataverse software. The data is useful for reporting on the quality of dataset- and file-level metadata within and across Dataverse installations. Curators and other researchers can use this dataset to explore how well the Dataverse software and the repositories using it help depositors describe data.

How the metadata was downloaded

The dataset metadata and metadata block JSON files were downloaded from each installation on October 2 and October 3, 2022 using a Python script kept in a GitHub repo at https://github.com/jggautier/dataverse-scripts/blob/main/other_scripts/get_dataset_metadata_of_all_installations.py. To get the metadata from installations that require an installation account API token to use certain Dataverse software APIs, I created a CSV file with two columns: one named "hostname" listing each installation URL in which I was able to create an account, and another named "apikey" listing my accounts' API tokens. The Python script expects and uses the API tokens in this CSV file to get metadata and other information from installations that require API tokens.

How the files are organized

├── csv_files_with_metadata_from_most_known_dataverse_installations
│   ├── author(citation).csv
│   ├── basic.csv
│   ├── contributor(citation).csv
│   ├── ...
│   └── topic_classification(citation).csv
├── dataverse_json_metadata_from_each_known_dataverse_installation
│   ├── Abacus_2022.10.02_17.11.19.zip
│   │   ├── dataset_pids_Abacus_2022.10.02_17.11.19.csv
│   │   ├── Dataverse_JSON_metadata_2022.10.02_17.11.19
│   │   │   ├── hdl_11272.1_AB2_0AQZNT_v1.0.json
│   │   │   └── ...
│   │   └── metadatablocks_v5.6
│   │       ├── astrophysics_v5.6.json
│   │       ├── biomedical_v5.6.json
│   │       ├── citation_v5.6.json
│   │       ├── ...
│   │       └── socialscience_v5.6.json
│   ├── ACSS_Dataverse_2022.10.02_17.26.19.zip
│   ├── ADA_Dataverse_2022.10.02_17.26.57.zip
│   ├── Arca_Dados_2022.10.02_17.44.35.zip
│   ├── ...
│   └── World_Agroforestry_-_Research_Data_Repository_2022.10.02_22.59.36.zip
├── dataset_pids_from_most_known_dataverse_installations.csv
├── licenses_used_by_dataverse_installations.csv
└── metadatablocks_from_most_known_dataverse_installations.csv

This dataset contains two directories and three CSV files not in a directory. One directory, "csv_files_with_metadata_from_most_known_dataverse_installations", contains 18 CSV files that contain the values from common metadata fields of all 77 Dataverse installations. For example, author(citation)_2022.10.02-2022.10.03.csv contains the "Author" metadata for all published, non-deaccessioned versions of all datasets in the 77 installations, with a row for each author name, affiliation, identifier type, and identifier. The other directory, "dataverse_json_metadata_from_each_known_dataverse_installation", contains 77 zipped files, one for each of the 77 Dataverse installations whose dataset metadata I was able to download using Dataverse APIs. Each zip file contains a CSV file and two sub-directories. The CSV file contains the persistent IDs and URLs of each published dataset in the Dataverse installation, as well as a column indicating whether or not the Python script was able to download the Dataverse JSON metadata for each dataset. For Dataverse installations using Dataverse software versions whose Search APIs include each dataset's owning Dataverse collection name and alias, the CSV files also include which Dataverse collection (within the installation) each dataset was published in. One sub-directory contains a JSON file for each of the installation's published, non-deaccessioned dataset versions.
The JSON files contain the metadata in the "Dataverse JSON" metadata schema. The other sub-directory contains information about the metadata models (the "metadata blocks" in JSON files) that the installation was using when the dataset metadata was downloaded. I saved them so that they can be used when extracting metadata from the Dataverse JSON files. The dataset_pids_from_most_known_dataverse_installations.csv file contains the dataset PIDs of all published datasets in the 77 Dataverse installations, with a column to indicate if the Python script was able to download the dataset's metadata. It's a union of all of the "dataset_pids_..." files in each of the 77 zip files. The licenses_used_by_dataverse_installations.csv file contains information about the licenses that a number of the installations let depositors choose when creating datasets. When I collected ... Visit https://dataone.org/datasets/sha256%3Ad27d528dae8cf01e3ea915f450426c38fd6320e8c11d3e901c43580f997a3146 for complete metadata about this dataset.
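To make the credentials workflow described above concrete, here is a minimal sketch (not the author's actual script) of reading the two-column hostname/apikey CSV and preparing an authenticated request to a Dataverse installation's Search API; Dataverse APIs accept the token in the X-Dataverse-key request header. The hostname and token values shown are placeholders.

```python
import csv
import io

# Placeholder contents mimicking the two-column credentials CSV described above.
csv_text = """hostname,apikey
https://demo.dataverse.org,0000-1111-2222-3333
"""

def load_api_tokens(fileobj):
    """Return a {hostname: apikey} mapping from the credentials CSV."""
    return {row["hostname"]: row["apikey"] for row in csv.DictReader(fileobj)}

tokens = load_api_tokens(io.StringIO(csv_text))

# Dataverse software APIs accept the account API token in the
# X-Dataverse-key header, e.g. when calling an installation's Search API.
hostname = "https://demo.dataverse.org"
headers = {"X-Dataverse-key": tokens[hostname]}
search_url = f"{hostname}/api/search?q=*&type=dataset"
```

A script iterating over every row of the CSV in this way can fall back to unauthenticated requests for installations with no token listed.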
The displayed data on the usage of apps to get information about football shows results of the Statista European Football Benchmark conducted in England in 2018. Some 26 percent of respondents stated that they use apps from football clubs to get information about football.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset contains raw response data to a nano-survey that was conducted in Indonesia and Kenya on the demand for open financial data. You can read more about the project here: (http://bit.ly/OpenDemand). A nano-survey is an innovative technology that extends a brief survey to a random sampling of internet users. Note: "NA" indicates "No Answer."
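When loading the raw responses, the literal string "NA" should be treated as "No Answer" rather than left to a CSV reader's defaults. A minimal pandas sketch (the column names here are hypothetical, not taken from the dataset):

```python
import io
import pandas as pd

# Hypothetical excerpt mimicking the raw response file; the real
# column names and values in the dataset will differ.
raw = """respondent_id,country,uses_open_data
1,Indonesia,Yes
2,Kenya,NA
3,Indonesia,No
"""

# Disable pandas' default missing-value guesses and map only the
# literal "NA" ("No Answer") to missing.
df = pd.read_csv(io.StringIO(raw), na_values=["NA"], keep_default_na=False)
n_no_answer = df["uses_open_data"].isna().sum()
```

This keeps other strings that pandas would normally treat as missing (e.g. empty-looking codes) intact while still flagging non-responses.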
This survey shows the social networks used by French social media followers in 2019 to find out about a brand, a company, a product, a service, or a company director. The graph shows that nearly 75 percent of respondents relied on Facebook for this type of information, followed by YouTube at 70 percent. About 28 percent of social network users favored Twitter.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
A parenting plan outlines how parents will raise their children after separation or divorce. It describes how parents not living together will care for and make important decisions about their children in both homes. You can agree to any type of parenting arrangement, but you should focus on what is in the best interests of your children. This checklist identifies important issues to consider when creating your parenting plan. It will help you identify questions to discuss with the other parent, including how you will make decisions about your children (for example, together or individually), when each of you will spend time with your children, and how you will share information and communicate with the other parent.
RISE is a Reclamation open data system for viewing, accessing, and downloading Reclamation's water and water-related data. With RISE you can:
Find Reclamation data by searching the catalog or browsing the map.
Query time series data for specific dates, parameters, and locations, then plot or download the data.
Obtain machine-readable data through an Application Programming Interface (API) for integration into tools and analyses.
View geospatial data on a map, download it for offline analysis, or get a web service connection to add to your own map.
RISE helps fulfill Reclamation’s responsibilities under the OPEN Government Data Act to make data assets available in open and machine-readable formats. RISE is the replacement for the Reclamation Water Information System (RWIS).
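As a rough illustration of programmatic access, the sketch below composes a machine-readable query URL against the RISE API. The endpoint path and parameter names here are assumptions for demonstration only; consult RISE's own API documentation for the real ones.

```python
from urllib.parse import urlencode

# Illustrative only: the endpoint and parameter names below are
# assumptions, not confirmed RISE API details.
BASE = "https://data.usbr.gov/rise/api"

def build_query_url(endpoint, **params):
    """Compose a RISE-style query URL with sorted query parameters."""
    return f"{BASE}/{endpoint}?{urlencode(sorted(params.items()))}"

url = build_query_url(
    "result",
    itemId=1234,                  # hypothetical catalog item ID
    dateTime_after="2023-01-01",  # hypothetical date filter
)
```

A client would then fetch this URL (e.g. with `urllib.request` or `requests`) and parse the returned JSON into its own tools and analyses.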
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Model Description: GPT-2 Large is the 774M-parameter version of GPT-2, a transformer-based language model created and released by OpenAI. It is a model pretrained on English text using a causal language modeling (CLM) objective.
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
 {'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
 {'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
 {'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
 {'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import GPT2Tokenizer, GPT2Model

# Load the tokenizer and model weights for gpt2-large
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)  # output.last_hidden_state holds the features
and in TensorFlow:
from transformers import GPT2Tokenizer, TFGPT2Model

# Load the tokenizer and TensorFlow model weights for gpt2-large
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')  # TensorFlow tensors
output = model(encoded_input)  # output.last_hidden_state holds the features
In their model card about GPT-2, OpenAI wrote:
The primary intended users of these models are AI researchers and practitioners.
We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
In their model card about GPT-2, OpenAI wrote:
Here are some secondary use cases we believe are likely:
- Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
- Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
- Entertainment: Creation of games, chat bots, and amusing generations.
In their model card about GPT-2, OpenAI wrote:
Because large-scale language models like GPT-2 ...
911 Public Safety Answering Point (PSAP) service area boundaries in New Mexico. According to the National Emergency Number Association (NENA), a Public Safety Answering Point (PSAP) is a facility equipped and staffed to receive 9-1-1 calls. The service area is the geographic area within which a 911 call placed using a landline is answered at the associated PSAP. This dataset only includes primary PSAPs. Secondary PSAPs, backup PSAPs, and wireless PSAPs have been excluded from this dataset. Primary PSAPs receive calls directly, whereas secondary PSAPs receive calls that have been transferred by a primary PSAP. Backup PSAPs provide service in cases where another PSAP is inoperable. Most military bases have their own emergency telephone systems. To connect to such a system from within a military base, it may be necessary to dial a number other than 9-1-1. Due to the sensitive nature of military installations, TGS did not actively research these systems. If civilian authorities in surrounding areas volunteered information about these systems, or if adding a military PSAP was necessary to fill a hole in civilian-provided data, TGS included it in this dataset. Otherwise, military installations are depicted as being covered by one or more adjoining civilian emergency telephone systems. In some cases areas are covered by more than one PSAP boundary. In these cases, any of the applicable PSAPs may take a 911 call. Where a specific call is routed may depend on how busy the applicable PSAPs are (i.e., load balancing), operational status (i.e., redundancy), or time of day / day of week. If an area does not have 911 service, TGS included that area in the dataset along with the address and phone number of its dispatch center. These are areas where someone must dial a 7- or 10-digit number to get emergency services. These records can be identified by a "Y" in the [NON911EMNO] field. This indicates that dialing 911 inside one of these areas does not connect one with emergency services.
This dataset was constructed by gathering information about PSAPs from state-level officials. In some cases this was geospatial information; in others it was tabular. This information was supplemented with a list of PSAPs from the Federal Communications Commission (FCC). Each PSAP was researched to verify its tabular information. In cases where the source data was not geospatial, each PSAP was researched to determine its service area in terms of existing boundaries (e.g., city and county boundaries). In some cases existing boundaries had to be modified to reflect coverage areas (e.g., "entire county north of Country Road 30"). However, there may be cases where minor deviations from existing boundaries are not reflected in this dataset, such as where a particular PSAP's coverage area includes an entire county plus the homes and businesses along a road that is partly in another county. Text fields in this dataset have been set to all upper case to facilitate consistent database engine search results. All diacritics (e.g., the German umlaut or the Spanish tilde) have been replaced with their closest equivalent English characters to facilitate use with database systems that may not support diacritics.
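The upper-casing and diacritic-folding described above can be sketched with a standard Unicode normalization trick (a generic illustration of the technique, not the data producer's actual code):

```python
import unicodedata

def normalize_field(text):
    """Upper-case a text field and replace diacritics with their closest
    ASCII equivalents, as described for this dataset's text fields."""
    # NFKD decomposition splits accented characters into a base letter
    # plus combining marks; encoding to ASCII with errors="ignore" drops
    # the combining marks, leaving the closest English character.
    folded = unicodedata.normalize("NFKD", text)
    ascii_only = folded.encode("ascii", "ignore").decode("ascii")
    return ascii_only.upper()
```

For example, `normalize_field("Cañon City")` yields "CANON CITY", and "Müller" becomes "MULLER". Note that characters with no decomposed ASCII base (e.g. "ß") are simply dropped by this approach, so real pipelines often add explicit substitution tables.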
We provide instructions, code, and datasets for replicating the article by Kim, Lee and McCulloch (2024), "A Topic-based Segmentation Model for Identifying Segment-Level Drivers of Star Ratings from Unstructured Text Reviews." This repository provides a user-friendly R package for researchers or practitioners to apply the topic-based segmentation model with unstructured texts (latent class regression with group variable selection) to their own datasets. First, we provide R code to replicate the illustrative simulation study: see file 1. Second, we provide the user-friendly R package with a very simple example code to help apply the model to real-world datasets: see file 2, Package_MixtureRegression_GroupVariableSelection.R and Dendrogram.R. Third, we provide a set of codes and instructions to replicate the empirical studies of customer-level segmentation and restaurant-level segmentation with Yelp reviews data: see files 3-a, 3-b, 4-a, 4-b. Note: due to the dataset terms of use by Yelp and the restriction on data size, we instead provide the link to download the same Yelp datasets (https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset/versions/6). Fourth, we provide a set of codes and datasets to replicate the empirical study with professor ratings reviews data: see file 5. Please see more details in the description text and comments of each file.

[A guide on how to use the code to reproduce each study in the paper]

1. Full codes for replicating Illustrative simulation study.txt -- [see Table 2 and Figure 2 in main text]: R source code to replicate the illustrative simulation study. Please run it from beginning to end in R. In addition to estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships, you will get the dendrograms of selected groups of variables in Figure 2. Computing time is approximately 20 to 30 minutes.

3-a. Preprocessing raw Yelp Reviews for Customer-level Segmentation.txt: Code for preprocessing the downloaded unstructured Yelp review data and preparing the DV and IV matrices for the customer-level segmentation study.

3-b. Instruction for replicating Customer-level Segmentation analysis.txt -- [see Table 10 in main text; Tables F-1, F-2, and F-3 and Figure F-1 in Web Appendix]: Code for replicating the customer-level segmentation study with Yelp data. You will get estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships. Computing time is approximately 3 to 4 hours.

4-a. Preprocessing raw Yelp reviews_Restaruant Segmentation (1).txt: R code for preprocessing the downloaded unstructured Yelp data and preparing the DV and IV matrices for the restaurant-level segmentation study.

4-b. Instructions for replicating restaurant-level segmentation analysis.txt -- [see Tables 5, 6 and 7 in main text; Tables E-4 and E-5 and Figure H-1 in Web Appendix]: Code for replicating the restaurant-level segmentation study with Yelp data. You will get estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships. Computing time is approximately 10 to 12 hours.

[Guidelines for running benchmark models in Table 6]

Unsupervised topic model: 'topicmodels' package in R -- after determining the number of topics (e.g., with the 'ldatuning' R package), run the 'LDA' function in the 'topicmodels' package. Then compute topic probabilities per restaurant (with the 'posterior' function in the package), which can be used as predictors. Then conduct prediction with regression.

Hierarchical topic model (HDP): 'gensimr' R package -- 'model_hdp' function for identifying topics (see https://radimrehurek.com/gensim/models/hdpmodel.html or https://gensimr.news-r.org/).

Supervised topic model: 'lda' R package -- 'slda.em' function for training and 'slda.predict' for prediction.
Aggregate regression: 'lm' default function in R.

Latent class regression without variable selection: 'flexmix' function in the 'flexmix' R package. Run flexmix with a certain number of segments (e.g., 3 segments in this study). Then, with the estimated coefficients and memberships, conduct prediction of the dependent variable for each segment.

Latent class regression with variable selection: 'Unconstraind_Bayes_Mixture' function in Kim, Fong and DeSarbo (2012)'s package. Run the Kim et al. (2012) model with a certain number of segments (e.g., 3 segments in this study). Then, with the estimated coefficients and memberships, conduct prediction of the dependent variable for each segment. The same R package ('KimFongDeSarbo2012.zip') can be downloaded at: https://sites.google.com/scarletmail.rutgers.edu/r-code-packages/home

5. Instructions for replicating Professor ratings review study.txt -- [see Tables G-1, G-2, G-4 and G-5, and Figures G-1 and H-2 in Web Appendix]: Code to replicate the professor ratings reviews study. Computing time is approximately 10 hours.

[A list of the versions of R, packages, and computer...
Elevate your marketing and sales strategies with our Global Email Address Data, providing unmatched access to a vast collection of email addresses, phone numbers, and comprehensive B2B and B2C contact information. Our data solutions empower businesses to enrich their outreach efforts, enabling effective online marketing and competitive intelligence.
Designed to enhance your data-driven strategies, our offerings include critical insights such as email address data, phone number data, B2B contact data, and B2C contact data. With our extensive resources, you can build strong connections and effectively engage your target audiences.
Key Features:
Targeted Email Address Data: Access a diverse range of email information essential for executing tailored online marketing campaigns and connecting with key business stakeholders.
Comprehensive Phone Number Data: Utilize our extensive phone number database to enhance telemarketing efforts, improve customer interactions, and facilitate direct outreach.
Dynamic B2B and B2C Contact Data: Our detailed contact data helps refine your messaging strategy, ensuring it reaches the right audience—from C-suite executives to critical consumer segments.
Exclusive CEO Contact Information: Gain direct access to verified CEO contact data, ideal for high-level networking and forging strategic partnerships.
Strategic Use Cases Supported by Our Data:
Online Marketing: Leverage our email and phone data to drive precise online marketing initiatives, enhancing customer engagement and lead generation efforts.
Data Enrichment: Improve database accuracy with our comprehensive data enrichment services, providing a solid foundation for well-informed business decisions.
B2B Data Enrichment: Tailor your B2B databases effectively, enhancing the quality of business contact data to boost outreach initiatives and operational workflows.
Sales Data Enrichment: Amplify your sales strategies with enriched contact data that drives higher conversion rates and overall sales success.
Competitive Intelligence: Gain insights into market trends, competitor activities, and industry shifts using our detailed contact data, giving you an edge in your field.
Why Choose Success.ai?
Unmatched Data Precision: Our commitment to delivering a 99% accuracy rate ensures that you receive reliable data to support your strategic objectives.
Global Reach with Tailored Solutions: Our database encompasses global markets while being finely tuned to cater to local business needs, providing pertinent information relevant to your operations.
Affordable Pricing with Best Value: We guarantee the most cost-effective data solutions available, ensuring maximum value without compromising quality.
Ethical Data Practices: Commitment to compliance with international data privacy standards ensures responsible and legally sound utilization of our data.
Get Started with Success.ai Today: Partner with Success.ai to harness the full potential of high-quality contact data. Whether your goal is to enhance online marketing efforts, enrich sales databases, or gain strategic competitive insights, our comprehensive data solutions can propel your business forward.
Contact us today to discover how we can customize our offerings to meet your specific business needs!
We'll beat any price on the market!
The Find Health Center tool is a locator tool designed to make data and information concerning Federally-Funded Health Centers more readily available to our users. It is intended to help people in greatest need of health care locate where they could obtain care in their particular location. The user is able to search for health centers nearest to a specific complete address, city and state, state and county, or ZIP code. The search results (health centers) are returned in groups of ten (numbered from one to ten) and are sorted by increasing distance from the center of the search area (address or county). For each health center entry in the list, the user is provided the health center name, address, approximate distance from the center point of the search, telephone number, website address (where available), and a link for driving directions. The user has the option of viewing the search results either on a map or as text (default), and both views provide links to more detailed information for each returned health center.
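The distance-ranked, paginated results described above follow a standard pattern: compute the great-circle distance from the search point to each center, sort ascending, and slice into groups of ten. A minimal sketch with hypothetical data (not the tool's actual data source or field names):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical health centers; the real tool's records also carry
# address, phone number, and website fields.
centers = [
    {"name": "Center A", "lat": 38.90, "lon": -77.04},
    {"name": "Center B", "lat": 38.80, "lon": -77.10},
    {"name": "Center C", "lat": 39.00, "lon": -77.00},
]
search_point = (38.89, -77.03)  # geocoded from the user's address

# Sort by increasing distance from the search point, then paginate
# into groups of ten, as the tool's result list does.
ranked = sorted(
    centers,
    key=lambda c: haversine_miles(search_point[0], search_point[1], c["lat"], c["lon"]),
)
pages = [ranked[i:i + 10] for i in range(0, len(ranked), 10)]
```

With real data the geocoding step (address to lat/lon) would come from a separate service before this ranking step.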
Young people who were in Year 11 in the 2020-2021 academic year were drawn as a clustered and stratified random sample from the National Pupil Database held by the DfE, as well as from a separate sample of independent schools from DfE's Get Information about Schools database. The parents/guardians of the sampled young people were also invited to take part in COSMO. Data from parents/guardians complement the data collected from young people.
Further information about the study may be found on the COVID Social Mobility and Opportunities Study (COSMO) webpage.
COSMO Wave 1, 2021-2022
Data collection in Wave 1 was carried out between September 2021 and April 2022. Young people and parents/guardians were first invited to a web survey. In addition to receiving online reminders, some non-respondents were followed up via face-to-face visits over the winter and throughout spring.
Latest edition information:
The fourth edition (April 2024) follows the release of Wave 2 data. For this edition, a longitudinal parents dataset has been deposited, to help data users find core background information from parents who took part in either Wave 1 or Wave 2, in one place. A new version of the young person data file (version 2.1) has also been deposited. This file now includes weight variables for researchers who wish to analyse complete households, where, in addition to a young person taking part at Wave 1, a parent took part at either wave (Wave 1 or Wave 2). The COSMO Wave 1 Data User Guide Version 2.1 explains these updates in detail.
Further information about the study may be found on the COSMO website.
The main purpose of the Household Income Expenditure Survey (HIES) 2016 was to offer high quality and nationwide representative household data that provided information on incomes and expenditure in order to update the Consumer Price Index (CPI), improve National Accounts statistics, provide agricultural data and measure poverty as well as other socio-economic indicators. These statistics were urgently required for evidence-based policy making and monitoring of implementation results supported by the Poverty Reduction Strategy (I & II), the AfT and the Liberia National Vision 2030. The survey was implemented by the Liberia Institute of Statistics and Geo-Information Services (LISGIS) over a 12-month period, starting from January 2016 and was completed in January 2017. LISGIS completed a total of 8,350 interviews, thus providing sufficient observations to make the data statistically significant at the county level. The data captured the effects of seasonality, making it the first of its kind in Liberia. Support for the survey was offered by the Government of Liberia, the World Bank, the European Union, the Swedish International Development Corporation Agency, the United States Agency for International Development and the African Development Bank. The objectives of the 2016 HIES were:
National
Sample survey data [ssd]
The original sample design for the HIES used a two-phase clustered sampling method, encompassing a nationally representative sample of households in every quarter, and was based on the 2008 National Housing and Population Census sampling frame. The procedures used for each sampling stage are as follows:
i. First stage
Selection of sample EAs. The sample EAs for the 2016 HIES were selected within each stratum systematically with Probability Proportional to Size from the ordered list of EAs in the sampling frame. They were selected separately for each county by urban/rural stratum. The measure of size for each EA was the number of households in the sampling frame of EAs, based on the 2008 Liberia Census. Within each stratum the EAs were ordered geographically by district, clan, and EA codes. This provided implicit geographic stratification of the sampling frame.
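Systematic Probability Proportional to Size selection works by cumulating each EA's measure of size, stepping through the cumulative totals at a fixed interval from a random start, and picking the EA whose cumulative range contains each selection point. A sketch of this standard method with hypothetical EA counts (not LISGIS's actual frame or code):

```python
import random

def pps_systematic_sample(units, sizes, n):
    """Select n units systematically with probability proportional to size."""
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    points = [start + i * interval for i in range(n)]

    chosen, cum, idx = [], 0, 0
    for unit, size in zip(units, sizes):
        cum += size  # running cumulative measure of size
        while idx < len(points) and points[idx] <= cum:
            chosen.append(unit)  # this unit's range contains the point
            idx += 1
    return chosen

random.seed(1)  # fixed seed for a reproducible illustration
eas = ["EA01", "EA02", "EA03", "EA04", "EA05"]
households = [120, 80, 200, 60, 140]  # hypothetical frame counts
sample = pps_systematic_sample(eas, households, 2)
```

Because selection points fall more often inside large cumulative ranges, EAs with more households are proportionally more likely to be drawn, while the systematic stepping preserves the frame's geographic ordering (the implicit stratification noted above).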
ii. Second stage
Selection of sample households within a sample EA. A random systematic sample of 10 households was selected from the listing for each sample EA, using a table of pre-computed systematic samples: the supervisor only has to look up the total number of households listed, and a specific systematic sample of households is identified in the corresponding row of the table.
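The second-stage rule underlying such a lookup table is plain random systematic sampling: one random start within the first sampling interval, then every k-th household where k = N / 10. A generic sketch (the household listing here is hypothetical, and the survey used a pre-computed table rather than code in the field):

```python
import random

def systematic_sample(listing, n=10):
    """Random systematic sample of n items: random start in the first
    interval, then steps of k = N / n through the ordered listing."""
    k = len(listing) / n
    start = random.uniform(0, k)
    return [listing[int(start + i * k)] for i in range(n)]

random.seed(7)  # fixed seed for a reproducible illustration
listing = [f"HH{i:03d}" for i in range(1, 88)]  # 87 listed households
selected = systematic_sample(listing, n=10)
```

Using a fractional interval k (rather than rounding) keeps the sample size exactly 10 even when the listing count is not a multiple of 10.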
Face-to-face [f2f]
There were three questionnaires administered for this survey: 1. Household and Individual Questionnaire 2. Market Price Questionnaire 3. Agricultural Recall Questionnaire
The data entry clerk for each team entered data for each household in the field, using data entry software called CSPro. For each household, an error report was generated on-site that identified key problems with the data collected (outliers, incorrect entries, inconsistencies with skip patterns, basic filters for age- and gender-specific questions, etc.). The supervisor, along with the data entry clerk and the enumerator who collected the data, reviewed these errors. Callbacks were made to households if necessary to verify information and rectify the errors while still in that EA.
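Checks of this kind — range checks on numeric values and filter checks on questions that apply only to certain respondents — can be sketched in a few lines. The field names and thresholds below are illustrative assumptions, not the survey's actual edit rules:

```python
import pandas as pd

# Hypothetical household roster; field names are illustrative only.
df = pd.DataFrame({
    "member_id": [1, 2, 3],
    "age": [34, 215, 6],        # 215 is a deliberate entry error
    "sex": ["F", "M", "M"],
    "ever_pregnant": ["Yes", "No", None],
})

errors = []

# Outlier / range check: flag implausible ages.
for _, row in df[(df["age"] < 0) | (df["age"] > 110)].iterrows():
    errors.append(f"member {row['member_id']}: implausible age {row['age']}")

# Gender-specific filter check: pregnancy questions apply only to females,
# so any answer from a male respondent indicates a skip-pattern error.
bad = df[(df["sex"] == "M") & (df["ever_pregnant"].notna())]
for _, row in bad.iterrows():
    errors.append(f"member {row['member_id']}: pregnancy answer for male respondent")
```

An on-site report like the one CSPro generated would print such a list per household so the team could resolve each flag before leaving the EA.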
Once the data were collected in each EA, they were sent to LISGIS headquarters for further processing, along with EA reports for each area visited. The HIES technical committee converted the data into Stata and ran several consistency checks to manage overall data quality, prepared reports identifying key problems with the dataset, and called the field teams to discuss them. Monthly reports summarizing observations from the data received from the field, alongside statistics on data collection status, were prepared and shared with the field teams and LISGIS management.
The Survey of Sports Habits in Spain is a structural statistical operation developed by the Ministry as part of the National Statistical Plan. Aimed at people aged 15 and over, its main purpose is to obtain indicators relating to the sports habits of Spaniards. The sample design of the project has been carried out in collaboration with the National Institute of Statistics.
In 2023, 59 percent of Polish respondents reported that they did not ask ChatGPT for the sources of its answers when using it to search for information.
The World Bank Group is interested in gauging the views of clients and partners who are either involved in development in Senegal or who observe activities related to social and economic development. The following survey will give the World Bank Group's team that works in Senegal greater insight into how the Bank's work is perceived. This is one tool the World Bank Group uses to assess the views of its stakeholders and to develop more effective strategies that support development in Senegal. A local independent firm has been hired to oversee the logistics of this survey.
This survey was designed to achieve the following objectives:
- Assist the World Bank Group in gaining a better understanding of how stakeholders in Senegal perceive the Bank Group;
- Obtain systematic feedback from stakeholders in Senegal regarding:
  - Their views regarding the general environment in Senegal;
  - Their overall attitudes toward the World Bank Group in Senegal;
  - Overall impressions of the World Bank Group's effectiveness and results, knowledge work and activities, and communication and information sharing in Senegal;
  - Perceptions of the World Bank Group's future role in Senegal;
- Use data to help inform the Senegal country team's strategy.
Stakeholders in Senegal
Stakeholders in Senegal
Sample survey data [ssd]
Between April and July 2014, 2,826 stakeholders of the WBG in Senegal were invited to provide their opinions on the WBG's work in the country by participating in a country opinion survey. Participants were drawn from the office of the President; the Prime Minister's office; elected members of the National Assembly; ministries/ministerial departments; consultants/contractors working on WBG-supported projects/programs; PMUs; local governments; bilateral and multilateral agencies; private sector organizations; financial sector/private banks; NGOs; the media; independent government institutions; trade unions; academia/research institutes/think tanks; the judiciary branch; and other organizations.
Face-to-face [f2f]
The Questionnaire consists of following sections:
A. General Issues Facing Senegal: Respondents were asked to indicate whether Senegal is headed in the right direction, what they thought were the most important development priorities, which areas would contribute most to reducing poverty and generating economic growth, and how "shared prosperity" would be best achieved.
B. Overall Attitudes toward the World Bank Group (WBG): Respondents were asked about their familiarity with the WBG, its effectiveness in Senegal, WBG staff preparedness to help Senegal solve its development challenges, the WBG's local presence, the WBG's capacity building in Senegal, their agreement with various statements regarding the WBG's work, and the extent to which the WBG is an effective development partner. Respondents were asked to indicate the WBG's greatest values and weaknesses, the most effective instruments in helping reduce poverty in Senegal, and in which sectoral areas the WBG should focus most of its resources.
C. World Bank Group's Effectiveness and Results: Respondents were asked to rate the extent to which the WBG's work helps achieve development results in Senegal, the extent to which the WBG meets Senegal's needs for knowledge services and financial instruments, the extent to which Senegal received value for the WBG's fee-based services, the importance of WBG involvement in thirty-one development areas, and the WBG's level of effectiveness across these areas, such as public sector governance/reform, education, and agricultural development.
D. The World Bank Group's Knowledge Work and Activities: Respondents were asked how often they use the WBG's knowledge work, and were asked to rate the effectiveness and quality of the WBG's knowledge work and activities, including how significant a contribution it makes to development results and its technical quality.
E. Working with the World Bank Group: Respondents were asked to rate their level of agreement with a series of statements regarding working with the WBG, such as the WBG's "Safeguard Policy" requirements being reasonable, and disbursing funds promptly. The respondents were also asked to rate the contribution of the WBG's technical assistance to solving Senegal's development challenges.
F. The Future Role of the World Bank Group in Senegal: Respondents were asked to indicate what the WBG should do to make itself of greater value in Senegal and which services the WBG should offer more of in the country.
G. Communication and Information Sharing: Respondents were asked to indicate how they get information about economic and social development issues, how they prefer to receive information from the WBG, and their usage and evaluation of the WBG's websites. Respondents were also asked about their awareness of the WBG's Access to Information policy, and were asked to rate the WBG's responsiveness to information requests, the value of its social media channels, how easy it was to find the information they needed, how easy it was to navigate the WBG websites, and whether they use WBG data more often than before.
H. Background Information: Respondents were asked to indicate their current position, specialization, whether they professionally collaborate with the WBG, their exposure to the WBG in Senegal, which WBG agencies they work with, their geographic locations, and whether they think that the IFC and the World Bank work well together.
A total of 269 stakeholders participated in the survey (10% response rate).
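As a quick sanity check, the reported 10% response rate follows from rounding 269 participants out of 2,826 invited stakeholders; a minimal sketch:

```python
# Verify the reported response rate from the figures given above.
invited = 2826      # stakeholders invited (April-July 2014)
participated = 269  # stakeholders who completed the survey

rate = participated / invited * 100  # ≈ 9.5%, reported as 10% after rounding
print(f"{rate:.1f}% (reported as {round(rate)}%)")
```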
This dataset presents the information for the portal Find My Master (TMM) for the academic years 2019-2020, 2020-2021 and 2021-2022. This information is provided by institutions accredited to award National Master's Degrees (DNMs). The TMM portal is intended to identify all existing DNMs; it does not identify other degrees conferring a master's level or other graduate degrees. For more information, see the documentation dedicated to the three datasets relating to information for the Find My Master (TMM) portal. One variable was added to this dataset for the academic year 2021-2022: usage name of the accredited institution (etab_name_use).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about books and is filtered where the book is International business information : how to find it, how to use it, featuring 7 columns including author, BNB id, book, book publisher, and ISBN. The preview is ordered by publication date (descending).
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Get information on research plots for the Guide Effectiveness Monitoring Program. This dataset includes ecological information for Guide Effectiveness Monitoring (GEM) Program site locations. The GEM Program evaluates the effectiveness of forest management guides on songbird occupancy rate and community structure. Learn about the procedures and protocols used for this study in the Centre for Northern Forest Ecosystem Research Technical Report 004: Effectiveness Monitoring of Forest Management Guides.