Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Among the most commonly used molecular inputs for ionic liquids and deep eutectic solvents (DESs) in the literature are the critical properties and acentric factors, which can be easily determined using the modified Lydersen–Joback–Reid (LJR) method with Lee–Kesler mixing rules. However, the method used in the literature is generally applicable only to binary mixtures of DESs. Nevertheless, ternary DESs are considered to be more interesting and may provide further tailorability for developing task-specific DESs for particular applications. Therefore, in this work, a new framework for estimating the critical properties and the acentric factor of ternary DESs based on their molecular structures is presented by adjusting the framework reported in the literature with an extended version of the Lee–Kesler mixing rules. The presented framework was applied to a data set consisting of 87 ternary DESs with 334 distinct compositions. For validation, the estimated critical properties and acentric factors were used to predict the densities of the ternary DESs. The results showed excellent agreement between the experimental and calculated data, with an average absolute relative deviation (AARD) of 5.203% for ternary DESs and 5.712% for 260 binary DESs (573 compositions). The developed methodology was incorporated into a user-friendly Excel worksheet for computing the critical properties and acentric factors of any ternary or binary DES, which is provided in the Supporting Information. This work promotes the creation of robust, accessible, and user-friendly models capable of predicting the properties of new ternary DESs based on critical properties, thus saving time and resources.
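The abstract does not reproduce the equations themselves. For orientation only, a commonly cited binary form of the Lee–Kesler mixing rules is sketched below (mole fractions x_i; the modified LJR group contributions and the extended ternary version actually used in this work are given in the paper's Supporting Information, so this should not be read as the authors' exact formulation):

```latex
% Commonly cited Lee-Kesler mixing rules (background sketch only; the paper's
% extended ternary version is provided in its Supporting Information).
\begin{align}
  V_{c,m} &= \frac{1}{8}\sum_{i}\sum_{j} x_i x_j \left(V_{c,i}^{1/3} + V_{c,j}^{1/3}\right)^{3} \\
  T_{c,m} &= \frac{1}{8\,V_{c,m}}\sum_{i}\sum_{j} x_i x_j \left(V_{c,i}^{1/3} + V_{c,j}^{1/3}\right)^{3} \sqrt{T_{c,i}\,T_{c,j}} \\
  \omega_{m} &= \sum_{i} x_i\,\omega_i , \qquad
  P_{c,m} = \frac{\left(0.2905 - 0.085\,\omega_m\right) R\,T_{c,m}}{V_{c,m}}
\end{align}
```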
Market basket analysis with Apriori algorithm
The retailer wants to target customers with suggestions on the itemsets they are most likely to purchase. I was given a dataset from a retailer; the transaction data covers all the transactions that happened over a period of time. The retailer will use the results to grow in this industry and to give customers suggestions on itemsets, so we can increase customer engagement, improve the customer experience and identify customer behaviour. I will solve this problem using association rules, a type of unsupervised learning technique that checks for the dependency of one data item on another data item.
Association rule mining is most used when you are planning to find associations between different objects in a set. It works when you are planning to find frequent patterns in a transaction database. It can tell you which items customers frequently buy together, and it allows the retailer to identify relationships between the items.
Assume there are 100 customers, 10 of them bought a computer mouse, 9 bought a mouse mat and 8 bought both of them. - bought computer mouse => bought mouse mat - support = P(mouse & mat) = 8/100 = 0.08 - confidence = support/P(computer mouse) = 0.08/0.10 = 0.80 - lift = confidence/P(mouse mat) = 0.80/0.09 ≈ 8.9 This is just a simple example. In practice, a rule needs the support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.
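The same toy numbers can be checked in a few lines of R (the counts below are the ones assumed in the example above, not real transaction data):

```r
# Toy counts: 100 customers, 10 bought a mouse, 9 bought a mat, 8 bought both
n <- 100; n_mouse <- 10; n_mat <- 9; n_both <- 8

support    <- n_both / n               # P(mouse & mat)       = 0.08
confidence <- support / (n_mouse / n)  # P(mat | mouse)       = 0.80
lift       <- confidence / (n_mat / n) # confidence / P(mat)  ~ 8.9

c(support = support, confidence = confidence, lift = lift)
```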
Number of Attributes: 7
(Screenshot: https://user-images.githubusercontent.com/91852182/145270162-fc53e5a3-4ad1-4d06-b0e0-228aabcf6b70.png)
First, we need to load the required libraries. Below, I briefly describe each library.
(Screenshot: https://user-images.githubusercontent.com/91852182/145270210-49c8e1aa-9753-431b-a8d5-99601bc76cb5.png)
Next, we need to load Assignment-1_Data.xlsx into R to read the dataset. Now we can see our data in R.
(Screenshot: https://user-images.githubusercontent.com/91852182/145270229-514f0983-3bbb-4cd3-be64-980e92656a02.png)
(Screenshot: https://user-images.githubusercontent.com/91852182/145270251-6f6f6472-8817-435c-a995-9bc4bfef10d1.png)
After that, we will clean our data frame and remove missing values.
(Screenshot: https://user-images.githubusercontent.com/91852182/145270286-05854e1a-2b6c-490e-ab30-9e99e731eacb.png)
To apply association rule mining, we need to convert the data frame into transaction data, so that all items bought together in one invoice will be in ...
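The screenshots are not reproduced here. As a rough sketch of the steps described above, and assuming the readxl/arules packages plus placeholder column names (BillNo and Itemname are illustrative, not necessarily the columns in Assignment-1_Data.xlsx), the R workflow could look like this:

```r
library(readxl)  # read the .xlsx file
library(arules)  # association rule mining (Apriori)

# 1. Load the dataset (column names below are assumed for illustration)
retail <- read_excel("Assignment-1_Data.xlsx")

# 2. Clean the data frame: drop rows with missing invoice numbers or item names
retail <- retail[!is.na(retail$BillNo) & !is.na(retail$Itemname), ]

# 3. Convert to transaction data: all (unique) items bought on one invoice
#    form one transaction
baskets <- lapply(split(retail$Itemname, retail$BillNo), unique)
trans   <- as(baskets, "transactions")

# 4. Mine association rules with minimum support and confidence thresholds
rules <- apriori(trans, parameter = list(supp = 0.01, conf = 0.5))

# 5. Inspect the strongest rules by lift
inspect(head(sort(rules, by = "lift"), 10))
```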
In this project, I analysed the employees of an organization located in two distinct countries using Excel. This project covers:
1) How to approach a data analysis project
2) How to systematically clean data
3) Doing EDA with Excel formulas & tables
4) How to use Power Query to combine two datasets
5) Statistical Analysis of data
6) Using formulas like COUNTIFS, SUMIFS, XLOOKUP
7) Making an information finder with your data
8) Male vs. Female Analysis with Pivot tables
9) Calculating Bonuses based on business rules
10) Visual analytics of data with 4 topics
11) Analysing the salary spread (Histograms & Box plots)
12) Relationship between Salary & Rating
13) Staff growth over time - trend analysis
14) Regional Scorecard to compare NZ with India
Including various Excel features such as:
1) Using Tables
2) Working with Power Query
3) Formulas
4) Pivot Tables
5) Conditional formatting
6) Charts
7) Data Validation
8) Keyboard Shortcuts & tricks
9) Dashboard Design
The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public. Click here to learn more about open data and why Ontario releases it. Ontario’s Open Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario’s Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open will have one of the following labels: If you want to use data you find in the catalogue, that data must have a licence – a set of rules that describes how you can use it. A licence: Most of the data available in the catalogue is released under Ontario’s Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn’t have a licence, you don’t have the right to use the data. If you have questions about how you can use a specific dataset, please contact us. The Ontario Data Catalogue endeavors to publish open data in a machine readable format. For machine readable datasets, you can simply retrieve the file you need using the file URL. The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (Application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API. Note: All Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue's collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN's documented search fields. You can also search these custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN's API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API. You can also search and access the machine-readable open data that is available in the catalogue. How to use the API feature: Read the complete documentation for CKAN's Datastore API. The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data. Others will not be available to you. This is because the Government of Ontario is unable to share data that would break the law or put someone's safety at risk. You can search for a dataset with a word that might describe a dataset or topic. Use words like “taxes” or “hospital locations” to discover what datasets the catalogue contains. You can search for a dataset from 3 spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue. On the dataset search page, you can also filter your search results. You can select filters on the left hand side of the page to limit your search for datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular organization, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier.
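As a minimal illustration of the CKAN API mentioned above (the base URL and dataset id below are assumptions for the sketch, not taken from this text), dataset metadata can be retrieved like this in R:

```r
library(httr)
library(jsonlite)

base_url   <- "https://data.ontario.ca"  # assumed public base URL of the catalogue
dataset_id <- "example-dataset-id"       # hypothetical dataset identifier

# package_show returns the full metadata record for one dataset
resp <- GET(paste0(base_url, "/api/3/action/package_show"),
            query = list(id = dataset_id))
meta <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

# List the files (resources) attached to the dataset and their download URLs
meta$result$resources[, c("name", "format", "url")]
```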
You can also do a quick search by selecting one of the catalogue’s categories on the homepage. These categories can help you see the types of data we have on key topic areas. When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available, and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different sub-sets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to make sure you have permission to use it the way you want. Read more about previewing data. A non-open dataset may not be available for many reasons. Read more about non-open data. Read more about restricted data. Data that is non-open may still be subject to freedom of information requests. The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue – no additional software needed. Have a look at our walk-through of how to make a chart in the catalogue. Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization’s datasets or the full catalogue. You don’t have to provide any personal information – just subscribe to our feeds using any feed reader you like using the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated. The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc). Learn about each format and how you can access and use the data each file contains. A file that has a list of items and values separated by commas without formatting (e.g. colours, italics, etc.) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: Open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: This format is considered machine-readable, it can be easily processed and used by a computer. Files that have visual formatting (e.g. bolded headers and colour-coded rows) can be hard for machines to understand, these elements make a file more human-readable and less machine-readable. A file that provides information without formatted text or extra visual features that may not follow a pattern of separated values like a CSV. How to access the data: Open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad). A spreadsheet file that may also include charts, graphs, and formatting. How to access the data: Open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to a CSV for a non-proprietary format of the same data without formatted text or extra visual features. A shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbt) and might include corresponding files (e.g., .prj). How to access the data: Open with a geographic information system (GIS) software program (e.g., QGIS).
A package of files and folders. The package can contain any number of different file types. How to access the data: Open with an unzipping software application (e.g., WinZIP, 7Zip). Note: If a ZIP file contains .shp, .shx, and .dbt file types, it is an ArcGIS ZIP: a package of shapefiles which provide information to create maps or perform geospatial analysis that can be opened with ArcGIS (a geographic information system software program). A file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011 etc.) and its geospatial location (i.e., points/lines). How to access the data: Open using a GIS software application to create a map or do geospatial analysis. It can also be opened with a text editor to view raw information. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. A text-based format for sharing data in a machine-readable way that can store data with more unconventional structures such as complex lists. How to access the data: Open with any text editor (e.g., Notepad) or access through a browser. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. A text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: Open with any text editor (e.g., Notepad). Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. A file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011 etc.) and its geospatial location (i.e., points/lines). How to access the data: Open with a geospatial software application that supports the KML format (e.g., Google Earth). Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. This format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: Open with the Beyond 20/20 application. A database which links and combines data from different files or applications (including HTML, XML, Excel, etc.). The database file can be converted to a CSV/TXT to make the data machine-readable, but human-readable formatting will be lost. How to access the data: Open with Microsoft Office Access (a database management system used to develop application software). A file that keeps the original layout and
Nearly sixty years ago, in a publication with a growing rate of citation ever since, JR Platt presented “strong inference” (SI) as an accumulative method of inductive inference to produce much more rapid progress than others. The article offered persuasive testimony for the use of multiple working hypotheses combined with disconfirmation. It is often cited as an exemplar of scientific practice. However, the article provides no evidence of greater efficacy. Over a 34-year period, a total of 780 matched trials were completed in 56 labs in a university course in statistical science. The reduction from random (18.9 cards) to selected cards was 7.2 cards, compared to a further reduction of 0.3 cards from selected to SI. In 46% of the 780 trials, the number of cards to infer a rule was greater for strong inference than for a less rigid experimental method. Based on the evidence, strong inference added little additional strength beyond that of less rigidly structured experiments.

Using inductive cards as a model (Gardner 1959 Inductive Cards. Scientific American 200:160), I devised a lab for a course in statistics for graduate and upper level undergraduate university students. Students worked in groups of three or four. One person (Nature) devises a rule for placing cards in piles. The other students in the group work together to infer a rule for cards placed by Nature according to the unknown rule. On the first round cards are drawn from a shuffled deck. This is an observational study with an uncontrolled random component. On the second round (Selected cards) each rule moves to a different group, where students choose cards to present to Nature (an experimental study). On the third round a new group applies the strong inference (SI) method to a rule. The lab required students to list multiple working hypotheses at each step, list one or more “crucial test” cards, present them to Nature for placement, and disconfirm one or more hypotheses. The procedure is repea...

The data were initially stored as ASCII (.txt) files (1989 to 2003). This data was moved to Excel files in 2023. Data from 2004 through 2022 were stored as Excel files.
The National Institute of Standards and Technology (NIST) provides a Cybersecurity Framework (CSF) for benchmarking and measuring the maturity level of cybersecurity programs across all industries. The City uses this framework and toolset to measure and report on its internal cybersecurity program. The foundation for this measure is the Framework Core, a set of cybersecurity activities, desired outcomes, and applicable references that are common across critical infrastructure/industry sectors. These activities come from the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) published standard, along with the information security and customer privacy controls it references (NIST 800 Series Special Publications). The Framework Core presents industry standards, guidelines, and practices in a manner that allows for communication of cybersecurity activities and outcomes across the organization from the executive level to the implementation/operations level. The Framework Core consists of five concurrent and continuous functions: identify, protect, detect, respond, and recover. When considered together, these functions provide a high-level, strategic view of the lifecycle of an organization’s management of cybersecurity risk. The Framework Core identifies underlying key categories and subcategories for each function, and matches them with example references, such as existing standards, guidelines, and practices for each subcategory. This page provides data for the Cybersecurity performance measure: Cybersecurity Framework (CSF) scores by each CSF category per fiscal year quarter (Performance Measure 5.12). The performance measure dashboard is available at 5.12 Cybersecurity.
Additional Information
Source: Maturity assessment / https://www.nist.gov/topics/cybersecurity
Contact: Scott Campbell
Contact E-Mail: Scott_Campbell@tempe.gov
Data Source Type: Excel
Preparation Method: The data is a summary of a detailed and confidential analysis of the city's cybersecurity program. Maturity scores of subcategories within the NIST CSF are combined, averaged, and rolled up to a summary score for each major category.
Publish Frequency: Annual
Publish Method: Manual
Data Dictionary
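As a minimal sketch of the roll-up described above (the subcategory scores and the 0–4 scale are hypothetical; the city's detailed assessment is confidential):

```r
# Hypothetical NIST CSF subcategory maturity scores (0-4 scale assumed)
scores <- data.frame(
  csf_function = c("Identify", "Identify", "Protect", "Protect", "Detect"),
  subcategory  = c("ID.AM-1", "ID.AM-2", "PR.AC-1", "PR.AC-3", "DE.CM-1"),
  maturity     = c(3.0, 2.5, 3.5, 2.0, 2.5)
)

# Subcategory scores are averaged and rolled up to a summary score per function
aggregate(maturity ~ csf_function, data = scores, FUN = mean)
```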
AI Generated Summary: The Ontario Data Catalogue is a data portal providing access to open datasets generated and maintained by the Ontario government. It allows users to search, access, visualize, and download data in various machine-readable formats, often through APIs, while also indicating licensing terms and data update frequencies. The catalogue also provides tools for data visualization and notifications for dataset updates. About: The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public. Click here to learn more about open data and why Ontario releases it. Ontario’s Digital and Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario’s Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open will have one of the following labels: If you want to use data you find in the catalogue, that data must have a licence – a set of rules that describes how you can use it. A licence: Most of the data available in the catalogue is released under Ontario’s Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn’t have a licence, you don’t have the right to use the data. If you have questions about how you can use a specific dataset, please contact us. The Ontario Data Catalogue endeavors to publish open data in a machine readable format. For machine readable datasets, you can simply retrieve the file you need using the file URL. The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (Application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API. Note: All Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue's collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN's documented search fields. You can also search these custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN's API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API. You can also search and access the machine-readable open data that is available in the catalogue. How to use the API feature: Read the complete documentation for CKAN's Datastore API. The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data. Others will not be available to you. This is because the Government of Ontario is unable to share data that would break the law or put someone's safety at risk. You can search for a dataset with a word that might describe a dataset or topic. Use words like “taxes” or “hospital locations” to discover what datasets the catalogue contains. You can search for a dataset from 3 spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue.
On the dataset search page, you can also filter your search results. You can select filters on the left hand side of the page to limit your search for datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular ministry, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier. You can also do a quick search by selecting one of the catalogue’s categories on the homepage. These categories can help you see the types of data we have on key topic areas. When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available, and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different sub-sets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to make sure you have permission to use it the way you want. A non-open dataset may not be available for many reasons. Read more about non-open data. Read more about restricted data. Data that is non-open may still be subject to freedom of information requests. The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue – no additional software needed. Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization’s datasets or the full catalogue. You don’t have to provide any personal information – just subscribe to our feeds using any feed reader you like using the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated. The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc). Learn about each format and how you can access and use the data each file contains. A file that has a list of items and values separated by commas without formatting (e.g. colours, italics, etc.) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: Open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: This format is considered machine-readable, it can be easily processed and used by a computer. Files that have visual formatting (e.g. bolded headers and colour-coded rows) can be hard for machines to understand, these elements make a file more human-readable and less machine-readable. A file that provides information without formatted text or extra visual features that may not follow a pattern of separated values like a CSV. How to access the data: Open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad). A spreadsheet file that may also include charts, graphs, and formatting. How to access the data: Open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to a CSV for a non-proprietary format of the same data without formatted text or extra visual features.
A shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbt) and might include corresponding files (e.g., .prj). How to access the data: Open with a geographic information system (GIS) software program (e.g., QGIS). A package of files and folders. The package can contain any number of different file types. How to access the data: Open with an unzipping software application (e.g., WinZIP, 7Zip). Note: If a ZIP file contains .shp, .shx, and .dbt file types, it is an ArcGIS ZIP: a package of shapefiles which provide information to create maps or perform geospatial analysis that can be opened with ArcGIS (a geographic information system software program). A file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011 etc.) and its geospatial location (i.e., points/lines). How to access the data: Open using a GIS software application to create a map or do geospatial analysis. It can also be opened with a text editor to view raw information. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. A text-based format for sharing data in a machine-readable way that can store data with more unconventional structures such as complex lists. How to access the data: Open with any text editor (e.g., Notepad) or access through a browser. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. A text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: Open with any text editor (e.g., Notepad). Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. A file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011 etc.) and its geospatial location (i.e., points/lines). How to access the data: Open with a geospatial software application that supports the KML format (e.g., Google Earth). Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand. This format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: Open with the Beyond 20/20 application. A database which links and combines data from different files or
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Original data underpinning the figures in the associated publication. Each file is the source data for the paper figure it references.
Publication content: The Arabidopsis genome contains three members of the TTG1 (TRANSPARENT TESTA GLABRA 1) WDR subgroup of the WDR family, with very different reported roles. TTG1 is a regulator of epidermal cell differentiation, and of the production of pigments, while LWD1 (LIGHT-REGULATED WD1) and LWD2 (LIGHT-REGULATED WD2) are regulators of the circadian clock. We discovered a new central role for TTG1 WDR proteins as regulators of the circadian system, demonstrated by a lack of detectable circadian rhythms in a triple lwd1lwd2ttg1 mutant. We have demonstrated that there has been subfunctionalisation by protein changes within the angiosperms, with some TTG1 WDR proteins developing a stronger role in circadian clock regulation while losing the protein characteristics essential for pigment production and epidermal cell specification, and others weakening their ability to drive circadian clock regulation.
File source_fig2.xlsx is an Excel spreadsheet containing leaf pixel position and CCA:LUC luminescence data for the triple mutant (lwd1 lwd2 ttg1), wild type and various single or double mutant combinations of these 3 genes, all in Arabidopsis. See figure 2 of the main manuscript.
File source_fig3.xlsx is an Excel spreadsheet containing CCA:LUC luminescence data for various mutant combinations and wild type Arabidopsis. See figure 3 of the main manuscript.
File source_fig5.xlsx is an Excel spreadsheet containing CCA:LUC luminescence data for wild type, mutant and transgenic lines (expressing the TTG1, LWD1 or LWD2 gene from the CaMV 35S promoter in various mutant backgrounds). See figure 5 of the main manuscript.
File source_fig6.xlsx is an excel spreadsheet containing CCA:LUC luminescence data for various mutants expressing the Marchantia polymorpha or Amborella trichopoda genes from the TTG family. See figure 6 of main manuscript.
source_ED_fig2.xlsx is an Excel spreadsheet showing anthocyanin content in mg/g of dry weight. Comparison between WT, the ttg1-1 mutant and the ttg1-1 mutant ectopically expressing TTG1, LWD1, LWD2, MpWDR1, MpWDR2, MpWDR3, AmLWD. See extended data figure 2 of the main manuscript.
source_ED_fig3.xlsx is an excel spreadsheet providing root hair count in 2.5 mm of the first 5 mm of the root of individuals of the mutant and transgenic genotypes used throughout this study. See extended data figure 3 of main manuscript.
source_ED_fig4 is a pdf file showing the original electrophoretic gel images which were combined to make extended data figure 4 of the main manuscript.
source_ED_fig5.xlsx is an excel spreadsheet showing number of rosette and cauline leaves at flowering of wild type and mutant and transgenic lines used in this study. See main manuscript extended data figure 5.
source_ED_fig6.xlsx is an excel spreadsheet showing CCA:LUC luminescence data for the wild type, transgenic and mutant lines used in this study, alongside expression data of LWD2 in the ttg1lwd1 double mutant compared to WT in three biological replicates, data obtained by qRT-PCR with LWD2 specific primers and reference gene UBQ10. See extended data figure 6 of main manuscript.
source_ED_fig7.xlsx is an excel spreadsheet showing trichome numbers on the leaf edge of the ttg1 mutant, double mutants and the triple mutant used in this study. Data represent total trichome number on a plant with 9 leaves. See extended data figure 7 of main manuscript.
source_ED_fig8.pdf is a pdf file showing the electrophoretic gels from which extended data figure 8 of the main manuscript was constructed.
source_ED_fig9.xlsx is an excel spreadsheet showing rosette and cauline leaf number at flowering of individuals of the wild type, mutant and transgenic genotypes used in this study. See extended data figure 9 of main manuscript.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PRIMAP-crf is a processed version of data reported by countries to the United Nations Framework Convention on Climate Change (UNFCCC) in the Common Reporting Format (CRF). The processing has three key aspects: 1) Data from individual countries and years are combined into one file. 2) Data is re-organised to follow the IPCC 2006 hierarchical categorisation. 3) ‘Baskets’ of gases are calculated according to different global warming potential estimates from each of the three most recent IPCC reports. All Annex I Parties to the United Nations Framework Convention on Climate Change (UNFCCC) are required to report domestic emissions on an annual basis in a 'Common Reporting Format' (CRF). In 2015, the CRF data reporting was updated to follow the more recent 2006 guidelines from the IPCC and the structure of the reporting tables was modified accordingly. However, the hierarchical categorisation of data in the IPCC 2006 guidelines is not readily extracted from the reporting tables. We present the PRIMAP-crf data as a re-constructed hierarchical dataset according to the IPCC 2006 guidelines. Furthermore, the data is organised in a series of tables containing all available countries and years for each individual GHG and category reported. In addition to single gases, the Kyoto basket of greenhouse gases (CO2, N2O, CH4, HFCs, PFCs, SF6, and NF3) is provided according to multiple global warming potentials. The dataset was produced using the PRIMAP emissions module. Key processing steps include: extracting data from submitted CRF Excel spreadsheets, mapping CRF categories to IPCC 2006 categories, constructing missing categories from available data, and aggregating single gases to gas baskets. The processed data is available under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
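As a hedged illustration of the gas-basket step (the GWP100 values below are the IPCC AR4 ones and the emission numbers are invented; PRIMAP-crf supplies baskets for several GWP sets, so treat this only as an example of the weighting):

```r
# Invented emissions for one country/year/category, in kilotonnes of each gas
emissions <- c(CO2 = 350000, CH4 = 1200, N2O = 40)

# GWP100 values from IPCC AR4 (illustrative; other IPCC reports use different values)
gwp_ar4 <- c(CO2 = 1, CH4 = 25, N2O = 298)

# Kyoto-basket total in kilotonnes CO2-equivalent:
# each gas is weighted by its global warming potential and summed
kyoto_basket <- sum(emissions * gwp_ar4[names(emissions)])
kyoto_basket
```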
The General Household Survey-Panel (GHS-Panel) is implemented in collaboration with the World Bank Living Standards Measurement Study (LSMS) team as part of the Integrated Surveys on Agriculture (ISA) program. The objectives of the GHS-Panel include the development of an innovative model for collecting agricultural data, interinstitutional collaboration, and comprehensive analysis of welfare indicators and socio-economic characteristics. The GHS-Panel is a nationally representative survey of approximately 5,000 households, which are also representative of the six geopolitical zones. The 2023/24 GHS-Panel is the fifth round of the survey, with prior rounds conducted in 2010/11, 2012/13, 2015/16 and 2018/19. The GHS-Panel households were visited twice: during the post-planting period (July - September 2023) and during the post-harvest period (January - March 2024).
National
• Households • Individuals • Agricultural plots • Communities
The survey covered all de jure households excluding prisons, hospitals, military barracks, and school dormitories.
Sample survey data [ssd]
The original GHS‑Panel sample was fully integrated with the 2010 GHS sample. The GHS sample consisted of 60 Primary Sampling Units (PSUs) or Enumeration Areas (EAs), chosen from each of the 37 states in Nigeria. This resulted in a total of 2,220 EAs nationally. Each EA contributed 10 households to the GHS sample, resulting in a sample size of 22,200 households. Out of these 22,200 households, 5,000 households from 500 EAs were selected for the panel component, and 4,916 households completed their interviews in the first wave.
After nearly a decade of visiting the same households, a partial refresh of the GHS‑Panel sample was implemented in Wave 4 and maintained for Wave 5. The refresh was conducted to maintain the integrity and representativeness of the sample. The refresh EAs were selected from the same sampling frame as the original GHS‑Panel sample in 2010. A listing of households was conducted in the 360 EAs, and 10 households were randomly selected in each EA, resulting in a total refresh sample of approximately 3,600 households.
In addition to these 3,600 refresh households, a subsample of the original 5,000 GHS‑Panel households from 2010 were selected to be included in the new sample. This “long panel” sample of 1,590 households was designed to be nationally representative to enable continued longitudinal analysis for the sample going back to 2010. The long panel sample consisted of 159 EAs systematically selected across Nigeria’s six geopolitical zones.
The combined sample of refresh and long panel EAs in Wave 5 that were eligible for inclusion consisted of 518 EAs based on the EAs selected in Wave 4. The combined sample generally maintains both the national and zonal representativeness of the original GHS‑Panel sample.
Although 518 EAs were identified for the post-planting visit, conflict events prevented interviewers from visiting eight EAs in the North West zone of the country. The EAs were located in the states of Zamfara, Katsina, Kebbi and Sokoto. Therefore, the final number of EAs visited both post-planting and post-harvest comprised 157 long panel EAs and 354 refresh EAs. The combined sample is also roughly equally distributed across the six geopolitical zones.
Computer Assisted Personal Interview [capi]
The GHS-Panel Wave 5 consisted of three questionnaires for each of the two visits. The Household Questionnaire was administered to all households in the sample. The Agriculture Questionnaire was administered to all households engaged in agricultural activities such as crop farming, livestock rearing, and other agricultural and related activities. The Community Questionnaire was administered to the community to collect information on the socio-economic indicators of the enumeration areas where the sample households reside.
GHS-Panel Household Questionnaire: The Household Questionnaire provided information on demographics; education; health; labour; childcare; early child development; food and non-food expenditure; household nonfarm enterprises; food security and shocks; safety nets; housing conditions; assets; information and communication technology; economic shocks; and other sources of household income. Household location was geo-referenced in order to be able to later link the GHS-Panel data to other available geographic data sets (forthcoming).
GHS-Panel Agriculture Questionnaire: The Agriculture Questionnaire solicited information on land ownership and use; farm labour; inputs use; GPS land area measurement and coordinates of household plots; agricultural capital; irrigation; crop harvest and utilization; animal holdings and costs; household fishing activities; and digital farming information. Some information is collected at the crop level to allow for detailed analysis for individual crops.
GHS-Panel Community Questionnaire: The Community Questionnaire solicited information on access to infrastructure and transportation; community organizations; resource management; changes in the community; key events; community needs, actions, and achievements; social norms; and local retail price information.
The Household Questionnaire was slightly different for the two visits. Some information was collected only in the post-planting visit, some only in the post-harvest visit, and some in both visits.
The Agriculture Questionnaire collected different information during each visit, but for the same plots and crops.
The Community Questionnaire collected prices during both visits, and different community level information during the two visits.
CAPI: The Wave 5 exercise was conducted using Computer Assisted Personal Interview (CAPI) techniques. All the questionnaires (household, agriculture, and community questionnaires) were implemented in both the post-planting and post-harvest visits of Wave 5 using the CAPI software, Survey Solutions. The Survey Solutions software was developed and maintained by the Living Standards Measurement Unit within the Development Economics Data Group (DECDG) at the World Bank. Each enumerator was given a tablet which they used to conduct the interviews. Overall, implementation of the survey using Survey Solutions CAPI was highly successful, as it allowed for timely availability of the data from completed interviews.
DATA COMMUNICATION SYSTEM: The data communication system used in Wave 5 was highly automated. Each field team was given a mobile modem which allowed for internet connectivity and daily synchronization of their tablets. This ensured that the head office in Abuja had access to the data in real-time. Once the interview was completed and uploaded to the server, the data was first reviewed by the Data Editors. The data was also downloaded from the server, and a Stata dofile was run on the downloaded data to check for additional errors that were not captured by the Survey Solutions application. An Excel error file was generated following the running of the Stata dofile on the raw dataset. Information contained in the Excel error files was then communicated back to respective field interviewers for their action. This monitoring activity was done on a daily basis throughout the duration of the survey, in both the post-planting and post-harvest visits.
DATA CLEANING: The data cleaning process was done in three main stages. The first stage was to ensure proper quality control during the fieldwork. This was achieved in part by incorporating validation and consistency checks into the Survey Solutions application used for the data collection and designed to highlight many of the errors that occurred during the fieldwork.
The second stage of cleaning involved the use of Data Editors and Data Assistants (Headquarters in Survey Solutions). As indicated above, once the interview is completed and uploaded to the server, the Data Editors review each completed interview for inconsistencies and extreme values. Depending on the outcome, they can either approve or reject the case. If rejected, the case goes back to the respective interviewer’s tablet upon synchronization. Special care was taken to see that the households included in the data matched with the selected sample and where there were differences, these were properly assessed and documented. The agriculture data were also checked to ensure that the plots identified in the main sections merged with the plot information identified in the other sections. Additional errors observed were compiled into error reports that were regularly sent to the teams. These errors were then corrected based on re-visits to the household on the instruction of the supervisor. The data that had gone through this first stage of cleaning was then approved by the Data Editor. After the Data Editor’s approval of the interview on the Survey Solutions server, Headquarters also reviews it and, depending on the outcome, can either reject or approve it.
The third stage of cleaning involved a comprehensive review of the final raw data following the first and second stage cleaning. Every variable was examined individually for (1) consistency with other sections and variables, (2) out of range responses, and (3) outliers. However, special care was taken to avoid making strong assumptions when resolving potential errors. Some minor errors remain in the data where the diagnosis and/or solution were unclear to the data cleaning team.
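The cleaning scripts themselves are not part of this documentation. As a rough sketch only (written in R rather than the project's Stata dofiles, and with invented variable names and thresholds), a third-stage range and outlier check might look like:

```r
# Invented household-level variables; the GHS-Panel variable names and limits differ
hh <- data.frame(
  hhid         = 1:6,
  age_head     = c(34, 51, 29, 140, 47, 63),     # 140 is an out-of-range value
  plot_area_ha = c(0.5, 1.2, 0.8, 0.9, 25, 1.1)  # 25 ha is a likely outlier
)

# (1) Out-of-range responses: flag implausible ages of the household head
range_flags <- hh[hh$age_head < 10 | hh$age_head > 110, ]

# (2) Outliers: flag plot areas more than 2 standard deviations from the mean
z <- as.numeric(scale(hh$plot_area_ha))
outlier_flags <- hh[abs(z) > 2, ]

range_flags
outlier_flags
```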
Response
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
This dataset contains raw data and edited data from the Dutch multicenter ATTEST study. In this study, we determined the prevalence of major cardioembolic sources detected with transthoracic echocardiography (TTE) in 1084 patients with ischemic stroke or transient ischemic attack (TIA) of undetermined cause.
In addition, a cost-effectiveness model was developed to compare three strategies of TTE assessment: 1. TTE in all patients, 2. TTE only in patients that also have major electrocardiogram (ECG) abnormalities and 3. no TTE.
The dataset contains: 1. ATTEST study database: patient data that was entered in an online database, Castor EDC, and after completion downloaded as CSV and SPSS files.
2. ATTEST study syntaxes: syntaxes used to edit data in SPSS
3. Cost-effectiveness model: excel model simulating three strategies of TTE assessment
4. Parameter sources: excel files combining results from literature searches used as parameters in the cost-effectiveness model
Availability: The database can be made available on request; requests are to be submitted to the Data Access Committee (project leaders: HM and MM).
Several conditions apply:
- Participants provided informed consent, and agreed that their data may be shared with other researchers who are involved in the study or have an interest in viewing the data.
- The project leaders (HM and MM) will decide if the request will be granted.
- No extra costs are involved in receiving the data.
- The dataset will be available after an embargo period of 2 years starting from 1 February 2022.
- The dataset may be used for 3 months from the moment permission is granted.
- Data will be made available through a downloadable link.
- Data may only be used for verification purposes; it may not be used for creating new materials or for commercial purposes.
- The dataset may not be linked to other datasets or be distributed in any other way.
- Data must be handled in accordance with the Netherlands Code of Conduct for Research Integrity, the GDPR and other applicable laws and regulations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Excel file titled “BWM hesitant combination 2” contains data on indicator weights and combined weight calculations in 12 sheets.
The ITERATE project is an attempt to quantify data on the characteristics of transnational terrorist groups, their activities which have international impact, and the environment in which they operate. ITERATE data are available in two formats: as MS Excel tables and as text-file narratives. The Excel Tables are available for download via this Dataverse record. For access to the text-file narratives, please contact the Map & Data Library, University of Toronto. ITERATE’s numeric version, in Excel format, covers 1968 through 2020, coding the incidents discussed in the textual chronologies. It combines four distinct and interrelated files which count the most salient components of international terrorism:
- Common File 1968-2019 - includes over 13,000 cases and 42 variables
- Hostage File 1968-2007 - coded characteristics of 1,410 terrorism incidents that involved hostage-taking, with 41 variables
- Fate File 1968-1987 - includes 974 incidents and 14 variables
- Skyjack File 1968-1987 - includes over 372 skyjacking incidents and 27 variables
All variables and values for the data have remained consistent. New values are added with the emergence of new terrorist groups and types of terrorist activity. Incident codes for each event are consistent across files, enabling immediate cross reference. See the codebook for further information. The datasets are based upon, inter alia, an exhaustive search of major media, research, news and information services, including AP, UPI, Reuters, CNN, MSNBC, AFP, The Washington Post, New York Times, LA Times, Time, Newsweek, and al-Jazeera. The chronology incorporates information obtained from interviews with government officials, scholars, and former hostages/others involved in international terrorist incidents. Additional sources include relevant manuscripts and scholarly publications dealing with the subject. Finally, the chronology uses many official government chronologies, including those published by the NCTC, FBI, CIA, US Department of State, and FAA. ITERATE Text and ITERATE Numeric data files were acquired in April 2021.
At CompanyData.com (BoldData), we deliver verified, high-quality company information sourced directly from official trade registers. Our France database gives you access to over 14,450,749 registered companies, making it one of the most extensive and accurate datasets available for the French market.
Each record includes in-depth firmographic data such as company name, registration number (SIREN/SIRET), NAF codes, legal form, revenue estimates, company size and ownership structures. For more targeted outreach, we also offer contact details including executive names, email addresses, phone numbers and mobile numbers where available.
Whether you need to comply with KYC and AML regulations, enrich your CRM systems, run B2B sales or marketing campaigns, or power AI and data models, our France company data is built to serve your goals with precision and reliability.
You can access the data in a format that fits your workflow: • Custom-built lists based on your specific criteria • Full national company databases for comprehensive analysis • Real time access through our API for up-to-date information • Easy-to-use formats including Excel and CSV • Data enrichment services to improve existing records
With a global database of 14,450,749 verified companies across 200+ countries, CompanyData.com (BoldData) combines local accuracy with international reach. Whether you're expanding in France or targeting new markets abroad, our data helps you minimize risk, improve targeting and make smarter business decisions.
Let CompanyData.com be your partner for trusted business intelligence in France and beyond.
http://reference.data.gov.uk/id/open-government-licence
A dataset providing information on the vehicle types and counts in several locations in Leeds.

Purpose of the project: The aim of this work was to examine the profile of vehicle types in Leeds, in order to compare local emissions with national predictions. Traffic was monitored for a period of one week at two Inner Ring Road locations in April 2016 and at seven sites around the city in June 2016. The vehicle registration data was then sent to the Department for Transport (DfT), who combined it with their vehicle type data, replacing the registration number with an anonymised ‘Unique ID’.

The data is provided in three folders:
- Raw Data – contains the data in the format it was received, and a sample of each format.
- Processed Data – the data after processing by LCC, lookup tables, and sample data.
- Outputs – Excel spreadsheets summarising the data for each site, for various times/dates.

Initially a dataset was received for the Inner Ring Road (see file “IRR ANPR matched to DFT vehicle type list.csv”), with vehicle details, but with missing / uncertain data on the vehicles’ emissions Eurostandard class. Of the 820,809 recorded journeys, from the pseudo registration number field (UniqueID) it was determined that there were 229,891 unique vehicles, and 31,912 unique “vehicle types” based on the unique concatenated vehicle description fields. It was therefore decided to import the data into an MS Access database, create a table of vehicle types, and to add the necessary fields/data so that, combined with the year of manufacture / vehicle registration, the appropriate Eurostandard could be determined for the particular vehicle. The criteria for the Eurostandards were derived mainly from www.dieselnet.com and summarised in a spreadsheet (“EuroStandards.xlsx”). Vehicle types were assigned to a “VehicleClass” (see “Lookup Tables.xlsx”) and “EU class”, with additional fields being added for any modified data (Gross Vehicle Weight – “GVM_Mod”; Engine capacity – “EngineCC_mod”; No of passenger seats – “PassSeats”; and Kerb weight – “KerbWt”). Missing data was added from internet lookups, extrapolation from known data, and by association – e.g. 99% of cars with an engine size … Additional data was then received from the Inner Ring Road site, giving journey date/time and incorporating the Taxi data for licensed taxis in Leeds. Similar data for Sites 1-7 was also then received, and processed to determine the “VehicleClass” and “EU class”. A mixture of update queries and VBA processing was then used to provide the Level 1-6 breakdown of vehicle types (see “Lookup Tables.xlsx”). The data was then combined into one database, so that the required Excel spreadsheets could be exported for the required time/date periods (see “outputs” folder).
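As a hedged sketch of the Eurostandard assignment (the first-registration year thresholds below are approximate and for petrol cars only; the project's actual criteria came from www.dieselnet.com and also used fuel type, vehicle class and weight fields):

```r
# Approximate Euro standard by year of first registration (petrol cars);
# illustrative thresholds only, not the project's EuroStandards.xlsx criteria.
euro_standard <- function(reg_year) {
  cut(reg_year,
      breaks = c(-Inf, 1992, 1996, 2000, 2005, 2010, 2014, Inf),
      labels = c("Pre-Euro", "Euro 1", "Euro 2", "Euro 3",
                 "Euro 4", "Euro 5", "Euro 6"))
}

# Returns a factor: Euro 1, Euro 3, Euro 4, Euro 6
euro_standard(c(1995, 2003, 2009, 2017))
```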
The CCQM_Retrospectoscope system combines a nominally complete database of results from the Consultative Committee for the Amount of Substance: Metrology in Chemistry and Biology (CCQM) studies with a number of graphical tools for trying to make sense of the data. This system supports a diverse collection of often eye-opening appraisals of participation and measurement performance throughout the history of the CCQM activities. The appraisals include the bias, uncertainty, and degrees of equivalence of results submitted by individual national metrology or designated institutes (NMI|DIs); the relative performance of NMI|DIs; and the uncertainty function characteristic of entire Working Groups (WGs). The system is implemented in Excel using Microsoft's Visual Basic for Applications (VBA) programs. It runs on both Windows and Macintosh platforms.
List of the data tables as part of the Immigration system statistics Home Office release. Summary and detailed data tables covering the immigration system, including out-of-country and in-country visas, asylum, detention, and returns.
If you have any feedback, please email MigrationStatsEnquiries@homeoffice.gov.uk.
The Microsoft Excel .xlsx files may not be suitable for users of assistive technology.
If you use assistive technology (such as a screen reader) and need a version of these documents in a more accessible format, please email MigrationStatsEnquiries@homeoffice.gov.uk
Please tell us what format you need. It will help us if you say what assistive technology you use.
Immigration system statistics, year ending September 2025
Immigration system statistics quarterly release
Immigration system statistics user guide
Publishing detailed data tables in migration statistics
Policy and legislative changes affecting migration to the UK: timeline
Immigration statistics data archives
Passenger arrivals summary tables, year ending September 2025 (ODS, 31.5 KB): https://assets.publishing.service.gov.uk/media/691afc82e39a085bda43edd8/passenger-arrivals-summary-sep-2025-tables.ods
‘Passengers refused entry at the border summary tables’ and ‘Passengers refused entry at the border detailed datasets’ have been discontinued. The latest published versions of these tables are from February 2025 and are available in the ‘Passenger refusals – release discontinued’ section. A similar data series, ‘Refused entry at port and subsequently departed’, is available within the Returns detailed and summary tables.
Electronic travel authorisation detailed datasets, year ending September 2025 (MS Excel Spreadsheet, 58.6 KB): https://assets.publishing.service.gov.uk/media/691b03595a253e2c40d705b9/electronic-travel-authorisation-datasets-sep-2025.xlsx
ETA_D01: Applications for electronic travel authorisations, by nationality
ETA_D02: Outcomes of applications for electronic travel authorisations, by nationality
Entry clearance visas summary tables, year ending September 2025 (ODS, 53.3 KB): https://assets.publishing.service.gov.uk/media/6924812a367485ea116a56bd/visas-summary-sep-2025-tables.ods
Entry clearance visa applications and outcomes detailed datasets, year ending September 2025 (MS Excel Spreadsheet, 30.2 MB): https://assets.publishing.service.gov.uk/media/691aebbf5a253e2c40d70598/entry-clearance-visa-outcomes-datasets-sep-2025.xlsx
Vis_D01: Entry clearance visa applications, by nationality and visa type
Vis_D02: Outcomes of entry clearance visa applications, by nationality, visa type, and outcome
Additional data relating to in country and overse
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The tables specify connectivity of the nodes in the network as well as the numerical parameters governing each reaction in the network. (XLSX)
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The Department of the Prime Minister and Cabinet is no longer maintaining this dataset. If you would like to take ownership of this dataset for ongoing maintenance please contact us.

PLEASE READ BEFORE USING

The data format has been updated to align with a tidy data style (http://vita.had.co.nz/papers/tidy-data.html).

The data in this dataset is manually collected and combined in a csv format from the following state and territory portals:
- https://www.cmtedd.act.gov.au/communication/holidays
- https://www.nsw.gov.au/about-nsw/public-holidays
- https://nt.gov.au/nt-public-holidays
- https://www.qld.gov.au/recreation/travel/holidays/public
- https://www.safework.sa.gov.au/resources/public-holidays
- https://worksafe.tas.gov.au/topics/laws-and-compliance/public-holidays
- https://business.vic.gov.au/business-information/public-holidays
- https://www.commerce.wa.gov.au/labour-relations/public-holidays-western-australia

The data API by default returns only the first 100 records. The JSON response will contain a key that shows the link for the next page of records. Alternatively you can view all records by updating the limit on the endpoint or using a query to select all records, i.e. /api/3/action/datastore_search_sql?sql=SELECT * from "{{resource_id}}".
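A minimal sketch of calling the Datastore API described above from R (data.gov.au is assumed as the hosting catalogue, and {{resource_id}} stays a placeholder for the resource identifier shown on the dataset page):

```r
library(httr)
library(jsonlite)

base_url    <- "https://data.gov.au"   # assumed CKAN catalogue hosting this dataset
resource_id <- "{{resource_id}}"       # placeholder: replace with the actual resource id

# datastore_search returns 100 records by default; raise the limit or page with offset
resp <- GET(paste0(base_url, "/api/3/action/datastore_search"),
            query = list(resource_id = resource_id, limit = 1000))
payload <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

holidays <- payload$result$records  # data frame of public holiday records
head(holidays)
```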
https://www.data.gov.uk/dataset/531860ee-eb97-47b6-97d9-2ee59174f590/materials-facility-waste-returns-data-january-to-december-2020#licence-info
Materials Facility Waste Return Data for January to December 2020. Please note that Materials Facility Waste Returns data prior to 2020 is available on the WRAP Portal here: https://mfrp.wrap.org.uk/. An Excel data extract of wastes received at Materials Facility sites (sites covered under the Material Facility regulations: https://www.legislation.gov.uk/uksi/2016/1154/schedule/9/made), including sampling data for mixed waste received above 125 tonnes. An Excel data extract of waste removed from Materials Facility sites, including sampling of specified output material (a batch of material produced from a separating process for mixed waste material and made up of one of the following kinds of target material in largest proportion: glass, metal, paper, plastic); the sampling frequency for specified output material is dependent on the material grade in question. Attribution Statement: © Environment Agency copyright and/or database right 2021. All rights reserved.