This dataset is a cleaned and preprocessed version of the original Netflix Movies and TV Shows dataset available on Kaggle. All cleaning was done using Microsoft Excel — no programming involved.
🎯 What’s Included: - Cleaned Excel file (standardized columns, proper date format, removed duplicates/missing values) - A separate "formulas_used.txt" file listing all Excel formulas used during cleaning (e.g., TRIM, CLEAN, DATE, SUBSTITUTE, TEXTJOIN, etc.) - Columns like 'date_added' have been properly formatted into DMY structure - Multi-valued columns like 'listed_in' are split for better analysis - Null values replaced with “Unknown” for clarity - Duration field broken into numeric + unit components
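For illustration, these are the kinds of formulas such a cleaning pass typically relies on (a hedged sketch; the cell references are assumptions, not taken from the actual workbook): - Numeric part of a duration like "90 min" in D2: =VALUE(LEFT(D2, FIND(" ", D2)-1)) - Unit part of the same duration: =MID(D2, FIND(" ", D2)+1, LEN(D2)) - Whitespace cleanup on a text cell B2, with blanks replaced by "Unknown": =IF(TRIM(CLEAN(B2))="", "Unknown", TRIM(CLEAN(B2)))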
🔍 Dataset Purpose: Ideal for beginners and analysts who want to: - Practice data cleaning in Excel - Explore Netflix content trends - Analyze content by type, country, genre, or date added
📁 Original Dataset Credit: The base version was originally published by Shivam Bansal on Kaggle: https://www.kaggle.com/shivamb/netflix-shows
📌 Bonus: You can find a step-by-step cleaning guide and the same dataset on GitHub as well — along with screenshots and formulas documentation.
The Florida Flood Hub for Applied Research and Innovation and the U.S. Geological Survey have developed projected future change factors for precipitation depth-duration-frequency (DDF) curves at 242 National Oceanic and Atmospheric Administration (NOAA) Atlas 14 stations in Florida. The change factors were computed as the ratio of projected future to historical extreme-precipitation depths fitted to extreme-precipitation data from downscaled climate datasets using a constrained maximum likelihood (CML) approach as described in https://doi.org/10.3133/sir20225093. The change factors correspond to the period 2020-59 (centered on the year 2040) or to the period 2050-89 (centered on the year 2070) as compared to the 1966-2005 historical period.

A Microsoft Excel workbook is provided that tabulates best models for each downscaled climate dataset and for all downscaled climate datasets considered together. Best models were identified based on how well the models capture the climatology and interannual variability of four climate extreme indices, using the Model Climatology Index (MCI) and the Model Variability Index (MVI) of Srivastava and others (2020). The four indices consist of annual maxima of consecutive precipitation for durations of 1, 3, 5, and 7 days, compared against the same indices computed from the PRISM and SFWMD gridded precipitation datasets for five climate regions: climate region 1 in Northwest Florida, 2 in North Florida, 3 in North Central Florida, 4 in South Central Florida, and climate region 5 in South Florida.

The PRISM dataset is based on the Parameter-elevation Relationships on Independent Slopes Model interpolation method of Daly and others (2008). The South Florida Water Management District’s (SFWMD) precipitation super-grid is a gridded precipitation dataset developed by modelers at the agency for use in hydrologic modeling (SFWMD, 2005). This dataset is considered by the SFWMD to be the best available gridded rainfall dataset for south Florida and was used in addition to PRISM to identify best models in the South Central and South Florida climate regions. Best models were selected based on MCI and MVI evaluated within each individual downscaled dataset. In addition, best models were selected by comparison across datasets, referred to as "ALL DATASETS" hereafter. Due to the small sample size, all models in the Weather Research and Forecasting Model (JupiterWRF) dataset were considered best models.
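For context, a change factor is applied multiplicatively to the corresponding NOAA Atlas 14 depth. A purely hypothetical spreadsheet illustration (the numbers and cell layout are illustrative, not from the workbook): with a historical 100-year, 24-hour depth of 10.0 inches in B2 and a change factor of 1.20 in C2, =B2*C2 returns the projected future depth of 12.0 inches.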
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains a valuation template that researchers can use to retrieve real-time stock prices in Excel and Google Sheets. The dataset is provided by Finsheet, the leading financial data provider for spreadsheet users. To get more financial data, visit the website and explore their functions. For instance, if a researcher would like to get the last 30 years of income statements for Meta Platforms Inc, the syntax would be =FS_EquityFullFinancials("FB", "ic", "FY", 30). Similarly, this syntax will return the latest stock price for Caterpillar Inc right in your spreadsheet: =FS_Latest("CAT"). If you need assistance with any of the functions, feel free to reach out to their customer support team. To get started, install their Excel add-in or Google Sheets add-on.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions of the interval mean values on time. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may make the test easier to apply. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
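As a rough sketch of the test in plain Excel (without the add-in), assume five interval means in B2:B6 and the corresponding interval mid-times in A2:A6; the cell layout is illustrative only: - r²: =RSQ(B2:B6, A2:A6) - p value of the regression: =F.DIST.RT(INDEX(LINEST(B2:B6, A2:A6, TRUE, TRUE), 4, 1), 1, INDEX(LINEST(B2:B6, A2:A6, TRUE, TRUE), 4, 2)) The trend would count as statistically meaningful if the first formula returns at least 0.65 and the second at most 0.05.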
Click here to open the ArcGIS Online Map Viewer and work through the examples shown below. To add spreadsheet data to ArcGIS Online you need to log in.
The series consists of a list of files registered on the computer-based Records and Correspondence Management System (RCMS), under Registry 01 Corporate Management Division. It was created by exporting file data from the RCMS system into a Microsoft Excel spreadsheet. It is an artificial series, created by the Department of Justice at the request of PROV, to provide access to VPRS 12607 General Correspondence Files, Registry 01 Corporate Management Division.
The list captured the file number, key-term classification, file title, and certain additional information for each file.
Organisation of the Data:
The data is organised into 13 columns, or fields, presumably corresponding to discrete fields within the RCMS database.
The columns, from left to right, are as follows:
1. FILE.YEAR - The year the file was raised.
2. REGISTRY - The number of the registry in which the file has been registered on the RCMS system. The files referred to by this series were registered under Registry 01 Corporate Management Division.
3. FILE SEQUENCE - The sequential number allocated to each file as it is raised. Numbers start again from one each year.
4. FILE PART - The part number of the file.
The FILE.YEAR, REGISTRY, FILE SEQUENCE, and FILE PART fields, taken together, provide the file number.
5. KEY TERM - In theory, this is the term used to describe the principal subject area of the file.
6. DESCRIPTOR.1, DESCRIPTOR.2 and DESCRIPTOR.3 (Columns 6 to 8) - In theory, these are narrower terms used to break the general subject area into smaller, more specific areas.
7. KWOC.1, KWOC.2, KWOC.3, and KWOC.4 (Key Word Out of Context) (Columns 9 to 12) - Provide for free text description of the file.
The KEY-TERM, DESCRIPTOR, and KWOC fields, taken together, provide the file title.
In practice, many different terms have been used in the key-term and descriptor fields. There appears to have been little control over the creation of new terms and the way in which the terms are used.
8. ADD.FILE.INFO (Additional File Information) - This field contains useful information about previous and subsequent files, related files, file closure, and so forth.
Identifying Top-numbered Files:
This series also records the original file numbers for files that have been top-numbered into VPRS 12607 from other correspondence registries that operated in the Law Department in the 1980s. The details are as follows:
Files top-numbered from the Central Correspondence Registry (VPRS 266 Inward Registered Correspondence 1857-1986) - the original file number is recorded in the field "ADD.FILE.INFO".
Files top-numbered from the Courts Management Division Registry (VPRS 12705 General Correspondence Files, Courts Management Division) - the original file number is recorded in the fields "KWOC 3" and "KWOC 4".
Files top-numbered from the Buildings and Property Registry - the original file number is recorded in the field "KWOC 4".
Files top-numbered from the Human Resource Management Registry - the original file number is recorded in the field "KWOC 4".
Files top-numbered from RCMS Registry 02 Courts and Tribunals Division - the original file number is recorded in the fields "KWOC 3" and "KWOC 4".
Researchers should not discount the possibility that file numbers may be recorded in fields other than those specified above.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Microsoft Excel automatically converts certain gene symbols, database accessions, and other alphanumeric text into dates, scientific notation, and other numerical representations. These conversions lead to subsequent, irreversible corruption of the imported text. A recent survey of popular genomic literature estimates that one-fifth of all papers with supplementary gene lists suffer from this issue.

Results: Here, we present an open-source tool, Escape Excel, which prevents these erroneous conversions by generating an escaped text file that can be safely imported into Excel. Escape Excel is implemented in a variety of formats (http://www.github.com/pstew/escape_excel), including a command line based Perl script, a Windows-only Excel Add-In, an OS X drag-and-drop application, a simple web server, and a Galaxy web environment interface. Test server implementations are accessible as a Galaxy interface (http://apostl.moffitt.org) and a simple non-Galaxy web server (http://apostl.moffitt.org:8000/).

Conclusions: Escape Excel detects and escapes a wide variety of problematic text strings so that they are not erroneously converted into other representations upon importation into Excel. Examples of problematic strings include date-like strings, time-like strings, leading zeroes in front of numbers, and long numeric and alphanumeric identifiers that should not be automatically converted into scientific notation. It is hoped that greater awareness of these potential data corruption issues, together with diligent escaping of text files prior to importation into Excel, will help to reduce the amount of Excel-corrupted data in scientific analyses and publications.
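To make the failure mode concrete, a few well-known examples; the ="..." escaping shown is a common convention for forcing Excel to keep text as text, given here as an assumption rather than as Escape Excel's exact output format: - The gene symbol SEPT2 is silently converted to the date 2-Sep; escaped: ="SEPT2" - An alphanumeric identifier such as 2310009E13 becomes the scientific-notation number 2.31E+13; escaped: ="2310009E13" - An identifier with leading zeroes such as 0012345 is stored as the number 12345; escaped: ="0012345"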
This spreadsheet dataset (.csv file) contains annual land-use and land-cover area in square kilometers (km2) by scenario, timestep, WEAP hydrologic zone, and 4 sub-regions within the broader California Central Valley, modeled using the LUCAS ST-Sim model for the period 2011-2101 across 5 future scenarios. Four of the scenarios were developed as part of the Central Valley Landscape Conservation Project. The 4 original scenarios include a Bad-Business-As-Usual (BBAU; high water availability, poor management), California Dreamin’ (DREAM; high water availability, good management), Central Valley Dustbowl (DUST; low water availability, poor management), and Everyone Equally Miserable (EEM; low water availability, good management). These scenarios represent alternative plausible futures, capturing a range of climate variability, land management activities, and habitat restoration goals. We parameterized our models based on close interpretation of these four scenario narratives to best reflect stakeholder interests, adding a baseline Historical Business-As-Usual scenario (HBAU) for comparison. For these future map projections, the model was initialized in 2011 and run forward on an annual time step to 2101. The full methods and results of this research are described in detail in the parent manuscript “Integrated modeling of climate, land use, and water availability scenarios and their impacts on managed wetland habitat: A case study from California’s Central Valley” (2021).
HOW TO: - Create a hierarchy using the category, subcategory, and product fields (columns “Product Category”, “Product SubCategory”, and “Product Name”). - Group the values of the “Region” column into 2 groups, alphabetically, based on the name of each region.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT The soil compression curve (CC) provides parameters to identify soil load-bearing capacity and susceptibility to compaction. An Excel add-in (ACC) was developed that incorporates graphical procedures and mathematical models for describing the soil CC and calculating its parameters. Using the ACC, the soil CC can be described by means of the Casagrande method, mathematically operationalized with the van Genuchten equation, with or without restrictions on its parameters, and by the Dias Junior and Pierce method in its original form and also modified to use the void ratio rather than soil bulk density. The ACC uses a single Excel spreadsheet for input and output data, in addition to a graphical interface and a tool for exporting editable charts. Compared to SAS statistical software, the ACC minimized the sum of squared residuals and estimated parameters of mathematical models with the same efficiency for 347 compression curves. The ACC programming script is available and can be modified or used as a framework for other programming projects.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SCoA Excel spreadsheet contains all items from all six versions and shows the history of items as they were added, deleted, or carried through. It also shows the factor to which each item was aligned.
With this add-in it is possible to create map templates from GIS files in KML format, and create choropleths with them. Providing you have access to KML format map boundary files, it is possible to create your own quick and easy choropleth maps in Excel. The KML format files can be converted from 'shape' files. Many shape files are available to download for free from the web, including from Ordnance Survey and the London Datastore. Standard mapping packages such as QGIS (free to download) and ArcGIS can convert the files to KML format. A sample of a KML file (London wards) can be downloaded from this page, so that users can easily test the tool out. Macros must be enabled for the tool to function.

When creating the map using the Excel tool, the 'unique ID' should normally be the area code, the 'Name' should be the area name, and then, if required and there is additional data in the KML file, further 'data' fields can be added. These columns will appear below and to the right of the map. If not, data can be added later on next to the codes and names. In the add-in version of the tool the final control, 'Scale (% window)', should not normally be changed. With the default value 0.5, the height of the map is set to be half the total size of the user's Excel window.

To run a choropleth, select the menu option 'Run Choropleth' to open the choropleth form. To specify the colour ramp for the choropleth, the user needs to enter the number of boxes into which the range is to be divided, and the colours for the high and low ends of the range, which is done by selecting coloured option boxes as appropriate. If wished, hit the 'Swap' button to change which colours are for the different ends of the range. Then hit the 'Choropleth' button.

The default options for the colours of the ends of the choropleth colour range are saved in the add-in, but different values can be selected by setting up a column range of up to twelve cells, anywhere in Excel, filled with the option colours wanted. Then use the 'Colour range' control to select this range, and hit apply, having selected high or low values as wished. The button 'Copy' sets up a sheet 'ColourRamp' in the active workbook with the default colours, which can be extended or deleted with just a few cells, saving the user time.

The add-in was developed entirely within the Excel VBA IDE by Tim Lund. He is kindly distributing the tool for free on the Datastore but suggests that users who find the tool useful make a donation to the Shelter charity. It is not intended to keep the tool actively maintained, but if any users or developers would like to add more features, email the author.

Acknowledgments: Calculation of Excel freeform shapes from latitudes and longitudes is done using calculations from the Ordnance Survey.
This interactive sales dashboard is designed in Excel for B2C businesses such as DMart, Walmart, Amazon, shops, and supermarkets, using slicers, pivot tables, and pivot charts.
The first column is the date of sale. The second column is the product ID. The third column is the quantity. The fourth column is the sales type: direct selling, purchase by a wholesaler, or online order. The fifth column is the mode of payment, which is online or in cash. You can update these two columns as per your requirements. The last one is the discount percentage; if you want to offer any discount, you can add it here.
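As an illustration of how such a raw-data sheet can feed the pivot tables, a hedged example (the unit-price lookup sheet named Products and the exact column layout are assumptions, not part of the described workbook): with the product ID in B2, quantity in C2, and discount percentage in F2, line revenue could be computed as =C2*VLOOKUP(B2, Products!A:B, 2, FALSE)*(1-F2).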
The workbook is organised into four sheets, each with a different task.
A sales dashboard enables organizations to visualize their real-time sales data and boost productivity.
A dashboard is a very useful tool that brings together all the data in the form of charts, graphs, statistics, and other visualizations, supporting data-driven decision-making.
https://dataverse-staging.rdmc.unc.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=hdl:1902.29/CD-10849
"The Statistical Abstract of the United States, published since 1878, is the standard summary of statistics on the social, political, and economic organization of the United States. It is designed to serve as a convenient volume for statistical reference and as a guide to other statistical publications and sources. The latter function is served by the introductory text to each section, the source note appearing below each table, and Appendix I, which comprises the Guide to Sources of Statisti cs, the Guide to State Statistical Abstracts, and the Guide to Foreign Statistical Abstracts. The Statistical Abstract sections and tables are compiled into one Adobe PDF named StatAbstract2009.pdf. This PDF is bookmarked by section and by table and can be searched using the Acrobat Search feature. The Statistical Abstract on CD-ROM is best viewed using Adobe Acrobat 5, or any subsequent version of Acrobat or Acrobat Reader. The Statistical Abstract tables and the metropolitan areas tables from Appendix II are available as Excel(.xls or .xlw) spreadsheets. In most cases, these spreadsheet files offer the user direct access to more data than are shown either in the publication or Adobe Acrobat. These files usually contain more years of data, more geographic areas, and/or more categories of subjects than those shown in the Acrobat version. The extensive selection of statistics is provided for the United States, with selected data for regions, divisions, states, metropolitan areas, cities, and foreign countries from reports and records of government and private agencies. Software on the disc can be used to perform full-text searches, view official statistics, open tables as Lotus worksheets or Excel workbooks, and link directly to source agencies and organizations for supporting information. Except as indicated, figures are for the United States as presently constituted. Although emphasis in the Statistical Abstract is primarily given to national data, many tables present data for regions and individual states and a smaller number for metropolitan areas and cities.Statistics for the Commonwealth of Puerto Rico and for island areas of the United States are included in many state tables and are supplemented by information in Section 29. Additional information for states, cities, counties, metropolitan areas, and other small units, as well as more historical data are available in various supplements to the Abstract. Statistics in this edition are generally for the most recent year or period available by summer 2006. Each year over 1,400 tables and charts are reviewed and evaluated; new tables and charts of current interest are added, continuing series are updated, and less timely data are condensed or eliminated. Text notes and appendices are revised as appropriate. This year we have introduced 72 new tables covering a wide range of subject areas. These cover a variety of topics including: learning disability for children, people impacted by the hurricanes in the Gulf Coast area, employees with alternative work arrangements, adult computer and Internet users by selected characteristics, North America cruise industry, women- and minority-owned businesses, and the percentage of the adult population considered to be obese. 
Some of the annually surveyed topics are population; vital statistics; health and nutrition; education; law enforcement, courts and prison; geography and environment; elections; state and local government; federal government finances and employment; national defense and veterans affairs; social insurance and human services; labor force, employment, and earnings; income, expenditures, and wealth; prices; business enterprise; science and technology; agriculture; natural resources; energy; construction and housing; manufactures; domestic trade and services; transportation; information and communication; banking, finance, and insurance; arts, entertainment, and recreation; accommodation, food services, and other services; foreign commerce and aid; outlying areas; and comparative international statistics." Note to Users: This CD is part of a collection located in the Data Archive of the Odum Institute for Research in Social Science, at the University of North Carolina at Chapel Hill. The collection is located in Room 10, Manning Hall. Users may check out the CDs on the honor system. Items can be checked out for a period of two weeks. Loan forms are located adjacent to the collection.
The series consists of a list of files registered on the computer-based Records and Correspondence Management System (RCMS), under Registry 11 Correctional Services. It was created by exporting file data from the RCMS system into a Microsoft Excel spreadsheet. It is an artificial series, created by the Department of Justice at the request of Public Record Office Victoria, to provide access to VPRS 12700 General Correspondence Files, Registry 11 Correctional Services.
The list captured the file number, key-term classification, file title, and certain additional information for each file.
Organisation of the Data
The data is organised into 13 columns, or fields, presumably corresponding to discrete fields within the RCMS database.
The columns, from left to right, are as follows:
1. FILE.YEAR - The year the file was raised.
2. REGISTRY - The number of the registry in which the file has been registered on the RCMS system. The files referred to by this series were registered under Registry 11 Correctional Services.
3. FILE SEQUENCE - The sequential number allocated to each file as it is raised. Numbers start again from one each year.
4. FILE PART - The part number of the file.
The FILE.YEAR, REGISTRY, FILE SEQUENCE, and FILE PART fields, taken together, provide the file number.
5. KEY TERM - In theory, this is the term used to describe the principal subject area of the file.
6. DESCRIPTOR.1, DESCRIPTOR.2 and DESCRIPTOR.3 (Columns 6 to 8) - In theory, these are narrower terms used to break the general subject area into smaller, more specific areas.
7. KWOC.1, KWOC.2, KWOC.3, and KWOC.4 (Key Word Out of Context) (Columns 9 to 12) - Provide for free text description of the file.
The KEY-TERM, DESCRIPTOR, and KWOC fields, taken together, provide the file title.
In practice, many different terms have been used in the key-term and descriptor fields. There appears to have been little control over the creation of new terms and the way in which the terms are used.
8. ADD.FILE.INFO (Additional File Information) - This field contains useful information about previous and subsequent files, related files, file closure, and so forth.
This spreadsheet dataset (.csv file) contains annual modeled output of land-use and land-cover change transitions in square kilometers (km2) by specified transition group, scenario, timestep, WEAP hydrologic zone, and 4 sub-regions within the broader California Central Valley, modeled using the LUCAS ST-Sim model for the period 2011-2101 across 5 future scenarios. Four of the scenarios were developed as part of the Central Valley Landscape Conservation Project. The 4 original scenarios include a Bad-Business-As-Usual (BBAU; high water availability, poor management), California Dreamin’ (DREAM; high water availability, good management), Central Valley Dustbowl (DUST; low water availability, poor management), and Everyone Equally Miserable (EEM; low water availability, good management). These scenarios represent alternative plausible futures, capturing a range of climate variability, land management activities, and habitat restoration goals. We parameterized our models based on close interpretation of these four scenario narratives to best reflect stakeholder interests, adding a baseline Historical Business-As-Usual scenario (HBAU) for comparison. For these future map projections, the model was initialized in 2011 and run forward on an annual time step to 2101. The full methods and results of this research are described in detail in the parent manuscript "Integrated modeling of climate and land change impacts on future dynamic waterbird habitat – a case study from California’s Central Valley" (2021).
Excel spreadsheets by species (the 4-letter code is an abbreviation for the genus and species used in the study, the year 2010 or 2011 is the year the data were collected, SH indicates data for Science Hub, and the date is the date of file preparation). The data in a file are described in a read-me file, which is the first worksheet in each file. Each row in a species spreadsheet is for one plot (plant). The data themselves are in the data worksheet. One file includes a read-me description of the columns in the data set for chemical analysis; in this file, one row is an herbicide treatment and sample for chemical analysis (if taken). This dataset is associated with the following publication: Olszyk, D., T. Pfleeger, T. Shiroyama, M. Blakely-Smith, E. Lee, and M. Plocher. Plant reproduction is altered by simulated herbicide drift to constructed plant communities. ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY. Society of Environmental Toxicology and Chemistry, Pensacola, FL, USA, 36(10): 2799-2813, (2017).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Enterprise-Value-to-EBITDA-Ratio Time Series for Excel Force MSC Bhd. Excel Force MSC Berhad, together with its subsidiaries, develops, provides, and maintains software application solutions for the financial services industry in Malaysia. The company operates through Application Solutions, Maintenance Services, Application Services Provider, and Other segments. Its product portfolio includes CyberStock BTX, a bridging trader and exchange system platform that provides classes of trading tools; and CyberStock ECOS, a stock broking solution which offers real-time market information, trade placement, and order management. In addition, the company provides CyberStock Mobile Trader, a mobile trading system that connects users' smartphones to exchanges to manage trading activities; and CyberStock EDS, an exempt dealer system that provides advanced trading infrastructure and facilities for commercial banks. Further, it offers CyberStock SMF, a share margin financing system that enables financial institutions, brokerage firms, and banks to operate and manage margin financing services; and CyberStock CNS, a custodian and nominee system, which provides value-added services, such as trade settlement, cash balance investment, income collection, corporate actions processing, and recordkeeping and reporting, to custodian banks for domestic services. Additionally, the company provides CyberStock BOS, a back office system to manage large volumes of files and data; and offers network and security services. Excel Force MSC Berhad was founded in 1994 and is based in Petaling Jaya, Malaysia.
Excel module for downloading cadastral sheets by municipality.
Format: DXF, EDIGEO
Georeferencing: L93, CC
Vintage: latest in force
For any bugs or requests, do not hesitate to contact me.
Available in X32 and X64 versions.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Civil and geological engineers have used field variable-head permeability tests (VH tests or slug tests) for over a century to assess the local hydraulic conductivity of tested soils and rocks. The water level in the pipe or riser casing reaches, after some rest time, a static position or elevation, z2. Then, the water level position is changed rapidly, by adding or removing some water volume, or by inserting or removing a solid slug. Afterward, the water level position or elevation z1(t) is recorded vs. time t, yielding a difference in hydraulic head or water column defined as Z(t) = z1(t) - z2. The water level at rest is assumed to be the piezometric level or PL for the tested zone, before drilling a hole and installing test equipment. All equations use Z(t) or Z*(t) = Z(t) / Z(t=0). The water-level response vs. time may be a slow return to equilibrium (overdamped test), or an oscillation back to equilibrium (underdamped test). This document deals exclusively with overdamped tests.

Their data may be analyzed using several methods, known to yield different results for the hydraulic conductivity. The methods fit in three groups: group 1 neglects the influence of the solid matrix strain, group 2 is for tests in aquitards with delayed strain caused by consolidation, and group 3 takes into account some elastic and instant solid matrix strain. This document briefly explains what is wrong with certain theories and why. It shows three ways to plot the data, which are the three diagnostic graphs. According to experience with thousands of tests, most test data are biased by an incorrect estimate z2 of the piezometric level at rest. The derivative or velocity plot does not depend upon this assumed piezometric level, but can verify its correctness. The document presents experimental results and explains the three-diagnostic-graphs approach, which unifies the theories and, most importantly, yields a user-independent result.

Two free spreadsheet files are provided. The spreadsheet "Lefranc-Test-English-Model" follows the Canadian standards and is used to explain how to treat the test data correctly to reach a user-independent result. The user does not modify this model spreadsheet but can make as many copies as needed, with different names. The user can treat any other data set in a copy, and can also modify any copy if needed. The second Excel spreadsheet contains several sets of data that can be used to practice with copies of the model spreadsheet.
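As a minimal illustration of the quantities involved (a generic sketch with an assumed column layout, not the layout of the provided "Lefranc-Test-English-Model" file): with time t in A2:A100, the recorded water level z1(t) in B2:B100, and the assumed static level z2 in $E$1, the head difference Z(t) in C2 is =B2-$E$1, the normalized head Z*(t) in D2 is =C2/$C$2, and the velocity for the derivative plot is =(C3-C2)/(A3-A2). Note that $E$1 cancels in the velocity difference, which is exactly why the velocity plot is insensitive to the assumed piezometric level.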