Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Figures in scientific publications are critically important because they often show the data supporting key findings. Our systematic review of research articles published in top physiology journals (n = 703) suggests that, as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies. Papers rarely included scatterplots, box plots, and histograms that allow readers to critically evaluate continuous data. Most papers presented continuous data in bar and line graphs. This is problematic, as many different data distributions can lead to the same bar or line graph. The full data may suggest different conclusions from the summary statistics. We recommend training investigators in data presentation, encouraging a more complete presentation of data, and changing journal editorial policies. Investigators can quickly make univariate scatterplots for small sample size studies using our Excel templates.
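As a purely illustrative sketch of the recommendation (not the authors' Excel templates), a univariate scatterplot for two small groups can be drawn in a few lines; the group names and values below are invented.

```python
# Minimal sketch: a univariate scatterplot for two small groups, shown as an
# alternative to a bar graph. Data values are invented purely for illustration.
import matplotlib.pyplot as plt

groups = {"Control": [2.1, 2.4, 1.9, 2.8, 2.2], "Treated": [3.0, 2.2, 3.6, 2.9, 3.3]}

fig, ax = plt.subplots()
for i, (name, values) in enumerate(groups.items()):
    ax.scatter([i] * len(values), values, alpha=0.7)          # every observation visible
    ax.hlines(sum(values) / len(values), i - 0.15, i + 0.15)  # group mean as a short line

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups.keys())
ax.set_ylabel("Measured value")
plt.show()
```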
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article describes a free, open-source collection of templates for the popular Excel spreadsheet program (2013 and later versions). The templates are spreadsheet files that support easy, intuitive learning through practical examples covering descriptive statistics, random variables, confidence intervals, and hypothesis testing. Although designed for Excel, they can also be used with other free spreadsheet programs (after changing a few formulas). We also exploit the ActiveX controls in the Excel Developer menu to build interactive Gaussian density charts. The templates can often be embedded in a web page, so Excel itself is not required to use them. They are designed as a tool for teaching basic statistics and carrying out data analysis even when students are not familiar with Excel, and they can complement other analytical software packages. Their aim is to help students learn statistics within an intuitive working environment. Supplementary materials with the Excel templates are available online.
The purpose of this project is to become comfortable with obtaining citizen science datasets and using spreadsheet software (e.g., Excel), and to gain experience working with, analyzing, and visualizing scientific data. Students will work independently (pairs or a small group optional) to create five different charts and graphs visualizing the data collected in the LOYNO Biodiversity Project.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To create the dataset, the top 10 countries leading in the incidence of COVID-19 in the world were selected as of October 22, 2020 (on the eve of the second wave of the pandemic), as represented in the Global 500 ranking for 2020: USA, India, Brazil, Russia, Spain, France and Mexico. For each of these countries, up to 10 of the largest transnational corporations included in the Global 500 rating for 2020 and 2019 were selected separately. Arithmetic averages were calculated, along with the change (increase) in indicators such as enterprise profitability, ranking position (competitiveness), asset value, and number of employees. The arithmetic mean values of these indicators across all countries in the sample were then found, characterizing the situation in international entrepreneurship as a whole in the context of the COVID-19 crisis in 2020 on the eve of the second wave of the pandemic. The data are collected in a single Microsoft Excel table. The dataset is a unique database that combines COVID-19 statistics and entrepreneurship statistics. It is flexible and can be supplemented with data from other countries and newer statistics on the COVID-19 pandemic. Because the dataset contains formulas rather than ready-made numbers, adding or changing values in the original table at the beginning of the dataset automatically recalculates most of the subsequent tables and updates the graphs. This allows the dataset to be used not just as an array of data but as an analytical tool for automating research on the impact of the COVID-19 pandemic and crisis on international entrepreneurship. The dataset includes not only tabular data but also charts that provide data visualization. It contains both actual and forecast data on morbidity and mortality from COVID-19 for the period of the second wave of the pandemic in 2020. The forecasts are presented as a normal distribution of predicted values and the probability of their occurrence in practice. This allows broad scenario analysis of the impact of the COVID-19 pandemic and crisis on international entrepreneurship: various predicted morbidity and mortality rates can be substituted into the risk assessment tables to obtain automatically calculated consequences (changes) for the characteristics of international entrepreneurship. Actual values identified during and after the second wave of the pandemic can also be substituted to check the reliability of the earlier forecasts and conduct a plan-versus-actual analysis. The dataset contains not only the numerical initial and predicted values of the studied indicators but also their qualitative interpretation, reflecting the presence and level of risks of the pandemic and COVID-19 crisis for international entrepreneurship.
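As a hedged sketch of the kind of scenario calculation the workbook automates, a predicted incidence value can be compared against a normal distribution of forecasts; the mean, standard deviation, and scenario value below are placeholders, not numbers from the dataset.

```python
# Sketch of a scenario forecast treated as a normal distribution of predicted values.
# The mean, standard deviation, and scenario value are hypothetical placeholders.
from scipy.stats import norm

mean_daily_cases = 60_000      # assumed forecast mean
sd_daily_cases = 8_000         # assumed forecast standard deviation
scenario = 75_000              # a scenario value to test

dist = norm(loc=mean_daily_cases, scale=sd_daily_cases)
prob_exceeding = 1 - dist.cdf(scenario)   # probability the scenario is exceeded
print(f"P(daily cases > {scenario:,}) = {prob_exceeding:.3f}")
```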
https://dataintelo.com/privacy-and-policy
The global graph database market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 8.5 billion by 2032, growing at a CAGR of 21.2% from 2024 to 2032. The substantial growth of this market is driven primarily by increasing data complexity, advancements in data analytics technologies, and the rising need for more efficient database management systems.
One of the primary growth factors for the graph database market is the exponential increase in data generation. As organizations generate vast amounts of data from various sources such as social media, e-commerce platforms, and IoT devices, the need for sophisticated data management and analysis tools becomes paramount. Traditional relational databases struggle to handle the complexity and interconnectivity of this data, leading to a shift towards graph databases which excel in managing such intricate relationships.
Another significant driver is the growing adoption of artificial intelligence (AI) and machine learning (ML) technologies. These technologies rely heavily on connected data for predictive analytics and decision-making processes. Graph databases, with their inherent ability to model relationships between data points effectively, provide a robust foundation for AI and ML applications. This synergy between AI/ML and graph databases further accelerates market growth.
Additionally, the increasing prevalence of personalized customer experiences across industries like retail, finance, and healthcare is fueling demand for graph databases. Businesses are leveraging graph databases to analyze customer behaviors, preferences, and interactions in real-time, enabling them to offer tailored recommendations and services. This enhanced customer experience translates to higher customer satisfaction and retention, driving further adoption of graph databases.
From a regional perspective, North America currently holds the largest market share due to early adoption of advanced technologies and the presence of key market players. However, significant growth is also anticipated in the Asia-Pacific region, driven by rapid digital transformation, increasing investments in IT infrastructure, and growing awareness of the benefits of graph databases. Europe is also expected to witness steady growth, supported by stringent data management regulations and a strong focus on data privacy and security.
The graph database market can be segmented into two primary components: software and services. The software segment holds the largest market share, driven by extensive adoption across various industries. Graph database software is designed to create, manage, and query graph databases, offering features such as scalability, high performance, and efficient handling of complex data relationships. The growth in this segment is propelled by continuous advancements and innovations in graph database technologies. Companies are increasingly investing in research and development to enhance the capabilities of their graph database software products, catering to the evolving needs of their customers.
On the other hand, the services segment is also witnessing substantial growth. This segment includes consulting, implementation, and support services provided by vendors to help organizations effectively deploy and manage graph databases. As businesses recognize the benefits of graph databases, the demand for expert services to ensure successful implementation and integration into existing systems is rising. Additionally, ongoing support and maintenance services are crucial for the smooth operation of graph databases, driving further growth in this segment.
The increasing complexity of data and the need for specialized expertise to manage and analyze it effectively are key factors contributing to the growth of the services segment. Organizations often lack the in-house skills required to harness the full potential of graph databases, prompting them to seek external assistance. This trend is particularly evident in large enterprises, where the scale and complexity of data necessitate robust support services.
Moreover, the services segment is benefiting from the growing trend of outsourcing IT functions. Many organizations are opting to outsource their database management needs to specialized service providers, allowing them to focus on their core business activities. This shift towards outsourcing is further bolstering the demand for graph database services, driving market growth.
https://www.verifiedmarketresearch.com/privacy-policy/
Knowledge Graph Market size was valued at USD 7.19 Billion in 2024 and is expected to reach USD 4.1 Billion by 2032, growing at a CAGR of 18.1% from 2025 to 2032.
Knowledge Graph Market Drivers
Enhanced Data Integration and Analysis: Knowledge graphs excel at integrating and analyzing data from diverse sources, including structured, semi-structured, and unstructured data. This enables organizations to gain a holistic view of information and make more informed decisions.
Improved Search and Information Retrieval: Knowledge graphs provide a more semantic understanding of information, enabling more accurate and relevant search results. Instead of just keyword matching, knowledge graphs understand the relationships between entities and provide more contextually relevant information.
Personalized Experiences: Knowledge graphs can be used to personalize user experiences by understanding individual preferences, interests, and behaviors. This is crucial for applications like personalized recommendations, targeted advertising, and customer service.
AI and Machine Learning: Knowledge graphs are essential for powering AI and machine learning applications, such as chatbots, recommendation systems, and fraud detection. They provide a structured representation of knowledge that AI/ML models can easily understand and utilize.
Business Intelligence and Decision Making: Knowledge graphs can help businesses gain deeper insights into their customers, markets, and operations. They can be used to identify trends, predict future outcomes, and make more informed business decisions.
https://www.verifiedmarketresearch.com/privacy-policy/
Spreadsheet Software Market Size And Forecast
Spreadsheet Software Market size was valued at USD 10.05 Billion in 2023 and is expected to reach USD 14.55 Billion by 2031, with a CAGR of 7.8% from 2024-2031.
Global Spreadsheet Software Market Drivers
The Spreadsheet Software Market can be influenced by a variety of drivers. These may include:
Increasing Data Volume: As organizations generate and collect more data, the need for efficient data analysis and management tools, such as spreadsheet software, grows.
Rising Demand for Data Visualization: Users increasingly seek sophisticated tools to visualize data for better insights. Spreadsheet software can provide charts and graphs, making data interpretation easier.
Global Spreadsheet Software Market Restraints
Several factors can act as restraints or challenges for the Spreadsheet Software Market. These may include:
Market Saturation: Many organizations already use established spreadsheet software such as Microsoft Excel or Google Sheets. The reliance on these platforms can make it difficult for new entrants or alternative solutions to capture market share.
High Competition: The market is highly competitive, with numerous players offering similar features and functionalities. This can lead to price wars and reduced profit margins for software providers.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Source data used to make the graphs in Figure 1 of Kusick et al., 2020
Welcome, and thanks for choosing this dataset. It contains plenty of useful material for sharpening your Excel and Google Sheets skills: 2,300 rows covering the top 100 singers and their songs, with their ranking for every year.
Task 1: Identify the top 5% of singers and make a bar graph of their scores. To compute each singer's score, use the formula score = 101 - ranking, and draw the bar graph from these scores (a pandas sketch of this step follows below).
Task 2: Draw a graph based on the frequency of singers who appeared 15 or more times from 1992 to 2014, then draw another graph from this frequency and compare the changes. Feel free to post your queries in the discussion space, and try to post your answers too.
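A minimal sketch of Task 1 in pandas rather than Excel; the file name and the column names "singer" and "ranking" are assumptions about the layout, not taken from the dataset.

```python
# Sketch of Task 1: score = 101 - ranking, then a bar graph of the top 5% of singers.
# File and column names ("singer", "ranking") are assumed.
import pandas as pd

df = pd.read_csv("top100_singers.csv")          # hypothetical file name
df["score"] = 101 - df["ranking"]               # formula given in the task

totals = df.groupby("singer")["score"].sum().sort_values(ascending=False)
top_5_percent = totals.head(max(1, int(len(totals) * 0.05)))
top_5_percent.plot(kind="bar")                  # bar graph of the top 5% of singers
```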
bye bye...
https://creativecommons.org/publicdomain/zero/1.0/
IBM HR Analytics Dashboard Using Excel
🌟 Project Overview This project features a dynamic HR analytics dashboard built using IBM's HR dataset and Excel. The dashboard provides insights into employee demographics, job satisfaction, work-life balance, and turnover rates, enabling data-driven decision-making in human resources management.
✨ Key Highlights
🎛️ Interactive Filters: Explore insights by gender, department, overtime status, job satisfaction, job involvement, and attrition. Dynamic slicers allow for filtering and drilling into specific subsets of data.
📊 Visualizations:
Bar Charts: Gender distribution, monthly income, and business travel patterns by department and job role.
Pie Chart: Breakdown of employee job satisfaction levels.
Radar Chart: Work-life balance analysis based on marital status.
Line Graph: Training time by department.
📌 Top Metrics:
Total Employees: 1,470 (60% Male, 40% Female).
Turnover Rate: 16% overall, with insights segmented by gender and department.
Job Satisfaction: Visualized on a 4-point scale.
Focus Areas: Employee attrition patterns by demographics and job roles; departmental differences in income, training time, and job satisfaction; work-life balance analysis based on marital status and job involvement.
🎯 Purpose
The dashboard serves as a tool for HR managers, analysts, and stakeholders to:
- Understand key workforce trends.
- Identify areas of improvement in job satisfaction and work-life balance.
- Analyze factors influencing employee attrition.
💡 Why Excel?
This project demonstrates the power of Excel as a tool for creating interactive dashboards with rich visualizations, making it accessible to HR professionals without advanced technical skills.
💬 Let’s Discuss! I’d love to hear your feedback on this project! Share your thoughts, suggestions, or questions in the comments below. Let’s discuss ways to enhance the dashboard or dive deeper into HR analytics insights! 😊
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
With the user manual provided at the end of the research manuscript, and the Graph Input Data Example.xlsx as a reference, the user provides all the graph semantic data required to evaluate all the performance criteria for the system. These criteria include the probability that the principal target can be reached, and the costs, elapsed times, and total vulnerability resulting from a penetration attempt by one or more intruders. This performance computation is accurate and efficient, requiring an insignificant amount of computation time. It also resolves all the statistical dependencies and probabilistic uncertainties believed to be an important challenge to a risk manager and his or her analysts. The user enters the graph topological data in this Excel file, thereby creating a topological model.
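A minimal sketch of how topological data entered in a spreadsheet could be read into a graph structure for inspection; the sheet layout and the column names are assumptions, and this does not reproduce the manuscript's performance computation.

```python
# Sketch: read a hypothetical edge list from an Excel sheet into a directed graph.
# Column names ("source", "target", "success_prob") and node names are assumed,
# not taken from Graph Input Data Example.xlsx.
import pandas as pd
import networkx as nx

edges = pd.read_excel("Graph Input Data Example.xlsx", sheet_name=0)
G = nx.from_pandas_edgelist(edges, source="source", target="target",
                            edge_attr="success_prob", create_using=nx.DiGraph)

# Simple structural check: is the principal target reachable from the entry node at all?
print(nx.has_path(G, "entry", "principal_target"))   # node names are hypothetical
```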
https://creativecommons.org/publicdomain/zero/1.0/
Analyzing Coffee Shop Sales: Excel Insights 📈
In my first data analytics project, I uncover the secrets of a fictional coffee shop's success through a data-driven analysis. By analyzing a 5-sheet Excel dataset, I've surfaced valuable sales trends, customer preferences, and insights that can guide future business decisions. 📊☕
DATA CLEANING 🧹
• REMOVED DUPLICATES OR IRRELEVANT ENTRIES: Thoroughly eliminated duplicate records and irrelevant data to refine the dataset for analysis.
• FIXED STRUCTURAL ERRORS: Rectified any inconsistencies or structural issues within the data to ensure uniformity and accuracy.
• CHECKED FOR DATA CONSISTENCY: Verified the integrity and coherence of the dataset by identifying and resolving any inconsistencies or discrepancies.
DATA MANIPULATION 🛠️
• UTILIZED LOOKUPS: Used Excel's lookup functions for efficient data retrieval and analysis.
• IMPLEMENTED INDEX MATCH: Leveraged the INDEX MATCH combination to perform advanced data searches and matches.
• APPLIED SUMIFS FUNCTIONS: Utilized SUMIFS to calculate totals based on specified criteria.
• CALCULATED PROFITS: Used relevant formulas and techniques to determine profit margins and insights from the data (a pandas analogue of these steps is sketched below).
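A pandas analogue of the lookup, SUMIFS, and profit steps above; this is not the original Excel workbook, and the sheet and column names are hypothetical.

```python
# Pandas analogue of INDEX MATCH (merge), SUMIFS (filtered sum), and a profit column.
# Sheet and column names are hypothetical.
import pandas as pd

orders = pd.read_excel("coffee_shop.xlsx", sheet_name="orders")
products = pd.read_excel("coffee_shop.xlsx", sheet_name="products")

# INDEX MATCH equivalent: pull unit price onto each order row by product id
orders = orders.merge(products[["product_id", "unit_price"]], on="product_id", how="left")

# SUMIFS equivalent: total revenue for one product category
mask = orders["category"] == "Coffee"
coffee_revenue = (orders.loc[mask, "quantity"] * orders.loc[mask, "unit_price"]).sum()

# Profit calculation, assuming the orders sheet also carries a unit_cost column
orders["profit"] = (orders["unit_price"] - orders["unit_cost"]) * orders["quantity"]
```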
PIVOTING THE DATA 𝄜
• CREATED PIVOT TABLES: Utilized Excel's PivotTable feature to pivot the data for in-depth analysis.
• FILTERED DATA: Utilized pivot tables to filter and analyze specific subsets of data, enabling focused insights; especially used for the “PEAK HOURS” and “TOP 3 PRODUCTS” charts (see the sketch after this list).
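The pivot steps could be reproduced in pandas roughly as below; the column names (transaction time, product type, quantity) are assumptions, not taken from the original workbook.

```python
# Sketch of the PEAK HOURS and TOP 3 PRODUCTS pivots in pandas. Column names are assumed.
import pandas as pd

orders = pd.read_excel("coffee_shop.xlsx", sheet_name="orders")
orders["hour"] = pd.to_datetime(orders["transaction_time"]).dt.hour

peak_hours = orders.pivot_table(index="hour", values="quantity", aggfunc="sum")
top_products = (orders.groupby("product_type")["quantity"].sum()
                .sort_values(ascending=False).head(3))
```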
VISUALIZATION 📊
• KEY INSIGHTS: Unveiled the grand total sales revenue while also analyzing the average bill per person, offering comprehensive insights into the coffee shop's performance and customer spending habits.
• SALES TREND ANALYSIS: Used a line chart to track total sales across various time intervals, revealing valuable insights into evolving sales trends.
• PEAK HOUR ANALYSIS: Leveraged Clustered Column chart to identify peak sales hours, shedding light on optimal operating times and potential staffing needs.
• TOP 3 PRODUCTS IDENTIFICATION: Utilized Clustered Bar chart to determine the top three coffee types, facilitating strategic decisions regarding inventory management and marketing focus.
*I also used a Timeline to visualize chronological data trends and identify key patterns over specific times.
While it's a significant milestone for me, I recognize that there's always room for growth and improvement. Your feedback and insights are invaluable to me as I continue to refine my skills and tackle future projects. I'm eager to hear your thoughts and suggestions on how I can make my next endeavor even more impactful and insightful.
THANKS TO: WsCube Tech, Mo Chen, Alex Freberg
TOOLS USED: Microsoft Excel
*** Fake News on Twitter ***
These 5 datasets are the results of an empirical study on the spreading process of newly emerged fake news on Twitter. In particular, we focused on fake news that gave rise to a truth spreading simultaneously against it. The story of each fake news item is as follows:
1- FN1: A Muslim waitress refused to seat a church group at a restaurant, claiming "religious freedom" allowed her to do so.
2- FN2: Actor Denzel Washington said electing President Trump saved the U.S. from becoming an "Orwellian police state."
3- FN3: Joy Behar of "The View" sent a crass tweet about a fatal fire in Trump Tower.
4- FN4: The animated children's program 'VeggieTales' introduced a cannabis character in August 2018.
5- FN5: In September 2018, the University of Alabama football program ended its uniform contract with Nike, in response to Nike's endorsement deal with Colin Kaepernick.
Data collection was done in two stages, each providing a new dataset: (1) obtaining the Dataset of Diffusion (DD), which includes information on fake news/truth tweets and retweets; (2) querying the neighbors of tweet spreaders, which provides the Dataset of Graph (DG).
DD
DD for each fake news story is an Excel file, named FNx_DD where x is the number of the fake news story, with the following structure:
Each row corresponds to one captured tweet/retweet related to the rumor, and each column presents a specific piece of information about that tweet/retweet. From left to right, the columns contain the following (a loading sketch in pandas follows the list):
User ID (user who has posted the current tweet/retweet)
The profile description of the user who published the tweet/retweet
The number of tweets/retweets published by the user at the time of posting the current tweet/retweet
Date and time of creation of the account by which the current tweet/retweet has been posted
Language of the tweet/retweet
Number of followers
Number of followings (friends)
Date and time of posting the current tweet/retweet
Number of likes (favorites) the current tweet had acquired before it was crawled
Number of times the current tweet had been retweeted before it was crawled
Whether another tweet is embedded in the current tweet/retweet (for example, when the current tweet is a quote, reply, or retweet)
The source (device/OS) from which the current tweet/retweet was posted
Tweet/Retweet ID
Retweet ID (if the post is a retweet then this feature gives the ID of the tweet that is retweeted by the current post)
Quote ID (if the post is a quote then this feature gives the ID of the tweet that is quoted by the current post)
Reply ID (if the post is a reply then this feature gives the ID of the tweet that is replied by the current post)
Frequency of tweet occurrence, i.e., the number of times the current tweet is repeated in the dataset (for example, the number of times a tweet appears in the dataset as a retweet posted by others)
State of the tweet, which can be one of the following (assigned by agreement between the annotators):
r : The tweet/retweet is a fake news post
a : The tweet/retweet is a truth post
q : The tweet/retweet questions the fake news, neither confirming nor denying it
n : The tweet/retweet is not related to the fake news (it contains query terms related to the rumor but does not refer to the given fake news)
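A sketch of loading one DD file with pandas and assigning column names in the order listed above; the file extension, the absence of a header row, and the short names chosen here are assumptions to verify against the actual files.

```python
# Sketch: load one Dataset-of-Diffusion file and name its columns as described above.
# The column order mirrors the list in this description; the extension, the header
# handling, and the short names are assumptions to check against the real file.
import pandas as pd

columns = [
    "user_id", "user_description", "user_tweet_count", "account_created_at",
    "language", "followers", "followings", "posted_at", "likes", "retweet_count",
    "embedded_tweet", "source", "tweet_id", "retweet_id", "quote_id", "reply_id",
    "frequency", "state",
]
dd = pd.read_excel("FN1_DD.xlsx", header=None, names=columns)

fake_posts = dd[dd["state"] == "r"]     # fake-news posts
truth_posts = dd[dd["state"] == "a"]    # truth posts
```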
DG
DG for each fake news contains two files:
A file in graph format (.graph) that includes the graph information, such as which node is linked to which (this file is named FNx_DG.graph, where x is the number of the fake news story)
A file in JSON Lines format (.jsonl) that includes the real user IDs of the nodes in the graph file (this file is named FNx_Labels.jsonl, where x is the number of the fake news story)
In the graph file, the label of each node is the order in which it entered the graph. For example, if the node with user ID 12345637 is the first node entered into the graph file, then its label in the graph is 0 and its real ID (12345637) appears at row number 1 of the jsonl file (row number 0 belongs to the column labels); the remaining node IDs follow in subsequent rows, one user ID per row. Therefore, to find, for example, the user ID of node 200 (labeled 200 in the graph), look at row number 202 of the jsonl file.
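A sketch of recovering real user IDs for graph node labels from the jsonl file; the exact record layout is not specified above, so the parsing shown (one user ID per line, first line holding column labels) is an assumption, and the row offset should be verified against the file.

```python
# Sketch: map graph node labels to real user IDs using the jsonl file.
# Assumes one user ID per line in node-label order, with the first line holding
# column labels as described above; verify the exact offset against the file.
import json

with open("FN1_Labels.jsonl") as f:
    rows = [json.loads(line) for line in f if line.strip()]

# Skip the first row (column labels); node label i then maps to the next rows in order.
node_to_user = {label: rows[label + 1] for label in range(len(rows) - 1)}
print(node_to_user[200])   # real user ID of the node labeled 200 in FN1_DG.graph
```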
The user IDs of spreaders in DG (those who have had a post in DD) would be available in DD to get extra information about them and their tweet/retweet. The other user IDs in DG are the neighbors of these spreaders and might not exist in DD.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This MS Excel data has been processed into time series line graphs and data tables that give insight into changing physicochemical water quality characteristics and their influences. The study sets out to determine whether climate change has influenced physicochemical water quality characteristics both within and between the Breede and Olifants estuaries over a nine-year monitoring period. The data represent changes and comparisons in salinity, temperature, and rainfall within and between the Olifants and Breede river estuaries in the Western Cape Province of South Africa.
Authors: Brian Brown
Date: 27th November 1981
Brief Description: Data were recorded from Rod Smallwood's arm on the 27th November 1981; the dot matrix image shows the ulna and radius bones. We made a 'radiotherapy type' mould of the arm and then put drawing pins through the plastic (pin head inwards) as electrodes. There are two sets of data: one recorded from the arm and the other with saline filling the mould. The data were published in: D.C. Barber, B.H. Brown, and I.L. Freeston, "Imaging spatial distributions of resistivity using applied potential tomography", Electronics Letters, 19(22):933-935, 1983. http://digital-library.theiet.org/content/journals/10.1049/el_19830637
License: Creative Commons Artistic License (with Attribution)
Attribution Requirement: Use or presentation of these data must reference this publication: D.C. Barber, B.H. Brown, and I.L. Freeston, "Imaging spatial distributions of resistivity using applied potential tomography", Electronics Letters, 19(22):933-935, 1983
Format: Data are handwritten and scanned into the linked pdf file. The adjacent drive/receive data sets for both the Uniform (Saline) and Arm data are included in the attached Excel file. There are 6 columns of data in the xls file: the first three are for the uniform case and give the two reciprocal data sets and the mean of the two; columns 4-6 are for the arm. I did a quick reconstruction using columns 3 and 6 as ref and data respectively and it looked OK.
Methods: The attached pdf file shows the line printer output of the data we recorded from Rod Smallwood's arm on the 27th November 1981 and the dot matrix image which shows the ulna and radius bones. We made a 'radiotherapy type' mould of the arm and then put drawing pins through the plastic (pin head inwards) as electrodes. There are two sets of data: one recorded from the arm and the other with saline filling the mould. The pdf file also shows my plot of the XY position of the electrodes. The data set on the line printer is a complete data set, i.e. Drive 1/2 then 1/3 then 1/4 etc. for every combination. I could only find the print out for one of the data sets. However, I found my notebook with the adjacent drive/receive data set, and this is page 7 of the pdf file. I have extracted the adjacent drive/receive data sets for both the Uniform (Saline) and Arm data and included them in the attached Excel file. There are 6 columns of data in the xls file: the first three are for the uniform case and give the two reciprocal data sets and the mean of the two; columns 4-6 are for the arm. I did a quick reconstruction using columns 3 and 6 as ref and data respectively and it looked OK. The first column of data is 104 points as follows: Drive 1/2 receive 3/4, Drive 1/2 receive 4/5, etc., Drive 1/2 receive 16/1; Drive 2/3 receive 4/5, Drive 2/3 receive 5/6, etc., Drive 2/3 receive 16/1; Drive 4/5 receive 6/7, Drive 4/5 receive 7/8, etc., Drive 4/5 receive 16/1; and so on down to Drive 14/15 receive 16/1. The second column is the other reciprocal set. I think these data are the ones used to produce the image in the Electronics Letters paper of 1983 (page 1 of my pdf file). The data are also in the Contributed Data section of the EIDORS project on Sourceforge: http://eidors3d.sourceforge.net/data_contrib/bb-human-arm/bb-human-arm.shtml
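A sketch of reading the six-column spreadsheet and forming normalized difference data from the uniform mean (column 3, reference) and the arm mean (column 6, data), as the description suggests; the file name and the absence of a header row are assumptions.

```python
# Sketch: load the six-column EIT spreadsheet and form normalized difference data
# from the uniform-mean (column 3) and arm-mean (column 6) columns.
# File name and header handling are assumptions; verify against the attached xls file.
import pandas as pd

data = pd.read_excel("bb-human-arm.xls", header=None)
reference = data.iloc[:, 2]   # column 3: mean of the two uniform (saline) reciprocal sets
arm = data.iloc[:, 5]         # column 6: mean of the two arm reciprocal sets

diff = (arm - reference) / reference   # normalized difference data, 104 measurements
print(diff.describe())
```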
Replication files for "Job-to-Job Mobility and Inflation"
Authors: Renato Faccini and Leonardo Melosi
Review of Economics and Statistics
Date: February 2, 2023
--------------------------------------------------------------------------------------------
ORDER OF TOPICS
Section 1. We explain the code that replicates all the figures in the paper (except Figure 6).
Section 2. We explain how Figure 6 is constructed.
Section 3. We explain how the data are constructed.

SECTION 1
Replication_Main.m is used to reproduce all the figures of the paper except Figure 6. All the primitive variables are defined in the code, and all the steps are commented in the code to facilitate the replication of our results. Replication_Main.m should be run in Matlab. The authors tested it on a DELL XPS 15 7590 laptop with the following characteristics:
--------------------------------------------------------------------------------------------
Processor: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Installed RAM: 64.0 GB
System type: 64-bit operating system, x64-based processor
--------------------------------------------------------------------------------------------
It took 2 minutes and 57 seconds for this machine to construct Figures 1, 2, 3, 4a, 4b, 5, 7a, and 7b. The following versions of Matlab and Matlab toolboxes were used for the test:
--------------------------------------------------------------------------------------------
MATLAB Version: 9.7.0.1190202 (R2019b)
MATLAB License Number: 363305
Operating System: Microsoft Windows 10 Enterprise Version 10.0 (Build 19045)
Java Version: Java 1.8.0_202-b08 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
--------------------------------------------------------------------------------------------
MATLAB Version 9.7 (R2019b)
Financial Toolbox Version 5.14 (R2019b)
Optimization Toolbox Version 8.4 (R2019b)
Statistics and Machine Learning Toolbox Version 11.6 (R2019b)
Symbolic Math Toolbox Version 8.4 (R2019b)
--------------------------------------------------------------------------------------------
The replication code uses auxiliary files and saves the figures in various subfolders:
\JL_models: Contains the equations describing the model, including the observation equations, and the routine used to solve the model. To do so, the routine in this folder calls other routines located in some of the subfolders below.
\gensystoama: Contains a set of codes that allow us to solve linear rational expectations models. We use the AMA solver. More information is provided in the file AMASOLVE.m. The codes in this subfolder were developed by Alejandro Justiniano.
\filters: Contains the Kalman filter, augmented with a routine to make sure that the zero lower bound constraint for the nominal interest rate is satisfied in every period in our sample.
\SteadyStateSolver: Contains a set of routines used to solve the steady state of the model numerically.
\NLEquations: Contains some of the equations of the model that are log-linearized using the symbolic toolbox of Matlab.
\NberDates: Contains a set of routines that add shaded areas to graphs to denote NBER recessions.
\Graphics: Contains useful codes enabling features used to construct some of the graphs in the paper.
\Data: Contains the data set used in the paper.
\Params: Contains a spreadsheet with the values attributed to the model parameters.
\VAR_Estimation: Contains the forecasts implied by the Bayesian VAR model of Section 2.
The output of Replication_Main.m is the set of figures of the paper, which are stored in the subfolder \Figures.

SECTION 2
The Excel file "Figure-6.xlsx" is used to create the charts in Figure 6. All three panels of the charts (A, B, and C) plot a measure of unexpected wage inflation against the unemployment rate, then fit separate linear regressions for the periods 1960-1985, 1986-2007, and 2008-2009. Unexpected wage inflation is given by the difference between wage growth and a measure of expected wage growth. In all three panels, the unemployment rate used is the civilian unemployment rate (UNRATE), seasonally adjusted, from the BLS. The sheet "Panel A" uses quarterly manufacturing sector average hourly earnings growth data, seasonally adjusted (CES3000000008), from the Bureau of Labor Statistics (BLS) Employment Situation report as the measure of wage inflation. Unexpected wage inflation is given by the difference between earnings growth at time t and the average of earnings growth across the previous four months. Growth rates are annualized quarterly values. The sheet "Panel B" uses quarterly Nonfarm Business Sector Compensation Per Hour, seasonally adjusted (COMPNFB), from the BLS Productivity and Costs report as its measure of wage inflation. As in Panel A, expected wage inflation is given by the... Visit https://dataone.org/datasets/sha256%3A44c88fe82380bfff217866cac93f85483766eb9364f66cfa03f1ebdaa0408335 for complete metadata about this dataset.
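A pandas sketch of the Panel A construction as described above: unexpected wage inflation as earnings growth at time t minus the average of the previous four observations, plotted against UNRATE. The column names and the sheet layout are assumptions, not the actual contents of Figure-6.xlsx.

```python
# Sketch of the Panel A calculation: unexpected wage inflation = wage growth at t
# minus the average of the previous four observations. Column names are hypothetical.
import pandas as pd

df = pd.read_excel("Figure-6.xlsx", sheet_name="Panel A")   # layout assumed
wage_growth = df["CES3000000008_growth"]                    # hypothetical column name
expected = wage_growth.shift(1).rolling(4).mean()           # average of previous four values

df["unexpected_wage_inflation"] = wage_growth - expected

# Scatter of unexpected wage inflation against the unemployment rate, as in the figure
df.plot.scatter(x="UNRATE", y="unexpected_wage_inflation")
```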
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In preparation for some experiments on deuterium effects in E. coli and S. cerevisiae, I grew a starter culture and diluted it to three different concentrations: 1:10, 1:5, and 1:2. These dilutions were then grown at 37C for 4 hours, and an absorption measurement was taken every hour. This fileset contains the raw data and some manipulated data, along with some figures made in Excel from the data. The file labeled "arb-ecoli-growth.png" is a figure made from manipulated data: I tried to combine the three data sets into one graph to see whether I could extract some sort of growth information. I'm pretty sure I didn't do it right, but I included the image here nonetheless. In the 1:10 dilution sample, the cells doubled in slightly less than one hour, every hour. In the 1:2 dilution, the growth rate was much slower and seemed to peak rather early in the trial. The 1:5 dilution overlaps the growth of both the 1:10 and 1:2 dilutions; I don't know what to make of that. Also included in the fileset is an image of the absorbance spectrum from the nanodrop for every sample (including blanks taken every hour).
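As a hedged illustration of how a doubling time could be pulled out of hourly absorbance readings (this is not the analysis in the fileset), one can fit a line to log2 of the OD values over time; the numbers below are invented placeholders.

```python
# Sketch: estimate a doubling time from hourly absorbance (OD) readings by
# fitting a line to log2(OD) versus time. OD values below are invented placeholders.
import numpy as np

hours = np.array([0, 1, 2, 3, 4])
od = np.array([0.05, 0.09, 0.17, 0.33, 0.60])       # hypothetical 1:10 dilution readings

slope, _ = np.polyfit(hours, np.log2(od), 1)        # doublings per hour
print(f"Doubling time = {1 / slope:.2f} h")
```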