Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Excel-based tool was developed to analyse means-end chain (MEC) data. The tool consists of a user manual, a data input file to correctly organise your MEC data, a calculator file to analyse your data, and instructional videos. The purpose of this tool is to aggregate laddering data into hierarchical value maps showing means-end chains. The summarised results consist of (1) a summary overview, (2) a matrix, and (3) output for copy/pasting into NodeXL to generate hierarchical value maps (HVMs). To use this tool, you must have collected data via laddering interviews. Ladders are codes linked together consisting of attributes, consequences and values (ACVs).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Spatial data are often aggregated by area to protect the confidentiality of individuals and to aid the calculation of pertinent risks and rates. However, the analysis of spatially aggregated data is susceptible to the modifiable areal unit problem (MAUP), which arises when inference varies with boundary or aggregation changes. While the impact of the MAUP has been examined previously, these studies have typically focused on well-populated areas. Understanding how the MAUP behaves when data are sparse is particularly important for countries with less populated areas, such as Australia. This study aims to assess different geographical regions’ vulnerability to the MAUP when data are relatively sparse, to inform researchers’ choice of aggregation level for fitting spatial models.
Methods: To understand the impact of the MAUP in Queensland, Australia, the present study investigates inference from simulated lung cancer incidence data using the five levels of spatial aggregation defined by the Australian Statistical Geography Standard. To this end, Bayesian spatial BYM models with and without covariates were fitted.
Results and conclusion: The MAUP impacted inference in the analysis of cancer counts for data aggregated to the coarsest areal structures. However, areal structures with moderate resolution were not greatly impacted by the MAUP, and offer advantages in terms of data sparsity, computational intensity and availability of data sets.
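The core of the MAUP can be illustrated with a toy example (the numbers are invented for illustration, not taken from the study): the same case counts produce different spatial rate patterns depending solely on how small areas are merged.

```python
# Four equal-population small areas with simulated case counts
# (illustrative numbers, not taken from the study).
cases = [2, 8, 5, 5]          # cases per small area
pop_per_area = 100            # equal populations keep the arithmetic plain

rates_fine = [c / pop_per_area for c in cases]   # [0.02, 0.08, 0.05, 0.05]

# Aggregation A merges areas (0,1) and (2,3); B merges (0,2) and (1,3).
rates_a = [(cases[0] + cases[1]) / 200, (cases[2] + cases[3]) / 200]
rates_b = [(cases[0] + cases[2]) / 200, (cases[1] + cases[3]) / 200]

# rates_a == [0.05, 0.05]: the spatial contrast vanishes entirely.
# rates_b == [0.035, 0.065]: an apparent low/high pattern emerges.
```

Under aggregation A the elevated-incidence area disappears, while aggregation B manufactures a contrast; any model fitted to the aggregated counts inherits this boundary-dependence.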
The data.zip dataset contains metadata and total suspended solids, total phosphorus, nitrate plus nitrite, and total Kjeldahl nitrogen concentration data, together with associated daily mean streamflow data, for the White River at Muncie, near Nora, and near Centerton, Indiana, 1991-2017.
The Giraffidae are represented by only two extant species, the okapi (Okapia johnstoni) and the giraffe. Both have unusual and poorly understood social systems (reviewed by Dagg & Foster 1976; du Toit 2001; Pellew 2001). Although giraffes are typically observed in aggregations, they appear to join, and to leave, them independently of others, suggesting that they do not form long-term social bonds. It may be that adaptive benefits usually ascribed to social species have exerted selective pressure on what are essentially asocial animals to aggregate in this way. These benefits might include foraging efficiency (Krebs & Davies 1997, and see Bertram 1980) and/or collective vigilance (Pulliam 1973; Elgar 1989). Alternatively, giraffes may perceive their social environment in ways that are difficult for human observers to identify (Cameron & du Toit 2005). It may be that their behaviour is modified, not by the composition of whole aggregations, but only by the identity of and distance to their immediate neighbour/s (see e.g. Treves 1998). It may also be, however, that they are able to maintain contact with one another over long distances by means of visual, olfactory and/or infrasonic signals and that they spend much more of their time in stable groups (as they perceive them) than has been appreciated hitherto. The purpose of this study is to investigate the first of these two possibilities and to contribute to the elucidation of the second. It arises from and will extend the work of Cameron and du Toit (2005).
Hypotheses. Null hypothesis: (asocial) giraffes co-occur at sites of localised resources, e.g. food patches. Alternative hypotheses: benefits accrue to them (as in social species) from (i) sharing vigilance effort with others and/or (ii) cueing on public information about food resources.
Predictions 1. The frequency and/or duration of individual vigilance is expected to decrease as a function of increasing aggregation size. 2. The time individuals spend foraging is expected to increase as a function of increasing aggregation size.
Research questions 1. Does aggregation size influence the time spent vigilant by individuals? 2. Does aggregation size influence the time spent foraging by individuals? 3. What is the frequency distribution of aggregation sizes? 4. What is the frequency distribution of aggregation compositions? 5. What is the frequency distribution of nearest/close neighbours distances? 6. What is the frequency distribution of nearest/close neighbours identities?
https://artefacts.ceda.ac.uk/licences/missing_licence.pdf
This dataset contains about 5 years of analysed observations regarding the degree of convective aggregation, or clumping, across the tropics - these are averaged onto a large-scale grid. There are also additional variables which represent environmental fields (e.g. sea surface temperature from satellite data, or humidity profiles averaged from reanalysis data) averaged onto the same large-scale grid. The main aggregation index is the Simple Convective Aggregation Index (SCAI) originally defined in Tobin et al. 2012, Journal of Climate. The data were created during the main years of CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite data so that they could be compared with vertical cloud profiles from this satellite data, and the results of this analysis appear in Stein et al. 2017, Journal of Climate.
Each file is one year of data (although the year may not be complete).
Each variable is an array: var(nlon, nlat, [nlev], ntime).
longitude, latitude, pressure and time are variables in each file.
Units are attributes of each variable (except non-dimensional ones).
missing_value is 3.0E20 and is an attribute of each variable.
Time is in days since 1979-01-01 00:00Z and is every 3 hours, at 00Z, 03Z, and so on. The actual temporal frequency of the data is described for each variable below.
The data are given for each 10 deg x 10 deg lat/lon box, 30S-30N (at the outer edges of the box domain), with each box defined by its centre coordinates and with boxes overlapping each other by 5 deg in each direction.
In general, each variable is a spatial average over each box, with the value set to missing if more than 15% of the box is missing data. Exceptions to this are given below. The most important exception is for the brightness temperature data, used in aggregation statistics, which is filled in using neighborhood averaging if no more than 5% of the pixels are missing, but otherwise is considered to be all missing data. The percentage of missing pixels is recorded in 'bt_miss_frac'.
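As a sketch of how the conventions above can be applied when reading the files (any netCDF reader would do; the toy array here is an assumption, not data from the archive), the time axis and the missing_value flag decode as:

```python
from datetime import datetime, timedelta

import numpy as np

MISSING = 3.0e20  # the missing_value attribute described above


def decode_times(days_since_base):
    """Convert 'days since 1979-01-01 00:00Z' offsets to datetimes."""
    base = datetime(1979, 1, 1)
    return [base + timedelta(days=float(d)) for d in days_since_base]


# 3-hourly sampling means steps of 0.125 days.
times = decode_times([0.0, 0.125, 0.25])  # 00Z, 03Z, 06Z on 1 Jan 1979

# Mask flagged cells before averaging a (toy) field.
field = np.ma.masked_values(np.array([280.0, MISSING, 285.5]), MISSING)
mean_field = field.mean()  # ignores the masked cell -> 282.75
```

Masking before any spatial or temporal averaging mirrors the 15% missing-data rule described above: statistics are computed only over the valid cells.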
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution; climate and soil data are therefore often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with the bias varying largely across models. We therefore evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was within the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. By illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.
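A minimal sketch of the error metric follows, assuming rMAE is the mean absolute yield deviation normalised by the mean of the high-resolution reference run; the paper's exact normalisation may differ, and the yield numbers are invented.

```python
import numpy as np


def rmae(aggregated, reference):
    """Relative mean absolute error of yields simulated with
    aggregated inputs, relative to the mean reference yield."""
    aggregated = np.asarray(aggregated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.mean(np.abs(aggregated - reference)) / reference.mean()


# Toy yields in t/ha: input aggregation shifts each simulated value.
ref = [8.0, 9.0, 10.0, 9.0]   # 1 km reference run
agg = [8.5, 8.0, 10.5, 9.0]   # run with aggregated soil/climate inputs
error = rmae(agg, ref)        # mean |diff| = 0.5; / 9.0 ~ 0.056
```

Comparing this value against the inter-annual standard deviation of the reference yields is the kind of benchmark the abstract describes.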
The 2006 Second Edition TIGER/Line files are an extract of selected geographic and cartographic information from the Census TIGER database. The geographic coverage for a single TIGER/Line file is a county or statistical equivalent entity, with the coverage area based on the latest available governmental unit boundaries. The Census TIGER database represents a seamless national file with no overlaps or gaps between parts. However, each county-based TIGER/Line file is designed to stand alone as an independent data set or the files can be combined to cover the whole Nation. The 2006 Second Edition TIGER/Line files consist of line segments representing physical features and governmental and statistical boundaries. This shapefile represents the current State House Districts for New Mexico as posted on the Census Bureau website for 2006.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Iris data aggregation class effect.
There are a number of ways to test for the absence/presence of a spatial signal in a completely observed fine-resolution image. One of these is a powerful nonparametric procedure called enhanced false discovery rate (EFDR). A drawback of EFDR is that it requires the data to be defined on regular pixels in a rectangular spatial domain. Here, we develop an EFDR procedure for possibly incomplete data defined on irregular small areas. Motivated by statistical learning, we use conditional simulation (CS) to condition on the available data and simulate the full rectangular image at its finest resolution many times (M, say). EFDR is then applied to each of these simulations resulting in M estimates of the signal and M statistically dependent p-values. Averaging over these estimates yields a single, combined estimate of a possible signal, but inference is needed to determine whether there really is a signal present. We test the original null hypothesis of no signal by combining the M p-values into a single p-value using copulas and a composite likelihood. If the null hypothesis of no signal is rejected, we use the combined estimate. We call this new procedure EFDR-CS and, to demonstrate its effectiveness, we show results from a simulation study; an experiment where we introduce aggregation and incompleteness into temperature-change data in the Asia-Pacific; and an application to total-column carbon dioxide from satellite remote sensing data over a region of the Middle East, Afghanistan, and the western part of Pakistan. Supplementary materials for this article are available online.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Political scientists use expert surveys to assess latent features of political actors. Experts, though, are unlikely to be equally informed and assess all actors equally well. The literature acknowledges variance in measurement quality, but pays little attention to the implications of uncertainty for aggregating responses. We discuss the nature of the measurement problem in expert surveys. We then propose methods to assess the ability of experts to judge where actors stand and to aggregate expert responses. We examine the effects of aggregation for a prominent survey in the literature on party politics and EU integration. Using a Monte Carlo simulation, we demonstrate that it is better to aggregate expert responses using the median or modal response, rather than the mean.
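The robustness argument can be seen in a toy example (the ratings are invented for illustration): a single poorly informed expert drags the mean away from the consensus, while the median and mode stay put.

```python
import statistics

# Five expert placements of one party on a 0-10 scale; the last
# expert is badly misinformed (illustrative numbers only).
ratings = [6, 6, 7, 6, 1]

mean_position = statistics.mean(ratings)      # 5.2, pulled by the outlier
median_position = statistics.median(ratings)  # 6, robust to the outlier
modal_position = statistics.mode(ratings)     # 6, the consensus placement
```

With heterogeneous expert quality, the median and modal aggregates discount exactly the responses that inflate the mean's error, which is the pattern the Monte Carlo simulation in the abstract demonstrates.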
Aggregation of generic tables describing the Noise Zones for all terrestrial infrastructure, map type A and the Lden index. ‘Type a’ exposure maps: maps to be produced within the framework of the CBS (strategic noise maps) pursuant to Article 3-II-1°-a of the Decree of 24 March 2006. This dataset represents, for the year of mapping, the areas exposed to more than 55 dB(A) in Lden; it shows the isophone curves in 5 dB(A) steps. Lden (Level Day-Evening-Night) is a sound level indicator corresponding to an equivalent 24-hour sound level in which evening and night noise levels are increased by 5 and 10 dB(A), respectively, to reflect the greater discomfort caused during these periods. Aggregation obtained with the QGIS MIZOGEO plugin made available by CEREMA. Data source by infrastructure: CEREMA.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The 2006 Second Edition TIGER/Line files are an extract of selected geographic and cartographic information from the Census TIGER database. The geographic coverage for a single TIGER/Line file is a county or statistical equivalent entity, with the coverage area based on the latest available governmental unit boundaries. The Census TIGER database represents a seamless national file with no overlaps or gaps between parts. However, each county-based TIGER/Line file is designed to stand alone as an independent data set or the files can be combined to cover the whole Nation. The 2006 Second Edition TIGER/Line files consist of line segments representing physical features and governmental and statistical boundaries. This shapefile represents the current State Senate Districts for New Mexico as posted on the Census Bureau website for 2006.
https://cdla.io/permissive-1-0/
This dataset contains 4 files with aggregated poker data from the flop. The original dataset consists of ~26M rows of all 5 card combinations of hands, evaluated to a rank between 1 and 7462. There are two rank distribution files, one with the data aggregated by the starting hand, the other by the flop. The weighted files contain a weighted mean of these rank distributions, roughly based on the 7 card hand type distribution.
To access these compressed files, simply use the pandas 'read_pickle()' method. To find an example, you can reference the documentation or my Poker Analysis notebook. Please note, the rank distribution files are very sparse and may need to be converted to a dense dataframe depending on what you're doing.
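A minimal sketch of the read-and-densify step follows; a toy sparse frame stands in for the real files, whose names and layout are not assumed here.

```python
import os
import tempfile

import numpy as np
import pandas as pd

# Build and pickle a toy sparse rank-distribution frame
# (placeholder for the real archive files).
path = os.path.join(tempfile.gettempdir(), "toy_ranks.pkl")
dense = pd.DataFrame(np.eye(3), columns=["rank_1", "rank_2", "rank_3"])
dense.astype(pd.SparseDtype("float", 0.0)).to_pickle(path)

# Read it back, then densify sparse columns before heavy numeric work.
df = pd.read_pickle(path)
if any(isinstance(t, pd.SparseDtype) for t in df.dtypes):
    df = df.sparse.to_dense()
```

Keeping the frames sparse saves memory while they sit on disk; converting to dense is only needed for operations that do not support sparse dtypes.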
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These are the data associated with the paper "Predicting aggregate morphology of sequence-defined macromolecules with Recurrent Neural Networks" (DOI 10.1039/D2SM00452F). Three of the directories contain subdirectories with GSD files dumped from HOOMD. The other contains pretrained RNN models as TorchScript binaries exported from PyTorch.
Our People data is gathered and aggregated via surveys, digital services, and public data sources. We use powerful profiling algorithms to collect and ingest only fresh and reliable data points.
Our comprehensive data enrichment solution includes a variety of data sets that can help you address gaps in your customer data, gain a deeper understanding of your customers, and power superior client experiences.
People Data Schema & Reach: Our data reach represents the total number of counts available within various categories and comprises attributes such as country location, MAU, DAU and monthly location pings.
Data Export Methodology: Since we collect data dynamically, we provide the most updated data and insights via a best-suited method on a suitable interval (daily/weekly/monthly).
People Data Use Cases:
360-Degree Customer View: Get a comprehensive image of customers by means of internal and external data aggregation.
Data Enrichment: Leverage online-to-offline consumer profiles to build holistic audience segments and improve campaign targeting using user data enrichment.
Fraud Detection: Use multiple digital (web and mobile) identities to verify real users and detect anomalies or fraudulent activity.
Advertising & Marketing: Understand audience demographics, interests, lifestyle, hobbies, and behaviors to build targeted marketing campaigns.
Here's the schema of People Data:
person_id
first_name
last_name
age
gender
linkedin_url
twitter_url
facebook_url
city
state
address
zip
zip4
country
delivery_point_bar_code
carrier_route
walk_sequence_code
fips_state_code
fips_county_code
county_name
latitude
longitude
address_type
metropolitan_statistical_area
core_based_statistical_area
census_tract
census_block_group
census_block
primary_address
pre_address
street
post_address
address_suffix
address_secondline
address_abbrev
census_median_home_value
home_market_value
property_build_year
property_with_ac
property_with_pool
property_with_water
property_with_sewer
general_home_value
property_fuel_type
year
month
household_id
census_median_household_income
household_size
marital_status
length_of_residence
number_of_kids
pre_school_kids
single_parents
working_women_in_house_hold
homeowner
children
adults
generations
net_worth
education_level
occupation
education_history
credit_lines
credit_card_user
newly_issued_credit_card_user
credit_range_new
credit_cards
loan_to_value
mortgage_loan2_amount
mortgage_loan_type
mortgage_loan2_type
mortgage_lender_code
mortgage_loan2_lender_code
mortgage_lender
mortgage_loan2_lender
mortgage_loan2_ratetype
mortgage_rate
mortgage_loan2_rate
donor
investor
interest
buyer
hobby
personal_email
work_email
devices
phone
employee_title
employee_department
employee_job_function
skills
recent_job_change
company_id
company_name
company_description
technologies_used
office_address
office_city
office_country
office_state
office_zip5
office_zip4
office_carrier_route
office_latitude
office_longitude
office_cbsa_code
office_census_block_group
office_census_tract
office_county_code
company_phone
company_credit_score
company_csa_code
company_dpbc
company_franchiseflag
company_facebookurl
company_linkedinurl
company_twitterurl
company_website
company_fortune_rank
company_government_type
company_headquarters_branch
company_home_business
company_industry
company_num_pcs_used
company_num_employees
company_firm_individual
company_msa
company_msa_name
company_naics_code
company_naics_description
company_naics_code2
company_naics_description2
company_sic_code2
company_sic_code2_description
company_sic_code4
company_sic_code4_description
company_sic_code6
company_sic_code6_description
company_sic_code8
company_sic_code8_description
company_parent_company
company_parent_company_location
company_public_private
company_subsidiary_company
company_residential_business_code
company_revenue_at_side_code
company_revenue_range
company_revenue
company_sales_volume
company_small_business
company_stock_ticker
company_year_founded
company_minorityowned
company_female_owned_or_operated
company_franchise_code
company_dma
company_dma_name
company_hq_address
company_hq_city
company_hq_duns
company_hq_state
company_hq_zip5
company_hq_zip4
company_se...
According to our latest research, the global broadband aggregation market size is valued at USD 14.2 billion in 2024, exhibiting robust momentum with a compound annual growth rate (CAGR) of 8.7% from 2025 to 2033. The market is expected to reach USD 29.8 billion by 2033, driven by the surging demand for high-speed internet connectivity, rapid digitization across sectors, and the proliferation of smart devices. As per our in-depth analysis, the primary growth factor stems from the continuous expansion of broadband infrastructure and the increasing need for efficient data traffic management worldwide.
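As a quick arithmetic check, compounding the 2024 base at the stated CAGR over the nine years to 2033 lands close to the stated forecast:

```python
base_usd_bn = 14.2   # 2024 market size, USD billion
cagr = 0.087         # stated CAGR, 2025-2033
years = 9            # 2024 -> 2033

projected = base_usd_bn * (1 + cagr) ** years
# projected ~ 30.1, consistent with the stated USD 29.8 billion
# (the small gap suggests the underlying forecast is not perfectly flat).
```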
One of the pivotal growth drivers for the broadband aggregation market is the exponential surge in data consumption, propelled by the widespread adoption of video streaming, cloud computing, and the Internet of Things (IoT). With more households and businesses relying on bandwidth-intensive applications, service providers are compelled to invest in advanced aggregation solutions to optimize network performance and minimize latency. The rise of remote work and online education has further intensified the need for robust broadband networks, prompting governments and private enterprises to prioritize network upgrades and expansions. This persistent demand is fostering innovation in aggregation hardware and software, enabling seamless integration of diverse access technologies and supporting the scalability required for next-generation broadband services.
Another significant growth factor is the evolution of network architectures, notably the transition to 5G and the deployment of fiber-to-the-premises (FTTP) infrastructure. These advancements necessitate sophisticated broadband aggregation solutions capable of consolidating multiple access points and efficiently routing massive volumes of data traffic. The integration of software-defined networking (SDN) and network function virtualization (NFV) is further revolutionizing the market, allowing operators to dynamically manage network resources and enhance service agility. The convergence of fixed and mobile networks is also driving the adoption of unified broadband aggregation platforms, enabling service providers to deliver consistent user experiences across various access technologies.
The market is also benefiting from strategic collaborations and investments aimed at bridging the digital divide in underserved regions. Governments and international organizations are launching initiatives to expand broadband coverage in rural and remote areas, creating lucrative opportunities for broadband aggregation vendors. These efforts are complemented by the growing emphasis on smart city projects, which require resilient and scalable broadband infrastructure to support connected devices, public safety applications, and digital services. The increasing focus on cybersecurity and data privacy is further shaping the market landscape, prompting the development of secure aggregation solutions that safeguard critical network assets.
Broadband Services are at the heart of this digital transformation, providing the backbone for high-speed internet access that powers both residential and commercial applications. As consumers and businesses alike demand faster and more reliable internet connections, broadband services are evolving to meet these needs through innovative technologies and infrastructure upgrades. This evolution is not only enhancing user experiences but also enabling new applications and services that were previously unimaginable. From streaming high-definition content to supporting complex business operations, broadband services are essential for maintaining competitiveness in today's digital economy.
Regionally, the Asia Pacific market is witnessing the fastest growth, fueled by large-scale infrastructure projects in China, India, and Southeast Asia. North America and Europe remain significant contributors, driven by early adoption of advanced broadband technologies and strong investments in network modernization. Latin America and the Middle East & Africa are emerging as promising markets, supported by government-led broadband expansion programs and rising demand for digital services. This regional diversity reflects the global imperative to enhance internet accessibility and quality, positioning
https://www.ontario.ca/page/open-government-licence-ontario
This spatial dataset represents areas where resources may be extracted within the limits of the aggregate licence or permit for the associated site. Reporting requirements are optional, which means records will be sporadic and limited to certain areas of the province.
Additional details related to aggregates in Ontario are available in related data classes as well as online using the interactive Pits and Quarries map.
Additional Documentation
Aggregate Extraction Area - Data Description (PDF)
Aggregate Extraction Area - Documentation (Word)
Status
Ongoing: data is being continually updated
Maintenance and Update Frequency
As needed: data is updated as deemed necessary
Contact
Ryan Lenethen, Integration Branch, ryan.lenethen@ontario.ca
Aggregation of generic tables describing the Noise Zones for one infrastructure; type of infrastructure concerned: ROUTE (R); map type A and the Lden index.
Road infrastructure concerned: A68, C1_albi, C1_castres, D100, D1012, D13, D612, D622, D630, D631, D69, D800, D81, D84, D87, D88, D912, D926, D968, D988, D999A, D999, N112, N126, N88
‘Type a’ exposure maps: maps to be produced within the framework of the CBS (strategic noise maps) pursuant to Article 3-II-1°-a of the Decree of 24 March 2006. These are two maps representing, for the year in which the maps were drawn up, the areas exposed to more than 55 dB(A) in Lden. They show the isophone curves in 5 dB(A) steps.
Lden (Level Day-Evening-Night) is a sound level indicator corresponding to an equivalent 24-hour sound level in which evening and night noise levels are increased by 5 and 10 dB(A), respectively, to reflect the greater discomfort caused during these periods.
Aggregation obtained by the QGIS MIZOGEO plugin made available by CEREMA.
Data source by infrastructure: CEREMA.
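The Lden definition above corresponds to the standard EU formula, sketched here; the 12 h day / 4 h evening / 8 h night split follows the Directive 2002/49/EC defaults, which the French decree may adjust.

```python
import math


def lden(l_day, l_evening, l_night):
    """24-hour equivalent level with +5 dB(A) evening and +10 dB(A)
    night penalties (12 h day, 4 h evening, 8 h night)."""
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )


# A constant 60 dB(A) across all periods exceeds 60 once penalised:
value = lden(60, 60, 60)  # ~ 66.4 dB(A)
```

This is why the 55 dB(A) Lden threshold in the maps is more stringent than a plain 24-hour average would suggest.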
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The file set is a freely downloadable aggregation of information about Australian schools. The individual files represent a series of tables which, when considered together, form a relational database. The records cover the years 2008-2014 and include information on approximately 9500 primary and secondary school main-campuses and around 500 subcampuses. The records all relate to school-level data; no data about individuals is included. All the information has previously been published and is publicly available, but it has not previously been released as a documented, useful aggregation. The information includes: (a) the names of schools (b) staffing levels, including full-time and part-time teaching and non-teaching staff (c) student enrolments, including the number of boys and girls (d) school financial information, including Commonwealth government, state government, and private funding (e) test data, potentially for school years 3, 5, 7 and 9, relating to an Australian national testing programme known by the trademark 'NAPLAN'.
Documentation of this Edition 2016.1 is incomplete but the organization of the data should be readily understandable to most people. If you are a researcher, the simplest way to study the data is to make use of the SQLite3 database called 'school-data-2016-1.db'. If you are unsure how to use an SQLite database, ask a guru.
The database was constructed directly from the other included files by running the following command at a command-line prompt: sqlite3 school-data-2016-1.db < school-data-2016-1.sql. Note that a few non-consequential errors will be reported if you run this command yourself. The reason for the errors is that the SQLite database is created by importing a series of '.csv' files. Each of the .csv files contains a header line with the names of the variables relevant to each column. This information is useful for many statistical packages, but it is not what SQLite expects, so it complains about the header. Despite the complaint, the database will be created correctly.
Briefly, the data are organized as follows. (1) The .csv files ('comma separated values') do not actually use a comma as the field delimiter. Instead, the vertical bar character '|' (ASCII octal 174, decimal 124, hex 7C) is used. If you read the .csv files using Microsoft Excel, Open Office, or Libre Office, you will need to set the field separator to '|'. Check your software documentation to understand how to do this. (2) Each school-related record is indexed by an identifier called 'ageid'. The ageid uniquely identifies each school and consequently serves as the appropriate variable for JOIN-ing records in different data files. For example, the first school-related record after the header line in file 'students-headed-bar.csv' shows the ageid of the school as 40000. The relevant school name can be found by looking in the file 'ageidtoname-headed-bar.csv' to discover that the ageid of 40000 corresponds to a school called 'Corpus Christi Catholic School'. (3) In addition to the variable 'ageid', each record is also identified by one or two 'year' variables. The most important purpose of a year identifier is to indicate the year that is relevant to the record. For example, if one turns again to file 'students-headed-bar.csv', one sees that the first seven school-related records after the header line all relate to the school Corpus Christi Catholic School with ageid of 40000. The variable that identifies the important differences between these seven records is 'studentyear', which shows the year to which the student data refer. One can see, for example, that in 2008 there were a total of 410 students enrolled, of whom 185 were girls and 225 were boys (look at the variable names in the header line). (4) The variables relating to years are given different names in each of the different files ('studentsyear' in the file 'students-headed-bar.csv', 'financesummaryyear' in the file 'financesummary-headed-bar.csv').
Despite the different names, the year variables provide the second-level means for joining information across files. For example, if you wanted to relate the enrolments at a school in each year to its financial state, you might wish to JOIN records using 'ageid' in the two files and, secondarily, matching 'studentsyear' with 'financialsummaryyear'. (5) The manipulation of the data is most readily done using the SQL language with the SQLite database, but it can also be done in a variety of statistical packages. (6) It is our intention for Edition 2016-2 to create large 'flat' files suitable for use by non-researchers who want to view the data with spreadsheet software. The disadvantage of such 'flat' files is that they contain vast amounts of redundant information and might not display the data in the form that the user most wants. (7) Geocoding of the schools is not available in this edition. (8) Some files, such as 'sector-headed-bar.csv', are not used in the creation of the database but are provided as a convenience for researchers who might wish to recode some of the data to remove redundancy. (9) A detailed example of a suitable SQLite query can be found in the file 'school-data-sqlite-example.sql'. The same query, used in the context of analyses done with the excellent, freely available R statistical package (http://www.r-project.org), can be seen in the file 'school-data-with-sqlite.R'.
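A minimal sketch of the JOIN pattern described above, using toy rows in an in-memory database (the example school, ageid, and 2008 enrolment figures are taken from the text; the real tables hold many more columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ageidtoname (ageid INTEGER, name TEXT)")
con.execute(
    "CREATE TABLE students (ageid INTEGER, studentsyear INTEGER, "
    "total INTEGER, girls INTEGER, boys INTEGER)"
)
con.execute(
    "INSERT INTO ageidtoname VALUES (40000, 'Corpus Christi Catholic School')"
)
con.execute("INSERT INTO students VALUES (40000, 2008, 410, 185, 225)")

# JOIN on ageid and filter on the year variable, as described above.
row = con.execute(
    "SELECT n.name, s.total FROM students AS s "
    "JOIN ageidtoname AS n ON s.ageid = n.ageid "
    "WHERE s.studentsyear = 2008"
).fetchone()
# row -> ('Corpus Christi Catholic School', 410)
```

The same statement runs unchanged against 'school-data-2016-1.db' once the real table and column names are substituted.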