The data provides a summary of the state of development practice for Geographic Information Systems (GIS) software (as of August 2017). The summary is based on grading a set of 30 GIS products using a template of 56 questions based on 13 software qualities. The products range in scope and purpose from complete desktop GIS systems, to stand-alone tools, to programming libraries/packages.
The template used to grade the software is found in the TabularSummaries.zip file. Each quality is measured with a series of questions. To avoid ambiguity, the responses are quantified wherever possible (e.g. yes/no answers). The goal is for measures that are visible, measurable, and feasible to collect in a short time with limited domain knowledge. Unlike a comprehensive software review, this template does not grade on functionality and features. Therefore, it is possible for a relatively featureless product to outscore a feature-rich product.
A virtual machine is used to provide an optimal testing environment for each software product. During the process of grading the 30 software products, it is much easier to create a new virtual machine for each product than to test on the host operating system and file system.
The raw data obtained by measuring each software product is in SoftwareGrading-GIS.xlsx. Each line in this file corresponds to between 2 and 4 hours of measurement time by a software engineer. The results are summarized for each quality in the TabularSummaries.zip file, as a .tex file and a compiled .pdf file.
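For a quick look at the raw grades, the spreadsheet can be loaded directly with pandas. The sketch below assumes only the file name given above and makes no assumptions about the exact column layout.

```python
# Minimal sketch: inspect the raw grading spreadsheet with pandas.
# Only the file name is taken from the description; the column layout is
# whatever the spreadsheet actually contains.
import pandas as pd

grades = pd.read_excel("SoftwareGrading-GIS.xlsx")

print(grades.shape)       # expect roughly one row per graded product
print(grades.columns)     # the graded measures / software qualities
print(grades.describe())  # summary statistics for any quantified responses
```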
The primary intent of this workshop is to provide practical training in using Statistics Canada geography files with the leading industry standard software: Environmental Systems Research Institute, Inc. (ESRI) ArcGIS 9x. Participants will be introduced to the key features of ArcGIS 9x, as well as to geographic concepts and principles essential to understanding and working with geographic information systems (GIS) software. The workshop will review a range of geography and attribute files available from Statistics Canada, as well as some best practices for accessing this information. A brief overview of complementary data sets available from federal and provincial agencies will be provided. There will also be an opportunity to complete a practical exercise using ArcGIS 9x. (Note: Data associated with this presentation is available on the DLI FTP site under folder 1873-221.)
The USDA Long-Term Agroecosystem Research (LTAR) Network was established to develop national strategies for sustainable intensification of agricultural production. As part of the Agricultural Research Service, the LTAR Network incorporates numerous geographies consisting of experimental areas and locations where data are being gathered. Starting in early 2019, two working groups of the LTAR Network (Remote Sensing and GIS, and Data Management) set a major goal to jointly develop a geodatabase of LTAR Standard GIS Data Layers. The purpose of the geodatabase was to enhance the Network's ability to utilize coordinated, harmonized datasets and to reduce the redundancy and potential errors associated with multiple copies of similar datasets. Project organizers met at least twice with each of the 18 LTAR sites from September 2019 through December 2020, compiling and editing a set of detailed geospatial data layers, comprising a geodatabase, that describe essential data collection areas within the LTAR Network. The LTAR Standard GIS Data Layers geodatabase consists of geospatial data that represent locations and areas associated with the LTAR Network as of late 2020, including LTAR site locations, addresses, experimental plots, fields and watersheds, eddy flux towers, and phenocams. Six data layers in the geodatabase are available to the public. The geodatabase was created in 2019-2020 by the LTAR Network as a national collaborative effort among working groups and LTAR sites. Its creation began with initial requests to LTAR site leads and data managers for geospatial data, followed by meetings with each LTAR site to review the initial draft. Edits were documented, and the final draft was again reviewed and certified by LTAR site leads or their delegates. Revisions to this geodatabase will occur biennially, with the next revision scheduled to be published in 2023.
Resources in this dataset:
Resource Title: LTAR Standard GIS Data Layers, 2020 version, File Geodatabase. File Name: LTAR_Standard_GIS_Layers_v2020.zip. Resource Description: This file geodatabase consists of authoritative GIS data layers of the Long-Term Agroecosystem Research Network. Data layers include: LTAR site locations, LTAR site points of contact and street addresses, LTAR experimental boundaries, LTAR site "legacy region" boundaries, LTAR eddy flux tower locations, and LTAR phenocam locations. Resource Software Recommended: ArcGIS, url: esri.com
Resource Title: LTAR Standard GIS Data Layers, 2020 version, GeoJSON files. File Name: LTAR_Standard_GIS_Layers_v2020_GeoJSON_ADC.zip. Resource Description: This collection of GeoJSON files includes geospatial data that represent locations and areas associated with the LTAR Network as of late 2020, describing LTAR site locations, addresses, experimental plots, fields and watersheds, eddy flux towers, and phenocams. There are six data layers available to the public. This dataset was created in 2019-2020 by the LTAR Network as a national collaborative effort among working groups and LTAR sites. Resource Software Recommended: QGIS, url: https://qgis.org/en/site/
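Since QGIS is recommended for the GeoJSON distribution, the public layers can also be inspected programmatically with GeoPandas. The sketch below is illustrative only: the layer file name inside LTAR_Standard_GIS_Layers_v2020_GeoJSON_ADC.zip is an assumption.

```python
# Minimal sketch: load one of the LTAR GeoJSON layers with GeoPandas.
# "ltar_site_locations.geojson" is a hypothetical file name; check the
# actual contents of the extracted zip archive.
import geopandas as gpd

sites = gpd.read_file("ltar_site_locations.geojson")

print(sites.crs)      # coordinate reference system of the layer
print(sites.head())   # attributes and point geometry for each LTAR site

sites.plot(markersize=20)  # quick map of LTAR site locations
```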
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Update: We updated the data set in March 2022 by adding newly published papers and by providing more insights on how we analyzed them. Details can be found in the file "SEnti-SMS.xlsx".
----------
Update: The updated version (-v2) contains the results of one more snowballing iteration and extracted information on the accuracy of the methods used.
----------
In 2020, we conducted a systematic literature review to explore the development and application of sentiment analysis tools in software engineering.
Information on the execution of the SLR, its scope, the search string, etc. is presented in the paper linked below.
In this blog I’ll share the workflow and tools used in the GIS part of this analysis. To understand where crashes are occurring, first the dataset had to be mapped. The software of choice in this instance was ArcGIS, though most of the analysis could have been done using QGIS. Heat maps are all the rage, and if you want to make simple heat maps for free and you appreciate good documentation, I recommend the QGIS Heatmap plugin. There are also some great tools in the free open-source program GeoDa for spatial statistics.
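As a rough, tool-agnostic illustration of the same idea (not the ArcGIS/QGIS workflow used in the post), a kernel-density heat map of crash points can be sketched in Python; the file and column names below are assumptions.

```python
# Sketch of a kernel-density "heat map" over crash points.
# "crashes.csv" with "longitude"/"latitude" columns is a hypothetical input.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

crashes = pd.read_csv("crashes.csv")
xy = np.vstack([crashes["longitude"], crashes["latitude"]])

density = gaussian_kde(xy)(xy)  # estimated density at each crash location
plt.scatter(crashes["longitude"], crashes["latitude"], c=density, s=5, cmap="hot")
plt.colorbar(label="relative crash density")
plt.show()
```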
https://www.gnu.org/copyleft/gpl.html
This file presents the catalog of metrics and the topic-independent classification that was performed.
This dataset contains the final output from the Step 2 analysis.
Data
Step One of this analysis primarily relies on travel data acquired through Replica. This dataset is produced by an activity-based model, calibrated locally with ground truth data from a diverse set of third-party sources such as mobile location data, consumer marketing data, geographic and land use data, credit card transaction data, built environment, and economic activity. Trip data provided by the Replica platform includes information such as origin, destination, land use, trip purpose, and socio-demographics of the trip taker. Step Two of this analysis relies on a variety of parcel-level data from member jurisdictions, MWCOG, and DOE charging station data. This data includes existing charging stations, Equity Emphasis Areas (EEAs), Alternative Fuel Corridors (AFCs), transit stations, multifamily housing, the EV Charging Justice40 Map, and MWCOG Regional Activity Centers (RACs).
Step One
Step One relies primarily on travel data at the census block group level. Census block groups are scored based on characteristics of the trips that end within that group. Trip characteristics considered in Step One include:
- Trip purpose
- Trip length
- Dwelling time (30 to 60 minutes, 60 to 120 minutes, and greater than 120 minutes)
- Income of the trip taker
- Trips originating from multifamily housing
- Trips originating from equity emphasis areas
Three different trip characteristic scenarios were completed in Step One, outlined below.
Prioritizing DCFCs with High Utilization: This scenario weights trips taken by people with higher incomes more heavily. Because people with higher incomes are also more likely to be homeowners with access to home charging, this scenario would focus on building out DCFCs to provide opportunities for public charging that would help serve a larger number of vehicles more quickly. Scoring adjustments in Step Two provide a check on recommending an overbuilding of DCFCs in wealthier areas that already have ample access to public charging.
Prioritizing Level 2 Chargers with Equity Focus: DCFCs require higher upfront costs for equipment, installation, and potential utility upgrades that may be needed to accommodate higher-powered charging infrastructure. The cost of the electricity at the point of purchase is also higher, which can cause some service providers to cite economic infeasibility when deciding whether to site DCFCs in communities with fewer EVs and lower utilization. Most Level 2 charging infrastructure will not require grid or electrical service upgrades, and the projects will have lower costs across other factors (e.g., equipment costs, electricity pricing for customers). Prioritizing Level 2 charging will mean there are fewer barriers to entry for a jurisdiction or project team looking to build out their charging network in EEAs.
Prioritizing DCFCs for Multi-Family Housing: For individuals living in multi-family housing who don’t have a dedicated parking spot or reliable access to at-home charging, opportunity charging with DCFCs and workplace charging are two available options. Multi-family residents are more likely to use DCFC stations. Establishing DCFC charging hubs near higher concentrations of multi-family housing developments could provide an attractive and highly utilized alternative to on-site charging for buildings where it is challenging to install and maintain charging infrastructure.
Each CBG is scored based on the percentage of regional trips it receives that meet the criteria.
The final Step 1 analysis assigns each CBG in the study area a score of 1 to 6. The higher the CBG score, the more traffic the CBG experiences. For example, a CBG with a score of 1 has a low number of trips starting or ending there, whereas a CBG with a score of 6 has a very high number of trips starting or ending there.
Step Two
Once the census block groups have been scored, individual parcels within high-scoring census block groups are evaluated based on characteristics that make each parcel more or less desirable for charging infrastructure. Those characteristics, called proximity score modifiers, include a parcel’s distance to existing charging stations, distance to multi-family housing, distance to highway on- and off-ramps, proximity to environmental justice communities, and distance to park-and-ride locations. These proximity score modifiers have been selected for the following reasons:
- Distance to existing charging stations. Locations that are close to existing public chargers have already begun to be built out and may have less demand.
- Distance to MFH. Residents of MFH typically lack access to home charging and will rely on public infrastructure to meet charging needs.
- Distance to highway on-ramp or off-ramp. Sites located near highway ramps are likely to attract EV drivers who are making longer trips, typically needing DCFC.
- Location in or near an EEA. Ensuring the benefits of EVs are spread equitably in the region is a priority. Providing access to charging infrastructure in or near EEAs can help remove barriers to EV adoption.
- Distance to park-and-ride locations. The distance from potential sites to the nearest public transportation stop with park-and-ride lots is calculated to determine which sites will be most useful in enabling more sustainable first and last miles of multimodal trips. Charging locations near transit stops could benefit EV ride-sharing companies or commuters that use a combination of personal vehicles and mass transit.
Each proximity score modifier can increase or decrease a parcel’s overall score. If a parcel is not located near any proximity score modifier, its final score will not be influenced by these characteristics. Proximity score modifiers and their associated point values are as follows:
- Within ¼ mile of a park-and-ride location: parcel score increases by 1
- Within ¼ mile of MFH: parcel score increases by 1
- Within ¼ mile of an EEA: parcel score increases by 1
- Within ½ mile of existing Level 2 charging stations: parcel score decreases by up to 2 points
- Within ½ mile of existing DCFC stations: parcel score decreases by up to 4 points
These factors are assessed with GIS software and compiled to modify the parcel’s charging demand score. A parcel’s final score in Step 2 is determined by the following formula:
Parcel Score = [Step 1 Census Block Group Score] + [Proximity Score Modifier Total]
Parcels that are better suited for charging will score higher than parcels less suitable for charging. Each high-scoring parcel should be further reviewed for factors that determine its suitability for public EV charging stations, such as parcel size, parking availability, facility access, potential site host partnerships, and electric utility service capacity. Local knowledge is key to understanding the results.
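As a rough sketch of how the Step 2 formula combines a CBG score with the proximity modifiers, the function below applies the thresholds listed above. Field names are illustrative, and the full penalty is applied wherever the source says "up to"; the actual analysis computes these distances in GIS software.

```python
# Illustrative implementation of: Parcel Score = CBG score + proximity modifiers.
# Distances are in miles; the full "up to" penalties are applied for simplicity.
def parcel_score(cbg_score, dist_park_ride, dist_mfh, dist_eea,
                 dist_level2, dist_dcfc):
    modifier = 0
    if dist_park_ride <= 0.25:
        modifier += 1   # within 1/4 mile of a park-and-ride location
    if dist_mfh <= 0.25:
        modifier += 1   # within 1/4 mile of multifamily housing
    if dist_eea <= 0.25:
        modifier += 1   # within 1/4 mile of an Equity Emphasis Area
    if dist_level2 <= 0.5:
        modifier -= 2   # within 1/2 mile of existing Level 2 stations
    if dist_dcfc <= 0.5:
        modifier -= 4   # within 1/2 mile of existing DCFC stations
    return cbg_score + modifier

# Example: CBG scored 5; parcel near MFH and an EEA, but also near a DCFC.
print(parcel_score(5, 1.0, 0.2, 0.1, 1.0, 0.4))  # 5 + 1 + 1 - 4 = 3
```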
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Welcome to the Google Places Comprehensive Business Dataset! This dataset has been meticulously scraped from Google Maps and presents extensive information about businesses across several countries. Each entry in the dataset provides detailed insights into business operations, location specifics, customer interactions, and much more, making it an invaluable resource for data analysts and scientists looking to explore business trends, geographic data analysis, or consumer behaviour patterns.
This dataset is ideal for a variety of analytical projects, including: - Market Analysis: Understand business distribution and popularity across different regions. - Customer Sentiment Analysis: Explore relationships between customer ratings and business characteristics. - Temporal Trend Analysis: Analyze patterns of business activity throughout the week. - Geospatial Analysis: Integrate with mapping software to visualise business distribution or cluster businesses based on location.
The dataset contains 46 columns, providing a thorough profile for each listed business. Key columns include:
- business_id: A unique Google Places identifier for each business, ensuring distinct entries.
- phone_number: The contact number associated with the business. It provides a direct means of communication.
- name: The official name of the business as listed on Google Maps.
- full_address: The complete postal address of the business, including locality and geographic details.
- latitude: The geographic latitude coordinate of the business location, useful for mapping and spatial analysis.
- longitude: The geographic longitude coordinate of the business location.
- review_count: The total number of reviews the business has received on Google Maps.
- rating: The average user rating out of 5 for the business, reflecting customer satisfaction.
- timezone: The world timezone the business is located in, important for temporal analysis.
- website: The official website URL of the business, providing further information and contact options.
- category: The category or type of service the business provides, such as restaurant, museum, etc.
- claim_status: Indicates whether the business listing has been claimed by the owner on Google Maps.
- plus_code: A sho...
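A minimal exploration sketch is shown below; it assumes the listings are available as a single CSV file (the file name is hypothetical), while the column names follow the schema above.

```python
# Minimal sketch: listings and average rating per business category.
# "google_places_businesses.csv" is a hypothetical file name.
import pandas as pd

places = pd.read_csv("google_places_businesses.csv")

by_category = (
    places.groupby("category")
    .agg(listings=("business_id", "count"), avg_rating=("rating", "mean"))
    .sort_values("listings", ascending=False)
)
print(by_category.head(10))
```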
https://dataintelo.com/privacy-and-policy
The global curriculum mapping software market size was valued at USD 1.2 billion in 2023 and is expected to reach an estimated USD 3.8 billion by 2032, growing at a CAGR of 13.2% during the forecast period from 2024 to 2032. This significant growth can be attributed to the increasing emphasis on personalized learning experiences, the necessity for compliance with educational standards, and the growing adoption of digital tools in the education sector.
One of the primary growth factors for the curriculum mapping software market is the rising demand for personalized and adaptive learning solutions. Educational institutions are increasingly leveraging technology to design curricula that cater to individual student needs. This shift not only enhances the learning experience but also improves student performance and engagement. Additionally, the ability of curriculum mapping software to help educators identify gaps in the curriculum and align teaching methods with learning objectives contributes significantly to its adoption.
Another driving force behind the market's growth is the increased focus on compliance with educational standards and accreditation requirements. Curriculum mapping software allows institutions to systematically design, implement, and review curricula to ensure they meet the necessary standards and regulations. This capability is particularly crucial for higher education institutions seeking accreditation or re-accreditation, as it provides a clear, organized, and accessible record of curriculum alignment and effectiveness.
The growing integration of data analytics and artificial intelligence in curriculum mapping software also plays a crucial role in market expansion. These technologies enable the software to offer advanced analytics, predictive modeling, and insights, which help educators make informed decisions about curriculum design and instruction. The ability to analyze student performance data and predict learning outcomes can facilitate proactive interventions, thus improving the overall educational experience.
Regionally, North America is expected to dominate the market due to the early adoption of advanced educational technologies, the presence of prominent market players, and substantial government funding for educational innovations. However, the Asia Pacific region is anticipated to exhibit the highest growth rate during the forecast period. Countries such as China, India, and Japan are investing heavily in educational technology to enhance their education systems, driven by the increasing demand for skilled professionals and the need for modernized educational infrastructure.
In addition to curriculum mapping software, educational institutions are increasingly turning to Gradebook Software to streamline their assessment and grading processes. Gradebook Software provides educators with a comprehensive platform to manage student grades, track academic progress, and generate detailed reports. This software not only simplifies the grading process but also enhances transparency and communication between teachers, students, and parents. By integrating Gradebook Software with curriculum mapping tools, institutions can create a cohesive educational ecosystem that supports personalized learning and data-driven decision-making. The growing demand for efficient and user-friendly grading solutions is driving the adoption of Gradebook Software across various educational settings.
The curriculum mapping software market can be segmented based on components into software and services. The software segment accounts for the largest share of the market, driven by the increasing adoption of digital platforms for curriculum design and management. Educational institutions are recognizing the benefits of using specialized software to streamline and enhance the curriculum development process. This software facilitates the creation, organization, and assessment of curricula, providing educators with tools to align instructional practices with learning objectives effectively.
Within the software segment, the market is further divided into cloud-based and on-premises solutions. Cloud-based curriculum mapping software is gaining significant traction due to its scalability, flexibility, and cost-effectiveness. These solutions allow institutions to access the software from anywhere, at any time, and often come with automatic updates.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Replication package of the study "Grey Literature in Software Engineering: A Critical Review" published in Information and Software Technology (IST).
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains information about metro stations in Riyadh, Saudi Arabia. It includes details such as station names, types, ratings, and geographic coordinates. The dataset is valuable for transportation analysis, urban planning, and navigation applications.
The dataset consists of the following columns:
| Column Name | Data Type | Description |
|---|---|---|
| Name | string | Name of the metro station |
| Type_of_Utility | string | Type of station (Metro Station) |
| Number_of_Ratings | float | Total number of reviews received (some values may be missing) |
| Rating | float | Average rating score (scale: 0-5, some values may be missing) |
| Longitude | float | Geographical longitude coordinate |
| Latitude | float | Geographical latitude coordinate |
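For mapping or spatial joins, the table can be turned into point geometries; the sketch below assumes a CSV export with the columns listed above (the file name is hypothetical).

```python
# Minimal sketch: build point geometries from the Longitude/Latitude columns.
# "riyadh_metro_stations.csv" is a hypothetical file name.
import pandas as pd
import geopandas as gpd

stations = pd.read_csv("riyadh_metro_stations.csv")

gdf = gpd.GeoDataFrame(
    stations,
    geometry=gpd.points_from_xy(stations["Longitude"], stations["Latitude"]),
    crs="EPSG:4326",  # WGS 84 latitude/longitude
)
# Ratings may be missing, as noted above; NaNs are kept rather than dropped.
print(gdf[["Name", "Rating"]].sort_values("Rating", ascending=False).head())
```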
For questions or collaboration, reach out via Kaggle comments or email.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract Progress in software engineering requires (1) more empirical studies of quality, (2) increased focus on synthesizing evidence, (3) more theories to be built and tested, and (4) acknowledgment that the validity of an experiment is directly related to the level of confidence in the process of experimental investigation. This paper presents the results of a qualitative and quantitative classification of the threats to the validity of software engineering experiments, comprising a total of 92 articles published in the period 2001-2015 and dealing with software testing of Web applications. Our results show that 29.4% of the analyzed articles do not mention any threats to validity, 44.2% do so briefly, and 14% do so judiciously, which raises the question: do these studies have scientific value?
https://www.statsndata.org/how-to-order
The Mapping Software market has evolved significantly over the past decade, emerging as a vital tool across various industries such as transportation, real estate, urban planning, and logistics. As organizations strive for enhanced efficiency and competitive advantage, mapping software provides comprehensive solutio
https://www.statsndata.org/how-to-order
The Indoor Mapping Software market is experiencing significant growth as organizations increasingly recognize the importance of spatial data in enhancing operational efficiency and user experiences. This software provides a sophisticated solution for visualizing and navigating indoor spaces, whether in large facilit
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplemental material describing the paper filtering performed in each phase of the systematic mapping study.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context: Software defect prediction is a trending research topic, and a wide variety of the published papers focus on the coding phase or later. A limited number of papers, however, include the earlier phases of the software development lifecycle (SDLC). Objective: The goal of this study is to obtain a general view of the characteristics and usefulness of Early Software Defect Prediction (ESDP) models reported in the scientific literature. Method: A systematic mapping and systematic literature review study was conducted. We searched for studies reported between 2000 and 2016, reviewed 52 studies, and analyzed the trends and demographics, maturity of the state of research, in-depth characteristics, and success and benefits of ESDP models. Results: We found that categorical models relying on requirements- and design-phase metrics, and a few continuous models that include metrics from the requirements phase, are very successful. We also found that most studies reported qualitative benefits of using ESDP models. Conclusion: We have highlighted the most preferred prediction methods, metrics, datasets, and performance evaluation methods, as well as the addressed SDLC phases. We expect the results will be useful for software teams, by guiding them to use early predictors effectively in practice, and for researchers, by directing their future efforts.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains information about malls and retail stores in Riyadh, Saudi Arabia. It includes key details such as names, categories, number of ratings, average ratings, and geographical coordinates. The dataset is useful for businesses, researchers, and developers working on market analysis, geospatial applications, and retail business strategies.
The dataset consists of the following columns:
| Column Name | Data Type | Description |
|---|---|---|
| Name | string | Name of the mall or retail store |
| Type_of_Utility | string | Category of the place (e.g., shopping mall, clothing store) |
| Number_of_Ratings | integer | Total number of reviews received |
| Rating | float | Average rating score (scale: 0-5) |
| Longitude | float | Geographical longitude coordinate |
| Latitude | float | Geographical latitude coordinate |
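One way to combine this table with the metro-station dataset described earlier is a nearest-station distance per mall; the sketch below uses a haversine distance and hypothetical CSV file names, with columns matching the two schemas.

```python
# Minimal sketch: great-circle distance from each mall to the nearest metro station.
# Both CSV file names are hypothetical.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between (lat1, lon1) and (lat2, lon2)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

malls = pd.read_csv("riyadh_malls.csv")
stations = pd.read_csv("riyadh_metro_stations.csv")

# Brute-force nearest-station search; fine for tables of this size.
malls["nearest_station_km"] = [
    haversine_km(m.Latitude, m.Longitude,
                 stations["Latitude"], stations["Longitude"]).min()
    for m in malls.itertuples()
]
print(malls[["Name", "nearest_station_km"]].head())
```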
For questions or collaboration, reach out via Kaggle comments or email.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Contains spreadsheets used to report findings in the "Application of Collaborative Learning Paradigms within Software Engineering Education: A Systematic Mapping Study" paper.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains all the artifacts of the research "Bots in Software Development: A Systematic Literature Review".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract
Context: The use of theories in Software Engineering research is not as common or explicit as in other fields, with most studies focusing on technical aspects. However, establishing a more robust theoretical foundation could significantly contribute to the maturation of Software Engineering as a science.
Objective: Therefore, this study investigates the use of theory in software engineering by applying the snowballing technique to systematic literature reviews indexed by the main online databases over the last 16 years. It also analyzes the extent of theory use and what, how, and where these theories are used.
Method: We conducted a systematic mapping study to classify evidence on theory definitions, papers’ quality, research topics, methods, types, theory types, theory roles, and publication venues.
Results: Our results showed that the term “theory” varied among the literature due to inconsistent terminology, thus necessitating a comprehensive approach for accurate identification. Although many theories are cited, only a tiny percentage see repeated application across studies, with limited testing for relevance to practical software engineering contexts.
Conclusion: Despite the increase in proposed theories, software engineering requires further attention to mature, as most papers primarily use theory to justify or motivate experimental research questions. Furthermore, although there is a diversity of research topics and an adaptation of external theories, only 16% of studies explicitly operationalize theory, highlighting the need for more intentional theoretical integration and developing SE-specific frameworks.