For any given market, we identify the major and regional shopping centres and, every month, update our dataset of the tenants in those centres and tag the appropriate categories (e.g. Clothing Retailer, Restaurant, Bank).
Major firms like Stockland REIT and GapMaps leverage this data to inform:
We have data available off-the-shelf for a number of major markets, and can create new market datasets in as little as 3-4 weeks.
Why work with us?
- Clean, comprehensive and credible datasets
- 61% cost savings compared to traditional data sourcing and processing methods
- 89% time savings compared to traditional data sourcing and processing methods
- Datasets customizable to your needs
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
Open Prices

What is Open Prices? Open Prices is a project to collect and share prices of products around the world. It's a publicly available dataset that can be used for research, analysis, and more. Open Prices is developed and maintained by Open Food Facts.

There are currently few companies that own large databases of product prices at the barcode level. These prices are not freely available, but sold at a high price to private actors, researchers and other organizations that can afford them. Open Prices aims to democratize access to price data by collecting and sharing product prices under an open licence. The data is available under the Open Database License (ODbL), which means that it can be used for any purpose, as long as you credit Open Prices and share any modifications you make to the dataset. Images submitted as proof are licensed under the Creative Commons Attribution-ShareAlike 4.0 International licence.

Dataset description

This dataset contains, in Parquet format, all price information in the Open Prices database. The dataset is updated daily. The most important columns are:

- id: the ID of the price in the database
- product_code: the barcode of the product; null if the product is a "raw" product (fruit, vegetable, etc.)
- category_tag: the category of the product, only present for "raw" products. Category IDs follow the Open Food Facts category taxonomy.
- labels_tags: the labels of the product, only present for "raw" products. Label IDs follow the Open Food Facts label taxonomy.
- origins_tags: the origins of the product, only present for "raw" products. Origin IDs follow the Open Food Facts origin taxonomy.
- price: the price of the product, with the discount if any
- price_is_discounted: whether the price is discounted or not
- price_without_discount: the price of the product without discount; null if the price is not discounted
- price_per: the unit for which the price is given (e.g. "KILOGRAM", "UNIT")
- currency: the currency of the price
- location_osm_id: the OpenStreetMap ID of the location where the price was recorded. OpenStreetMap is used to uniquely identify the store where the price was recorded.
- location_osm_type: the type of the OpenStreetMap location (e.g. "NODE", "WAY")
- location_id: the ID of the location in the Open Prices database
- date: the date when the price was recorded
- proof_id: the ID of the proof of the price in the Open Prices database
- owner: a hash of the owner of the price, for privacy
- created: the date when the price was created in the Open Prices database
- updated: the date when the price was last updated in the Open Prices database
- proof_file_path: the path to the proof file in the Open Prices database
- proof_type: the type of the proof; possible values are RECEIPT, PRICE_TAG, GDPR_REQUEST, SHOP_IMPORT
- proof_date: the date of the proof
- proof_currency: the currency of the proof, which should be the same as the price currency
- proof_created: the datetime when the proof was created in the Open Prices database
- proof_updated: the datetime when the proof was last updated in the Open Prices database
- location_osm_display_name: the display name of the OpenStreetMap location
- location_osm_address_city: the city of the OpenStreetMap location
- location_osm_address_postcode: the postcode of the OpenStreetMap location

How can I download images? All images can be accessed under the https://prices.openfoodfacts.org/img/ base URL. Concatenate the proof_file_path column to this base URL to get the full URL of the image (e.g. https://prices.openfoodfacts.org/img/0010/lqGHf3ZcVR.webp).

Can I contribute to Open Prices? Of course! You can contribute by adding prices, through the Open Prices website or the Open Food Facts mobile app. To participate in the technical development, check the Open Prices GitHub repository.
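The image-URL construction described above can be sketched in a few lines (the helper function name is ours, not part of the dataset):

```python
BASE_IMG_URL = "https://prices.openfoodfacts.org/img/"

def proof_image_url(proof_file_path):
    """Build the full proof-image URL by concatenating the
    proof_file_path column value to the Open Prices base URL."""
    return BASE_IMG_URL + proof_file_path.lstrip("/")

# Example path from the dataset description:
print(proof_image_url("0010/lqGHf3ZcVR.webp"))
# https://prices.openfoodfacts.org/img/0010/lqGHf3ZcVR.webp
```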
This dataset was created by Chandra Shekhar
Released under Other (specified in description)
SafeGraph Places provides baseline point-of-interest (POI) information for every record in the SafeGraph product suite via the Places schema, and polygon information, when applicable, via the Geometry schema. The current scope of a place is defined as any location humans can visit, with the exception of single-family homes. This definition encompasses a diverse set of points of interest, ranging from restaurants, grocery stores, and malls to parks, hospitals, museums, offices, and industrial parks. Premium sets of Places include apartment buildings, parking lots, and point POIs (such as ATMs or transit stations).
SafeGraph Places is a point of interest (POI) data offering with varying coverage depending on the country. Note that address conventions and formatting vary across countries. SafeGraph has coalesced these fields into the Places schema.
SafeGraph provides clean and accurate geospatial datasets on 51M+ physical places/points of interest (POI) globally. Hundreds of industry leaders like Mapbox, Verizon, Clear Channel, and Esri already rely on SafeGraph POI data to unlock business insights and drive innovation.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This feature layer provides access to OpenStreetMap (OSM) shops data for South America, which is updated every 5 minutes with the latest edits. This hosted feature layer view references a hosted feature layer of OSM point (node) data in ArcGIS Online that is updated with minutely diffs from the OSM planet file. This feature layer view includes shop features defined as a query against the hosted feature layer (i.e. shop is not blank).

In OSM, a shop is a place selling retail products or services, such as a supermarket, bakery, or florist. These features are identified with a shop tag. There are thousands of different tag values for shop used in the OSM database. In this feature layer, unique symbols are used for several of the most popular shop types, while lesser-used types are grouped in an "other" category.

Zoom in to large scales (e.g. Neighborhood level, or 1:80k scale) to see the shop features display. You can click on a feature to get the name of the shop. The name of the shop will display by default at very large scales (e.g. Building level, or 1:2k scale). Labels can be turned off in your map if you prefer.

Create New Layer

If you would like to create a more focused version of this shop layer displaying just one or two shop types, you can do that easily! Just add the layer to a map, copy the layer in the content window, add a filter to the new layer (e.g. shop is jewelry), rename the layer as appropriate, and save the layer. You can also change the layer symbols or popup if you like. Esri may publish a few such layers (e.g. supermarket or convenience shop) that are ready to use, but not for every type of shop.

Important Note: if you do create a new layer, it should be provided under the same Terms of Use and include the same Credits as this layer. You can copy and paste the Terms of Use and Credits info below into the new Item page as needed.
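For readers who want to pull the same OSM shop data directly rather than through this feature layer, a minimal sketch that builds an Overpass QL query is shown below; the helper function and the bounding box are illustrative assumptions, not part of this layer:

```python
def build_shop_query(bbox, shop_type=None):
    """Build an Overpass QL query for OSM shop nodes in a bounding box.

    bbox is (south, west, north, east). If shop_type is None, match any
    node carrying a shop tag (the "shop is not blank" filter this layer
    uses); otherwise match a specific value, e.g. shop=jewelry.
    """
    shop_filter = '["shop"]' if shop_type is None else f'["shop"="{shop_type}"]'
    s, w, n, e = bbox
    return f"[out:json];node{shop_filter}({s},{w},{n},{e});out;"

# Any shop in a small box around central Buenos Aires (illustrative):
query = build_shop_query((-34.62, -58.45, -34.58, -58.35))
# Only jewelry shops, mirroring the "shop is jewelry" filter example:
jewelry = build_shop_query((-34.62, -58.45, -34.58, -58.35), "jewelry")
# The query string can then be POSTed to a public Overpass endpoint
# such as https://overpass-api.de/api/interpreter (network call omitted).
```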
http://apps.ecmwf.int/datasets/licences/copernicus
ERA5 provides hourly estimates of a large number of atmospheric, land and oceanic climate variables. The data cover the Earth on a 31 km grid and resolve the atmosphere using 137 levels from the surface up to a height of 80 km. ERA5 includes information about uncertainties for all variables at reduced spatial and temporal resolutions.
Number of nights spent by country/world region of destination
A point of interest (POI) is defined as a physical entity (such as a business) at a geographic location (a point) which may be of interest.
We strive to provide the most accurate, complete and up-to-date point-of-interest datasets for all countries of the world. The South Africa POI dataset is one of our worldwide POI datasets, with over 98% coverage.
This is our process flow:
Our machine learning systems continuously crawl for new POI data
Our geoparsing and geocoding systems calculate their geographic locations
Our categorization systems clean up and standardize the datasets
Our data pipeline API publishes the datasets on our data store
POI Data is in a constant flux - especially so during times of drastic change such as the Covid-19 pandemic.
On an average day, every minute worldwide over 200 businesses move, over 600 new businesses open their doors, and over 400 businesses cease to exist.
In today's interconnected world, of the approximately 200 million POIs worldwide, over 94% have a public online presence. As a new POI comes into existence, its information appears very quickly in location-based social networks (LBSNs), other social media, pictures, websites, blogs, and press releases. Soon after that, our state-of-the-art POI information retrieval system picks it up.
We offer our customers perpetual data licenses for any dataset representing this ever-changing information, downloaded at any given point in time. This makes our company's licensing model unique in the current Data-as-a-Service (DaaS) industry. Our customers don't have to delete our data after the expiration of a certain "Term", regardless of whether the data was purchased as a one-time snapshot or via a recurring payment plan on our data update pipeline.
The main differentiators between us vs the competition are our flexible licensing terms and our data freshness.
The core attribute coverage is as follows:
POI Field | Data Coverage (%) |
poi_name | 100 |
brand | 8 |
poi_tel | 67 |
formatted_address | 100 |
main_category | 98 |
latitude | 100 |
longitude | 100 |
neighborhood | 1 |
source_url | 43 |
email | 8 |
opening_hours | 47 |
The data may be visualized on a map at https://store.poidata.xyz/za and a data sample may be downloaded at https://store.poidata.xyz/datafiles/za_sample.csv
https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/licence-to-use-copernicus-products/licence-to-use-copernicus-products_b4b9451f54cffa16ecef5c912c9cebd6979925a956e3fa677976e0cf198c2c18.pdf
ERA5-Land is a reanalysis dataset providing a consistent view of the evolution of land variables over several decades at an enhanced resolution compared to ERA5. ERA5-Land has been produced by replaying the land component of the ECMWF ERA5 climate reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. Reanalysis produces data that goes several decades back in time, providing an accurate description of the climate of the past.
ERA5-Land uses ERA5 atmospheric variables, such as air temperature and air humidity, as input to control the simulated land fields. This is called the atmospheric forcing. Without the constraint of the atmospheric forcing, the model-based estimates can rapidly deviate from reality. Therefore, while observations are not directly used in the production of ERA5-Land, they have an indirect influence through the atmospheric forcing used to run the simulation. In addition, the input air temperature, air humidity and pressure used to run ERA5-Land are corrected to account for the altitude difference between the grid of the forcing and the higher-resolution grid of ERA5-Land. This correction is called the 'lapse rate correction'.
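As an illustration of the lapse-rate idea only: the sketch below uses a constant standard-atmosphere lapse rate of 6.5 K/km, whereas the operational ERA5-Land correction is more elaborate, and the function here is hypothetical.

```python
def lapse_rate_correction(t_forcing_k, z_forcing_m, z_land_m,
                          lapse_rate_k_per_m=0.0065):
    """Adjust an air temperature from the coarse forcing grid to the
    higher-resolution ERA5-Land grid using a constant lapse rate.

    Temperature decreases with height, so an ERA5-Land grid cell that
    sits higher than the forcing grid cell receives a colder input.
    """
    return t_forcing_k - lapse_rate_k_per_m * (z_land_m - z_forcing_m)

# A forcing cell at 500 m reports 288.0 K; the ERA5-Land cell is at 1500 m:
t = lapse_rate_correction(288.0, 500.0, 1500.0)
print(round(t, 2))  # 281.5
```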
The ERA5-Land dataset, like any other simulation, provides estimates that have some degree of uncertainty. Numerical models can only provide a more or less accurate representation of the real physical processes governing the different components of the Earth system. In general, the uncertainty of model estimates grows as we go back in time, because the number of observations available to create a good-quality atmospheric forcing is lower. ERA5-Land parameter fields can currently be used in combination with the uncertainty of the equivalent ERA5 fields.
The temporal and spatial resolutions of ERA5-Land make this dataset very useful for all kinds of land-surface applications, such as flood or drought forecasting. The temporal and spatial resolution of this dataset, the period covered in time, and the fixed grid used for data distribution at any period enable decision makers, businesses and individuals to access and use more accurate information on land states.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
From 2016 to 2018, we surveyed the world’s largest natural history museum collections to begin mapping this globally distributed scientific infrastructure. The resulting dataset includes 73 institutions across the globe. It has:
Basic institution data for the 73 contributing institutions, including estimated total collection sizes, geographic locations (to the city) and latitude/longitude, and Research Organization Registry (ROR) identifiers where available.
Resourcing information, covering the numbers of research, collections and volunteer staff in each institution.
Indicators of the presence and size of collections within each institution broken down into a grid of 19 collection disciplines and 16 geographic regions.
Measures of the depth and breadth of individual researcher experience across the same disciplines and geographic regions.
This dataset contains the data (raw and processed) collected for the survey, and specifications for the schema used to store the data. It includes:
A diagram of the MySQL database schema.
A SQL dump of the MySQL database schema, excluding the data.
A SQL dump of the MySQL database schema with all data. This may be imported into an instance of MySQL Server to create a complete reconstruction of the database.
Raw data from each database table in CSV format.
A set of more human-readable views of the data in CSV format. These correspond to the database tables, but foreign keys are replaced with values from the linked tables to make the data easier to read and analyse.
A text file containing the definitions of the size categories used in the collection_unit table.
The global collections data may also be accessed at https://rebrand.ly/global-collections. This is a preliminary dashboard, constructed and published using Microsoft Power BI, that enables the exploration of the data through a set of visualisations and filters. The dashboard consists of three pages:
Institutional profile: Enables the selection of a specific institution and provides summary information on the institution and its location, staffing, total collection size, collection breakdown and researcher expertise.
Overall heatmap: Supports an interactive exploration of the global picture, including a heatmap of collection distribution across the discipline and geographic categories, and visualisations that demonstrate the relative breadth of collections across institutions and correlations between collection size and breadth. Various filters allow the focus to be refined to specific regions and collection sizes.
Browse: Provides some alternative methods of filtering and visualising the global dataset to look at patterns in the distribution and size of different types of collections across the global view.
https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/licence-to-use-copernicus-products/licence-to-use-copernicus-products_b4b9451f54cffa16ecef5c912c9cebd6979925a956e3fa677976e0cf198c2c18.pdf
ERA5 is the fifth generation ECMWF reanalysis for the global climate and weather for the past 8 decades. Data is available from 1940 onwards. ERA5 replaces the ERA-Interim reanalysis.

Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. This principle, called data assimilation, is based on the method used by numerical weather prediction centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way, but at reduced resolution to allow for the provision of a dataset spanning back several decades. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations, and when going further back in time, to allow for the ingestion of improved versions of the original observations, which all benefit the quality of the reanalysis product.

ERA5 provides hourly estimates for a large number of atmospheric, ocean-wave and land-surface quantities. An uncertainty estimate is sampled by an underlying 10-member ensemble at three-hourly intervals. Ensemble mean and spread have been pre-computed for convenience. Such uncertainty estimates are closely related to the information content of the available observing system, which has evolved considerably over time. They also indicate flow-dependent sensitive areas. To facilitate many climate applications, monthly-mean averages have been pre-calculated too, though monthly means are not available for the ensemble mean and spread.

ERA5 is updated daily with a latency of about 5 days. If serious flaws are detected in this early release (called ERA5T), the data could differ from the final release 2 to 3 months later. In that case, users are notified.

The dataset presented here is a regridded subset of the full ERA5 dataset on native resolution. It is online on spinning disk, which should ensure fast and easy access. It should satisfy the requirements for most common applications. An overview of all ERA5 datasets can be found in this article. Information on access to ERA5 data on native resolution is provided in these guidelines. Data has been regridded to a regular lat-lon grid of 0.25 degrees for the reanalysis and 0.5 degrees for the uncertainty estimate (0.5 and 1 degree respectively for ocean waves). There are four main subsets: hourly and monthly products, both on pressure levels (upper-air fields) and single levels (atmospheric, ocean-wave and land-surface quantities). The present entry is "ERA5 hourly data on single levels from 1940 to present".
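As a sketch of how this regridded product is typically retrieved through the Climate Data Store (CDS) API: the helper function below is ours, the variable and time selections are illustrative, a CDS account with an API key is assumed, and exact request parameter names may vary between CDS API versions.

```python
def era5_single_levels_request(variables, year, months, days, hours):
    """Build a request dict for the 'reanalysis-era5-single-levels'
    dataset on the Climate Data Store (the 0.25-degree regridded
    product described above)."""
    return {
        "product_type": "reanalysis",
        "format": "netcdf",
        "variable": list(variables),
        "year": str(year),
        "month": [f"{m:02d}" for m in months],
        "day": [f"{d:02d}" for d in days],
        "time": [f"{h:02d}:00" for h in hours],
    }

# Hourly 2 m temperature for 1 January 2020 at 00:00 and 12:00 UTC:
request = era5_single_levels_request(["2m_temperature"], 2020, [1], [1], [0, 12])

# To actually download (requires a CDS account and a ~/.cdsapirc key file):
#   import cdsapi
#   cdsapi.Client().retrieve(
#       "reanalysis-era5-single-levels", request, "era5_t2m.nc")
```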
http://apps.ecmwf.int/datasets/licences/copernicus
including aerosols
http://apps.ecmwf.int/datasets/licences/cams
The Global Fire Assimilation System (GFAS) assimilates fire radiative power (FRP) observations from satellite-based sensors to produce daily estimates of emissions from wildfires and biomass burning. FRP is a measure of the energy released by the fire and is therefore a measure of how much vegetation is burned.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MLCommons Dollar Street Dataset is a collection of images of everyday household items from homes around the world that visually captures the socioeconomic diversity of traditionally underrepresented populations. It consists of public domain data, licensed for academic, commercial and non-commercial usage under CC-BY and CC-BY-SA 4.0. The dataset was developed because similar datasets lack socioeconomic metadata and are not representative of global diversity.
This is a subset of the original dataset that can be used for multiclass classification with 10 categories. It is designed to be used in teaching, similar to the widely used, but unlicensed CIFAR-10 dataset.
These are the preprocessing steps that were performed:
This is the label mapping:
Category | label |
day bed | 0 |
dishrag | 1 |
plate | 2 |
running shoe | 3 |
soap dispenser | 4 |
street sign | 5 |
table lamp | 6 |
tile roof | 7 |
toilet seat | 8 |
washing machine | 9 |
Check out this notebook to see how the subset was created.
The original dataset was downloaded from https://www.kaggle.com/datasets/mlcommons/the-dollar-street-dataset. See https://mlcommons.org/datasets/dollar-street/ for more information.
This location dataset includes a wealth of point-of-interest information, complete with detailed metadata. It helps businesses in various industries, from retail and logistics to fintech and quick-service restaurants, make data-driven decisions.
dataplor’s Point of Interest (POI) data offers a rich set of 55+ attributes that provide in-depth insights into each location.
Key data attributes include:
In our work, we have designed and implemented a novel workflow with several heuristic methods to combine state-of-the-art methods for gathering CVE fix commits. As a consequence of our improvements, we have been able to gather the largest programming-language-independent real-world dataset of CVE vulnerabilities with the associated fix commits. Our dataset, containing 26,617 unique CVEs coming from 6,945 unique GitHub projects, is, to the best of our knowledge, by far the biggest CVE vulnerability dataset with fix commits available today. These CVEs are associated with 31,883 unique commits that fixed those vulnerabilities. Compared to prior work, our dataset brings about a 397% increase in CVEs, a 295% increase in covered open-source projects, and a 480% increase in commit fixes. Our larger dataset thus substantially improves over the current real-world vulnerability datasets and enables further progress in research on vulnerability detection and software security. We used the NVD (nvd.nist.gov) and the GitHub Security Advisory Database as the main sources for our pipeline.
We release to the community a 14GB PostgreSQL database that contains information on CVEs up to January 24, 2024, CWEs of each CVE, files and methods changed by each commit, and repository metadata. Additionally, patch files related to the fix commits are available as a separate package. Furthermore, we make our dataset collection tool also available to the community.
The cvedataset-patches.zip file contains fix patches, and dump_morefixes_27-03-2024_19_52_58.sql.zip contains a PostgreSQL dump of fixes, together with several other fields such as CVEs, CWEs, repository metadata, commit data, file changes, methods changed, etc.
The MoreFixes data-storage strategy is based on CVEFixes to store CVE fix commits from open-source repositories, and uses a modified version of Prospector (part of Project KB from SAP) as a module to detect the fix commits of a CVE. Our full methodology is presented in the paper "MoreFixes: A Large-Scale Dataset of CVE Fix Commits Mined through Enhanced Repository Discovery", which will be published at the PROMISE conference (2024).
For more information about usage and sample queries, visit the Github repository: https://github.com/JafarAkhondali/Morefixes
If you are using this dataset, please be aware that the repositories we mined carry different licenses, and you are responsible for handling any licensing issues. The same applies to CVEFixes.
This product uses the NVD API but is not endorsed or certified by the NVD.
Cart abandonment rates have been climbing steadily since 2014, after reaching an all-time high in 2013. In 2023, the share of online shopping carts being abandoned reached 70 percent for the first time since 2013. This is an increase of more than 10 percentage points compared to the start of the period considered here.

Mobiles vs. desktops

When global consumers shop online, they spend considerably more when doing so on desktop computers. In December 2023, the average value of e-commerce purchases made through desktops was approximately 159 U.S. dollars. Purchases completed on mobiles and tablets were of comparable values, ranging between 100 and 105 U.S. dollars. Even though consumers spent more when shopping on computers, they were more inclined to add products to their shopping carts when using mobile devices. Ultimately, mobile devices provide a convenient and more accessible way to shop, but desktop computers remain the preferred choice for more expensive purchases.

Where do consumers shop online?

Across the globe, digital marketplaces are shoppers' number-one online shopping destination. As of April 2024, some 29 percent of consumers voted marketplaces their favorite e-commerce channel, followed by physical stores and retailer sites. Looking at which retailers global shoppers prefer to shop at, amazon.com emerged as the world's most popular online marketplace, based on share of visits. The U.S. portal accounted for around one-fifth of global online marketplace traffic in December 2023. Amazon's German and Japanese portals ranked third and fifth among the leading online marketplaces, further demonstrating Amazon's dominance over the market.
https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/licence-to-use-copernicus-products/licence-to-use-copernicus-products_b4b9451f54cffa16ecef5c912c9cebd6979925a956e3fa677976e0cf198c2c18.pdf
EAC4 (ECMWF Atmospheric Composition Reanalysis 4) is the fourth generation ECMWF global reanalysis of atmospheric composition. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using a model of the atmosphere based on the laws of physics and chemistry. This principle, called data assimilation, is based on the method used by numerical weather prediction centres and air quality forecasting centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way to allow for the provision of a dataset spanning back more than a decade. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations, and when going further back in time, to allow for the ingestion of improved versions of the original observations, which all benefit the quality of the reanalysis product. The assimilation system is able to estimate biases between observations and to sift good-quality data from poor data. The atmosphere model allows for estimates at locations where data coverage is low or for atmospheric pollutants for which no direct observations are available. The provision of estimates at each grid point around the globe for each regular output time, over a long period, always using the same format, makes reanalysis a very convenient and popular dataset to work with. The observing system has changed drastically over time, and although the assimilation system can resolve data holes, the initially much sparser networks will lead to less accurate estimates. For this reason, EAC4 is only available from 2003 onwards. 
Although the analysis procedure considers chunks of data in a window of 12 hours in one go, EAC4 provides estimates every 3 hours, worldwide. This is made possible by the 4D-Var assimilation method, which takes account of the exact timing of the observations and model evolution within the assimilation window.
The dataset collection in focus comprises an assortment of tables, each carrying a distinct set of data. These tables are meticulously sourced from the website of Lantmäteriet (the Swedish Mapping, Cadastral and Land Registration Authority) in Sweden. The dataset provides a wide range of valuable data, including, but not limited to, information about companies, geospatial data, meteorological data, statistical data, and earth observation and environmental data. The tables present the data in an organized manner, with the information arranged systematically in columns and rows. This makes it convenient to analyze and draw insights from the dataset. Overall, it's a comprehensive dataset collection that offers a diverse and substantial range of information.
NB: This dataset has restricted access due to GDPR considerations. Anna visited Swansea University, Wales, a recently built institution. Photos of the visit were taken by a friend, not by her. The experience was unusual for them, coming from a small city in Italy. They were intrigued by modern dances like the Twist, which she hadn't seen before. They were also surprised by boys with long hair. While they were familiar with the Beatles, Italy had not yet seen young people organizing themselves into musical groups like those observed during the visit. In Italy, this type of musical band trend emerged later, perhaps a year or two after the visit to Swansea. In 1966, there was a second exchange involving the Italian group. The Welsh family she stayed with owned a grocery shop. The living quarters were located on the first floor above the shop, situated on the street. During her third exchange, in Esslingen, they were taken to Stuttgart airport, where they visited an airplane to explore its interior. This was Anna's first time inside an airplane, and everything about it was new and exciting for her. For Anna, art and cultural experiences were more important than discussions about economy and work. However, the most important aspect of these exchanges wasn't the places they visited or the activities they did: it was the opportunity to be together with boys and girls of different nationalities. Building connections and sharing experiences with peers from diverse cultures, speaking different languages, was the most enjoyable and valuable part of the exchange for Anna. Fifty years have passed, a long time, according to Anna. She has forgotten many facts but remembers feelings. She remembers the feeling of great fun, and how surprised she was by many things different from what she had seen until that moment.

Friends in a Cold Climate: After the Second World War a number of friendship ties were established between towns in Europe.
Citizens, council officials and church representatives were looking for peace and prosperity in a still-fragmented Europe. After a visit of the Royal Mens Choir Schiedam to Esslingen in 1963, representatives of Esslingen asked Schiedam to take part in friendly exchanges involving citizens and officials. The connections expanded, and in 1970, in Esslingen, a circle of friends was established tying together the towns of Esslingen, Schiedam, Udine (IT), Velenje (SL), Vienne (F) and Neath. Each town of this so-called "Verbund der Ringpartnerstädte" had to keep in touch with at least two towns within the wider network. Friends in a Cold Climate looks primarily through the eyes of the citizen-participant. Their motivations for taking part may vary. For example, is there a certain engagement with the European project? Did parents instil in their children a message of fraternisation stemming from their experiences in WWII? Or did the participants see youth exchange only as an opportunity for a trip to a foreign country? This latter motivation, taking part for other than Euro-idealistic reasons, should however not be regarded as tourist or consumer-led behaviour. Following Michel de Certeau, Friends in a Cold Climate regards citizen-participants as producers rather than as consumers. A participant may "put to use" the town-twinning facilities of travel and activities in his or her own way, regardless of the programme. Integration of Western Europe after the Second World War was driven by a broad movement aimed at peace, security and prosperity. Organised youth exchange between European cities formed an important part of that movement. This research focuses on young people who, from the 1960s onwards, participated in international exchanges organised by twinned towns, also called jumelage.
Friends in a Cold Climate asks about the interactions between young people while taking into account the organisational structures at the municipal level. The project investigates the role of the ideology of a united Western Europe, individual desires for travel and freedom, the emerging discourse about the Second World War, and the influence of the prevalent "counterculture" of that period, thus shedding light on the formative years of European integration.