Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database contains ten-minute data on rainfall, humidity, temperature, global solar radiation, wind velocity and wind direction from 150 stations of the MeteoGalicia network between 1 January 2000 and 31 December 2018.
Version installed: PostgreSQL 9.1
Extension installed: PostGIS 1.5.3-1
Instructions to restore the database:
# Create a PostGIS template database
createdb -E UTF8 -O postgres -U postgres template_postgis
createlang plpgsql -d template_postgis -U postgres
psql -d template_postgis -U postgres -f /usr/share/postgresql/9.1/contrib/postgis-1.5/postgis.sql
psql -d template_postgis -U postgres -f /usr/share/postgresql/9.1/contrib/postgis-1.5/spatial_ref_sys.sql
psql -d template_postgis -U postgres -f /usr/share/postgresql/9.1/contrib/postgis_comments.sql
# Create the MeteoGalicia database from the template and load the dump files
createdb -U postgres -T template_postgis MeteoGalicia
cat Meteogalicia* | psql MeteoGalicia
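As an optional sanity check (assuming the restore above completed without errors), asking for the PostGIS version should succeed on the restored database:
psql -d MeteoGalicia -U postgres -c "SELECT postgis_full_version();"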
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Update 2025-06-08
We release the full version of BIRD-Critic-PG, a dataset containing 530 high-quality user issues focused on real-world PostgreSQL database applications. The schema file is included in the code repository: https://github.com/bird-bench/BIRD-CRITIC-1/blob/main/baseline/data/post_schema.jsonl
BIRD-CRITIC-1.0-PG
BIRD-Critic is the first SQL debugging benchmark designed to answer a critical question: Can large language models (LLMs) fix user issues in… See the full description on the dataset page: https://huggingface.co/datasets/birdsql/bird-critic-1.0-postgresql.
As of June 2024, the most popular database management system (DBMS) worldwide was Oracle, with a ranking score of *******; MySQL and Microsoft SQL Server rounded out the top three. Although the database management industry contains some of the largest companies in the tech industry, such as Microsoft, Oracle and IBM, a number of free and open-source DBMSs such as PostgreSQL and MariaDB remain competitive.
Database Management Systems
As the name implies, DBMSs provide a platform through which developers can organize, update, and control large databases. Given the business world's growing focus on big data and data analytics, knowledge of SQL programming languages has become an important asset for software developers around the world, and database management skills are seen as highly desirable. In addition to providing developers with the tools needed to operate databases, DBMSs are also integral to the way that consumers access information through applications, which further illustrates the importance of the software.
PostgreSQL is an open-source object-relational database system that uses and extends the SQL language, combined with many features that safely store and scale the most complicated data workloads. It runs on all major operating systems.
Forager.ai's Small Business Contact Data set is a comprehensive collection of over 695M professional profiles. With an unmatched 2x/month refresh rate, we ensure the most current and dynamic data in the industry today. We deliver this data via JSONL flat-files or PostgreSQL database delivery, capturing publicly available information on each profile.
| Volume and Stats |
Every single record refreshed 2x per month, setting industry standards. First-party data curation powering some of the most renowned sales and recruitment platforms. Delivery frequency is hourly (fastest in the industry today). Additional datapoints and linkages available. Delivery formats: JSONL, PostgreSQL, CSV.
| Datapoints |
150+ unique datapoints available! Key fields like Current Title, Current Company, Work History, Educational Background, Location, Address, and more. Unique linkage data to other social networks or contact data available.
| Use Cases |
Sales Platforms, ABM Vendors, Intent Data Companies, AdTech and more:
Deliver the best end-customer experience with our people feed powering your solution! Be the first to know when someone changes jobs and share that with end-customers. Industry-leading data accuracy. Connect our professional records to your existing database, find new connections to other social networks, and contact data. Hashed records also available for advertising use-cases.
Venture Capital and Private Equity:
Track every company and employee with a publicly available profile. Keep track of your portfolio's founders, employees and ex-employees, and be the first to know when they move or start up. Keep an eye on the pulse by following the most influential people in the industries and segments you care about. Provide your portfolio companies with the best data for recruitment and talent sourcing. Review departmental headcount growth of private companies and benchmark their strength against competitors.
HR Tech, ATS Platforms, Recruitment Solutions, as well as Executive Search Agencies:
Build products for industry-specific and industry-agnostic candidate recruiting platforms. Track person job changes and immediately refresh profiles to avoid stale data. Identify ideal candidates through work experience and education history. Keep ATS systems and candidate profiles constantly updated. Link data from this dataset into GitHub, LinkedIn, and other social networks.
| Delivery Options |
Flat files via S3 or GCP
PostgreSQL Shared Database
PostgreSQL Managed Database
REST API
Other options available on request, depending on the scale required
| Other key features |
Over 120M US Professional Profiles. 150+ data fields (available upon request). Free data samples and evaluation. Tags: Professionals Data, People Data, Work Experience History, Education Data, Employee Data, Workforce Intelligence, Identity Resolution, Talent, Candidate Database, Sales Database, Contact Data, Account Based Marketing, Intent Data.
The Forager.ai Global Dataset is a leading source of firmographic data, backed by advanced AI and offering the highest refresh rate in the industry.
| Volume and Stats |
| Use Cases |
Sales Platforms, ABM and Intent Data Platforms, Identity Platforms, Data Vendors:
Example applications include:
Uncover trending technologies or tools gaining popularity.
Pinpoint lucrative business prospects by identifying similar solutions utilized by a specific company.
Study a company's tech stacks to understand the technical capability and skills available within that company.
B2B Tech Companies:
Venture Capital and Private Equity:
| Delivery Options |
Our dataset provides a unique blend of volume, freshness, and detail that is perfect for Sales Platforms, B2B Tech, VCs & PE firms, Marketing Automation, ABM & Intent. It stands as a cornerstone in our broader data offering, ensuring you have the information you need to drive decision-making and growth.
Tags: Company Data, Company Profiles, Employee Data, Firmographic Data, AI-Driven Data, High Refresh Rate, Company Classification, Private Market Intelligence, Workforce Intelligence, Public Companies.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Publicly accessible databases often impose query limits or require registration. Even though I maintain public, limit-free APIs, I never wanted to host a public database, because I tend to think that connection strings are a problem for the user.
I've decided to host different light/medium-sized databases using PostgreSQL, MySQL and SQL Server backends (in strict descending order of preference!).
Why three database backends? I think there are a ton of small edge cases when moving between DB backends, so testing lots with live databases is quite valuable. With this resource you can benchmark speed, compression, and DDL types.
Please send me a tweet if you need the connection strings for your lectures or workshops. My Twitter username is @pachamaltese. See the SQL dumps on each section to have the data locally.
This is a dump generated by pg_dump -Fc of the IMDb data used in the "How Good are Query Optimizers, Really?" paper. PostgreSQL compatible SQL queries and scripts to automatically create a VM with this dataset can be found here: https://git.io/imdb
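As a rough sketch of how such a custom-format dump can be restored locally (the dump filename imdb.dump and the database name imdb are placeholders, not part of this dataset's documentation):
createdb imdb
pg_restore -d imdb --no-owner --no-privileges imdb.dump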
The Forager.ai Global Private Equity (PE) Funding Data Set is a leading source of firmographic data, backed by advanced AI and offering the highest refresh rate in the industry.
| Volume and Stats |
| Use Cases |
Sales Platforms, ABM and Intent Data Platforms, Identity Platforms, Data Vendors:
Example applications include:
Uncover trending technologies or tools gaining popularity.
Pinpoint lucrative business prospects by identifying similar solutions utilized by a specific company.
Study a company's tech stacks to understand the technical capability and skills available within that company.
B2B Tech Companies:
Venture Capital and Private Equity:
| Delivery Options |
Our dataset provides a unique blend of volume, freshness, and detail that is perfect for Sales Platforms, B2B Tech, VCs & PE firms, Marketing Automation, ABM & Intent. It stands as a cornerstone in our broader data offering, ensuring you have the information you need to drive decision-making and growth.
Tags: Company Data, Company Profiles, Employee Data, Firmographic Data, AI-Driven Data, High Refresh Rate, Company Classification, Private Market Intelligence, Workforce Intelligence, Public Companies.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The self-documenting aspects and the ability to reproduce results have been touted as significant benefits of Jupyter Notebooks. At the same time, there has been growing criticism that the way notebooks are being used leads to unexpected behavior, encourages poor coding practices and that their results can be hard to reproduce. To understand good and bad practices used in the development of real notebooks, we analyzed 1.4 million notebooks from GitHub. Based on the results, we proposed and evaluated Julynter, a linting tool for Jupyter Notebooks.
Papers:
This repository contains three files:
Reproducing the Notebook Study
The db2020-09-22.dump.gz file contains a PostgreSQL dump of the database, with all the data we extracted from notebooks. For loading it, run:
gunzip -c db2020-09-22.dump.gz | psql jupyter
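Note that psql loads into an existing database, so the target database used above (jupyter) must be created first, e.g. (assuming a local PostgreSQL server and sufficient privileges):
createdb jupyter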
Note that this file contains only the database with the extracted data. The actual repositories are available in a Google Drive folder, which also contains the Docker images we used in the reproducibility study. The repositories are stored as content/{hash_dir1}/{hash_dir2}.tar.bz2, where hash_dir1 and hash_dir2 are columns of the repositories table in the database.
For scripts, notebooks, and detailed instructions on how to analyze or reproduce the data collection, please check the instructions in the Jupyter Archaeology repository (tag 1.0.0).
The sample.tar.gz file contains the repositories obtained during the manual sampling.
Reproducing the Julynter Experiment
The julynter_reproducility.tar.gz file contains all the data collected in the Julynter experiment and the analysis notebooks. Reproducing the analysis is straightforward:
The collected data is stored in the julynter/data folder.
Changelog
2019/01/14 - Version 1 - Initial version
2019/01/22 - Version 2 - Update N8.Execution.ipynb to calculate the rate of failure for each reason
2019/03/13 - Version 3 - Update package for camera ready. Add columns to db to detect duplicates, change notebooks to consider them, and add N1.Skip.Notebook.ipynb and N11.Repository.With.Notebook.Restriction.ipynb.
2021/03/15 - Version 4 - Add the Julynter experiment; update the database dump to include new data collected for the second paper; remove scripts and analysis notebooks from this package (moved to GitHub); add a link to the Google Drive folder with the collected repository files
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
DESCRIPTION
VERSIONS
version 1.0.1 fixes a problem with functions
version 1.0.2 adds table dbeel_rivers.rn_rivermouth with GEREM basin, distance to Gibraltar and link to CCM
version 1.0.3 fixes a problem with functions
version 1.0.4 adds views rn_rna and rn_rne to the database
The SUDOANG project aims at providing common tools to managers to support eel conservation in the SUDOE area (Spain, France and Portugal). VISUANG is the SUDOANG Interactive Web Application that hosts all these tools. The application consists of an eel distribution atlas (GT1), assessments of mortality caused by turbines and an atlas showing obstacles to migration (GT2), estimates of recruitment and exploitation rate (GT3), and escapement (chosen as a target by the EC for the Eel Management Plans) (GT4). In addition, it includes an interactive map showing sampling results from the pilot basin network produced by GT6.
The eel abundance for the eel atlas and escapement has been obtained using the Eel Density Analysis model (EDA, GT4's product). EDA extrapolates the abundance of eel in sampled river segments to other segments taking into account how the abundance, sex and size of the eels change depending on different parameters. Thus, EDA requires two main data sources: those related to the river characteristics and those related to eel abundance and characteristics.
However, in both cases data availability was uneven across the SUDOE area. In addition, this information was dispersed among several managers and in different formats due to different sampling sources: Water Framework Directive (WFD), Community Framework for the Collection, Management and Use of Data in the Fisheries Sector (EUMAP), Eel Management Plans, research groups, scientific papers and technical reports. Therefore, the first step towards having eel abundance estimations covering the whole SUDOE area was to build a joint river and eel database. In this report we describe the database corresponding to the rivers' characteristics in the SUDOE area and the eel abundances and their characteristics.
In the case of rivers, two types of information have been collected:
River topology (RN table): a compilation of data on rivers and their topological and hydrographic characteristics in the three countries.
River attributes (RNA table): contains physical attributes that have fed the SUDOANG models.
The estimates of eel abundance and characteristics (size, biomass, sex-ratio and silvering) at different scales (river segment, basin, Eel Management Unit (EMU) and country) in the SUDOE area, obtained with the implementation of the EDA2.3 model, have been compiled in the RNE table (eel predictions).
CURRENT ACTIVE PROJECT
The project is currently active here: gitlab forgemia
TECHNICAL DESCRIPTION TO BUILD THE POSTGRES DATABASE
All tables are in EPSG:3035 (European LAEA). The format is a PostgreSQL database. You can download other formats (shapefiles, csv) here: SUDOANG gt1 database.
Initial command
cd c:/path/to/my/folder
createdb -U postgres eda2.3
psql -U postgres eda2.3
Within the psql command
create extension "postgis";
create extension "dblink";
create extension "ltree";
create extension "tablefunc";
create schema dbeel_rivers;
create schema france;
create schema spain;
create schema portugal;
-- type \q to quit the psql shell
Now the database is ready to receive the different dumps. The dump files are large; you might not need the part including unit basins or waterbodies. All the tables except waterbodies and unit basins are described in the Atlas. You might also need to understand what inheritance is in a database: https://www.postgresql.org/docs/12/tutorial-inheritance.html
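If inheritance is new to you, here is a minimal, self-contained sketch of the mechanism (the table names demo_parent and demo_child are made up for illustration and are not part of the SUDOANG schema):
psql -U postgres -d eda2.3 <<'SQL'
-- a child table inherits the parent's columns
CREATE TABLE demo_parent (idsegment text, lengthm numeric);
CREATE TABLE demo_child () INHERITS (demo_parent);
INSERT INTO demo_child VALUES ('seg-1', 1250);
-- querying the parent also returns rows stored in the child
SELECT * FROM demo_parent;
DROP TABLE demo_child;
DROP TABLE demo_parent;
SQL
The country tables in the dumps below appear to follow the same pattern, inheriting from the shared dbeel_rivers tables.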
These layers contain the topology (see Atlas for detail)
dbeel_rivers.rn
france.rn
spain.rn
portugal.rn
Columns (see Atlas)
gid
idsegment
source
target
lengthm
nextdownidsegment
path
isfrontier
issource
seaidsegment
issea
geom
isendoreic
isinternational
country
dbeel_rivers.rn_rivermouth
seaidsegment
geom (polygon)
gerem_zone_3
gerem_zone_4 (used in EDA)
gerem_zone_5
ccm_wso_id
country
emu_name_short
geom_outlet (point)
name_basin
dist_from_gibraltar_km
name_coast
basin_name
pg_restore -U postgres -d eda2.3 "dbeel_rivers.rn.backup"
pg_restore -U postgres -d eda2.3 "france.rn.backup"
pg_restore -U postgres -d eda2.3 "spain.rn.backup"
pg_restore -U postgres -d eda2.3 "portugal.rn.backup"
dbeel_rivers.rn_rivermouth contains one row for each basin flowing to the sea.
pg_restore -U postgres -d eda2.3 "dbeel_rivers.rn_rivermouth.backup"
psql -U postgres -d eda2.3 -f "function_dbeel_rivers.sql"
This corresponds to tables
dbeel_rivers.rna
france.rna
spain.rna
portugal.rna
Columns (See Atlas)
idsegment
altitudem
distanceseam
distancesourcem
cumnbdam
medianflowm3ps
surfaceunitbvm2
surfacebvm2
strahler
shreeve
codesea
name
pfafriver
pfafsegment
basin
riverwidthm
temperature
temperaturejan
temperaturejul
wettedsurfacem2
wettedsurfaceotherm2
lengthriverm
emu
cumheightdam
riverwidthmsource
slope
dis_m3_pyr_riveratlas
dis_m3_pmn_riveratlas
dis_m3_pmx_riveratlas
drought
drought_type_calc
Code:
pg_restore -U postgres -d eda2.3 "dbeel_rivers.rna.backup"
pg_restore -U postgres -d eda2.3 "france.rna.backup"
pg_restore -U postgres -d eda2.3 "spain.rna.backup"
pg_restore -U postgres -d eda2.3 "portugal.rna.backup"
These layers contain eel data (see Atlas for detail)
dbeel_rivers.rne
france.rne
spain.rne
portugal.rne
Columns (see Atlas)
idsegment
surfaceunitbvm2
surfacebvm2
delta
gamma
density
neel
beel
peel150
peel150300
peel300450
peel450600
peel600750
peel750
nsilver
bsilver
psilver150300
psilver300450
psilver450600
psilver600750
psilver750
psilver
pmale150300
pmale300450
pmale450600
pfemale300450
pfemale450600
pfemale600750
pfemale750
pmale
pfemale
sex_ratio
cnfemale300450
cnfemale450600
cnfemale600750
cnfemale750
cnmale150300
cnmale300450
cnmale450600
cnsilver150300
cnsilver300450
cnsilver450600
cnsilver600750
cnsilver750
cnsilver
delta_tr
gamma_tr
type_fit_delta_tr
type_fit_gamma_tr
density_tr
density_pmax_tr
neel_pmax_tr
nsilver_pmax_tr
density_wd
neel_wd
beel_wd
nsilver_wd
bsilver_wd
sector_tr
year_tr
is_current_distribution_area
is_pristine_distribution_area_1985
Code for restoration:
pg_restore -U postgres -d eda2.3 "dbeel_rivers.rne.backup"
pg_restore -U postgres -d eda2.3 "france.rne.backup"
pg_restore -U postgres -d eda2.3 "spain.rne.backup"
pg_restore -U postgres -d eda2.3 "portugal.rne.backup"
Unit basins are not described in the Atlas. They correspond to the following tables:
dbeel_rivers.basinunit_bu
france.basinunit_bu
spain.basinunit_bu
portugal.basinunit_bu
france.basinunitout_buo
spain.basinunitout_buo
portugal.basinunitout_buo
A unit basin is the simple basin that surrounds a segment. It corresponds to the topographic unit from which the unit segments have been calculated (EPSG:3035). Tables bu_unitbv and bu_unitbvout inherit from dbeel_rivers.unit_bv. The first table intersects with a segment; the second does not: it corresponds to basin polygons which do not have a river segment.
Source:
Portugal
France
In France, the unit bv corresponds to the RHT (Pella et al., 2012).
Spain
pg_restore -U postgres -d eda2.3 'dbeel_rivers.basinunit_bu.backup'
pg_restore -U postgres -d eda2.3
Magnetique: An interactive web application to explore transcriptome signatures of heart failure
Supplementary dataset.
db_dump.sql.gz: a daily backup dump of the Magnetique database, obtained on 18.07.2022 and shared for reproducibility purposes.
Other files are required as input for the modeling steps detailed at https://github.com/dieterich-lab/magnetiqueCode2022
Refer to https://shiny.dieterichlab.org/app/magnetique or contact the authors for details.
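For local exploration, one plausible way to load the db_dump.sql.gz backup above is to stream it into psql (the target database name magnetique is just a placeholder; any empty PostgreSQL database will do):
createdb magnetique
gunzip -c db_dump.sql.gz | psql magnetique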
In 2023, over ** percent of surveyed software developers worldwide reported using PostgreSQL, the highest share of any database technology. Other popular database tools among developers included MySQL and SQLite.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this research, data mining and decision tree techniques, as well as rule induction, were analyzed in order to integrate their algorithms into the database management system (DBMS) PostgreSQL, given the deficiencies of the freely available tools. A mechanism to optimize the performance of the implemented algorithms was proposed with the purpose of taking full advantage of PostgreSQL. By means of an experiment, it was shown that both response time and the results obtained improve when the algorithms are integrated into the management system.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An example sample sheet containing sample information that is used to start an analysis in VarGenius. (TSV 330 bytes)
The global database management system (DBMS) market revenue grew to 80 billion U.S. dollars in 2020. Cloud DBMS accounted for the majority of the overall market growth, as database systems are migrating to cloud platforms.
Database market
The database market consists of paid database software such as Oracle and Microsoft SQL Server, as well as free, open-source options like PostgreSQL and MongoDB. Database Management Systems (DBMSs) provide a platform through which developers can organize, update, and control large databases, with products like Oracle, MySQL, and Microsoft SQL Server being the most widely used in the market.
Database management software
Knowledge of the programming languages related to these databases is becoming an increasingly important asset for software developers around the world, and skills in database management systems such as MongoDB and Elasticsearch are seen as highly desirable. In addition to providing developers with the tools needed to operate databases, DBMSs are also integral to the way that consumers access information through applications, which further illustrates the importance of the software.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Worldwide Gender Differences in Public Code Contributions - Replication Package
This document describes how to replicate the findings of the paper: Davide Rossi and Stefano Zacchiroli, 2022, Worldwide Gender Differences in Public Code Contributions. In Software Engineering in Society (ICSE-SEIS'22), May 21-29, 2022, Pittsburgh, PA, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3510458.3513011
This document comes with the software needed to mine and analyze the data presented in the paper.
Prerequisites
These instructions assume the use of the bash shell, the Python programming language, the PostgreSQL DBMS (version 11 or later), the zstd compression utility and various usual *nix shell utilities (cat, pv, ...), all of which are available for multiple architectures and OSs.
It is advisable to create a Python virtual environment and install the following PyPI packages: click==8.0.3 cycler==0.10.0 gender-guesser==0.4.0 kiwisolver==1.3.2 matplotlib==3.4.3 numpy==1.21.3 pandas==1.3.4 patsy==0.5.2 Pillow==8.4.0 pyparsing==2.4.7 python-dateutil==2.8.2 pytz==2021.3 scipy==1.7.1 six==1.16.0 statsmodels==0.13.0
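A minimal way to do that on a typical setup (assuming python3 with the venv module is available; the package pins are copied from the list above):
python3 -m venv venv
source venv/bin/activate
pip install click==8.0.3 cycler==0.10.0 gender-guesser==0.4.0 kiwisolver==1.3.2 matplotlib==3.4.3 numpy==1.21.3 pandas==1.3.4 patsy==0.5.2 Pillow==8.4.0 pyparsing==2.4.7 python-dateutil==2.8.2 pytz==2021.3 scipy==1.7.1 six==1.16.0 statsmodels==0.13.0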
Initial data
swh-replica, a PostgreSQL database containing a copy of Software Heritage data. The schema for the database is available at https://forge.softwareheritage.org/source/swh-storage/browse/master/swh/storage/sql/.
names.tab - forenames and surnames per country with their frequency
zones.acc.tab - countries/territories, timezones, population and world zones
c_c.tab - ccTLD entities - world zones matches
Data preparation
Export data from the swh-replica database to create commits.csv.zst and authors.csv.zst:
sh> ./export.sh
Clean up the author list to create authors--clean.csv.zst:
sh> ./cleanup.sh authors.csv.zst
Filter the author names to create authors--plausible.csv.zst:
sh> pv authors--clean.csv.zst | unzstd | ./filter_names.py 2> authors--plausible.csv.log | zstdmt > authors--plausible.csv.zst
Gender detection
Guess the gender of each author to create author-fullnames-gender.csv.zst:
sh> pv authors--plausible.csv.zst | unzstd | ./guess_gender.py --fullname --field 2 | zstdmt > author-fullnames-gender.csv.zst
Database creation and data ingestion
Create the PostgreSQL DB:
sh> createdb gender-commit
Note that, from now on, commands prefixed with the psql> prompt are assumed to be executed with psql on the gender-commit database.
Import data into the PostgreSQL DB:
sh> ./import_data.sh
Zone detection
Extract the commits into commits.tab, used as input for the zone detection step below:
sh> psql -f extract_commits.sql gender-commit
Assign a world zone to each commit to create commit_zones.tab.zst:
sh> pv commits.tab | ./assign_world_zone.py -a -n names.tab -p zones.acc.tab -x -w 8 | zstdmt > commit_zones.tab.zst
Use ./assign_world_zone.py --help if you are interested in changing the script parameters. Then load the result into the commit_culture table:
psql> \copy commit_culture from program 'zstdcat commit_zones.tab.zst | cut -f1,6 | grep -Ev ''\s$'''
Extraction and graphs
Extract the data to create commits_tz.tab, authors_tz.tab, commits_zones.tab, authors_zones.tab, and authors_zones_1620.tab. See extract_data.sql if you wish to modify the extraction parameters (start/end year, sampling, ...).
sh> ./extract_data.sh
Create the charts commits_tzs.pdf, authors_tzs.pdf, commits_zones.pdf, authors_zones.pdf, and authors_zones_1620.pdf:
sh> ./create_charts.sh
Additional graphs
This package also includes some already-made graphs:
authors_zones_1.pdf: stacked graphs showing the ratio of female authors per world zone through the years, considering all authors with at least one commit per period
authors_zones_2.pdf: ditto with at least two commits per period
authors_zones_10.pdf: ditto with at least ten commits per period
The datasolr extension for CKAN provides an alternative search backend using Apache Solr for performing datastore queries. Originally developed to address performance limitations with PostgreSQL on very large datasets, it enables faster searching for resources indexed in Solr. This extension offers a specialized search component for CKAN deployments dealing with large, relatively static datasets where search speed is critical. Note: as of the information provided, this extension is no longer actively maintained.
Key Features:
Solr-Powered Search: Replaces the default datastore search functionality with Solr, offering potentially improved performance for large datasets.
Stats on Fields: Capable of generating statistical metrics (min, max, sum, etc.) on fields within a dataset through the solrstatsfields parameter, enriching field metadata.
Non-Empty Field Filter: Includes a solrnot_empty filter that ensures results only include records where specified fields contain data.
Configurable Field Mapping: Allows for custom field mapping strategies, which helps manage datasets where field names include characters incompatible with Solr's standard alphanumeric and underscore convention.
Data Import Handler (DIH) Integration: Supports indexing datasets directly from a PostgreSQL database into Solr using Solr's Data Import Request Handler.
Use Cases:
Large, Static Datasets: Best suited for scenarios where datasets are large and not frequently updated, such as historical records or datasets updated in batch at regular intervals.
Performance-Critical Search: Environments where search speed is paramount and the default PostgreSQL datastore performance is insufficient.
Technical Integration: The datasolr extension integrates with CKAN as a plugin configurable through the CKAN configuration file. It can be configured to replace the default datastore_search API endpoint with a Solr-backed implementation. This plugin leverages data still present in the PostgreSQL database and simply moves the searching operations to Solr by using the IDataSolr interface.
Benefits & Impact: Implementing the datasolr extension offers the potential to significantly improve search performance on large datasets, provided that Solr is properly configured and indexed. It provides a method to leverage Solr's powerful search capabilities within the CKAN environment, although with some differences in supported query syntax compared to the default datastore search.
LinkDB is an exhaustive dataset of publicly accessible LinkedIn people and companies, containing close to 500M people & companies profiles by region.
LinkDB is updated up to millions of profiles daily at the point of purchase. Post-purchase, you can keep LinkDB updated quarterly for a nominal fee.
Data is shipped in Apache Parquet, a column-oriented data file format.
All our data and procedures are in place to meet major legal compliance requirements such as GDPR and CCPA. We help you be compliant too.
ODC Public Domain Dedication and Licence (PDDL) v1.0 http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
This dataset was created by JAYESH CHAUHAN
Released under ODC Public Domain Dedication and Licence (PDDL)