The largest reported data leakage as of January 2024 was the Cam4 data breach of March 2020, which exposed more than 10 billion data records. The second-largest data breach in history so far, the Yahoo data breach, occurred in 2013. The company initially reported about one billion exposed data records, but after an investigation it updated the number, revealing that three billion accounts were affected. The National Public Data breach was announced in August 2024; the incident became public when personally identifiable information became available for sale on the dark web. Overall, security professionals estimate that nearly three billion personal records were leaked. Another significant data leakage was the March 2018 security breach of India's national ID database, Aadhaar, which exposed over 1.1 billion records, including identification numbers and biometric data such as fingerprint scans that could be used to open bank accounts and receive financial aid, among other government services.
Cybercrime - the dark side of digitalization
As the world continues its journey into the digital age, corporations and governments across the globe have been increasing their reliance on technology to collect, analyze, and store personal data. This, in turn, has led to a rise in the number of cyber crimes, ranging from minor breaches to global-scale attacks impacting billions of users, as in the case of Yahoo. In the U.S. alone, 1,802 cases of data compromise were reported in 2022, a marked increase from the 447 cases reported a decade prior.
The high price of data protection
As of 2022, the average cost of a single data breach across all industries worldwide stood at around 4.35 million U.S. dollars. Breaches were most costly in the healthcare sector, where each leak cost the affected party a hefty 10.1 million U.S. dollars on average. The financial sector followed closely behind: each breach there resulted in a loss of approximately 6 million U.S. dollars, roughly 1.5 million more than the global average.
With the surge in data collection and analytics, concerns are raised regarding the privacy of the individuals represented by the data. In settings where the data is distributed over several data holders, federated learning offers an alternative that learns from the data without the need to centralize it in the first place. This is achieved by exchanging only the model parameters learned locally at each data holder, which greatly limits the amount of data to be transferred, reduces the impact of data breaches, and helps preserve individual privacy. Federated learning is thus a viable alternative in IoT and edge-computing settings, especially if the collected data is sensitive. However, risks of data or information leaks persist if information can be inferred from the exchanged models, for example through membership inference attacks, which try to determine whether a specific record was part of a model's training data. In this paper, we investigate how successful such attacks are in the setting of sequential federated learning. The cyclic nature of model learning and exchange might give attackers more information by letting them observe the dynamics of the learning process, and thus enable a more powerful attack.
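As a concrete illustration, below is a minimal sketch of a loss-threshold membership inference attack in the style of Yeom et al. (2018): the attacker guesses that a record was a training member when the model's loss on it is unusually low. The target model, synthetic data, and threshold choice are illustrative assumptions, not the setup evaluated in the paper.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]   # members of the training set
X_out, y_out = X[1000:], y[1000:]       # non-members

# Stand-in target model; in federated learning this would be an exchanged model.
target = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the target model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(p + 1e-12)

# Attacker guesses "member" when the loss is below the mean training loss.
threshold = per_example_loss(target, X_train, y_train).mean()
guessed_member = per_example_loss(target, X_out, y_out) < threshold
print("fraction of non-members wrongly flagged as members:", guessed_member.mean())
```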
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
2,768 global export and import shipment records of helium leak detectors, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
264 global export and import shipment records of leak detectors under HSN code 3403, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
This national digital GIS product, produced by the British Geological Survey, indicates the potential for leakage to have a negative effect on ground stability. It is largely derived from the digital geological map and expert knowledge. The GIS dataset contains seven fields. The first field is a summary map that gives an overview of where leakage may affect ground stability. The other six fields indicate the properties of the ground with respect to the extent to which hazards associated with soluble rocks, landslides, compressible ground, collapsible ground, swelling clays, and running sands will be increased by leakage. The data is useful to asset managers in water companies, local authorities, and utility companies who would like to understand where, and to what extent, leaking underground pipes or other structures may initiate or worsen ground instability.
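For illustration, a hypothetical sketch of loading and summarizing such a seven-field layer with geopandas follows; the file name and field names are invented stand-ins, since the actual schema of the BGS product is not given here.

```python
# Hypothetical inspection of the seven-field leakage/ground-stability layer.
# File name and field names below are invented for illustration only.
import geopandas as gpd

gdf = gpd.read_file("bgs_leakage_ground_stability.shp")  # placeholder path

summary_field = "LEAK_SUMMARY"          # assumed overview susceptibility field
hazard_fields = [                        # one assumed field per hazard mechanism
    "SOLUBLE_ROCKS", "LANDSLIDES", "COMPRESSIBLE_GROUND",
    "COLLAPSIBLE_GROUND", "SWELLING_CLAYS", "RUNNING_SANDS",
]
# Tabulate the class values present in the summary and hazard fields.
print(gdf[[summary_field] + hazard_fields].describe(include="all"))
```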
Full title: Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine. Mark Schwabacher, NASA Ames Research Center; Robert Aguilar, Pratt & Whitney Rocketdyne; Fernando Figueroa, NASA Stennis Space Center.
Abstract
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. Decision trees were chosen because they tend to be easier to interpret than other data-driven methods. A decision tree algorithm automatically "learns" a decision tree by searching the space of possible decision trees for one that fits the training data. The particular algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne, known as the Detailed Real-Time Model (DRTM), was used to train and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise and a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations and tested using the remaining 45. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it learned a decision tree that can classify unseen data as having no leak or as having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two, perhaps because a large leak at one of those locations can look very similar to a small leak at the other.
Introduction
The J-2X rocket engine will be tested on Test Stand A-1 at NASA Stennis Space Center (SSC) in Mississippi. A team including people from SSC, NASA Ames Research Center (ARC), and Pratt & Whitney Rocketdyne (PWR) is developing a prototype end-to-end integrated systems health management (ISHM) system that will be used to monitor the test stand and the engine while the engine is on the test stand [1]. The prototype will use several different methods for detecting and diagnosing faults in the test stand and the engine, including rule-based, model-based, and data-driven approaches. SSC is currently using the G2 tool (http://www.gensym.com) to develop rule-based and model-based fault detection and diagnosis capabilities for the A-1 test stand. This paper describes preliminary results in applying the data-driven approach to detecting and diagnosing faults in the J-2X engine. The conventional approach to detecting and diagnosing faults in complex engineered systems such as rocket engines and test stands is to use large numbers of human experts. Test controllers watch the data in near-real time during each engine test, and engineers study the data after each test. These experts are aided by limit checks that signal when a particular variable goes outside a predetermined range. The conventional approach is very labor intensive, and humans may not be able to recognize faults that involve relationships among large numbers of variables.
Further, some potential faults could happen too quickly for humans to detect them and react before they become catastrophic. Automated fault detection and diagnosis is therefore needed. One approach to automation is to encode human knowledge into rules or models. Another is to use data-driven methods to automatically learn models from historical or simulated data. Our prototype will combine the data-driven approach with the model-based and rule-based approaches.
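To make the approach concrete, here is a toy sketch of decision-tree fault isolation. Note that scikit-learn's DecisionTreeClassifier implements CART rather than C4.5, and the synthetic "sensor" data below merely stands in for the DRTM simulations.

```python
# Toy decision-tree fault isolation: classify samples as nominal (0) or as
# one of five leak locations (1-5). Data is synthetic, not DRTM output.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 500
sensors = rng.normal(size=(n, 6))        # six simulated sensor channels
labels = rng.integers(0, 6, size=n)      # 0 = nominal, 1-5 = leak location

# Inject a class-dependent offset so each leak location is (partly) separable.
for k in range(1, 6):
    sensors[labels == k, k - 1] += 2.0

tree = DecisionTreeClassifier(max_depth=5).fit(sensors[:400], labels[:400])
print("held-out isolation accuracy:", tree.score(sensors[400:], labels[400:]))
```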
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
464 global export and import shipment records of leakage current monitors, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/noLimitations
UKCCSRC Flexible Funding 2020. The experimental data are the acoustic emission (AE) signals collected with three AE sensors while CO2 leaks from a CO2 storage cylinder under different pressures. '5MPa_20kgh-1' means the data was collected when the pressure was 5 MPa and the leakage rate was 20 kg/h. The sampling frequency of the AE signals is 3 MHz. UKCCSRC Flexible Funding 2020: Monitoring of CO2 flow under CCS conditions through multi-modal sensing and machine learning.
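For convenience, a small parser for the quoted file-naming convention might look as follows; it assumes condition labels follow exactly the '<pressure>MPa_<rate>kgh-1' pattern quoted above, which may not hold for every file in the archive.

```python
# Parse a condition label such as "5MPa_20kgh-1" into pressure and leak rate.
# Assumes the naming pattern quoted in the dataset description.
import re

def parse_condition(name: str) -> tuple[float, float]:
    """Return (pressure_MPa, leak_rate_kg_per_h) from a condition label."""
    m = re.fullmatch(r"(?P<p>[\d.]+)MPa_(?P<q>[\d.]+)kgh-1", name)
    if m is None:
        raise ValueError(f"unrecognised condition label: {name!r}")
    return float(m["p"]), float(m["q"])

print(parse_condition("5MPa_20kgh-1"))  # -> (5.0, 20.0)
```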
These data files are related to the work titled "A cooperative model to lower cost and increase the efficiency of methane leak inspections at oil and gas sites." The abstract of the work: Methane is a potent greenhouse gas that tends to leak from equipment at oil and gas (O&G) sites. The process of locating and repairing fugitive methane emissions is known as leak detection and repair (LDAR). Conventional LDAR methods are labor intensive and costly because they involve time-consuming close-range, component-level inspections at each site. This has prompted duty holders to examine new methods and strategies that could be more cost-effective. We examined a co-operative model in which multiple duty holders of O&G sites in a region use shared services to complete leak inspections. This approach was hypothesized to be more efficient and cost-effective than independent inspection programs run by each duty holder in the region. To test this hypothesis, we developed a geospatial simulation model using empirical data from 11 O&G-producing regions in Canada and the USA. We used the model to compare labor cost, transit time, mileage, vehicle emissions, and driving risk between independent and co-op leak inspection programs. The results indicate that co-op leak inspection programs can generate relative savings in labor costs (1.8–34.2%), transit time (0.6–38.6%), mileage (0.2–43.1%), vehicle emissions (0.01–4.0 tCO2), and driving risk (1.9–31.9%). The largest relative savings and efficiency gains from co-op leak inspection programs were in regions with a high diversity of duty holders, which was confirmed with simulations of artificial O&G sites and road networks spanning diverse conditions. We also found that reducing leak inspection time by 75% with streamlined methods can additionally reduce labor cost by 8.8–41.1%, transit time by 5.6–20.2%, and mileage by 2.6–34.3% in co-op leak inspection programs. Overall, this study demonstrates that co-op leak inspection programs can be more efficient and cost-effective, particularly in regions with a large diversity of O&G duty holders, and that methods to reduce leak inspection time can create additional savings.
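A toy sketch of the core intuition follows: a single shared crew serving the pooled sites of several duty holders can travel less in total than each duty holder driving its own route. The depot location, site coordinates, and brute-force routing below are invented for illustration and are far simpler than the study's geospatial simulation model.

```python
# Compare total transit for independent vs. pooled (co-op) leak inspections.
# Coordinates and depot are invented; routing is brute force for a tiny example.
import itertools
import math

sites = {"A": [(0, 0), (10, 1)], "B": [(1, 0), (9, 1)]}  # sites per duty holder
depot = (5, 5)

def route_length(stops):
    """Round-trip distance from the depot through the given stops."""
    path = [depot, *stops, depot]
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

# Independent: each duty holder drives its own route to its own sites.
independent = sum(route_length(s) for s in sites.values())

# Co-op: one shared crew, best ordering over the pooled sites.
pooled_sites = [p for s in sites.values() for p in s]
coop = min(route_length(perm) for perm in itertools.permutations(pooled_sites))

print(f"independent total: {independent:.1f}, co-op total: {coop:.1f}")
```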
Replication Data and Code for "Incentives and Information in Methane Leak Detection and Repair". Abstract: Capturing leaked methane can be a win for both firms and the environment. However, uncertainty about leakage volumes can be a barrier inhibiting leak repair. We study an experiment at oil and gas production sites that randomized whether site operators were informed of methane leakage volumes. At sites with high baseline leakage, we estimate a negative but imprecise effect of information on endline emissions. But at sites with zero measured leakage, giving firms information about methane leakage increased emissions at endline. Our results suggest that giving firms news of low leakage disincentivizes maintenance effort, thereby increasing the likelihood of future leaks. The package includes data from the Wang et al. (2024) RCT as well as IEA data on estimated methane emissions and methane abatement costs. The package also includes code for replication.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
6,110 global export and import shipment records of leakage detectors, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
Pipelines transport natural gas (NG) in all stages between production and the end user. The NG composition, pipeline depth, and pressure vary significantly between extraction and consumption. As methane (CH4), the primary component of NG, is both explosive and a potent greenhouse gas, NG leaks from underground pipelines pose both a safety and an environmental threat. Leaks are typically found when an observer detects a CH4 enhancement while passing through the downwind above-ground NG plume. The likelihood of detecting a plume depends, in part, on the size of the plume, which is contingent on both environmental conditions and intrinsic characteristics of the leak. To investigate the effects of leak characteristics, this study uses controlled NG release experiments to observe how the above-ground plume width changes with the gas composition of the NG, the leak rate, and the depth of the subsurface emission. Results show that plume width generally decreases when heavier hydrocarbons are present...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
8,984 global export and import shipment records of earth leakage relays, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
This dataset presents the amounts of different magnesium carbonates formed under different conditions. Using batch reactor experiments and mineralogical characterization, we explored magnesite precipitation kinetics in chemically complex fluids, investigating the impacts of fluid acidity and alkalinity, NaCl, and MgO nanoparticles. The dataset was created within the SECURe project (Subsurface Evaluation of CCS and Unconventional Risks), https://www.securegeoenergy.eu/. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 764531.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
The objective of this project is to collect microbial samples from various EVA suits to determine how much microbial contamination is typically released during simulated planetary exploration activities. Data will be released to the planetary protection and science communities and to advanced EVA system designers. In the best case, we will discover that very little microbial contamination leaks from our current or prototype suit designs; in the worst case, we will identify leak paths and learn more about what affects leakage. Either way, we will have a new, flight-certified swab tool for our EVA toolbox.
NASA has a strategic knowledge gap (B5-3) regarding what life signatures leak/vent from our Extravehicular Activity (EVA) systems; this potentially impacts how we will search for evidence of life at exploration destinations. Funding will be used to fabricate and sterilize test consumables (swab tips), prepare Test Readiness Review products (such as materials compatibility and hazard assessments), certify the EVA Swab Tool, participate in test opportunities as they arise, and perform analysis on collected swabs.
https://data.gov.uk/dataset/eaebc183-17a8-4795-9bbc-36aae7397fbe/qics-paper-modelling-large-scale-co2-leakages-in-the-north-sea#licence-info
A three-dimensional hydrodynamic model with a coupled carbonate speciation sub-model is used to simulate large additions of CO2 into the North Sea, representing leakages at potential carbon sequestration sites. A range of leakage scenarios are conducted at two distinct release sites, allowing an analysis of the seasonal, inter-annual, and spatial variability of impacts on the marine ecosystem. Seasonally stratified regions are shown to be more vulnerable to CO2 release during the summer, as the added CO2 remains trapped beneath the thermocline, preventing outgassing to the atmosphere. On average, CO2 injected into the northern North Sea is shown to reside within the water column twice as long as an equivalent addition in the southern North Sea before reaching the atmosphere. Short-term leakages of 5000 tonnes of CO2 over a single day result in substantial acidification at the release sites (up to -1.92 pH units), with significant perturbations (greater than 0.1 pH units) generally confined to a 10 km radius. Long-term CO2 leakages sustained for a year may result in extensive plumes of acidified seawater, carried by major advective pathways. Whilst such scenarios could be harmful to marine biota over confined spatial scales, continued unmitigated CO2 emissions from fossil fuels are predicted to result in greater and more long-lived perturbations to the carbonate system over the next few decades. This is a publication in the QICS Special Issue of the International Journal of Greenhouse Gas Control: Jack J.C. Phelps et al., doi:10.1016/j.ijggc.2014.10.013.
According to a 2024 survey among global business and cyber leaders, nearly half of respondents highlighted the advance of adversarial capabilities, such as phishing, malware development, and deepfakes, as their greatest concern regarding the impact of generative artificial intelligence (GenAI) on cybersecurity. In addition, 22 percent of respondents were most concerned about data leaks and the exposure of personally identifiable information through GenAI. Other key concerns included software supply chain risks and the technical security of AI systems.
AI-powered malware...
With the launch of OpenAI's ChatGPT in November 2022, concerns have been rising around its possible use in cyber crime. Because the model is trained to create human-like text quickly and without spelling errors, phishing e-mails written with ChatGPT would consequently be harder to detect, for instance. In addition, there is growing concern about AI-powered malicious software, commonly known as malware, as deep learning algorithms would allow hostile actors to target specific victims and remain undetected until specific conditions are met.
...versus AI-powered cybersecurity
Risks aside, the advantages AI brings to cyber criminals can also bolster cybersecurity. In particular, generative AI-powered solutions can search through vast amounts of data to identify abnormal behavior and detect malicious activity. Looking forward, companies will have to adapt and stay up to speed so that generative AI does not end up providing an overall cyber advantage to attackers.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The dataset contains replication data for the scientific article "Widespread natural methane and oil leakage from sub-marine Arctic reservoirs" in Nature Communications. Specifically, it contains multibeam echosounder data and CTD data acquired during the CAGE 19-2 and CAGE 20-2 research cruises.
The U.S. outer continental shelf is a major source of energy for the United States. The rapid growth of oil and gas production in the Gulf of Mexico increases the risk of underwater oil spills at greater water depths and deeper drilling wells. These hydrocarbon leakages can be caused either by natural events, such as seeping from fissures in the ocean seabed, or by anthropogenic accidents, such as leaking from broken wellheads and pipelines. In order to improve safety and reduce the environmental risks of offshore oil and gas operations, the Bureau of Safety and Environmental Enforcement (BSEE) recommended the use of real-time monitoring. An early warning system for detecting, locating, and characterizing hydrocarbon leakages is essential for preventing the next oil spill as well as for detecting seafloor hydrocarbon seepage. Existing monitoring techniques have significant limitations and cannot achieve real-time monitoring. This project launches an effort to develop a functional real-time monitoring system that uses passive acoustic technologies to detect, locate, and characterize undersea hydrocarbon leakages over large areas in a cost-effective manner.
In an oil spill event, the leaked hydrocarbon is injected into seawater in large volumes at high speed. With mixed natural gases and oils, this hydrocarbon leakage creates underwater sound through two major mechanisms: shearing and turbulence from a streaming jet of oil droplets and gas bubbles, and bubble oscillation and collapse. These acoustic emissions can be recorded by hydrophones in the water column at great distances. They can be characterized and differentiated from other underwater noises through their unique frequency spectra, their evolution and transport processes, and their leak positions, and can further be used to detect and locate the leakage. With the objective of leakage detection and localization, our approach consisted of recording and modeling the acoustic signals induced by the oil spill and implementing advanced signal processing and triangulation localization techniques with a hydrophone network. The tasks of this project were:
1. Conduct a laboratory study to simulate hydrocarbon leakages and their induced sound under controlled conditions, and establish the correlation between frequency spectra and leakage properties, such as oil-jet intensities and speeds, bubble radii and distributions, and crack sizes.
2. Implement and develop acoustic bubble modeling for estimating the features and strength of the oil leakage.
3. Develop a set of advanced signal processing and triangulation algorithms for leakage detection and localization.
The experimental data were collected in a water tank in the building of the National Center for Physical Acoustics at the University of Mississippi from 2018 to 2020, and include hydrophone recordings of underwater sounds generated by oil leakage bubbles under different testing conditions, such as pressures, flow rates, jet velocities, and crack sizes, as well as movies of the oil leakages. Two types of oil leakages (a few bubbles and constant-flow bubbles) were tested to simulate oil seepage either from the seafloor or from oil well and pipeline breaches. Two types of gases were investigated (nitrogen and methane). These data were analyzed for acoustic bubble modeling, oil leakage characterization, and localization. This dataset contains the data for oil leakage source localization. Two localization algorithms were developed: a TDOA-based and a SpectraRatio-based algorithm.
The folders of the dataset are described as follows:
• the "Signals" folders contain the raw underwater sound data used for localization
• the "Results" folders contain the true and predicted oil leakage source positions
More details on this dataset can be found in the corresponding ReadMe files in each folder.
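As an illustration of the TDOA idea mentioned above, the sketch below recovers a source position from time-differences-of-arrival at a small hydrophone array via nonlinear least squares. The geometry, sound speed, and noise-free measurements are assumptions for the example, not the project's actual configuration.

```python
# TDOA-based source localization sketch: solve for the source position that
# best explains time-differences-of-arrival relative to hydrophone 0.
import numpy as np
from scipy.optimize import least_squares

C = 1500.0                                   # assumed sound speed in water, m/s
hydrophones = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
true_source = np.array([30.0, 60.0])         # ground truth for this toy example

ranges = np.linalg.norm(hydrophones - true_source, axis=1)
tdoa = (ranges - ranges[0]) / C              # noise-free "measured" TDOAs

def residuals(xy):
    """Mismatch between predicted and measured TDOAs at candidate position xy."""
    r = np.linalg.norm(hydrophones - xy, axis=1)
    return (r - r[0]) / C - tdoa

fit = least_squares(residuals, x0=np.array([50.0, 50.0]))
print("estimated source position:", fit.x)   # approaches (30, 60)
```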
Eximpedia export-import trade data lets you search trade data and find active exporters, importers, buyers, suppliers, and manufacturers from over 209 countries.