https://creativecommons.org/publicdomain/zero/1.0/
These graphs were created in R:
[graph1.gif] [graph2.gif] [graph3.gif]
Welcome to MTSamples! This website is a large collection of medical transcription reports, which have been typed out to show exactly what doctors, nurses, and other healthcare professionals say during medical visits, exams, or procedures. These reports are very useful for people who are learning how to work in medical transcription or for those who already work in this field and need examples to help them with their daily tasks. Medical transcription is an important job where people listen to recordings made by doctors and type them into written reports. The reports on MTSamples are a great way to practice or get familiar with the kind of work a transcriptionist does.
MTSamples.com is constantly updating and adding new reports. It has a wide variety of sample reports that cover many different medical specialties. For example, you can find reports related to cardiology (heart), pulmonology (lungs), orthopedics (bones and muscles), and many other fields. Each report gives a real-life example of what a doctor might say during an appointment or procedure, and how a transcriptionist would type it out. This variety makes the site helpful to those who want to learn about different medical areas, whether they are just starting out or are already experienced transcriptionists.
The samples on MTSamples.com are provided by transcriptionists and users who contribute their work for educational purposes. These reports are meant to be used as reference material, and they show what transcription should look like in real situations. However, because they are user-submitted, there might be some errors in them, and we would greatly appreciate it if anyone finds mistakes to let us know so we can correct them. If you are a transcriptionist and would like to share your own reports with the site, we would love to hear from you. The more examples we have, the better it is for everyone who uses the website for learning or reference.
We encourage you to print, share, or link to any of the reports found on MTSamples.com. If you decide to share or print the reports, we ask that you let us know and give credit to the website. This can be done by including a link to https://www.mtsamples.com or by mentioning the website in a referral note. Our goal is to make sure that everyone who uses the site can easily access useful information, while also making sure that MTSamples gets the credit for providing these valuable resources. By working together, we can create a helpful and supportive community for learning about medical transcription.
https://doi.org/10.17026/fp39-0x58
A highly prevalent and relevant situation in which adolescents have to interpret the intentions of others is when they interact with peers. We therefore successfully introduced a new paradigm to measure hostile attribution bias and emotional responses to such social interactions, and examined how it related to youth's aggressiveness. A pilot study was conducted to develop a database of auditory stimuli of positive, negative and ambiguous everyday comments to be used in the main study. The pilot study resulted in a set of social comments that varied in content (i.e. what the person says) as well as tone of voice (i.e. how the person says it). These stimuli were presented to 881 adolescents (Mage = 14.35 years; SD = 1.23; 48.1% male) in the main study. These participants' peers also reported on their aggressiveness. In general, the added negativity of content and tone drove youth's intent attributions and emotional responses to the comments. In line with the Social Information Processing model, we found more hostile attribution of intent and more negative emotional responses among aggressive youth in response to ambiguous stimuli. Aggression was also related to more hostile intent attributions when both content and tone were negative. Unlike most studies on hostile attribution bias, the aggression effects in the current study emerged for girls, but not boys. Implications of these results and future use of the experimental paradigm are discussed.
The global number of Facebook users was forecast to increase continuously between 2023 and 2027 by a total of 391 million users (+14.36 percent). After the fourth consecutive year of growth, the Facebook user base is estimated to reach 3.1 billion users, a new peak, in 2027. Notably, the number of Facebook users has increased continuously over the past years. User figures, shown here for the platform Facebook, have been estimated by taking into account company filings or press material, secondary research, app downloads and traffic data. They refer to average monthly active users over the period and count multiple accounts held by one person only once. The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations and the trade press, and are processed to generate comparable data sets (see supplementary notes under details for more information).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is not going to be an article or op-ed about Michael Jordan. Since 2009 we've been in the longest bull market in history; that's 11 years and counting. However, a few metrics like the stock market P/E, the call-to-put ratio and of course the Shiller P/E suggest a great crash is coming, somewhere between the levels of 1929 and the dot-com bubble. Mean reversion is historically inevitable, and the Fed's money-printing experiment could end in disaster for the stock market in late 2021 or 2022. You can read Jeremy Grantham's Last Dance article here. You are likely well aware of Michael Burry's predicament as well. It's easier for you just to skim through two related videos on this topic of a stock market crash. Michael Burry's warning: see this YouTube video. Jeremy Grantham's warning: see this YouTube video. Typically when there is a major event in the world, there is a crash, then a bear market, and a recovery that takes many months. In March 2020 that's not what we saw, since the Fed did some astonishing things that created a liquidity glut and the risk of a major inflation event. The pandemic produced the quickest decline of at least 30% in the history of the benchmark S&P 500, but the recovery was not correlated to anything but Fed intervention. Since the pandemic clearly isn't disappearing, and many sectors such as travel, business travel, tourism and supply chains appear significantly disrupted, the so-called economic recovery isn't so great. And there's this little problem at the heart of global capitalism today: the stock market just keeps going up. Crashes and corrections typically occur frequently in a normal market, but the Fed's liquidity and irresponsible printing of money are creating a scenario where normal market behavior isn't occurring. According to data provided by market analytics firm Yardeni Research, the benchmark index has undergone 38 declines of at least 10% since the beginning of 1950. Since March 2020 we've barely seen a down month.
September 2020 was flat-ish. The S&P 500 has more than doubled since those lows. Look at the angle of the curve: the S&P 500 was 735 at the low in 2009, so in this bull market alone it has gone up 6x in valuation. That's not a normal cycle, and it could mean we are due for an epic correction. I have to agree with the analysts who claim that the long, long bull market since 2009 has finally matured into a fully-fledged epic bubble. There is a complacency, a buy-the-dip frenzy and a general meme environment around what BigTech can do in such conditions. The combined weight of Apple, Amazon, Alphabet, Microsoft, Facebook, Nvidia and Tesla in the S&P and Nasdaq is approaching a ridiculous level. When these stocks are seen simultaneously as growth plays, value plays and companies with unbeatable moats, the entire dynamics of the stock market begin to break down. Check out FANG during the pandemic. BigTech is seen as bullet-proof; meme valuations and hysterical speculative behavior lead to even higher highs, even as 2020 offered many younger people an on-ramp into investing for the first time. Some analysts at JP Morgan are even saying that until retail investors stop charging into stocks, markets probably don't have too much to worry about. Hedge funds with payment for order flow can predict exactly how these retail investors are behaving and monetize them. PFOF might even have to be banned by the SEC. The risk-on market theoretically just keeps going up until the Fed raises interest rates, which could be in 2023! For some context, we're more than 1.4 years removed from the bear-market bottom of the coronavirus crash and haven't had even a 5% correction in nine months. This is the most over-priced the market has likely ever been. At the height of the dot-com bubble the S&P 500 was only 1,400. Today it is 4,500, not so many years later. Clearly something is not quite right if you look at history and the P/E ratios.
A market pumped with liquidity produces higher earnings with historically low interest rates; it's an environment where dangerous things can occur. In late 1997, as the S&P 500 passed its previous 1929 peak of 21x earnings, that seemed like a lot, but it's nothing compared to today. For some context, the S&P 500 Shiller P/E closed last week at 38.58, which is nearly a two-decade high. It's also well over double the average Shiller P/E of 16.84, dating back 151 years. So the stock market is likely around 2x over-valued. Try to think rationally about what this means for valuations today and your favorite stock prices; what should they be in historical terms? The S&P 500 is up 31% in the past year. It will likely hit 5,000 before a correction, given the amount of liquidity added to the system and the QE the Fed is using, which is like a huge abuse of MMT, or Modern Monetary Theory. This has also led to bubbles in the housing market, crypto and even commodities like gold, with long-term global GDP facing many headwinds in the years ahead due to a demographic shift to an ageing population and significant technological automation. So if you think that stocks or equities or ETFs are the best place to put your money in 2022, you might want to think again. The crash of the OTC and small-cap market since February 2021 has been quite an indication of what a correction looks like. According to the Motley Fool, what happens after major downturns in the market, historically speaking? In each of the previous four instances that the S&P 500's Shiller P/E shot above and sustained 30, the index lost anywhere from 20% to 89% of its value. That's what we too are due for; reversion to the mean will be realistically brutal after the Fed's hyper-extreme intervention has run its course.
Of course, what the Fed stimulus has really done is simply allowed the 1% to get a whole lot richer, to the point of wealth inequality spiraling out of control in the decades ahead, likely leading us to a dystopia in an unfair and unequal version of BigTech capitalism. This has also led to a trend of short squeezes in these tech stocks, as shown in recent years' data. Of course the Fed has to say that it's done all of these things for the people, the employment numbers and the labor market. Women in the workplace have likely been set back 15 years in social progress due to the pandemic and the Fed's response. While the 89% lost during the Great Depression would be virtually impossible today thanks to ongoing intervention from the Federal Reserve and Capitol Hill, a correction of 20% to 50% would be pretty fair and would simply return the curve to a normal trajectory as interest rates go back up eventually in the 2023 to 2025 period. It's very unlikely the market has taken Fed tapering into account (priced it in), since the euphoria of a can't-miss market just keeps pushing the markets higher. But all good things must come to an end. Earlier this month, the U.S. Bureau of Labor Statistics released inflation data from July. This report showed that the Consumer Price Index for All Urban Consumers rose 5.2% over the past 12 months. While the Fed and economists promise us this inflation is temporary, others are not so certain. As you print so much money, the money you have is worth less and certain goods cost more. Wage gains in some industries cannot be taken back; they are permanent in service sectors like restaurants, hospitality and travel that have been among the hardest hit. The pandemic has led to a paradigm shift in the future of work, and that too is not temporary. The Great Resignation means white-collar jobs will be more WFH than ever before, with a new software revolution, different transport and energy behaviors, and so forth.
Climate change alone could slow down global GDP in the 21st century. How can inflation be temporary when so many trends don't appear to be temporary? Sure, the price of lumber or used cars could be temporary, but a global chip shortage is exacerbating problems in the automobile sector. The stock market isn't even behaving like it cares about anything other than the Fed and its billions of dollars of bond purchases each month. Some central banks will start to taper around December 2021 (like the European Central Bank). However, Delta could further mutate into a variant that makes the first generation of vaccines less effective. Such a macro event could be enough to trigger the correction we've been speaking about. So stay safe, and keep your money safe. The Last Dance of the 2009 bull market could feel especially painful because we've been spoiled for so long in the markets. We can barely remember what March 2020 felt like. Some people sold their life savings simply due to scare tactics by the likes of Bill Ackman. His scare tactics on CNBC likely won him hundreds of millions as the stock market tanked. Hedge funds further gamed the Reddit and GameStop movement, orchestrating it and leading the new retail investors into meme speculation and a whole bunch of other unsavory things, like options trading at a scale we've never seen before. It's not just inflation and higher interest rates; it's how absurdly high valuations have become. Still, correlation does not imply causation. Just because inflation has picked up, it doesn't guarantee that stocks will head lower. Nevertheless, the weaker buying power associated with higher inflation can't be overlooked as a potential negative for the U.S. economy and equities. The current S&P 500 10-year P/E ratio is 38.7. This is 97% above the modern-era market average of 19.6, putting the current P/E 2.5 standard deviations above the modern-era average. This is just math, folks. History is saying the stock market is at 2x its true value.
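The overvaluation arithmetic quoted above can be checked directly. A minimal sketch; the standard deviation of roughly 7.6 is inferred here from the 2.5-sigma claim, not stated in the text:

```python
# Check the Shiller P/E arithmetic: how far is the current reading
# above its long-run average, in percent and in standard deviations?

def overvaluation(current_pe, mean_pe, std_pe):
    """Return (percent above mean, z-score) for a P/E reading."""
    pct_above = (current_pe / mean_pe - 1) * 100
    z_score = (current_pe - mean_pe) / std_pe
    return pct_above, z_score

# Figures quoted in the text: current 10-year P/E of 38.7, modern-era
# average of 19.6. A std. dev. near 7.6 reproduces the 2.5-sigma claim.
pct, z = overvaluation(38.7, 19.6, 7.6)
print(f"{pct:.0f}% above average, {z:.1f} standard deviations")
# -> 97% above average, 2.5 standard deviations
```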
So why, and who, would be all-in on the market or an asset class like crypto that is mostly speculative in nature to begin with? Study the following on a historical basis, and do your own due diligence as to the health of the markets:
- Debt-to-GDP ratio
- Call-to-put ratio
https://creativecommons.org/publicdomain/zero/1.0/
Tom and Jerry is an American animated media franchise and series of comedy short films created in 1940 by William Hanna and Joseph Barbera. Best known for its 161 theatrical short films by Metro-Goldwyn-Mayer, the series centers on the rivalry between the titular characters of a cat named Tom and a mouse named Jerry.
This is one of those famous cartoon shows that we would never have missed watching during our childhood. Now it's time to use our deep learning skills to detect our favorite characters, Tom and Jerry, in images extracted from some of the shows.
This dataset contains more than 5k images (exactly 5478 images) extracted from some of the Tom & Jerry show videos that are available online. The downloaded videos were converted into images at 1 frame per second (1 FPS).
Labeled images are separated into 4 different folders as given.
Folder - tom_and_jerry
- Subfolder tom: contains images with only 'tom'
- Subfolder jerry: contains images with only 'jerry'
- Subfolder tom_jerry_1: contains images with both 'tom' and 'jerry'
- Subfolder tom_jerry_0: contains images with neither character
There are images that can be challenging during training for image classification, as these images distort the original size, shape or color of the characters. Details of these images are given in the csv file challenges.csv. Doing error analysis on these images after model training will help us understand how to improve the score.
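The folder layout described above maps naturally onto per-character presence labels. A minimal indexing sketch, assuming the folder names given and .jpg images (the file extension is an assumption):

```python
from pathlib import Path

# Map each folder name to per-character labels: (tom_present, jerry_present)
FOLDER_LABELS = {
    "tom":         (1, 0),
    "jerry":       (0, 1),
    "tom_jerry_1": (1, 1),  # both characters
    "tom_jerry_0": (0, 0),  # neither character
}

def index_images(root):
    """Yield (path, tom, jerry) for every image under root/tom_and_jerry."""
    for folder, (tom, jerry) in FOLDER_LABELS.items():
        for img in sorted(Path(root, "tom_and_jerry", folder).glob("*.jpg")):
            yield img, tom, jerry
```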
A few examples are given below:
The total amount of data created, captured, copied, and consumed globally is forecast to increase rapidly. While it was estimated at ***** zettabytes in 2025, the forecast for 2029 stands at ***** zettabytes. Thus, global data generation will triple between 2025 and 2029. Data creation has been expanding continuously over the past decade. In 2020, the growth was higher than previously expected, caused by the increased demand due to the coronavirus (COVID-19) pandemic, as more people worked and learned from home and used home entertainment options more often.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: Wordless books are traditionally associated with children who cannot yet read. However, many of them make fragmented and dense proposals, assuming skills and prior knowledge that a young reader would hardly have. Thus, in research focused on books for children selected by the Brazilian National Program of the School Library (PNBE), we chose to study Renato Moriconi and Ilan Brenman's Bocejo. The book consists of apparently isolated scenes that, joined together, form a unique whole, dialoguing with stages in the history of humanity, from the biblical Eve to the arrival of man on the Moon, or from the act of an individual reader to interaction with the book. The lack of words that could guide the reader's understanding, the temporal gaps between scenes and the multiplicity of elements composing each picture lead to structural and thematic fractures that complicate the reception of the book by the beginning reader. The meanings of the story emerge from each picture and its articulation with the fact to which the represented character refers. The proposal of the work prioritizes the emancipatory nature of the reader; however, in the case of young readers, mediation is necessary to help children in the process of comprehension, understanding the book and the art process involved in this path of humanity.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Please note: As announced by the Minister for Immigration and Border Protection on 25 June 2017, the Department of Immigration and Border Protection (DIBP) retired the paper-based Outgoing Passenger Cards (OPC) from 1 July 2017. The information previously gathered via paper-based outgoing passenger cards is now collated from existing government data and will continue to be provided to users. Further information can be accessed here: http://www.minister.border.gov.au/peterdutton/Pages/removal-of-the-outgoing-passenger-card-jun17.aspx.
Due to the retirement of the OPC, the Australian Bureau of Statistics (ABS) undertook a review of the OAD data based on a new methodology. Further information on this revised methodology is available at: http://www.abs.gov.au/AUSSTATS/abs@.nsf/Previousproducts/3401.0Appendix2Jul%202017?opendocument&tabname=Notes&prodno=3401.0&issue=Jul%202017&num=&view=
A sampling methodology has been applied to this dataset. This means that the data will not exactly replicate data released by the ABS, but the differences should be negligible.
Due to ‘Return to Source’ limitations, data supplied to the ABS from non-DIBP sources are also excluded.
Overseas Arrivals and Departures (OAD) data refers to the arrival and departure of Australian residents or overseas visitors, through Australian airports and sea ports, which have been recorded on incoming or outgoing passenger cards. OAD data describes the number of movements of travellers rather than the number of travellers. That is, multiple movements of individual persons during a given reference period are all counted. OAD data will differ from data derived from other sources, such as Migration Program Outcomes, Settlement Database or Visa Grant information. Travellers granted a visa in one year may not arrive until the following year, or may not travel to Australia at all. Some visas permit multiple entries to Australia, so travellers may enter Australia more than once on a visa. Settler Arrivals includes New Zealand citizens and other non-program settlers not included on the Settlement Database. The Settlement Database includes onshore processed grants not included in Settler Arrivals.
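The movements-versus-travellers distinction described above can be illustrated with a toy example (the records below are made up for illustration, not OAD data):

```python
# OAD counts movements, not people: every border crossing is counted,
# so one person crossing three times contributes three movements.
# Each toy record is (traveller_id, direction).
movements = [
    ("A", "arrival"), ("A", "departure"),  # same person, two movements
    ("A", "arrival"),
    ("B", "departure"),
]

total_movements = len(movements)                       # every crossing counts
unique_travellers = len({pid for pid, _ in movements}) # each person once

print(total_movements, unique_travellers)  # 4 movements by 2 travellers
```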
These de-identified statistics are periodically checked for privacy and other compliance requirements. The statistics were temporarily removed in March 2024 in response to a question about privacy within the emerging technological environment. Following a thorough review and risk assessment, the Department of Home Affairs has republished the dataset.
https://creativecommons.org/publicdomain/zero/1.0/
The task at hand in the Show US the Data competition is to train a model that searches for dataset names within scientific articles. To get us started, a set of articles in JSON format and some labels of the names of datasets mentioned within those articles are provided. However, these provided labels are incomplete, so it is up to us to discover more dataset mentions within the training data.
If you simply train BERT on the training data, it will perform worse on the public leaderboard than just literally matching all the dataset labels given in the training data. One possible reason for this bad performance is that BERT is effectively trained on wrong labels. I will explain why. Since the training data are not exhaustively labelled, BERT will be given many samples (i.e. sentences) that contain a dataset name but where the so-called 'ground truth' labels are wrong: the sentence contains a dataset, but the training labels say the opposite. This might be one of the reasons for BERT's disappointing performance (next to the lack of effort put into other improvements).
So it is a good idea to fix the training data. If you do not want to do this manually, you will have to discover the datasets with some model... oh wait, that is the goal of this competition. An easier 'solution' is to search for already known dataset names that you retrieve externally. Such lists are out there, for example the bigger_govt_dataset_list, published by Ken Miller @mlconsult here on Kaggle.
This is a very long list (23,652 unique values), and most of these labels do not occur in the articles of the training data. To speed up your searching, I have condensed this list to all labels with more than 1 occurrence in the training data (207 hits). Furthermore, I have also manually cleaned the list by removing some labels that, to me, seem far too general to count as a dataset. This results in 93 labels.
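The condensing step described here can be sketched as a simple case-insensitive search; the function names and toy data are illustrative only, not the code used to produce the files:

```python
import re

def count_label_hits(labels, articles):
    """Case-insensitively count how many articles mention each label."""
    hits = {}
    for label in labels:
        pattern = re.compile(re.escape(label), flags=re.IGNORECASE)
        hits[label] = sum(1 for text in articles if pattern.search(text))
    return hits

def condense(labels, articles, min_hits=2):
    """Keep only labels with more than one hit, as for ExtraLabels.txt."""
    return {lab: n for lab, n in count_label_hits(labels, articles).items()
            if n >= min_hits}
```

Note this counts articles containing a label rather than total occurrences; whichever definition of "hit" was used for the published files, the filtering logic is the same.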
Columns (both files):
1. Label: The label as it was found in the text, no capital letters.
2. Hits: The number of hits/occurrences/results, or in other words, how often this label was found in the training articles of 'Show US the Data'.
ExtraLabels.txt: All labels from the bigger_govt_dataset_list that occur in the training articles from 'Show US the Data' more than 1 time.
ExtraLabelsCleaned.txt: Manually filtered, and hence shorter, version of ExtraLabels.txt. Conditions for a label to be removed are:
- Label is obviously not a training dataset (e.g. 'individual', 'cars' are filtered out)
- Google does not show that the label is the title of a dataset (e.g. 'beginning postsecondary students' is left in, because googling this term finds datasets)
- My personal opinion on dataset-ishness. So compare the original and cleaned file if you do not trust my opinion.
Thanks to Ken Miller @mlconsult for publishing the bigger_govt_dataset_list. Furthermore, thanks to my teammate Frederike Elsmann @frederikeelsmann for finding the dataset above.
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
The LIDAR Composite DTM (Digital Terrain Model) is a raster elevation model covering ~99% of England at 1m spatial resolution. The DTM is produced from the last or only laser pulse returned to the sensor. We remove surface objects from the Digital Surface Model (DSM), using bespoke algorithms and manual editing of the data, to produce a model of the bare ground surface.
Produced by the Environment Agency in 2022, the DTM is derived from a combination of our Time Stamped archive and National LIDAR Programme surveys, which have been merged and re-sampled to give the best possible coverage. Where repeat surveys have been undertaken the newest, best resolution data is used. Where data was resampled a bilinear interpolation was used before being merged.
The 2022 LIDAR Composite contains surveys undertaken between 6th June 2000 and 2nd April 2022. Please refer to the metadata index catalogues, which show, for any location, which survey was used in the production of the LIDAR composite.
The data is available to download as GeoTiff rasters in 5km tiles aligned to the OS National Grid. The data is presented in metres, referenced to Ordnance Survey Newlyn and using the OSTN15 transformation method. All individual LIDAR surveys going into the production of the composite had a vertical accuracy of +/-15cm RMSE.
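For orientation, grid-aligned 5km tiling means the tile containing a point can be found by flooring its easting and northing to the nearest 5,000m. A small sketch of the alignment arithmetic only; it does not reproduce the actual tile naming scheme:

```python
TILE_SIZE_M = 5_000  # 5 km tiles aligned to the grid

def tile_origin(easting, northing, size=TILE_SIZE_M):
    """South-west corner of the grid-aligned tile containing a point."""
    return (easting // size * size, northing // size * size)

print(tile_origin(432_176, 89_541))  # -> (430000, 85000)
```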
As part of the attempt to understand the linguistic origin and cognitive nature of grammatical gender, we designed six psycholinguistic experiments for our language sample from Vanuatu (Merei, Lewo, Vatlongos, North Ambrym) and New Caledonia (Nêlêmwa, Iaai). Each language differs in its number of classifiers, and in whether nouns can freely occur with different classifiers or are restricted to just one classifier (similar to grammatical gender).
- Free-listing: Participants heard a possessive classifier and listed associated nouns. This revealed the different semantic domains of classifiers and the salient nouns associated with each classifier, and showed whether participants listed the same noun with different classifiers.
- Card-sorting: Participants free-sorted sixty images, followed by a structured sort according to which classifier they used with each picture. We compared whether similar piles were made across sorting tasks, to reveal whether the linguistic classification system provides a structure for general cognition.
- Video-vignettes: Participants described 24 video clips showing different interactions between an actor and their possession, evoking a classifier. This tested both typical and atypical interactions to see if the same or different classifiers were used.
- Possessive-labelling: Participants heard 140 nouns in their language and responded by saying the item belonged to them, which meant using a classifier. This measured inter-speaker variation in the use of classifiers for particular items, as well as reaction times for different possessions.
- Storyboards: Eight four-picture storyboards were presented to participants. We recorded participant responses, uncovering whether the same classifier was used in consecutive parts of the larger story and whether the classifiers were used anaphorically.
- Eye-tracking: Eight line-drawn pictures were combined in a paired-preference design. An eye tracker recorded fixation times. Participants heard the auditory cue of a classifier before being presented with a pair of images. This provided objective measures of automatic processing to identify patterns in attention.
This data shows how consumers decide whether to remain loyal to a telecom provider or switch to a competitor, and it does so by measuring the factors that most influence their decisions. It captures the relative weight that consumers place on price, network speed, customer service, roaming quality, and bundled content, then produces an overall switching likelihood index that reflects how close a consumer is to making a change. Because this is consumer-reported data, it goes beyond what usage logs or network statistics can tell you. It does not just track what consumers do after they switch; it captures what they are thinking about before they switch, which is the critical lead indicator for churn.
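As a rough illustration of how such an index could combine factor weights, here is a sketch; the formula, factor names and numbers are assumptions for illustration, not the provider's actual methodology:

```python
def switching_index(weights, dissatisfaction):
    """Weighted average of per-factor dissatisfaction scores in [0, 1].

    Both arguments map factor name -> value; weights are normalised so
    the resulting index also stays in [0, 1].
    """
    total = sum(weights.values())
    return sum(weights[f] * dissatisfaction[f] for f in weights) / total

# Hypothetical consumer: price-sensitive, mostly happy otherwise.
factors = {"price": 0.4, "network_speed": 0.25, "customer_service": 0.15,
           "roaming": 0.1, "bundled_content": 0.1}
scores = {"price": 0.8, "network_speed": 0.3, "customer_service": 0.5,
          "roaming": 0.2, "bundled_content": 0.1}
print(round(switching_index(factors, scores), 3))  # -> 0.5
```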
The key strength of this data is that it is not limited to one market or one provider. Rwazi is able to produce this type of consumer sentiment data in any country, for any operator, and at any frequency. It is as relevant for a prepaid-heavy market in Africa as it is for a postpaid market in Europe, as valuable for emerging economies as for mature telecom systems. This global reach means that the switching sentiment map is not just a dataset; it is a scalable framework for understanding how consumers in any market weigh their options. It is representative of the type of consumer-level insight that is possible when zero-party data is collected directly from the source.
For telecom operators themselves, this data represents a competitive edge in churn management. Traditional retention strategies rely on waiting for consumers to show signs of disengagement—fewer top-ups, declining usage, missed payments—before intervention. By that point, churn is often inevitable. With switching likelihood data, operators can see where churn risk is building long before usage drops. If consumers in a given city report that price is the dominant driver of switching, and their switching likelihood index is rising, an operator can preemptively deploy promotions, adjust packages, or create targeted campaigns that blunt the risk. If, on the other hand, network speed emerges as the primary trigger, the operator can accelerate investment in coverage or emphasize speed improvements in advertising. This data gives operators the ability to align intervention with the factor that matters most to consumers, not with generic churn-prevention tactics that often miss the mark.
For marketers, the value extends beyond telecom. Although the dataset is framed around provider switching, the underlying mechanics—capturing consumer sentiment on drivers of loyalty and risk of defection—are industry-agnostic. In financial services, consumers weigh fees, digital experience, customer service, and product bundles in similar ways before deciding whether to switch banks. In insurance, factors such as premium cost, claims service, and add-on benefits play the same role. Even in retail, consumers weigh price, product availability, service quality, and loyalty rewards when deciding whether to keep shopping at one store or move to another. The switching sentiment framework that Rwazi delivers is transferable, allowing any industry that cares about retention to borrow from the same approach.
From an investor perspective, the ability to quantify switching likelihood at the consumer level offers unique foresight. Investors often evaluate telecom companies based on subscriber growth, ARPU (average revenue per user), and churn rates, all of which are lagging indicators. This data provides a leading indicator of churn by showing how consumers are leaning before they actually make a change. If investors see that a provider’s consumers score high on price sensitivity and high on switching likelihood, they can anticipate downward pressure on ARPU and rising churn in the next quarters. Similarly, if bundled content importance is rising across markets, investors can see that providers who build strong partnerships in entertainment, gaming, or streaming may be better positioned to defend market share.
For regulators and policymakers, this type of data creates transparency into consumer welfare and market competitiveness. If consumers consistently report that price is the overwhelming driver of switching, it may indicate insufficient competition on service quality. If, conversely, service quality and roaming are rising in importance, it could reflect progress in network buildouts or the success of roaming agreements. By tracking switching sentiment across time, regulators can measure whether policy interventions are changing consumer perceptions and whether markets are becoming more balanced.
This data also has applications for technology providers and equipment vendors. If network speed consistently emerges as the top switching driver in multiple markets, it signals a readiness for investment in 5G or other high-performance infrastructure. Vendors can use these insights to guide go-to-market strategies, ensuring that their pitches to operators are backed by evidence ...
This data includes responses to Ground Truth Solutions' perception survey conducted in October 2019 with 1511 refugees in Uganda. Both South Sudanese and Congolese refugees who have received aid and support from humanitarian organisations in the last 12 months are included.
Surveys were conducted in Adjumani (Nyumanzi, Baratuku, Elema), Bidibidi (Zone 1 and Zone 3), Imvepi (Zone I and Zone II), Kiryandongo (Ranch 1 and Ranch 37), Palorinya (Belemaling, Chinyi, Morobi), Rhino (Zone 2 – Omugo, Zone 3 - Ocea), Kyaka II (Byabakora, Kakoni, Mukondo), Kyangwali (Kirokole, Maratatu A, Maratatu B), Nakivale (Base Camp), and Rwamwanja (Base Camp, Kaihora, Nkoma).
Overall, the refugees surveyed view their relations with Ugandan locals and aid workers positively, saying they feel welcome in Uganda and treated with respect by humanitarian workers. Building on this positive relationship, communication between aid providers and refugees could be more open and robust. Currently, just over half of the refugees interviewed say they are able to provide feedback to humanitarian staff, and only a minority is aware of what assistance they are eligible to receive. Around half of the respondents feel that aid is unfairly distributed.
Refugees consider the aid received insufficient to meet their most important needs, so it is perhaps not surprising that they are also pessimistic about achieving self-reliance. Less than a quarter feel that their life prospects in Uganda are improving. While a clear majority points to the need for livelihood opportunities to strengthen their sense of self-reliance, three-quarters of respondents say they lack access to such opportunities.
Almost everyone in our sample has been allocated land, and many consider it too small or not fertile enough, which is reflected in the high percentage of people (79%) who say they are dissatisfied with the land they have received. Refugees surveyed would appreciate more support from humanitarian actors when it comes to making decisions about returning to their countries of origin. Similarly, internal movement within Uganda and opportunities to migrate to a new country are areas in which refugees say they lack guidance from humanitarian agencies or other actors.
Individuals and households
Sample survey data [ssd]
This survey is the third round of questions Ground Truth Solutions has asked in Uganda; the first round took place in 2017 and the second in 2018. As in previous rounds, respondents to the current round of questions have been selected randomly, but the respondents themselves are different from those in previous rounds. When designing the sampling strategy for this survey, we used the most recent figures for populations of refugees from the UNHCR refugee portal. Based on this data, we decided to focus on South Sudanese and Congolese refugees, as they made up 92% of all the refugees in Uganda at the time. Refugees from Burundi, Somalia, Rwanda, Eritrea, Sudan, and Ethiopia each made up 0-3% of the overall refugee population and were excluded from this study. This is not to say that the perspectives of more marginal groups are not important, but rather that gathering these perspectives was simply beyond the scope of our research in view of the geographical and time constraints involved. In terms of the locations selected, we decided to include Adjumani, Bidibidi, Imvepi, Kiryandongo, Kyaka II, Kyangwali, Nakivale, Palorinya, Rhino, and Rwamwanja (and to exclude Kampala, Lobule, Oruchinga, and Palabek), as over 90% of South Sudanese and Congolese refugees reside in these refugee settlements, according to UNHCR's most recent figures.
The actual sample size achieved was 1,511 participants from 10 refugee settlements across Uganda, and the sample size in each settlement was proportional to the population size of the targeted communities within any given settlement. Using a confidence level of 95%, this sample size affords an expected margin of error of 3%.
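As a rough check (assuming simple random sampling; the reported 3% presumably also allows for clustering or a design effect, which is an assumption here), the worst-case margin of error for a proportion can be computed as:

```python
import math

def margin_of_error(n, z=1.96, p=0.5, design_effect=1.0):
    """Worst-case (p = 0.5) margin of error for an estimated proportion.

    design_effect > 1 would account for cluster sampling; its value for
    this survey is not stated, so 1.0 (simple random sampling) is assumed.
    """
    return z * math.sqrt(design_effect * p * (1 - p) / n)

print(round(margin_of_error(1511), 4))  # ~0.0252 under simple random sampling
```

With n = 1,511 the simple-random figure is about 2.5%; a modest design effect from the settlement-level clustering would bring it up toward the reported 3%.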
Ground Truth Solutions co-led enumerator training and supervised data collection on the ground. Within each of the 10 selected settlements, we chose particular zones from which to collect data, and within these zones, we selected smaller village/cluster units. In selecting the zones, we grouped them into two or three tiers, depending on the population size within the given zones of the camp, and asked the data collection partner to select one zone from each tier in order to capture responses from differently sized areas. Within the zones, a GTS supervisor, in consultation with local leaders and actors on the ground, selected the villages/clusters based on several factors, such as when they were established, their distance from central points, and their population size.
Face-to-face [f2f]
Survey questions were developed to help understand refugees’ perceptions of the aid they receive, their relationship with humanitarian workers and the host community, and their future prospects. For the purpose of comparing this data with previous rounds, the questions in this round are phrased similarly to those in rounds one and two wherever possible. We consulted local actors and organisations in Uganda for feedback and input during the survey question design phase. Draft questions were also presented to UNHCR, the Assessment Technical Working Group (ATWG), the Uganda Bureau of Statistics (UBOS), and the Office of the Prime Minister. Additional questions around voluntary repatriation, migration to a different country, and moving within Uganda were introduced this year in order to cover voluntary repatriation as the fifth pillar of the Office of the Prime Minister’s Comprehensive Refugee Response Framework. The team tested all the questions and translations with refugees before rolling out the survey.
License: https://fred.stlouisfed.org/legal/#copyright-public-domain
View economic output, reported as the nominal value of all new goods and services produced by labor and property located in the U.S.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Description
This dataset consists of Electroencephalography (EEG) data recorded from 15 healthy subjects with a 64-channel EEG headset during spoken and imagined speech interaction with a simulated robot.
Citation
The dataset recording and study setup are described in detail in the following publication:
Rekrut, M., Selim, A. M., & Krüger, A. (2022, October). Improving Silent Speech BCI Training Procedures Through Transfer from Overt to Silent Speech. In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 2650-2656). IEEE.
If you use this dataset, please consider citing this work.
Study Design
Participants were seated in a chair and steered the simulated robot through a maze on a screen in a game-like setup. They were presented with a bird's-eye view of the robot's surroundings, with the robot in the middle. Participants had to decide on its next step and interacted via overt speech in one part of the study and via imagined speech in the second part. The interaction consisted of moving the robot in 3 different directions, corresponding to the command words "left", "right" and "up", and of picking up screws and pushing boxes out of the way with the words "pick" and "push". Whenever the user had decided on the next command, they could press the spacebar to indicate the desire to interact. After the spacebar was pressed, the screen turned black for 2 seconds to give the participant time to prepare the input. After the 2 seconds, a fixation cross appeared, indicating that the participant should start speaking or producing imagined speech of the desired command, depending on the current condition. After 2 seconds, the fixation cross disappeared, and the view switched back to the robot with its updated position.
Our participants were advised to speak the word out loud once during the overt condition and, in the imagined speech part, to repeat the word once silently in their head, just like reading it to themselves, without any movement of the articulatory muscles. The input of the user did not have any impact on the system's output; the robot always performed the correct action, a fact our participants were informed about. These requirements were made in order to minimize stress, confusion, or other mental states and to prevent impacts on the EEG recording.
The game was split up into 4 parts to allow sufficient breaks in between each session for the participant to rest and prevent inducing too much cognitive load. Furthermore, those breaks were used to check the impedances of the EEG headset. Each participant started with a block of overt speech, followed by a silent speech part, continued with overt speech and did a final block of silent speech. This shift was chosen mainly to keep the participants attentive and provide some sort of variety over the duration of the experiment but also to prevent the blockwise recording of the two paradigms.
We recorded 80 repetitions per word and paradigm, meaning for the 5 words, 400 imagined and 400 spoken repetitions per participant, resulting in 800 repetitions overall. Furthermore, we needed to integrate breaks into the experiment, as doing all repetitions in one session would, in the best case, take around 70 minutes (5-6 sec per task times 800 tasks), far too long to remain focused. Therefore, we decided to split up the task into levels of 25 interactions, including 5 repetitions of each word in a random order without repeating a word directly. For our 800 repetitions, this meant that we had to create 32 unique levels, each with a random order of the 5 different commands and 5 repetitions of each word. Those 32 unique levels were then split into 4 parts, two for imagined and two for overt repetitions. The order of the parts during the experiment was overt, imagined, overt, imagined, again to prevent recording the data per paradigm blockwise, which could accidentally result in classifying arbitrary brain states rather than cognitive processes. Additionally, a tutorial level was created to let the participants practice the interaction and become familiar with the task, so that they would feel comfortable during the experiment.
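The level construction described above (25 trials per level, 5 repetitions of each of the 5 command words, no word appearing twice in a row) can be sketched with simple rejection sampling; the function below is illustrative, not the authors' actual generator:

```python
import random

WORDS = ["left", "right", "up", "pick", "push"]

def make_level(rng):
    """Return 25 trials: 5 repetitions of each word, shuffled so that
    no word is directly repeated (rejection sampling over shuffles)."""
    while True:
        trials = WORDS * 5
        rng.shuffle(trials)
        if all(a != b for a, b in zip(trials, trials[1:])):
            return trials

rng = random.Random(0)  # seed for reproducibility
levels = [make_level(rng) for _ in range(32)]  # 32 unique levels, as in the study
```

Rejection sampling is adequate here because a large fraction of random shuffles of 5×5 items already satisfy the no-direct-repeat constraint, so the loop terminates quickly.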
Subjects
We conducted the study with 15 healthy subjects, 11 male and 4 female, with an average age of 26.8 years, all with normal or corrected-to-normal vision and right-handed. All subjects were non-native English speakers but fluent and experienced with the language, as our command words were selected to be English. Each subject was introduced to the task, and informed consent was obtained from all subjects for the scientific use of the recorded data. The study was approved by the ethical review board of the Faculty of Mathematics and Computer Science at Saarland University.
Recording
The data was acquired in a dimly lit room with minimized distractions such as external sound and mobile devices. The voluntary participants were asked to sit in a comfortable chair to prevent unnecessary muscle movements and reduce noise and artefacts in the EEG, which could emerge from mental stress, unrelated sensory input, physiological motor activity, and electrical interference. EEG signals were recorded using a wireless 64-channel electroencephalograph system, namely the Brain Products LiveAmp 64. The sampling rate was set to 500 Hz. The 10-20 International System of electrode placement was used to cover the whole scalp, allowing spatial information to be captured effectively from the brain recordings. The robot game was compiled and executed on the same Windows PC as the recording software of the EEG headset to allow synchronization of the data and the events recorded in the game, e.g. keyboard presses or the fixation cross.
Data format
Data was recorded in one single file in .fif format, including the breaks between sessions. This format contains a list of events which looks as follows:
{"Event_Dictionary": {"Empty": 1, "EndOfEvent": 2, "EndOfLevel": 3, "EndOfParadigm": 4, "space_bar": 5, "Overt_Up": 11, "Overt_Left": 12, "Overt_Right": 13, "Overt_Pick": 14, "Overt_Push": 15, "Silent_Up": 21, "Silent_Left": 22, "Silent_Right": 23, "Silent_Pick": 24, "Silent_Push": 25}}
These event names can be used to extract epochs from the continuous raw data stream and the desired event type in the fif file, e.g. from an overtly spoken "Up" with the "Overt_Up" event or an imagined "Pick" with "Silent_Pick".
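As a minimal sketch (only the filtering logic runs here; the MNE-Python calls in the comments and the file name are assumptions), the documented event dictionary can be turned into an `event_id` mapping for one paradigm:

```python
import json

# Event dictionary exactly as documented for this dataset
raw_events = json.loads(
    '{"Event_Dictionary": {"Empty": 1, "EndOfEvent": 2, "EndOfLevel": 3, '
    '"EndOfParadigm": 4, "space_bar": 5, "Overt_Up": 11, "Overt_Left": 12, '
    '"Overt_Right": 13, "Overt_Pick": 14, "Overt_Push": 15, "Silent_Up": 21, '
    '"Silent_Left": 22, "Silent_Right": 23, "Silent_Pick": 24, "Silent_Push": 25}}'
)["Event_Dictionary"]

# Keep only the imagined-speech command events
silent_ids = {name: code for name, code in raw_events.items()
              if name.startswith("Silent_")}

# With MNE-Python, this mapping could then drive epoch extraction, e.g.:
#   raw = mne.io.read_raw_fif("recording_raw.fif", preload=True)  # hypothetical file name
#   events, _ = mne.events_from_annotations(raw)
#   epochs = mne.Epochs(raw, events, event_id=silent_ids, tmin=0.0, tmax=2.0)
print(silent_ids)
```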
An example of how to extract epochs, as well as the full data analysis from our work submitted at the SMC conference, can be found in the git repository provided below.
License: https://fred.stlouisfed.org/legal/#copyright-citation-required
Graph and download economic data for Housing Inventory: Active Listing Count in the United States (ACTLISCOUUS) from Jul 2016 to Oct 2025 about active listing, listing, and USA.
The 1996 Papua New Guinea household survey is designed to measure the living standards of a random sample of PNG households. As well as looking at the purchases, own-production, gift giving/receiving and sales activities of households over a short period (usually 14 days), the survey also collects information on education, health, nutrition, housing conditions and agricultural activities. The survey also collects information on community level access to services for education, health, transport and communication, and on the price levels in each community so that the cost of living can be measured.
There are many uses of the data that the survey collects, but one main aim is for the results to help government, aid agencies and donors have a better picture of living conditions in all areas of PNG so that they can develop policies and projects that help to alleviate poverty. In addition, the survey will provide a socio-economic profile of Papua New Guinea, describing the access that the population has to agricultural, educational, health and transportation services, their participation in various economic activities, and household consumption patterns.
The survey is nationwide and the same questionnaire is being used in all parts of the country, including the urban areas. This fact can be pointed out if households find that some of the questions are irrelevant for their own living circumstances: there are at least some Papua New Guinean households for which the questions will be relevant and it is only by asking everyone the same questions that living standards can be compared.
The survey covers all provinces except North Solomons.
Sample survey data [ssd]
The Household Listing Form and Selection of the Sample: Listing of households is the first job to be done after the team has settled in and completed the introductions to the community. Listing is best done by the whole team working together. This way they all get to know the community and its layout. However, if the census unit is too large this wastes too much time. So before beginning, ask how many households there are, very roughly, in the census unit (noting that teams are supplied with the number of households that were there in the 1990 census). If the answer is 80 or more, divide the team into two and have each half-team work on one sector of the community/village. See the section below on what to do when the listing work is divided up.
If the census unit is a "line-up point" that does not correspond to any single village or community, the number of households will often exceed 200, and frequently they are also quite dispersed. In this case it is not practical to attempt to list the whole census unit, so a decision is made in advance to split the census unit into smaller areas (perhaps groupings of clans). First, a local informant must communicate the boundaries of the census unit and of any natural or administrative sub-units within the larger census unit (such as hamlets, or canyons/valleys). The sub-units should be big enough to allow for the selection of a set of households (about 30 or more), but should not be so large that excessive transport time will be needed each day just to find the households. Once the sub-unit is defined, its boundaries should be clearly described. Then one of the smaller units is randomly selected and the procedures outlined above are followed to complete the listing. Note: only one of the sub-units is listed, its sample chosen, and interviews undertaken.
The most important thing in the listing is to be sure that you list all the households and only the households belonging to the named village or census unit (or subset of the census unit if it is a line-up point). In rural areas, explain to village leaders at the beginning: "We have to write down all the households belonging to (Name) village." In case of doubt, always ask: "Does this household belong to (Name) village?" In the towns, the selected area is shown on a map. Check that the address where you are listing is within the same area shown.
Also explain: "We only write down the name of the head of household. When we have the list of all the households, we will select 12 by chance, for interview."
Procedure for Listing: The listing team walks around in every part of the village, accompanied by a guide who is a member of the village. If possible, find a person who conducted the 1990 Census in this community or someone with similar knowledge of the community and ask them to be your guide. Make sure you go to all parts of the village, including outlying hamlets. In hamlets, or in any place far from the centre, always check: "Do these people belong to (Name) village?"
In every part of the village, ask the guide about every house: "Who lives in this house? What is the name of the household head?" Note that you do not have to visit every household. At best, you just need to see each house, but you do not need to go inside it or talk to anyone who lives there. Even the rule of seeing each house may be relaxed if there are far-away households for which good information can be provided by the guide.
Enter the names of household heads in the lines of the listing form. One line is used for each household. As the lines are numbered, the procedure gives a number to each household. When you come to the last house, check with the guide: "Are you sure we have seen all the houses in the village?"
NOTE: It does not matter in what order you list the households as long as they are all listed. After the listing is complete, check that all lines are numbered consecutively with no gaps, from start to finish. The number on the last line should be exactly the number of households listed.
Note: If the list is long (say more than 30 households), interviewers may encounter difficulties when looking for their selected households. One useful way to avoid this is to show approximately where in the list certain landmarks come. This can be done by writing in the margin CHURCH or STORE or whatever. You can also indicate where the lister started in a hamlet, for example.
Sample Selection The sampling work is done by the supervisor. The first steps are done at the foot of the first page of the listing form. The steps to be taken are as follows:
MR: multiply M by R and round to the nearest whole number. (If decimal 0.5, round up).
MR gives the 1st selection. (Exception: If MR=0, L gives the first selection.) Enter S against this line in the selection column of the list.
Count down the list, beginning after the 1st selection, a distance of L lines to get the 2nd selection, then another L to get the 3rd, etc. When you come to the bottom of the list, jump back to the top as if the list were circular. Stop after the 15th selection. Mark the 13th, 14th, and 15th selections "RES" (for reserve). Mark the 1st - 12th selection "S" (for selection).
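The steps above amount to a circular systematic sample. The excerpt does not fully define M, R, and L, so the sketch below assumes M is the exact sampling interval (households listed divided by 15), R a random decimal, and L the interval rounded to whole lines:

```python
import random

def _half_up(x):
    """Round half up, as the manual specifies ('If decimal 0.5, round up')."""
    return int(x + 0.5)

def select_households(n_listed, seed=None):
    """Circular systematic selection of 15 households: 12 main ("S")
    plus 3 reserves ("RES"). M, R, and L are assumed definitions,
    inferred from the manual excerpt rather than stated in it."""
    rng = random.Random(seed)
    M = n_listed / 15                    # assumed: exact sampling interval
    L = max(1, _half_up(M))              # assumed: interval in whole lines
    first = _half_up(M * rng.random())   # MR, rounded half up
    if first == 0:
        first = L                        # exception: L gives the 1st selection
    # Count down L lines at a time, wrapping to the top as if circular
    picks = [((first - 1 + i * L) % n_listed) + 1 for i in range(15)]
    return picks[:12], picks[12:]

main, reserve = select_households(43, seed=1)
```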
Face-to-face [f2f]
The 1996 Papua New Guinea Household Survey questionnaire consists of three basic parts:
Household questionnaire first visit: asks a series of questions about the household, discovering who lives there, what they do, their characteristics, where they live, and a little about what kinds of things they consume. This questionnaire consists of the following sections. - Section 1. Household Roster - Section 2. Education - Section 3. Income Sources - Section 4. Health - Section 5. Foods in the Diet - Section 6. Housing Conditions - Section 7. Agricultural Assets, Inputs and Services - Section 8. Anthropometrics - Section 9. Household Stocks
Consumption recall (second visit questionnaire): is focused primarily on assessing the household's expenditure, gift giving and receiving, production, and level of wealth. The first and second visits together provide information that can determine the household's level of consumption, nutrition, degree of food security, and the ways in which it organizes its income-earning activities. This questionnaire consists of the following sections. - Section 1. Purchases of Food - Section 2. Other Frequent Purchases - Section 3. Own-production of Food - Section 4. Gifts Received: Food and Frequent Purchases (START) - Section 5. Annual Expenses and Gifts - Section 6. Inventory of Durable Goods - Section 7. Inward Transfers of Money - Section 8. Outward Transfers of Money - Section 9. Prices - Section 10. Repeat of Anthropometric Measurements - Section 11. Quality of Life
Community Questionnaire: which is completed by the interview team in consultation with community leaders. This questionnaire also includes market price surveys that are carried out by the team when they are working in the community. Associated with this is a listing of all households in the community, which has to be done prior to the selection of the 12 households. This questionnaire consists of the following sections. - Section A. Listing of Community Assets - Section B. Education - Section C. Health - Section D. Town or Government Station - Section E: Transport and Communications - Section F. Prices - Section G. Changes in Economic Activity, Infrastructure, and Services
Background
Blood leukocytes constitute two interchangeable sub-populations, the marginated and circulating pools. These two sub-compartments are found in normal conditions and are potentially affected by non-normal situations, either pathological or physiological. The dynamics between the compartments is governed by rate constants of margination (M) and return to circulation (R). Therefore, estimates of M and R may prove of great importance to a deeper understanding of many conditions. However, there has been a lack of formalism in order to approach such estimates. The few attempts to furnish an estimation of M and R neither rely on clearly stated models that precisely say which rate constant is under estimation nor recognize which factors may influence the estimation.
Results
The return of the blood pools to a steady-state value after a perturbation (e.g., epinephrine injection) was modeled by a second-order differential equation. This equation has two eigenvalues, related to a fast and a slow component of the dynamics. The model makes it possible to identify that these components are partitioned into three constants: R, M and SB, where SB is a time-invariant exit-to-tissues rate constant. Three examples of the computations are worked out, and a tentative estimate of R for mouse monocytes is presented.
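One consistent reading of such a model (the pool and rate-constant assignments here are an assumption; the paper gives the exact formulation) is a two-compartment linear system, with circulating pool $C$, marginated pool $P$, margination $M$ ($C \to P$), return $R$ ($P \to C$), and exit to tissues $S_B$ assumed to act on $P$:

```latex
\begin{aligned}
\frac{dC}{dt} &= -M\,C + R\,P \\
\frac{dP}{dt} &= \phantom{-}M\,C - (R + S_B)\,P
\end{aligned}
\qquad
\lambda_{\pm} = \frac{-(M + R + S_B) \pm \sqrt{(M + R + S_B)^2 - 4\,M\,S_B}}{2}
```

Eliminating $P$ yields a second-order equation whose two eigenvalues $\lambda_{\pm}$ (the roots of the characteristic polynomial above, with trace $-(M+R+S_B)$ and determinant $M\,S_B$) correspond to the slow ($\lambda_+$) and fast ($\lambda_-$) components, and are partitioned among the three constants $R$, $M$, and $S_B$ exactly as the abstract describes.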
Conclusions
This study establishes a firm theoretical basis for the estimation of the rate constants of the dynamics between the blood sub-compartments of white cells. It shows, for the first time, that the estimation must also take into account the exit to tissues rate constant, SB.
The number of Reddit users in the United States was forecast to increase continuously between 2024 and 2028 by a total of 10.3 million users (+5.21 percent). After a ninth consecutive year of growth, the Reddit user base is estimated to reach 208.12 million users, a new peak, in 2028. Notably, the number of Reddit users has increased continuously over the past years.
User figures, shown here for the platform Reddit, have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period and count multiple accounts held by one person only once. Reddit users encompass both users who are logged in and those who are not.
The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable data sets (see supplementary notes under details for more information). Find more key insights for the number of Reddit users in countries like Mexico and Canada.
In the fourth quarter of 2024, TikTok generated around 186 million downloads from users worldwide. Initially launched in China by ByteDance as Douyin, the short-video format was popularized by TikTok and took over the global social media environment in 2020. In the first quarter of 2020, TikTok downloads peaked at over 313.5 million worldwide, up 62.3 percent compared to the first quarter of 2019.
TikTok interactions: is there a magic formula for content success?
In 2024, TikTok registered an engagement rate of approximately 4.64 percent on video content hosted on its platform. During the same examined year, the social video app recorded over 1,100 interactions on average. These interactions were primarily composed of likes, while only recording less than 20 comments per piece of content on average in 2024.
The platform has been actively monitoring the issue of fake interactions, as it removed around 236 million fake likes during the first quarter of 2024. Though there is no secret formula to get the maximum of these metrics, recommended video length can possibly contribute to the success of content on TikTok.
As of the first quarter of 2024, it was recommended that small TikTok accounts with up to 500 followers post videos around 2.6 minutes long, while the ideal video duration for large TikTok accounts with over 50,000 followers was 7.28 minutes. The average length of TikTok videos posted by creators in 2024 was around 43 seconds.
What’s trending on TikTok Shop?
Since its launch in September 2023, TikTok Shop has become one of the most popular online shopping platforms, offering consumers a wide variety of products. In 2023, TikTok shops featuring beauty and personal care items sold over 370 million products worldwide.
TikTok shops featuring womenswear and underwear, as well as food and beverages, followed with 285 and 138 million products sold, respectively. Similarly, in the United States market, health and beauty products were the best-selling items, accounting for 85 percent of sales made via the TikTok Shop feature during the first month of its launch. In 2023, Indonesia was the market with the largest number of TikTok Shops, hosting over 20 percent of all TikTok Shops. Thailand and Vietnam followed with 18.29 and 17.54 percent of the total shops listed on the famous short-video platform, respectively.
License: https://creativecommons.org/publicdomain/zero/1.0/
These graphs were created in R:
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F16731800%2F8cb214f053ced5479fbc0fd9a51ea662%2Fgraph1.gif?generation=1731273118874468&alt=media
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F16731800%2F9a5d25492ea93b99e2292e398e0afc01%2Fgraph2.gif?generation=1731273123934184&alt=media
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F16731800%2F5af366103f335a73ee593546bbadf2b2%2Fgraph3.gif?generation=1731273128356032&alt=media
Welcome to MTSamples! This website is a large collection of medical transcription reports, which have been typed out to show exactly what doctors, nurses, and other healthcare professionals say during medical visits, exams, or procedures. These reports are very useful for people who are learning how to work in medical transcription or for those who already work in this field and need examples to help them with their daily tasks. Medical transcription is an important job where people listen to recordings made by doctors and type them into written reports. The reports on MTSamples are a great way to practice or get familiar with the kind of work a transcriptionist does.
MTSamples.com is constantly updating and adding new reports. It has a wide variety of sample reports that cover many different medical specialties. For example, you can find reports related to cardiology (heart), pulmonology (lungs), orthopedics (bones and muscles), and many other fields. Each report gives a real-life example of what a doctor might say during an appointment or procedure, and how a transcriptionist would type it out. This variety makes the site helpful to those who want to learn about different medical areas, whether they are just starting out or are already experienced transcriptionists.
The samples on MTSamples.com are provided by transcriptionists and users who contribute their work for educational purposes. These reports are meant to be used as reference material, and they show what transcription should look like in real situations. However, because they are user-submitted, there might be some errors in them, and we would greatly appreciate it if anyone finds mistakes to let us know so we can correct them. If you are a transcriptionist and would like to share your own reports with the site, we would love to hear from you. The more examples we have, the better it is for everyone who uses the website for learning or reference.
We encourage you to print, share, or link to any of the reports found on MTSamples.com. If you decide to share or print the reports, we ask that you let us know and give credit to the website. This can be done by including a link to https://www.mtsamples.com or by mentioning the website in a referral note. Our goal is to make sure that everyone who uses the site can easily access useful information, while also making sure that MTSamples gets the credit for providing these valuable resources. By working together, we can create a helpful and supportive community for learning about medical transcription.