24 datasets found
  1. Premium Annotation Tools Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 15, 2025
    Cite
    Market Research Forecast (2025). Premium Annotation Tools Report [Dataset]. https://www.marketresearchforecast.com/reports/premium-annotation-tools-34887
    Available download formats: ppt, doc, pdf
    Dataset updated
    Mar 15, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The premium annotation tools market, valued at $1115.9 million in 2025, is experiencing robust growth, projected to expand at a Compound Annual Growth Rate (CAGR) of 7.8% from 2025 to 2033. This growth is fueled by the increasing demand for high-quality training data across various sectors, including autonomous vehicles, medical imaging, and natural language processing. The rise of deep learning and artificial intelligence (AI) necessitates meticulously annotated datasets, driving adoption of sophisticated annotation tools that offer features like collaborative annotation, automated workflows, and advanced quality control mechanisms. The market is segmented by deployment (cloud-based and web-based) and application (student, worker, and others), with cloud-based solutions gaining significant traction due to their scalability and accessibility. The competitive landscape is characterized by a mix of established players and emerging startups, constantly innovating to meet the evolving needs of data scientists and AI developers. North America and Europe currently hold the largest market shares, reflecting the high concentration of AI research and development activities in these regions. However, significant growth is anticipated in Asia-Pacific, driven by increasing investments in AI and data-centric technologies within rapidly developing economies like China and India.

    The continued expansion of the premium annotation tools market is contingent upon several factors. Firstly, the ongoing advancements in AI and machine learning will continue to drive demand for larger and more complex datasets. Secondly, the increasing availability of affordable cloud computing resources will make premium annotation tools more accessible to a broader range of users. Thirdly, the growing focus on data quality and accuracy within the AI development lifecycle will necessitate the adoption of tools capable of guaranteeing high standards. Conversely, factors such as the high initial investment cost of premium tools and the need for skilled professionals to operate them could pose challenges to market penetration. Nevertheless, the overall outlook for the premium annotation tools market remains positive, with substantial opportunities for growth and innovation in the coming years.
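    For readers who want to sanity-check the headline figures, the projection above is a plain compound-growth calculation. The sketch below simply reuses the $1115.9 million 2025 base and the 7.8% CAGR quoted in this summary; it is an illustrative back-of-the-envelope check, not the report's own model.

      # Illustrative compound-growth check using the figures quoted in this summary.
      # Assumption: the 7.8% CAGR is applied annually to the 2025 base value.
      base_2025_musd = 1115.9          # 2025 market size, USD million
      cagr = 0.078                     # compound annual growth rate
      years = 2033 - 2025              # forecast horizon

      implied_2033_musd = base_2025_musd * (1 + cagr) ** years
      print(f"Implied 2033 market size: ${implied_2033_musd:,.0f} million")
      # roughly $2,035 million under these assumptions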

  2. Manual Data Annotation Tools Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Apr 15, 2025
    Cite
    Data Insights Market (2025). Manual Data Annotation Tools Report [Dataset]. https://www.datainsightsmarket.com/reports/manual-data-annotation-tools-1450942
    Available download formats: pdf, doc, ppt
    Dataset updated
    Apr 15, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global manual data annotation tools market is experiencing robust growth, projected to reach $1045.4 million in 2025 and exhibiting a Compound Annual Growth Rate (CAGR) of 14.2% from 2025 to 2033. This expansion is fueled by the increasing reliance on artificial intelligence (AI) and machine learning (ML) across diverse sectors. The demand for high-quality annotated data to train and improve the accuracy of AI algorithms is a primary driver. Key application areas include IT & Telecom, BFSI (Banking, Financial Services, and Insurance), Healthcare, Retail, and Automotive, each contributing significantly to market growth. The prevalence of image/video annotation, alongside text and audio annotation, further segments the market, reflecting the varied data types required for AI model development. Geographic distribution reveals strong market presence in North America and Europe, driven by advanced technological adoption and a large pool of skilled professionals. However, rapidly developing economies in Asia-Pacific are expected to witness significant growth in the coming years, presenting lucrative opportunities for market players. Competitive forces within the market are intensifying, with established players like Amazon Web Services and Google competing with specialized annotation service providers like Appen and LionBridge AI. The market's future trajectory is influenced by factors like the increasing adoption of cloud-based solutions for data annotation and the rising demand for specialized annotation services catering to niche AI applications.

    The market's growth is also shaped by ongoing technological advancements in data annotation techniques, such as automated annotation tools. While these tools can improve efficiency, the need for manual validation and correction continues to fuel the demand for human annotators. This creates a dynamic where manual data annotation tools remain essential, especially for complex tasks requiring nuanced human judgment. Regulatory considerations regarding data privacy and security also impact the market, driving a demand for secure and compliant annotation solutions. Furthermore, the talent pool of skilled annotators remains a crucial factor. Addressing challenges like data bias and ensuring consistent annotation quality remains a priority for businesses investing in AI development, thus maintaining the demand for reliable and efficient manual data annotation tools.

  3. Automated Data Annotation Tool Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jul 4, 2025
    + more versions
    Cite
    Data Insights Market (2025). Automated Data Annotation Tool Report [Dataset]. https://www.datainsightsmarket.com/reports/automated-data-annotation-tool-1416565
    Available download formats: ppt, doc, pdf
    Dataset updated
    Jul 4, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The automated data annotation tool market is experiencing robust growth, driven by the increasing demand for high-quality training data in artificial intelligence (AI) and machine learning (ML) applications. The market, estimated at $2 billion in 2025, is projected to expand at a Compound Annual Growth Rate (CAGR) of 25% from 2025 to 2033, reaching approximately $10 billion by 2033. This significant expansion is fueled by several key factors. Firstly, the proliferation of AI and ML across diverse industries like healthcare, finance, and autonomous vehicles necessitates large volumes of accurately labeled data. Secondly, the limitations of manual annotation, including its time-consuming nature and susceptibility to human error, are driving the adoption of automated solutions that offer increased speed, accuracy, and scalability. Furthermore, advancements in computer vision, natural language processing, and other AI techniques are continuously improving the capabilities of automated annotation tools, making them increasingly efficient and reliable. Key players like Amazon Web Services, Google, and other specialized providers are actively contributing to this growth through innovation and strategic partnerships.

    However, market growth isn't without challenges. The high initial investment cost of implementing automated annotation tools can be a barrier for smaller companies. Moreover, the accuracy of automated annotation can still lag behind manual annotation in certain complex scenarios, necessitating hybrid approaches that combine automated and manual processes. Despite these restraints, the long-term outlook for the automated data annotation tool market remains exceptionally positive, driven by continued advancements in AI and the expanding demand for large-scale, high-quality datasets to fuel the next generation of AI applications. The market is segmented by tool type (image, text, video, audio), deployment mode (cloud, on-premise), and industry, with each segment exhibiting unique growth trajectories reflecting specific application needs.

  4. Premium Annotation Tools Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Apr 24, 2025
    Cite
    Data Insights Market (2025). Premium Annotation Tools Report [Dataset]. https://www.datainsightsmarket.com/reports/premium-annotation-tools-1414371
    Available download formats: ppt, pdf, doc
    Dataset updated
    Apr 24, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The premium annotation tools market, valued at $1169.4 million in 2025, is projected to experience robust growth, driven by the increasing demand for high-quality training data in artificial intelligence (AI) and machine learning (ML) applications. The market's Compound Annual Growth Rate (CAGR) of 8.1% from 2025 to 2033 signifies a substantial expansion, fueled by several key factors. The rise of sophisticated AI models necessitates meticulously annotated datasets for optimal performance. This is particularly crucial in sectors like autonomous vehicles, medical image analysis, and natural language processing, where accuracy is paramount. The shift towards cloud-based and web-based annotation tools simplifies data management, collaboration, and scalability, further boosting market adoption. Segment-wise, the student and worker application segments are expected to see significant growth due to the increasing accessibility and affordability of these tools, while cloud-based solutions are poised to dominate owing to their flexibility and scalability advantages. Geographic expansion, particularly in regions with burgeoning tech industries like Asia Pacific and North America, will also contribute to the overall market growth. Competitive pressures among established players and emerging startups are driving innovation and affordability, making premium annotation tools more accessible to a wider range of users.

    Despite the positive outlook, the market faces certain challenges. The high cost of premium tools and the need for skilled annotators can be entry barriers for smaller businesses and individuals. Additionally, data privacy and security concerns related to sensitive datasets used in annotation remain a critical factor influencing market growth. However, the continuous advancements in automation and AI-powered annotation techniques are likely to mitigate these concerns. The ongoing evolution of annotation techniques, such as incorporating active learning and transfer learning, promises to further increase efficiency and reduce annotation costs, fostering wider adoption across various industries and accelerating market expansion.

  5. Manual Data Annotation Tools Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 14, 2025
    Cite
    Market Research Forecast (2025). Manual Data Annotation Tools Report [Dataset]. https://www.marketresearchforecast.com/reports/manual-data-annotation-tools-33619
    Available download formats: pdf, doc, ppt
    Dataset updated
    Mar 14, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The manual data annotation tools market, valued at $949.7 million in 2025, is experiencing robust growth, projected to expand at a compound annual growth rate (CAGR) of 13.6% from 2025 to 2033. This surge is driven by the escalating demand for high-quality training data across diverse sectors. The increasing adoption of artificial intelligence (AI) and machine learning (ML) models necessitates large volumes of meticulously annotated data for optimal performance. Industries like IT & Telecom, BFSI (Banking, Financial Services, and Insurance), Healthcare, and Automotive are leading the charge, investing significantly in data annotation to improve their AI-powered applications, from fraud detection and medical image analysis to autonomous vehicle development and personalized customer experiences. The market is segmented by data type (image, video, text, audio) and application sector, reflecting the diverse needs of various industries. The rise of cloud-based annotation platforms is streamlining workflows and enhancing accessibility, while the increasing complexity of AI models is pushing the demand for more sophisticated and specialized annotation techniques.

    The competitive landscape is characterized by a mix of established players and emerging startups. Companies like Appen, Amazon Web Services, Google, and IBM are leveraging their extensive resources and technological capabilities to dominate the market. However, smaller, specialized companies are also making significant strides, catering to niche needs and offering innovative solutions. Geographic expansion is another key trend, with North America currently holding a substantial market share due to its advanced technology adoption and significant investments in AI research. However, Asia-Pacific, especially India and China, is witnessing rapid growth fueled by expanding digitalization and increasing government initiatives promoting AI development. Despite the rapid growth, challenges remain, including the high cost and time-consuming nature of manual annotation, alongside concerns around data privacy and security. The market's future trajectory will depend on technological advancements, evolving industry needs, and the effective addressal of these challenges.

  6. Automated Data Annotation Tools Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Apr 24, 2025
    Cite
    Data Insights Market (2025). Automated Data Annotation Tools Report [Dataset]. https://www.datainsightsmarket.com/reports/automated-data-annotation-tools-1947663
    Available download formats: pdf, ppt, doc
    Dataset updated
    Apr 24, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The automated data annotation tools market is experiencing robust growth, driven by the escalating demand for high-quality training data in various sectors like IT & Telecom, BFSI, Healthcare, and Retail. The increasing adoption of artificial intelligence (AI) and machine learning (ML) models, which heavily rely on accurately annotated data, is a primary catalyst. Furthermore, the rising complexity of AI algorithms necessitates larger and more precisely labeled datasets, fueling the market's expansion. While challenges such as the high cost of annotation and the need for skilled human annotators exist, the market is overcoming these hurdles through the development of more efficient and cost-effective automation tools. The market segmentation reveals a strong presence across various application areas, with IT & Telecom and BFSI likely leading in terms of adoption due to their substantial investments in AI-driven solutions. Different annotation types, including image/video, text, and audio, cater to a wide range of AI development needs. The competitive landscape is populated by established players like Amazon Web Services and Google LLC, alongside innovative startups, creating a dynamic market characterized by continuous innovation and competition. Geographic expansion is also a prominent factor, with North America and Europe currently holding significant market shares, but emerging economies in Asia-Pacific are poised for substantial growth due to increasing digitalization and AI adoption.

    Looking ahead, the market is predicted to exhibit sustained growth driven by ongoing technological advancements and the expanding applications of AI across multiple industries. The forecast period (2025-2033) suggests continued market expansion fueled by factors such as advancements in automation techniques, reduced annotation costs through optimized algorithms, and the expanding scope of AI applications in sectors like autonomous vehicles and precision agriculture. The emergence of new annotation methods and the increasing accessibility of tools will further democratize AI development and drive market growth. Companies are strategically investing in research and development to enhance the accuracy, efficiency, and scalability of their annotation tools. The market's competitive nature fosters innovation, leading to the development of more sophisticated and user-friendly tools that meet the diverse needs of different industries and applications. The market's evolution is expected to be shaped by the ongoing interplay between technological advancements, industry demands, and competitive dynamics.

  7. ImageCLEF 2012 Image annotation and retrieval dataset (MIRFLICKR)

    • zenodo.org
    • explore.openaire.eu
    txt, zip
    Updated May 22, 2020
    Cite
    Bart Thomee; Adrian Popescu (2020). ImageCLEF 2012 Image annotation and retrieval dataset (MIRFLICKR) [Dataset]. http://doi.org/10.5281/zenodo.1246796
    Available download formats: zip, txt
    Dataset updated
    May 22, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bart Thomee; Adrian Popescu
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    DESCRIPTION
    For this task, we use a subset of the MIRFLICKR (http://mirflickr.liacs.nl) collection. The entire collection contains 1 million images from the social photo sharing website Flickr and was formed by downloading up to a thousand photos per day that were deemed to be the most interesting according to Flickr. All photos in this collection were released by their users under a Creative Commons license, allowing them to be freely used for research purposes. Of the entire collection, 25 thousand images were manually annotated with a limited number of concepts and many of these annotations have been further refined and expanded over the lifetime of the ImageCLEF photo annotation task. This year we used crowd sourcing to annotate all of these 25 thousand images with the concepts.

    On this page we provide you with more information about the textual features, visual features and concept features we supply with each image in the collection we use for this year's task.


    TEXTUAL FEATURES
    All images are accompanied by the following textual features:

    - Flickr user tags
    These are the tags that the users assigned to the photos they uploaded to Flickr. The 'raw' tags are the original tags, while the 'clean' tags have been collapsed to lowercase and condensed to remove spaces.

    - EXIF metadata
    If available, the EXIF metadata contains information about the camera that took the photo and the parameters used. The 'raw' exif is the original camera data, while the 'clean' exif reduces the verbosity.

    - User information and Creative Commons license information
    This contains information about the user that took the photo and the license associated with it.


    VISUAL FEATURES
    Over the previous years of the photo annotation task we noticed that participants often use the same types of visual features; in particular, features based on interest points and bag-of-words are popular. To assist you, we have extracted several features that you may want to use, so you can focus on concept detection instead. We additionally give you some pointers to easy-to-use toolkits that will help you extract other features, or the same features with different default settings.

    - SIFT, C-SIFT, RGB-SIFT, OPPONENT-SIFT
    We used the ISIS Color Descriptors (http://www.colordescriptors.com) toolkit to extract these descriptors. This package provides you with many different types of features based on interest points, mostly using SIFT. It furthermore assists you with building codebooks for bag-of-words. The toolkit is available for Windows, Linux and Mac OS X.

    - SURF
    We used the OpenSURF (http://www.chrisevansdev.com/computer-vision-opensurf.html) toolkit to extract this descriptor. The open source code is available in C++, C#, Java and many more languages.

    - TOP-SURF
    We used the TOP-SURF (http://press.liacs.nl/researchdownloads/topsurf) toolkit to extract this descriptor, which represents images with SURF-based bag-of-words. The website provides codebooks of several different sizes that were created using a combination of images from the MIR-FLICKR collection and from the internet. The toolkit also offers the ability to create custom codebooks from your own image collection. The code is open source, written in C++ and available for Windows, Linux and Mac OS X.

    - GIST
    We used the LabelMe (http://labelme.csail.mit.edu) toolkit to extract this descriptor. The MATLAB-based library offers a comprehensive set of tools for annotating images.

    For the interest point-based features above we used a Fast Hessian-based technique to detect the interest points in each image. This detector is built into the OpenSURF library. In comparison with the Hessian-Laplace technique built into the ColorDescriptors toolkit it detects fewer points, resulting in a considerably reduced memory footprint. We therefore also provide you with the interest point locations in each image that the Fast Hessian-based technique detected, so when you would like to recalculate some features you can use them as a starting point for the extraction. The ColorDescriptors toolkit for instance accepts these locations as a separate parameter. Please go to http://www.imageclef.org/2012/photo-flickr/descriptors for more information on the file format of the visual features and how you can extract them yourself if you want to change the default settings.
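    For orientation, the sketch below shows a minimal interest-point plus bag-of-words pipeline of the kind described above, using OpenCV SIFT and k-means clustering. It is an illustrative alternative only; the features supplied with the task were extracted with the ColorDescriptors, OpenSURF, TOP-SURF and LabelMe toolkits listed above, not with this code.

      # Minimal bag-of-visual-words sketch (illustrative; not the toolkits used for the supplied features).
      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def extract_descriptors(image_paths):
          # SIFT descriptors per image (keypoints detected by SIFT's own detector).
          sift = cv2.SIFT_create()
          per_image = []
          for path in image_paths:
              img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
              _, desc = sift.detectAndCompute(img, None)
              if desc is not None:
                  per_image.append(desc)
          return per_image

      def build_codebook(per_image_descriptors, codebook_size=1000):
          # Cluster descriptors from all images into a codebook of visual words.
          stacked = np.vstack(per_image_descriptors)
          return KMeans(n_clusters=codebook_size, n_init=4).fit(stacked)

      def bag_of_words(descriptors, codebook):
          # Normalized histogram of visual-word assignments for one image.
          words = codebook.predict(descriptors)
          hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
          return hist / max(hist.sum(), 1)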


    CONCEPT FEATURES
    We have solicited the help of workers on the Amazon Mechanical Turk platform to perform the concept annotation for us. To ensure a high standard of annotation we used the CrowdFlower platform that acts as a quality control layer by removing the judgments of workers that fail to annotate properly. We reused several concepts of last year's task and for most of these we annotated the remaining photos of the MIRFLICKR-25K collection that had not yet been used before in the previous task; for some concepts we reannotated all 25,000 images to boost their quality. For the new concepts we naturally had to annotate all of the images.

    - Concepts
    For each concept we indicate in which images it is present. The 'raw' concepts contain the judgments of all annotators for each image, where a '1' means an annotator indicated the concept was present whereas a '0' means the concept was not present, while the 'clean' concepts only contain the images for which the majority of annotators indicated the concept was present. Some images in the raw data for which we reused last year's annotations only have one judgment for a concept, whereas the other images have between three and five judgments; the single judgment does not mean only one annotator looked at it, as it is the result of a majority vote amongst last year's annotators.

    - Annotations
    For each image we indicate which concepts are present, so this is the reverse version of the data above. The 'raw' annotations contain the average agreement of the annotators on the presence of each concept, while the 'clean' annotations only include those for which there was a majority agreement amongst the annotators.

    You will notice that the annotations are not perfect. Especially when the concepts are more subjective or abstract, the annotators tend to disagree more with each other. The raw versions of the concept annotations should help you get an understanding of the exact judgments given by the annotators.
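    To make the raw/clean distinction concrete, the sketch below derives 'clean' labels from raw per-annotator judgments by majority vote for a single concept. The dictionary layout is hypothetical and is not the distributed file format; it only mirrors the rule described above.

      # Hypothetical example: majority-vote ("clean") labels from raw judgments for one concept.
      raw_judgments = {
          "im_000123": [1, 1, 0],        # three annotators, two say the concept is present
          "im_000456": [0, 1, 0, 0, 1],  # five annotators, majority says absent
          "im_000789": [1],              # single judgment carried over from last year's majority vote
      }

      def clean_labels(raw):
          # A concept is kept for an image when strictly more than half of its judgments are 1.
          return {img: int(sum(votes) > len(votes) / 2) for img, votes in raw.items()}

      print(clean_labels(raw_judgments))
      # {'im_000123': 1, 'im_000456': 0, 'im_000789': 1}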

  8. Open Annotation Tools

    • hsscommons.ca
    Updated Apr 11, 2024
    Cite
    Kimberly Silk (2024). Open Annotation Tools [Dataset]. http://doi.org/10.25547/7B77-NN17
    Dataset updated
    Apr 11, 2024
    Dataset provided by
    Canadian HSS Commons
    Authors
    Kimberly Silk
    Description

    Open annotation is the ability to freely contribute to online, usually web-based, content, such as documents, images and video. Open annotation as a concept has been embraced predominantly by scholars in the Digital Humanities, a group that has a long history of online collaboration.

  9. Text Annotation Tool Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jan 22, 2025
    Cite
    Data Insights Market (2025). Text Annotation Tool Report [Dataset]. https://www.datainsightsmarket.com/reports/text-annotation-tool-1928348
    Available download formats: pdf, doc, ppt
    Dataset updated
    Jan 22, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Market Analysis for Text Annotation Tool

    The global market for text annotation tools is projected to grow significantly, reaching XXX million USD by 2033, exhibiting a CAGR of XX% from 2025 to 2033. Key drivers behind this growth include the increasing demand for accurate data labeling for machine learning and natural language processing applications, the rise of cloud computing and AI-driven automation, and the expanding need for data annotation in various sectors such as healthcare, finance, and research. The market is segmented by application (commercial use, personal use), type (text annotation tool, image annotation tool, others), company (CloudApp, iMerit, Playment, Trilldata Technologies, Amazon Web Services, and others), and region (North America, South America, Europe, Middle East & Africa, Asia Pacific). North America currently holds the largest market share, followed by Europe and Asia Pacific. The increasing adoption of text annotation tools by enterprises and government agencies is expected to drive growth in the commercial use segment, while the demand for personal annotation tools for research and academic purposes is expected to fuel growth in the personal use segment.

  10. The Semantic PASCAL-Part Dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Apr 24, 2025
    Cite
    Ivan Donadello; Luciano Serafini (2025). The Semantic PASCAL-Part Dataset [Dataset]. http://doi.org/10.5281/zenodo.5878773
    Available download formats: zip
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ivan Donadello; Luciano Serafini
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Semantic PASCAL-Part dataset

    The Semantic PASCAL-Part dataset is the RDF version of the famous PASCAL-Part dataset used for object detection in Computer Vision. Each image is annotated with bounding boxes containing a single object. Couples of bounding boxes are annotated with the part-whole relationship. For example, the bounding box of a car has the part-whole annotation with the bounding boxes of its wheels.

    This original release joins Computer Vision with Semantic Web as the objects in the dataset are aligned with concepts from:

    • the provided supporting ontology;
    • the WordNet database through its synsets;
    • the Yago ontology.

    The provided Python 3 code (see the GitHub repo) can browse the dataset and convert it into RDF knowledge graph format. This format facilitates research in both the Semantic Web and Machine Learning fields.
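    As a rough illustration of what such an RDF conversion can look like, the sketch below builds a tiny part-whole graph with rdflib. The namespace, class names, and the isPartOf property are placeholders, not the ontology's actual vocabulary; see the GitHub repo for the real schema.

      # Illustrative rdflib sketch of a part-whole annotation as an RDF graph.
      # The namespace and property names below are hypothetical placeholders.
      from rdflib import Graph, Namespace, RDF

      EX = Namespace("http://example.org/pascal-part/")

      g = Graph()
      g.bind("ex", EX)

      # A car bounding box and one of its wheel bounding boxes.
      g.add((EX.box_car_01, RDF.type, EX.Car))
      g.add((EX.box_wheel_01, RDF.type, EX.Wheel))
      g.add((EX.box_wheel_01, EX.isPartOf, EX.box_car_01))

      print(g.serialize(format="turtle"))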

    Structure of the semantic PASCAL-Part Dataset

    This is the folder structure of the dataset:

    • semanticPascalPart: it contains the refined images and annotations (e.g., small specific parts are merged into bigger parts) of the PASCAL-Part dataset in Pascal-voc style.
      • Annotations_set: the test set annotations in .xml format (for further information, see the PASCAL VOC format).
      • Annotations_trainval: the train and validation set annotations in .xml format (for further information, see the PASCAL VOC format).
      • JPEGImages_test: the test set images in .jpg format.
      • JPEGImages_trainval: the train and validation set images in .jpg format.
      • test.txt: the 2416 image filenames in the test set.
      • trainval.txt: the 7687 image filenames in the train and validation set.

    The PASCAL-Part Ontology

    The PASCAL-Part OWL ontology formalizes, through logical axioms, the part-of relationship between whole objects (22 classes) and their parts (39 classes). The ontology contains 85 logical axioms in Description Logic of (for example) the following form:

    Every potted_plant has exactly 1 plant AND
              has exactly 1 pot
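    In standard Description Logic notation, an axiom of this shape could be rendered as follows (the class and role names are illustrative; the ontology's actual identifiers may differ):

      \mathit{PottedPlant} \sqsubseteq (=\!1\ \mathit{hasPart}.\mathit{Plant}) \sqcap (=\!1\ \mathit{hasPart}.\mathit{Pot})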
    

    We provide two versions of the ontology: with and without cardinality constraints in order to allow users to experiment with or without them. The WordNet alignment is encoded in the ontology as annotations. We further provide the WordNet_Yago_alignment.csv file with both WordNet and Yago alignments.

    The ontology can be browsed with many Semantic Web tools such as:

    • Protégé: a graphical tool for ontology modelling;
    • OWLAPI: Java API for manipulating OWL ontologies;
    • rdflib: Python API for working with the RDF format;
    • RDF stores: databases for storing and semantically retrieving RDF triples. See here for some examples.

    Citing semantic PASCAL-Part

    If you use semantic PASCAL-Part in your research, please use the following BibTeX entry

    @article{DBLP:journals/ia/DonadelloS16,
      author  = {Ivan Donadello and Luciano Serafini},
      title   = {Integration of numeric and symbolic information for semantic image interpretation},
      journal = {Intelligenza Artificiale},
      volume  = {10},
      number  = {1},
      pages   = {33--47},
      year    = {2016}
    }
    
  11. Ramadan Lanterns

    • kaggle.com
    Updated Mar 2, 2024
    Cite
    Abdelrahman Ahmed Eldaba (2024). Ramadan Lanterns [Dataset]. https://www.kaggle.com/datasets/abdelrahmanahmed110/ramadan-lanterns/code
    Available in Croissant format (a machine-learning dataset format; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 2, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Abdelrahman Ahmed Eldaba
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    I collected some images of Ramadan lanterns and then annotated them using the VGG Image Annotator (VIA), a web-based tool for annotating objects in images; each image's annotations are exported as a JSON file for use in computer vision tasks.

    Each image may contain one lantern or several. I used web scraping to collect a large number of images and then filtered them to select the most suitable ones.
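    As a rough sketch of how such VIA exports can be consumed, the snippet below reads polygon regions from one per-image JSON file. The key names ("regions", "shape_attributes", "all_points_x"/"all_points_y") follow a typical VIA 2.x export and are assumptions here; check the actual files in this dataset.

      # Hypothetical sketch: reading polygon regions from a VIA-style JSON export.
      import json

      def load_lantern_polygons(json_path):
          with open(json_path, encoding="utf-8") as f:
              annotation = json.load(f)
          polygons = []
          for region in annotation.get("regions", []):
              shape = region.get("shape_attributes", {})
              if shape.get("name") == "polygon":
                  # Pair up the x/y coordinate lists into (x, y) vertices.
                  polygons.append(list(zip(shape["all_points_x"], shape["all_points_y"])))
          return polygons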

    Feel free to look at my portfolio website, which displays a collection of my work.

  12. Benthic Cover from Automated Annotation of Benthic Images Collected at Coral...

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Oct 19, 2024
    + more versions
    Cite
    (Point of Contact, Custodian) (2024). Benthic Cover from Automated Annotation of Benthic Images Collected at Coral Reef Sites in the Pacific Remote Island Areas and American Samoa in 2018 [Dataset]. https://catalog.data.gov/dataset/benthic-cover-from-automated-annotation-of-benthic-images-collected-at-coral-reef-sites-in-20181
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    (Point of Contact, Custodian)
    Area covered
    American Samoa
    Description

    The coral reef benthic community data described here result from the automated annotation (classification) of benthic images collected during photoquadrat surveys conducted by the NOAA Pacific Islands Fisheries Science Center (PIFSC), Ecosystem Sciences Division (ESD, formerly the Coral Reef Ecosystem Division) as part of NOAA's ongoing National Coral Reef Monitoring Program (NCRMP). SCUBA divers conducted benthic photoquadrat surveys in coral reef habitats according to protocols established by ESD and NCRMP during the ESD-led NCRMP mission to the islands and atolls of the Pacific Remote Island Areas (PRIA) and American Samoa from June 8 to August 11, 2018. Still photographs were collected with a high-resolution digital camera mounted on a pole to document the benthic community composition at predetermined points along transects at stratified random sites surveyed only once as part of Rapid Ecological Assessment (REA) surveys for corals and fish and permanent sites established by ESD and resurveyed every ~3 years for climate change monitoring. Overall, 30 photoquadrat images were collected at each survey site. The benthic habitat images were quantitatively analyzed using the web-based, machine-learning, image annotation tool, CoralNet (https://coralnet.ucsd.edu; Beijbom et al. 2015). Ten points were randomly overlaid on each image and the machine-learning algorithm "robot" identified the organism or type of substrate beneath, with 300 annotations (points) generated per site. Benthic elements falling under each point were identified to functional group (Tier 1: hard coral, soft coral, sessile invertebrate, macroalgae, crustose coralline algae, and turf algae) for coral, algae, invertebrates, and other taxa following Lozada-Misa et al. (2017). These benthic data can ultimately be used to produce estimates of community composition, relative abundance (percentage of benthic cover), and frequency of occurrence.
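    The benthic-cover percentages derived from these point annotations are simple proportions: with 30 images per site and 10 points per image, each site has 300 annotated points, and cover for a functional group is the share of points assigned to it. A minimal sketch of that calculation is shown below; the label strings and data layout are hypothetical, not the CoralNet export format.

      # Minimal sketch: percent benthic cover from point annotations at one site.
      from collections import Counter

      def percent_cover(point_labels):
          counts = Counter(point_labels)
          total = len(point_labels)
          return {group: 100.0 * n / total for group, n in counts.items()}

      # 300 hypothetical Tier 1 labels for one site (30 images x 10 points).
      site_points = ["hard coral"] * 90 + ["turf algae"] * 150 + ["crustose coralline algae"] * 60
      print(percent_cover(site_points))
      # {'hard coral': 30.0, 'turf algae': 50.0, 'crustose coralline algae': 20.0}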

  13. Annotating Software Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated May 7, 2025
    Cite
    Data Insights Market (2025). Annotating Software Report [Dataset]. https://www.datainsightsmarket.com/reports/annotating-software-1447731
    Available download formats: ppt, doc, pdf
    Dataset updated
    May 7, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The annotating software market is experiencing robust growth, driven by increasing demand across various sectors. The expanding adoption of digital content and the need for efficient data annotation in fields like machine learning, education, and business processes are key factors fueling this expansion. While precise market sizing for 2025 isn't provided, a reasonable estimation, considering typical software market growth and the provided CAGR, could place the market value at approximately $500 million. This is further substantiated by the presence of several established players like Ginger Labs Inc. and Readdle Inc., alongside emerging companies in regions like China. The market is segmented by application (campus and workplace) and type (web-based and on-premise), reflecting diverse user needs and deployment preferences. Web-based solutions are expected to dominate due to their accessibility and scalability advantages. Growth is anticipated across all regions, with North America and Europe currently holding significant market share, but the Asia-Pacific region is projected to witness the fastest growth rate due to increasing digitalization and technological advancements. Challenges include the need for user-friendly interfaces and robust security features to gain wider adoption. The competitive landscape features both established players and innovative startups, leading to continuous product development and market innovation.

    The forecast period of 2025-2033 suggests continued market expansion, potentially reaching over $1 billion by 2033. Sustained growth will depend on factors such as technological advancements (e.g., AI-powered annotation tools), improved user experience, and increased awareness of the benefits of annotation software across various industries. Addressing existing restraints, like data security concerns and the learning curve associated with complex software, will be crucial for continued market penetration and wider user adoption. The on-premise segment might see slower growth compared to the web-based segment, owing to higher initial investment and maintenance costs. However, industries with stringent data privacy requirements may continue to rely on on-premise solutions.

  14. Website Screenshots

    • kaggle.com
    • universe.roboflow.com
    Updated May 19, 2025
    Cite
    Pooria Mostafapoor (2025). Website Screenshots [Dataset]. https://www.kaggle.com/datasets/pooriamst/website-screenshots
    Available in Croissant format (a machine-learning dataset format; learn more at mlcommons.org/croissant)
    Dataset updated
    May 19, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Pooria Mostafapoor
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The Website Screenshots dataset is a synthetically generated dataset composed of screenshots from over 1000 of the world's top websites. They have been automatically annotated to label the following classes:

    • button - navigation links, tabs, etc.
    • heading - text that was enclosed in <h1> to <h6> tags.
    • link - inline, textual <a> tags.
    • label - text labeling form fields.
    • text - all other text.
    • image - <img>, <svg>, or <video> tags, and icons.
    • iframe - ads and 3rd party content.

    Example

    This is an example image and annotation from the dataset (Wikipedia screenshot): https://i.imgur.com/mOG3u3Z.png

    Usage

    Annotated screenshots are very useful in Robotic Process Automation. But they can be expensive to label. This dataset would cost over $4000 for humans to label on popular labeling services. We hope this dataset provides a good starting point for your project. Try it with a model from our model library.

    The dataset contains 1689 training images, 243 test images, and 483 validation images.

  15. Additional file 1: of KinMap: a web-based tool for interactive navigation...

    • springernature.figshare.com
    zip
    Updated Jun 2, 2023
    Cite
    Sameh Eid; Samo Turk; Andrea Volkamer; Friedrich Rippmann; Simone Fulle (2023). Additional file 1: of KinMap: a web-based tool for interactive navigation through human kinome data [Dataset]. http://doi.org/10.6084/m9.figshare.c.3658955_D1.v1
    Available download formats: zip
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Sameh Eid; Samo Turk; Andrea Volkamer; Friedrich Rippmann; Simone Fulle
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    (KinMap_Examples.zip) contains the input CSV files used to generate the annotated kinome trees in Fig. 1 (Example_1_Erlotinib_NSCLC.csv), Fig. 2a (Example_2_Sunitinib_Sorafenib_Cancer.csv), and Fig. 2b (Example_3_Kinase_Stats.csv). (ZIP 5 kb)

  16. National Coral Reef Monitoring Program: Benthic cover derived from analysis...

    • accession.nodc.noaa.gov
    • s.cnmilf.com
    • +2more
    html
    Updated Aug 23, 2024
    + more versions
    Cite
    NOAA Pacific Islands Fisheries Science Center, Ecosystem Sciences Division (2024). National Coral Reef Monitoring Program: Benthic cover derived from analysis of images collected during stratified random surveys (StRS) across the Pacific Remote Island Areas [Dataset]. http://doi.org/10.7289/v5154fbh
    Available download formats: html
    Dataset updated
    Aug 23, 2024
    Dataset provided by
    National Centers for Environmental Information (https://www.ncei.noaa.gov/)
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    University of Hawaii Joint Institute for Marine and Atmospheric Research (JIMAR)
    Coral Reef Ecosystems Program (CREP)
    Authors
    NOAA Pacific Islands Fisheries Science Center, Ecosystem Sciences Division
    Time period covered
    Mar 16, 2014 - Present
    Area covered
    OCEAN BASIN > Pacific Ocean > Central Pacific Ocean > Baker Island > Baker Island (00N176W0001), COUNTRY/TERRITORY > United States of America > USA Minor Outlying Islands > Johnston Atoll (16N169W0001), PRIMNM, Pacific Remote Island Areas, OCEAN > PACIFIC OCEAN > NORTH PACIFIC OCEAN, Pacific Remote Islands Marine National Monument, PRIA, geographic bounding box, OCEAN BASIN > Pacific Ocean > Central Pacific Ocean > Wake Atoll > Wake Atoll (19N167E0001),
    Description

    The benthic cover data in this collection result from the analysis of images produced by benthic photo-quadrat surveys. These surveys were conducted along transects at stratified random sites across the Pacific Remote Island Areas since 2014 as a part of the ongoing National Coral Reef Monitoring Program's (NCRMP) Rapid Ecological Assessment (REA) surveys for corals and fish. Benthic cover data and the associated images are both included in this collection.

    A stratified random sampling (StRS) design was employed to survey the coral reef ecosystems throughout the U.S. Pacific regions. The survey domain encompassed the majority of the mapped area of reef and hard bottom habitats in the 0-30 m depth range. The stratification scheme included island, reef zone, and depth. Sampling effort was allocated based on strata area and sites were randomly located within strata. Sites were surveyed using photo-quadrats along transects to collect benthic imagery to ultimately produce estimates of relative abundance (percent cover), frequency of occurrence, benthic community taxonomic composition and relative generic richness.

    Benthic habitat imagery were quantitatively analyzed using Coral Point Count with Excel extensions (CPCe; Kohler and Gill, 2006) software in 2014 and a web-based annotation tool called CoralNet (Beijbom et al. 2015) thereafter. In general, images were analyzed to produce three functional group levels of benthic cover: Tier 1 (e.g., hard coral, soft coral, macroalgae, turf algae, etc.), Tier 2 (e.g., Hard Coral = massive, branching, foliose, encrusting, etc.; Macroalgae = upright macroalgae, encrusting macroalgae, bluegreen macroalgae, and Halimeda, etc.), and Tier 3 (e.g., Hard Coral = Astreopora sp, Favia sp, Pocillopora, etc.; Macroalgae = Caulerpa sp, Dictyosphaeria sp, Padina sp, etc.).

  17. Benthic Imagery from the Kahekili Herbivore Fisheries Management Area (HFMA)...

    • catalog.data.gov
    • datasets.ai
    Updated Jul 1, 2025
    + more versions
    Cite
    (Point of Contact) (2025). Benthic Imagery from the Kahekili Herbivore Fisheries Management Area (HFMA) (NCEI Accession 0255324) [Dataset]. https://catalog.data.gov/dataset/benthic-imagery-from-the-kahekili-herbivore-fisheries-management-area-hfma-ncei-accession-025531
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    (Point of Contact)
    Area covered
    Kahekili Herbivore Fisheries Management Area
    Description

    Photoquadrat benthic images were collected within the Kahekili Herbivore Fisheries Management Area (KHFMA) in West Maui by the Pacific Islands Fisheries Science Center (PIFSC) Ecosystem Sciences Division (ESD; formerly the Coral Reef Ecosystem Division) for quantifying benthic composition. This dataset is only the imagery; the benthic cover data produced is archived separately in NCEI accession 0165015. Photo-transects were taken at each site, with 25 photographs taken at 1m intervals. Benthic habitat imagery were quantitatively analyzed for each site using a web-based annotation tool called CoralNet where 15 points were analyzed per photograph. The images provided here can be linked to the benthic composition data produced by site name; the accompanying metadata file provided describes the sites analyzed.

  18. Automated Data Annotation Tool Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jan 25, 2025
    Cite
    Data Insights Market (2025). Automated Data Annotation Tool Report [Dataset]. https://www.datainsightsmarket.com/reports/automated-data-annotation-tool-1928380
    Available download formats: doc, ppt, pdf
    Dataset updated
    Jan 25, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Market Analysis: Automated Data Annotation Tool

    The automated data annotation tool market is poised for significant growth, with a projected CAGR of X% over the forecast period from 2025 to 2033. The market is currently valued at approximately $X million and is expected to reach a value of $X million by 2033. Key drivers of this growth include the increasing adoption of artificial intelligence (AI) and machine learning (ML) technologies, the need for faster and more accurate data annotation, and the growing volume of data generated across industries. The market is segmented by application (commercial use, personal use), type (text annotation tool, image annotation tool), and region (North America, South America, Europe, Middle East & Africa, and Asia Pacific). Major industry players include CloudApp, iMerit, Playment, Trilldata Technologies, Amazon Web Services, LionBridge AI, Mighty AI, Samasource, Google, Labelbox, Webtunix AI, Appen, CloudFactory, IBM, Neurala, Alelion, Cogito, Scale, Clickworker GmbH, MonkeyLearn, and Hive. The North American region is currently the largest market for automated data annotation tools, followed by Europe and Asia Pacific.

    This report provides an in-depth analysis of the Automated Data Annotation Tool market, covering its concentration, trends, key regions, product insights, driving forces, challenges, and emerging trends.

  19. National Coral Reef Monitoring Program: Benthic cover derived from analysis...

    • accession.nodc.noaa.gov
    • datasets.ai
    • +2more
    html
    Updated Apr 3, 2024
    + more versions
    Cite
    NOAA Pacific Islands Fisheries Science Center, Ecosystem Sciences Division (2024). National Coral Reef Monitoring Program: Benthic cover derived from analysis of images collected during stratified random surveys (StRS) across American Samoa [Dataset]. http://doi.org/10.7289/v52v2dfw
    Available download formats: html
    Dataset updated
    Apr 3, 2024
    Dataset provided by
    National Centers for Environmental Information (https://www.ncei.noaa.gov/)
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Coral Reef Ecosystems Program (CREP)
    Authors
    NOAA Pacific Islands Fisheries Science Center, Ecosystem Sciences Division
    Time period covered
    Feb 15, 2015 - Present
    Area covered
    OCEAN BASIN > Pacific Ocean > Manu'a Group > Olosega Island (14S169W0014), OCEAN BASIN > Pacific Ocean > Manu'a Group > Olosega (14S169W0016), OCEAN BASIN > Pacific Ocean > Manu'a Group > Ofu (14S169W0002), OCEAN > PACIFIC OCEAN > SOUTH PACIFIC OCEAN > POLYNESIA > SAMOA, OCEAN > PACIFIC OCEAN > SOUTH PACIFIC OCEAN, Rose Atoll Marine National Monument, Fagatele Bay National Marine Sanctuary, OCEAN BASIN > Pacific Ocean > Manu'a Group > Ta'u Island (14S169W0012), geographic bounding box, OCEAN BASIN > Pacific Ocean > Manu'a Group > Ofu Island (14S169W0013)
    Description

    The benthic cover data in this collection result from the analysis of images produced by benthic photo-quadrat surveys. These surveys were conducted along transects at stratified random sites across American Samoa as a part of the ongoing National Coral Reef Monitoring Program (NCRMP). Benthic cover data derived from this imagery are also included in this collection.

    A stratified random sampling (StRS) design was employed to survey the coral reef ecosystems throughout the region. The survey domain encompassed the majority of the mapped area of reef and hard bottom habitats in the 0-30 m depth range. The stratification scheme included island, reef zone, and depth. Sampling effort was allocated based on strata area and sites were randomly located within strata. Sites were surveyed using photo-quadrats along transects. The imagery were then analyzed to produce estimates of relative abundance (percent cover), frequency of occurrence, benthic community taxonomic composition, and relative generic richness.

    Benthic habitat imagery were quantitatively analyzed using a web-based annotation tool called CoralNet (Beijbom et al. 2015). In general, images are analyzed to produce three functional group levels of benthic cover: Tier 1 (e.g., hard coral, soft coral, macroalgae, turf algae, etc.), Tier 2 (e.g., Hard Coral = massive, branching, foliose, encrusting, etc.; Macroalgae = upright macroalgae, encrusting macroalgae, bluegreen macroalgae, and Halimeda, etc.), and Tier 3 (e.g., Hard Coral = Astreopora sp, Favia sp, Pocillopora, etc.; Macroalgae = Caulerpa sp, Dictyosphaeria sp, Padina sp, etc.).

  20. Benthic Surveys in Aua, American Samoa: benthic cover derived from analysis...

    • gimi9.com
    + more versions
    Cite
    Benthic Surveys in Aua, American Samoa: benthic cover derived from analysis of benthic images collected during belt transect surveys of coral demography from 2022-09-12 to 2022-09-22 (NCEI Accession 0275987) | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_eb170fbbfd6bcd9557521a3e5c7a814696fd41cc/
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Aua
    Description

    This data package includes benthic cover data of Aua Reef, American Samoa, produced from the analysis of benthic imagery performed by the Ecosystem Sciences Division (ESD) of the Pacific Island Fisheries Science Center (PIFSC), and funded by the Coral Reef Conservation Program (CRCP). Benthic imagery was collected at 18 randomly-selected sites during coral demographic surveys by the NOAA ESD during the 2022 fly-in mission to American Samoa (MP2206). After processing and sorting site photos, imagery was quantitatively analyzed using the web-based CoralNet image annotation tool. CoralNet projects random points on each image, and the benthic elements falling directly underneath each point are identified by trained scientists. The source imagery analyzed to produce benthic cover estimates is archived separately with the NOAA National Centers for Environmental Information (NCEI) in NCEI Accession 0270551.
