Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset supports a literature mapping of AI-driven content generation, analyzing 631 solutions published over the last five years to better understand and characterize the Generative Artificial Intelligence landscape. Tools like ChatGPT, DALL-E, or Midjourney have democratized access to Large Language Models, enabling the creation of human-like content. However, the concept 'Generative Artificial Intelligence' lacks a universally accepted definition, leading to potential misunderstandings.
The study has been published in International Journal of Interactive Multimedia and Artificial Intelligence.
García-Peñalvo, F. J., & Vázquez-Ingelmo, A. (2023). What do we mean by GenAI? A systematic mapping of the evolution, trends, and techniques involved in Generative AI. International Journal of Interactive Multimedia and Artificial Intelligence, In Press.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Accurate and interpretable solar power forecasting is critical for effectively integrating photovoltaic (PV) systems into modern energy infrastructure. This paper introduces a novel two-stage hybrid framework that couples deep learning-based time series prediction with generative Large Language Models (LLMs) to enhance forecast accuracy and model interpretability. At its core, the proposed SolarTrans model leverages a lightweight Transformer-based encoder-decoder architecture tailored for short-term DC power prediction using multivariate inverter and weather data, including irradiance, ambient and module temperatures, and temporal features. Experiments conducted on publicly available datasets from two PV plants over 34 days demonstrate strong predictive performance. The SolarTrans model achieves a Mean Absolute Error (MAE) of 0.0782 and 0.1544, Root Mean Squared Error (RMSE) of 0.1760 and 0.4424, and R² scores of 0.9692 and 0.7956 on Plant 1 and Plant 2, respectively. On the combined dataset, the model yields an MAE of 0.1105, RMSE of 0.3189, and R² of 0.8967. To address the interpretability challenge, we fine-tuned the Flan-T5 model on structured prompts derived from domain-informed templates and forecast outputs. The resulting explanation module achieves ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum scores of 0.7889, 0.7211, 0.7759, and 0.7771, respectively, along with a BLEU score of 0.6558, indicating high-fidelity generation of domain-relevant natural language explanations.
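As a rough illustration of the forecasting stage described in this abstract, the sketch below shows a lightweight Transformer encoder-decoder for short-horizon DC power prediction from multivariate inputs. It is written in PyTorch; the feature count, layer sizes, look-back window, and forecast horizon are illustrative assumptions, not the published SolarTrans configuration.

```python
# Minimal sketch (PyTorch) of a lightweight Transformer encoder-decoder for
# short-term DC power forecasting from multivariate inputs such as irradiance,
# ambient/module temperature, and time features. All hyperparameters are
# illustrative assumptions, not the published SolarTrans settings.
import torch
import torch.nn as nn

class ForecastTransformer(nn.Module):
    def __init__(self, n_features=5, d_model=64, nhead=4, horizon=12):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # embed each time step
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=128, batch_first=True,
        )
        # Learned query tokens, one per forecast step, fed to the decoder.
        self.queries = nn.Parameter(torch.randn(horizon, d_model) * 0.02)
        self.head = nn.Linear(d_model, 1)                   # DC power per step

    def forward(self, x):                                   # x: (batch, lookback, n_features)
        src = self.input_proj(x)
        tgt = self.queries.unsqueeze(0).repeat(x.size(0), 1, 1)
        out = self.transformer(src, tgt)                    # (batch, horizon, d_model)
        return self.head(out).squeeze(-1)                   # (batch, horizon)

model = ForecastTransformer()
window = torch.randn(8, 48, 5)        # 8 windows of 48 past steps, 5 features
print(model(window).shape)            # torch.Size([8, 12])
```

Forecast quality on held-out windows could then be summarized with the same MAE, RMSE, and R² metrics reported above, for example via sklearn.metrics.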
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sample sizes for the different trophic niches in the bird beak dataset. The class distribution is highly imbalanced.
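Given the imbalance noted above, one common way to use these sample sizes when training a trophic-niche classifier is to weight classes inversely to their frequency. The sketch below is illustrative only; the file name and column name are assumptions, not part of the released dataset.

```python
# Sketch: derive balanced class weights for the imbalanced trophic-niche labels.
# "bird_beak_dataset.csv" and the "trophic_niche" column are hypothetical names.
import numpy as np
import pandas as pd
from sklearn.utils.class_weight import compute_class_weight

df = pd.read_csv("bird_beak_dataset.csv")
labels = df["trophic_niche"]
classes = np.unique(labels)

# Weight each niche inversely to its sample size so rare niches are not ignored.
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
print(dict(zip(classes, np.round(weights, 2))))
```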
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Like science itself, our understanding of chemical concepts and the way we teach them change over time. This paper explores historical and modern perspectives of the concept of valence in the context of collegiate general chemistry and draws comparisons to responses from generative artificial intelligence (genAI) tools such as ChatGPT. A fundamental concept in chemistry, valence in the early and mid-20th century was primarily defined as the “combining capacity” of atoms. Twenty-first century textbooks do not include this historical definition but rather use valence as an adjective to modify other nouns, e.g., valence electron or valence orbital. To explore these different perspectives in other information sources that could be used by students, we used a systematic series of prompts about valence to analyze the responses from ChatGPT, Bard, Liner, and ChatSonic in September and December 2023. Our findings show the historical definition is very common in responses to prompts which use valence or valency as a noun but less common when prompts include valence as an adjective. Regarding this concept, the state-of-the-art genAI tools are more consistent with textbooks from the 1950s than modern collegiate general chemistry textbooks. These findings present an opportunity for chemistry educators to observe and discuss with students the nature of science and how our understanding of chemistry changes. Along with implications for educators, we present an example activity that may be deployed in general chemistry classes.