
AI-Generated Fiction

AI-Generated Fiction examines the creative potential of artificial intelligence in storytelling. The project uses AI models to produce narratives, exploring how machines can emulate human-like imagination. By analyzing AI-authored stories, it questions notions of originality and creativity, and asks what it means to be an “author” when a narrative is machine-made. Ultimately, this research sheds light on the evolving relationship between human writers and intelligent algorithms in literature.

Small language models (SLMs) are increasingly utilized for on-device applications due to their ability to ensure user privacy, reduce inference latency, and operate independently of cloud infrastructure. However, their performance is often limited when processing complex data structures such as graphs, which are ubiquitous in real-world datasets like social networks and system interactions. Graphs inherently encode intricate structural dependencies, requiring models to capture both local and global relationships. Traditional language models, designed primarily for text data, struggle to meet these requirements, leading to suboptimal performance on graph-related tasks. To overcome this limitation, we propose a novel graph encoder-based prompt tuning framework that integrates a graph convolutional network (GCN) with a graph transformer. By leveraging the complementary strengths of the GCN for local structural modeling and the graph transformer for capturing global relationships, our method enables SLMs to effectively process graph data. This integration significantly enhances the ability of SLMs to handle graph-centric tasks while maintaining the efficiency required for resource-constrained devices. The experimental results show that our approach not only improves the performance of SLMs on various graph benchmarks but also achieves results that closely approach the performance of a large language model (LLM). This work highlights the potential of extending SLMs for graph-based applications and advancing the capabilities of on-device artificial intelligence.
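To make the architecture concrete, the sketch below illustrates one plausible way a GCN and a transformer encoder could be combined into soft prompt tokens for an SLM. It is a minimal illustration under assumptions, not the framework's actual implementation: the layer sizes, pooling scheme, number of prompt tokens, and the names `GCNLayer` and `GraphPromptEncoder` are all hypothetical, and PyTorch is assumed.

```python
# Minimal sketch (assumed: PyTorch, toy dimensions; the real framework's layer
# sizes, pooling mechanism, and SLM interface are not specified in the abstract).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: aggregate neighbor features via a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops, normalized beforehand
        return torch.relu(self.linear(adj @ x))

class GraphPromptEncoder(nn.Module):
    """GCN (local structure) + transformer encoder (global relations) -> soft prompt tokens."""
    def __init__(self, in_dim, hidden_dim, slm_dim, num_prompt_tokens=8):
        super().__init__()
        self.gcn = GCNLayer(in_dim, hidden_dim)
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Learnable queries that pool node embeddings into a fixed number of prompt vectors
        self.queries = nn.Parameter(torch.randn(num_prompt_tokens, hidden_dim))
        self.to_slm = nn.Linear(hidden_dim, slm_dim)

    def forward(self, x, adj):
        h = self.gcn(x, adj)                      # local neighborhood aggregation
        h = self.transformer(h.unsqueeze(0))      # global attention over all nodes
        h = h.squeeze(0)
        # Cross-attend the learnable queries to node embeddings to obtain prompt tokens
        attn = torch.softmax(self.queries @ h.T, dim=-1)
        prompts = attn @ h
        # Output would be prepended to the SLM's input embeddings during prompt tuning
        return self.to_slm(prompts)               # (num_prompt_tokens, slm_dim)

# Usage: a 5-node toy graph with 16-dim node features, projected to a 768-dim SLM embedding space
x = torch.randn(5, 16)
adj = torch.eye(5)  # placeholder adjacency (self-loops only), for illustration
encoder = GraphPromptEncoder(in_dim=16, hidden_dim=64, slm_dim=768)
print(encoder(x, adj).shape)  # torch.Size([8, 768])
```

In this reading, only the small graph encoder and the prompt vectors would be trained while the SLM's weights stay frozen, which is what keeps the approach cheap enough for on-device use.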

Large language models (LLMs) have demonstrated competitive performance across various domains, particularly in tasks requiring creativity, and thus offer a wide range of applications. This study evaluates the creative writing performance of LLMs such as GPT-4 by comparing 100 creative writing pieces generated by GPT-4 with 100 texts authored by humans, using parameters such as fluency, flexibility, originality, elaboration, usefulness, and specific creative strategies. The findings reveal that GPT-4 closely emulates the performance of human authors, producing high-quality, creative content. Despite inconsistencies among the evaluators, GPT-4 demonstrates significant potential to enhance human creativity and improve the quality of creative writing. However, the limitations inherent to GPT-4's training data, including its dependence on factual and historical background information, indicate critical differences from human creativity.
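As a small illustration of how such a dimension-by-dimension comparison could be tabulated, the sketch below averages rater scores per creativity dimension for each group. The scores, the 1-5 scale, and the helper `dimension_means` are hypothetical and are not the study's actual data or rating instrument.

```python
# Minimal sketch (assumed: illustrative 1-5 scores; the study's real rater counts,
# scale, and statistics are not given in this summary).
from statistics import mean

DIMENSIONS = ["fluency", "flexibility", "originality", "elaboration", "usefulness"]

def dimension_means(ratings):
    """Average each creativity dimension across rated pieces.

    ratings: list of dicts, one per (piece, rater) pair, e.g. {"fluency": 4, ...}
    """
    return {d: round(mean(r[d] for r in ratings), 2) for d in DIMENSIONS}

# Hypothetical ratings for two GPT-4 pieces and two human-authored pieces
gpt4_ratings = [
    {"fluency": 5, "flexibility": 4, "originality": 3, "elaboration": 4, "usefulness": 4},
    {"fluency": 4, "flexibility": 4, "originality": 4, "elaboration": 5, "usefulness": 3},
]
human_ratings = [
    {"fluency": 4, "flexibility": 5, "originality": 5, "elaboration": 4, "usefulness": 4},
    {"fluency": 5, "flexibility": 3, "originality": 4, "elaboration": 3, "usefulness": 5},
]

print("GPT-4:", dimension_means(gpt4_ratings))
print("Human:", dimension_means(human_ratings))
```

Comparing group means per dimension, rather than a single overall score, is one way to surface where machine-generated text tracks human writing closely (e.g., fluency) and where it diverges (e.g., originality), which is the kind of contrast the study's parameters are designed to expose.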
