
Stay ahead of emerging marketing research trends without the chaos of information overload. We track shifts in topics, methods, and expectations in marketing research, helping you understand where the field is heading and how to contribute meaningfully. Build confidence, stay informed, and define your own research path.

Research Trend Report Showing Where You Need to Leap Next

2026 Marketing Science Annual Trend Report*


What will be published in 2–3 years?

Which research topics are emerging and which are declining?

Is your area of interest in demand within marketing science?

*Based on a content analysis of 225 special issue calls for papers in leading marketing journals.

Full report to be released in Spring '26


Don't miss out — request it to be sent directly to your email!


"ILOVE Marketing 2026" Report Insights

Methodology Behind the Report

What We Studied and Why

This study maps the evolving landscape of marketing scholarship by analysing calls for papers (CFPs) for special issues published in marketing journals between 2022 and 2025.

Why CFPs? Unlike analysing already-published articles, which reflect what researchers were working on 2–3 years ago, CFPs reveal what journal editors believe matters right now and where the field should be heading. They're forward-looking signals of scholarly priorities, acting as editorial roadmaps that shape which topics attract research attention and funding.

CFPs also offer rich metadata beyond just themes. They tell us about editorial team composition, journal quality, geographic diversity, and timing, enabling us to examine how research priorities differ across journal tiers, regions, and time periods.

Our final dataset included 225 CFPs from 62 journals across 27 publishers. The sample leans toward recent years (61% from 2024–2025), ensuring our findings reflect current editorial priorities. Journals spanned the full quality spectrum using the Academic Journal Guide (AJG) 2024 and Scopus quartile rankings (Q1–Q4). Special issues were managed by an average of three editors representing nearly two countries each, reflecting the global nature of marketing scholarship today.

Building the Thematic Framework

To analyse 225 special issue calls for papers (CFPs), we developed a theme and subtheme framework through an iterative process combining secondary evidence, full-sample exploration, confirmatory multi-coder classification, and manual validation.

We began by reviewing secondary resources, including scientific articles and industry reports on emerging priorities in marketing research. This informed an initial exploratory structure of 12 themes with multiple subthemes, defined with boundary rules and keyword cues. We then tested and refined the structure through exploratory analysis of the full CFP dataset, both to assess how strongly each theme appeared in editorial gap signals and to capture additional themes and subthemes emerging from CFP language. This produced the updated theme and subtheme framework used for classification.

Using this framework, we conducted confirmatory coding with three independent AI coders (GPT-5.2, Gemini Pro, and Claude 4.5 Sonnet), each coding every CFP under identical inputs and instructions. Coders assigned a primary theme, up to two secondary themes, and relative emphasis allocations with confidence ratings. Agreement was high across coders. In the small number of cases where all three coders disagreed, we resolved classifications through a structured review process including keyword checks, review of emphasis allocations, and expert reading, with senior faculty adjudication where needed.

Finally, trained human coders manually classified a random subset of CFPs to validate the automated consensus. Alignment between manual coding and the triangulated AI classifications was strong, providing an empirical check on reliability.
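The consensus step described above can be sketched as a simple majority vote over the three coders' primary-theme labels, with full disagreement flagged for manual review. This is a minimal illustration only: the function name and theme labels are hypothetical, not taken from the study's actual pipeline.

```python
from collections import Counter

def consensus_theme(codes):
    """Resolve a primary-theme label from three independent coder outputs.

    Returns the majority label when at least two coders agree, or None
    when all three disagree (those cases go to structured manual review).
    """
    label, count = Counter(codes).most_common(1)[0]
    return label if count >= 2 else None

# Two of three coders agree, so the majority label wins.
print(consensus_theme(["AI & Analytics", "AI & Analytics", "Sustainability"]))

# All three disagree: returns None, flagging the CFP for adjudication.
print(consensus_theme(["AI & Analytics", "Sustainability", "Branding"]))
```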

How We Analysed the Data

With all CFPs reliably coded, we ran four main types of analysis.

Co-occurrence analysis examined which themes appeared together in the same CFPs more often than chance would predict. We built a matrix showing every possible theme pairing, then tested each with chi-square statistics and calculated "lift ratios" – a measure of how much more (or less) frequently two themes co-occur compared to what you'd expect if they were independent. We also mapped these relationships as a network, revealing which themes serve as connectors across the field and which sit in relative isolation.

Journal tier analysis compared theme emphasis between higher-tier journals (ABS 3, 4, 4*) and lower-tier journals (ABS 1, 2), testing whether certain topics are more valued at different quality levels.

Geographic scope analysis examined whether theme emphasis varies depending on whether guest editors are largely based in the journal's publisher country versus internationally distributed, as a proxy for how locally anchored versus broadly framed certain gaps appear.

Temporal trend analysis tracked how theme prevalence changed from 2022 to 2025, identifying which topics are gaining momentum and which are fading. We also looked at volatility – themes that spiked suddenly (like AI after ChatGPT's release) versus those with steady, sustained presence.

Finally, we ran multiple robustness checks – varying coding thresholds, excluding ambiguous cases, and controlling for publisher effects and publication year – to ensure our findings weren't artefacts of any single methodological choice. All main results held across these alternative specifications.
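The lift ratio mentioned above has a simple closed form: the observed co-occurrence rate of two themes divided by the rate expected if they were independent. A minimal sketch, using hypothetical counts rather than the report's actual figures:

```python
def lift_ratio(n_both, n_a, n_b, n_total):
    """Lift = observed co-occurrence rate / rate expected under independence.

    Values above 1 mean the two themes appear together in CFPs more often
    than chance would predict; values below 1, less often.
    """
    observed = n_both / n_total
    expected = (n_a / n_total) * (n_b / n_total)
    return observed / expected

# Hypothetical counts: themes A and B each appear in 50 of 225 CFPs
# and co-occur in 20 of them.
print(round(lift_ratio(20, 50, 50, 225), 2))  # 1.8 – strong positive association
```

Lift of 1.0 would mean the pairing occurs exactly as often as independence predicts; the chi-square test then assesses whether the deviation from 1.0 is statistically significant.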

© 2026 by Dan Muravsky
