Why this hasn't landed yet
It is a methods paper about a reference dataset. No patient was harmed, no drug was pulled, no regulator announced anything. The downstream impact (better benchmarking of early warning systems) is real but one step removed from anything a general audience would recognize as news.
What happens next
Researchers developing signal detection algorithms now have a concrete benchmark they did not have before. Expect a wave of retrospective validation papers testing existing methods against the time-indexed dataset to see which ones would have caught post-market dangers before regulatory confirmation. Methods that looked strong on undated datasets may look weaker when temporal discipline is applied. The EMA and national competent authorities in the EU will likely face pressure to adopt whichever methods validate best. A parallel question this dataset makes answerable: are there drug and adverse-event pairs in the 25.5% post-marketing category that took unusually long to appear in labels, and what delayed them? That is where the politically uncomfortable findings will come from.
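The core of that retrospective exercise is simple to state: for each drug and adverse-event pair, compare the date a detection method would first have raised a signal against the date the label changed. A minimal sketch, assuming a hypothetical schema of (drug, event, label_change_date) tuples and a hypothetical detector output; none of these names come from the actual dataset:

```python
# Illustrative temporal-validation sketch. The reference entries, column
# layout, and detector output below are hypothetical stand-ins, not the
# published dataset's schema.
from datetime import date

# Hypothetical time-indexed reference: (drug, adverse_event, label_change_date)
reference = [
    ("drug_a", "hepatotoxicity", date(2019, 6, 1)),
    ("drug_b", "qt_prolongation", date(2021, 2, 15)),
]

# Hypothetical detector output: first date each pair's signal crossed threshold
signal_dates = {
    ("drug_a", "hepatotoxicity"): date(2018, 11, 20),
    ("drug_b", "qt_prolongation"): date(2021, 8, 3),
}

def lead_times(reference, signal_dates):
    """Days by which each signal preceded (positive) or lagged (negative)
    the label change; None if the method never flagged the pair."""
    results = {}
    for drug, event, label_date in reference:
        flagged = signal_dates.get((drug, event))
        results[(drug, event)] = (label_date - flagged).days if flagged else None
    return results

print(lead_times(reference, signal_dates))
```

A method that "validates well" in this framing is one whose lead times are consistently positive; on undated benchmarks that distinction is invisible, which is exactly why temporal discipline can demote methods that previously looked strong.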
The catch
The dataset covers only centrally authorized products: 1,513 medicines out of a much larger universe available in Europe. Nationally authorized products are excluded, a significant scope limitation for any researcher trying to generalize findings. The time indexing relies on dates of SmPC label changes as a proxy for when regulators 'recognized' an adverse event, but label updates lag the actual regulatory decision-making by an unknown and variable amount. The timestamp is real; what it measures is debatable. No named critics have surfaced yet, but the methodological debate over what counts as 'recognition' is predictable and will arrive in peer review.