Efficient Link Prediction via GNN Layers Induced by Negative Sampling

Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories. First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions. While extremely efficient at inference time, model expressiveness is limited such that isomorphic nodes contributing to candidate edges may not be distinguishable, compromising accuracy. In contrast, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships, disambiguating isomorphic nodes to improve accuracy, but with increased model complexity. To better navigate this trade-off, we propose a novel GNN architecture whereby the forward pass explicitly depends on both positive (as is typical) and negative (unique to our approach) edges to inform more flexible, yet still cheap node-wise embeddings. This is achieved by recasting the embeddings themselves as minimizers of a forward-pass-specific energy function that favors separation of positive and negative samples. Notably, this energy is distinct from the actual training loss shared by most existing link prediction models, where contrastive pairs only influence the backward pass. As demonstrated by extensive empirical evaluations, the resulting architecture retains the inference speed of node-wise models, while producing competitive accuracy with edge-wise alternatives.
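The abstract's core idea — computing node embeddings in the forward pass as approximate minimizers of an energy that pulls positive edge endpoints together and pushes sampled negative pairs apart, then scoring candidate links with a cheap node-wise decoder — can be sketched as a toy example. This is an illustrative assumption, not the paper's actual architecture: the quadratic-plus-hinge energy form, the plain gradient-descent solver, and the hyperparameters (`lam`, `margin`, `steps`, `lr`) are all invented here for demonstration.

```python
import numpy as np

def forward_embeddings(X, pos_edges, neg_edges, lam=1.0, margin=2.0,
                       steps=200, lr=0.05):
    """Return node embeddings Z as approximate minimizers of the energy
        E(Z) = ||Z - X||^2                       (anchor to input features)
             + sum_{(u,v) in pos} ||z_u - z_v||^2        (attraction)
             + lam * sum_{(u,v) in neg} max(0, margin - ||z_u - z_v||)^2
                                                          (repulsion)
    solved by gradient descent inside the forward pass."""
    Z = X.copy()
    for _ in range(steps):
        grad = 2.0 * (Z - X)  # gradient of the anchor term
        for u, v in pos_edges:  # attraction: pull positive pairs together
            d = Z[u] - Z[v]
            grad[u] += 2.0 * d
            grad[v] -= 2.0 * d
        for u, v in neg_edges:  # repulsion: hinge active while dist < margin
            d = Z[u] - Z[v]
            dist = np.linalg.norm(d) + 1e-12
            if dist < margin:
                g = -2.0 * lam * (margin - dist) * d / dist
                grad[u] += g
                grad[v] -= g
        Z -= lr * grad
    return Z

def score(Z, u, v):
    """Node-wise decoder: inner product of the two precomputed embeddings."""
    return float(Z[u] @ Z[v])
```

Because the negative edges enter only the energy that defines the embeddings, inference still needs just one embedding per node and an inner product per candidate edge — the node-wise speed the abstract refers to — while the embeddings themselves have been shaped by contrastive pairs.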

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2025-01, Vol. 37 (1), p. 253-264
Main Authors: Wang, Yuxin, Hu, Xiannian, Gan, Quan, Huang, Xuanjing, Qiu, Xipeng, Wipf, David
Format: Article
Language:English
Subjects: Accuracy; Artificial intelligence (AI); Computational modeling; Computer architecture; Convergence; Costs; Decoding; Graph neural networks; link prediction; machine learning; Optimization; Predictive models; Training
DOI: 10.1109/TKDE.2024.3481015
ISSN: 1041-4347
EISSN: 1558-2191
Source: IEEE Xplore (Online service)