PyTorch - Word Embedding
  • Date: 2024-11-03

In this chapter, we will understand the famous word embedding model − word2vec. The word2vec model produces word embeddings with the help of a group of related models. The original word2vec model is implemented in pure C code, and the gradients are computed manually.
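
Before looking at word2vec itself, it helps to see what a word embedding is in PyTorch terms: a lookup table that maps integer word indices to dense vectors. Below is a minimal sketch, with an arbitrary toy vocabulary size of 10 and embedding dimension of 4 −

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10, embedding_dim=4)
word_ids = torch.LongTensor([1, 3])   # indices of two words in the vocabulary
vectors = embedding(word_ids)         # one 4-dimensional vector per index
print(vectors.shape)                  # torch.Size([2, 4])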

The implementation of the word2vec model in PyTorch is explained in the steps below −

Step 1

Import the libraries needed for word embedding as mentioned below −

import torch
import torch.nn as nn
import torch.nn.functional as F

Step 2

Implement the Skip-Gram model of word embedding as a class called SkipGramModel. It includes the emb_size, emb_dimension, u_embeddings, and v_embeddings attributes.

class SkipGramModel(nn.Module):
   def __init__(self, emb_size, emb_dimension):
      super(SkipGramModel, self).__init__()
      self.emb_size = emb_size
      self.emb_dimension = emb_dimension
      # u_embeddings holds the center-word vectors, v_embeddings the context-word vectors.
      self.u_embeddings = nn.Embedding(emb_size, emb_dimension, sparse=True)
      self.v_embeddings = nn.Embedding(emb_size, emb_dimension, sparse=True)
      self.init_emb()
   def init_emb(self):
      # Center-word vectors start in a small uniform range; context vectors start at zero.
      initrange = 0.5 / self.emb_dimension
      self.u_embeddings.weight.data.uniform_(-initrange, initrange)
      self.v_embeddings.weight.data.zero_()
   def forward(self, pos_u, pos_v, neg_v):
      emb_u = self.u_embeddings(pos_u)
      emb_v = self.v_embeddings(pos_v)
      # Dot product between each center word and its true context word.
      score = torch.mul(emb_u, emb_v).squeeze()
      score = torch.sum(score, dim=1)
      score = F.logsigmoid(score)
      # Dot products between each center word and its negative samples.
      neg_emb_v = self.v_embeddings(neg_v)
      neg_score = torch.bmm(neg_emb_v, emb_u.unsqueeze(2)).squeeze()
      neg_score = F.logsigmoid(-1 * neg_score)
      # Negative-sampling loss: push positive scores up and negative scores down.
      return -1 * (torch.sum(score) + torch.sum(neg_score))
   def save_embedding(self, id2word, file_name, use_cuda):
      if use_cuda:
         embedding = self.u_embeddings.weight.cpu().data.numpy()
      else:
         embedding = self.u_embeddings.weight.data.numpy()
      with open(file_name, 'w') as fout:
         # Header line: vocabulary size and embedding dimension.
         fout.write('%d %d\n' % (len(id2word), self.emb_dimension))
         for wid, w in id2word.items():
            e = embedding[wid]
            e = ' '.join(map(lambda x: str(x), e))
            fout.write('%s %s\n' % (w, e))

def test():
   model = SkipGramModel(100, 100)
   id2word = dict()
   for i in range(100):
      id2word[i] = str(i)
   # The file name is arbitrary; pass use_cuda=True when the model lives on a GPU.
   model.save_embedding(id2word, 'word_embedding.txt', use_cuda=False)
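
The forward pass above returns the negative-sampling loss, so training reduces to calling the model on batches of (center, context, negatives) index tensors and stepping an optimizer. The following is a minimal sketch, not part of the original tutorial; the batch size, number of negative samples, and the choice of optim.SparseAdam (which supports the sparse gradients produced by sparse=True embeddings) are illustrative assumptions −

import torch.optim as optim

model = SkipGramModel(emb_size=100, emb_dimension=50)
optimizer = optim.SparseAdam(model.parameters(), lr=0.001)

# Hypothetical random index batches standing in for real (center, context) pairs.
batch_size, num_negatives = 16, 5
pos_u = torch.randint(0, 100, (batch_size,))                 # center-word indices
pos_v = torch.randint(0, 100, (batch_size,))                 # context-word indices
neg_v = torch.randint(0, 100, (batch_size, num_negatives))   # negative-sample indices

optimizer.zero_grad()
loss = model(pos_u, pos_v, neg_v)   # negative-sampling loss for this batch
loss.backward()
optimizer.step()
print(loss.item())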

Step 3

Implement the main method to build the model and save the embeddings when the script is run −

if __name__ == "__main__":
   test()
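
The saved file follows the common word2vec text format: a header line with the vocabulary size and embedding dimension, then one line per word holding the word and its vector components. As a sketch (assuming the 'word_embedding.txt' file name used in test() above), the embeddings can be read back like this −

embeddings = {}
with open('word_embedding.txt') as fin:
   vocab_size, dim = map(int, fin.readline().split())
   for line in fin:
      parts = line.rstrip().split(' ')
      embeddings[parts[0]] = [float(x) for x in parts[1:]]
print(vocab_size, dim, len(embeddings['0']))   # 100 100 100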