        [NLP] A quick look at the PyTorch code of the transformer in NLP

        2022-05-15 17:06

        • A study of the classic transformer
        • Reposted from the WeChat public account 【機器學習煉丹術】
        • Author: 陳亦新 (reposted with permission)
        • Contact: WeChat cyx645016617
        • Feedback and discussion are welcome


        • Code walkthrough

          • transformer

          • Embedding

          • Encoder_MultipleLayers

          • Encoder

        • Complete code



        Code walkthrough

        transformer

        class transformer(nn.Sequential):
            def __init__(self, encoding, **config):
                super(transformer, self).__init__()
                if encoding == 'drug':
                    self.emb = Embeddings(config['input_dim_drug'], config['transformer_emb_size_drug'], 50, config['transformer_dropout_rate'])
                    self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_drug'], 
                                                            config['transformer_emb_size_drug'], 
                                                            config['transformer_intermediate_size_drug'], 
                                                            config['transformer_num_attention_heads_drug'],
                                                            config['transformer_attention_probs_dropout'],
                                                            config['transformer_hidden_dropout_rate'])
                elif encoding == 'protein':
                    self.emb = Embeddings(config['input_dim_protein'], config['transformer_emb_size_target'], 545, config['transformer_dropout_rate'])
                    self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_target'], 
                                                            config['transformer_emb_size_target'], 
                                                            config['transformer_intermediate_size_target'], 
                                                            config['transformer_num_attention_heads_target'],
                                                            config['transformer_attention_probs_dropout'],
                                                            config['transformer_hidden_dropout_rate'])

            ### parameter v (tuple of length 2) is from utils.drug2emb_encoder 
            def forward(self, v):
                e = v[0].long().to(device)
                e_mask = v[1].long().to(device)
                print(e.shape,e_mask.shape)
                ex_e_mask = e_mask.unsqueeze(1).unsqueeze(2)
                ex_e_mask = (1.0 - ex_e_mask) * -10000.0

                emb = self.emb(e)
                encoded_layers = self.encoder(emb.float(), ex_e_mask.float())
                return encoded_layers[:,0]
        • It has only two components: an Embedding layer and an Encoder_MultipleLayers module.
        • The input v to forward is a tuple with two elements: the first is the data (token ids), the second is the mask marking the positions that contain valid data (a minimal usage sketch follows).
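
        A minimal usage sketch, assuming the classes in this post are already defined. The config keys are the ones read by the class above; the specific values (vocabulary size, embedding size, number of layers, batch size) are assumptions for illustration, and a global device is defined because forward() relies on one.

        import torch

        device = torch.device('cpu')                   # forward() above expects a global `device`

        config = {
            'input_dim_drug': 2586,                    # assumed vocabulary size
            'transformer_emb_size_drug': 128,
            'transformer_dropout_rate': 0.1,
            'transformer_n_layer_drug': 2,
            'transformer_intermediate_size_drug': 512,
            'transformer_num_attention_heads_drug': 8,
            'transformer_attention_probs_dropout': 0.1,
            'transformer_hidden_dropout_rate': 0.1,
        }
        model = transformer('drug', **config)

        batch_size, seq_len = 4, 50                    # the drug branch is padded to length 50
        tokens = torch.randint(0, config['input_dim_drug'], (batch_size, seq_len))
        mask = torch.ones(batch_size, seq_len)         # 1 = real token, 0 = padding

        out = model((tokens, mask))
        print(out.shape)                               # torch.Size([4, 128]): the first-token embedding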

        Embedding

        class Embeddings(nn.Module):
            """Construct the embeddings from protein/target, position embeddings.
            """

            def __init__(self, vocab_size, hidden_size, max_position_size, dropout_rate):
                super(Embeddings, self).__init__()
                self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
                self.position_embeddings = nn.Embedding(max_position_size, hidden_size)

                self.LayerNorm = nn.LayerNorm(hidden_size)
                self.dropout = nn.Dropout(dropout_rate)

            def forward(self, input_ids):
                seq_length = input_ids.size(1)
                position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
                position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
                
                words_embeddings = self.word_embeddings(input_ids)
                position_embeddings = self.position_embeddings(position_ids)

                embeddings = words_embeddings + position_embeddings
                embeddings = self.LayerNorm(embeddings)
                embeddings = self.dropout(embeddings)
                return embeddings
        • It contains three kinds of components: the Embeddings (word embeddings plus position embeddings), and then LayerNorm and Dropout layers.
        torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None,
                           max_norm=None, norm_type=2.0, scale_grad_by_freq=False,
                           sparse=False, _weight=None)

        nn.Embedding is a simple lookup table that stores the embedding vectors of a fixed-size dictionary. In other words, given an index, the embedding layer returns the embedding vector for that index, and the embedding vectors capture the semantic relationships between the symbols the indices stand for.

        The input is a list of indices, and the output is the list of corresponding embedding vectors.

        • num_embeddings (python:int) – the size of the dictionary; for example, if 5000 distinct words occur in total, pass 5000, and the valid indices are 0–4999.
        • embedding_dim (python:int) – the dimension of each embedding vector, i.e. how many dimensions are used to represent one symbol.
        • padding_idx (python:int, optional) – the padding index. For example, if the input length is 100 but the actual sentence lengths vary, the tail has to be filled with a fixed index; this argument specifies that index, so that the network does not compute the padding symbol's relation to the other symbols (its embedding is initialized to 0).
        • max_norm (python:float, optional) – the maximum norm; if an embedding vector's norm exceeds this bound, it is renormalized.
        • norm_type (python:float, optional) – which norm to use when comparing against max_norm; defaults to the 2-norm.
        • scale_grad_by_freq (boolean, optional) – scale the gradients by the inverse frequency of the words in the mini-batch. Defaults to False.
        • sparse (bool, optional) – if True, the gradient with respect to the weight matrix will be a sparse tensor.

        Here is an example (see the sketch below):

        If the largest integer index in your input exceeds the configured dictionary size, you will get an error:

        • Embedding has learnable parameters: a num_embeddings × embedding_dim weight matrix.
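
        A small sketch with assumed sizes (not from the original post) showing the lookup, the learnable weight matrix, the padding_idx behaviour, and the out-of-range error:

        import torch
        import torch.nn as nn

        emb = nn.Embedding(num_embeddings=10, embedding_dim=3, padding_idx=0)
        print(emb.weight.shape)          # torch.Size([10, 3]): the learnable parameters
        print(emb.weight[0])             # the padding_idx row is initialized to zeros

        ids = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])   # a batch of index lists
        print(emb(ids).shape)            # torch.Size([2, 4, 3]): one 3-dim vector per index

        bad_ids = torch.tensor([[1, 2, 10]])               # 10 is outside the valid range 0..9
        try:
            emb(bad_ids)
        except IndexError as err:
            print('index out of range:', err)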

        Encoder_MultipleLayers

        class Encoder_MultipleLayers(nn.Module):
            def __init__(self, n_layer, hidden_size, intermediate_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
                super(Encoder_MultipleLayers, self).__init__()
                layer = Encoder(hidden_size, intermediate_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob)
                self.layer = nn.ModuleList([copy.deepcopy(layer) for _ in range(n_layer)])    

            def forward(self, hidden_states, attention_mask, output_all_encoded_layers=True):
                all_encoder_layers = []
                for layer_module in self.layer:
                    hidden_states = layer_module(hidden_states, attention_mask)
                return hidden_states
        • The embedding in the transformer turns the input data into vectors; this Encoder_MultipleLayers module is the key part that extracts features.
        • Its structure is very simple: it is just n_layer Encoder blocks stacked one after another (a small shape-check sketch follows).
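
        A small shape check with assumed sizes, using the classes defined in this post: each of the n_layer Encoder blocks is an independent deep copy, and the hidden size is preserved from layer to layer.

        import torch

        enc = Encoder_MultipleLayers(n_layer=2, hidden_size=128, intermediate_size=512,
                                     num_attention_heads=8, attention_probs_dropout_prob=0.1,
                                     hidden_dropout_prob=0.1)

        x = torch.randn(4, 50, 128)       # [batch, seq_len, hidden]
        mask = torch.zeros(4, 1, 1, 50)   # additive mask: 0 keeps a position, -10000.0 blocks it
        print(enc(x, mask).shape)         # torch.Size([4, 50, 128]): same shape in, same shape out

        # copy.deepcopy means the stacked layers do NOT share weights:
        q0 = enc.layer[0].attention.self.query.weight
        q1 = enc.layer[1].attention.self.query.weight
        print(q0 is q1)                   # False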

        Encoder

        class Encoder(nn.Module):
            def __init__(self, hidden_size, intermediate_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
                super(Encoder, self).__init__()
                self.attention = Attention(hidden_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob)
                self.intermediate = Intermediate(hidden_size, intermediate_size)
                self.output = Output(intermediate_size, hidden_size, hidden_dropout_prob)

            def forward(self, hidden_states, attention_mask):
                attention_output = self.attention(hidden_states, attention_mask)
                intermediate_output = self.intermediate(attention_output)
                layer_output = self.output(intermediate_output, attention_output)
                return layer_output    
        • It consists of an Attention part, an Intermediate part, and an Output part.
        class Attention(nn.Module):
            def __init__(self, hidden_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
                super(Attention, self).__init__()
                self.self = SelfAttention(hidden_size, num_attention_heads, attention_probs_dropout_prob)
                self.output = SelfOutput(hidden_size, hidden_dropout_prob)

            def forward(self, input_tensor, attention_mask):
                self_output = self.self(input_tensor, attention_mask)
                attention_output = self.output(self_output, input_tensor)
                return attention_output  
        class SelfAttention(nn.Module):
            def __init__(self, hidden_size, num_attention_heads, attention_probs_dropout_prob):
                super(SelfAttention, self).__init__()
                if hidden_size % num_attention_heads != 0:
                    raise ValueError(
                        "The hidden size (%d) is not a multiple of the number of attention "
                        "heads (%d)" % (hidden_size, num_attention_heads))
                self.num_attention_heads = num_attention_heads
                self.attention_head_size = int(hidden_size / num_attention_heads)
                self.all_head_size = self.num_attention_heads * self.attention_head_size

                self.query = nn.Linear(hidden_size, self.all_head_size)
                self.key = nn.Linear(hidden_size, self.all_head_size)
                self.value = nn.Linear(hidden_size, self.all_head_size)

                self.dropout = nn.Dropout(attention_probs_dropout_prob)

            def transpose_for_scores(self, x):
                # num_attention_heads = 8, attention_head_size = 128 / 8 = 16
                new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
                x = x.view(*new_x_shape)
                return x.permute(0, 2, 1, 3)

            def forward(self, hidden_states, attention_mask):
                # hidden_states.shape = [batch, 50, 128]
                mixed_query_layer = self.query(hidden_states)
                mixed_key_layer = self.key(hidden_states)
                mixed_value_layer = self.value(hidden_states)

                query_layer = self.transpose_for_scores(mixed_query_layer)
                key_layer = self.transpose_for_scores(mixed_key_layer)
                value_layer = self.transpose_for_scores(mixed_value_layer)
                # query_layer.shape = [batch,8,50,16]

                # Take the dot product between "query" and "key" to get the raw attention scores.
                attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
                # attention_score.shape = [batch,8,50,50]
                attention_scores = attention_scores / math.sqrt(self.attention_head_size)

                attention_scores = attention_scores + attention_mask

                # Normalize the attention scores to probabilities.
                attention_probs = nn.Softmax(dim=-1)(attention_scores)

                # This is actually dropping out entire tokens to attend to, which might
                # seem a bit unusual, but is taken from the original Transformer paper.
                attention_probs = self.dropout(attention_probs)

                context_layer = torch.matmul(attention_probs, value_layer)
                context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
                new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
                context_layer = context_layer.view(*new_context_layer_shape)
                return context_layer

        This part is similar to the processing flow of a typical ViT. Although the transformer travelled from NLP into CV, it is also fun to look back at the NLP transformer from the CV/ViT side. The point to pay attention to here is the multi-head concept. Suppose the hidden size is 128 and the number of heads is set to 8; the heads then behave a bit like channels in a convolution. The 128-dim token is treated as 8 tokens of 16 dims each, and self-attention is computed on each of them separately. Comparing multi-head attention to convolution feels reasonable, and comparing it to grouped convolution also works (a numeric sketch of the head split follows the list below):

        • Compared to convolution: if the size of each head is fixed at 16, the heads act like channels, so increasing the number of heads is like increasing the number of channels of a convolution kernel.
        • Compared to grouped convolution: if the hidden size is fixed at 128, the number of heads is the number of groups, so increasing the number of heads is like using more convolution groups, which reduces the amount of computation.
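
        A quick numeric sketch of the head split described above (assumed sizes: hidden_size = 128, 8 heads of 16 dimensions, sequence length 50), mirroring what transpose_for_scores does:

        import torch

        batch, seq_len, hidden = 4, 50, 128
        num_heads = 8
        head_size = hidden // num_heads            # 128 / 8 = 16 dims per head

        q = torch.randn(batch, seq_len, hidden)
        k = torch.randn(batch, seq_len, hidden)

        # [batch, seq, hidden] -> [batch, heads, seq, head_size], as in transpose_for_scores
        q = q.view(batch, seq_len, num_heads, head_size).permute(0, 2, 1, 3)
        k = k.view(batch, seq_len, num_heads, head_size).permute(0, 2, 1, 3)

        scores = torch.matmul(q, k.transpose(-1, -2)) / head_size ** 0.5
        print(scores.shape)                        # torch.Size([4, 8, 50, 50]): one 50x50 map per head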

        • The rest of the code is just fully-connected layers plus LayerNorm and Dropout, so it is not discussed further (a short sketch of the shared pattern follows).
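
        For completeness, a minimal sketch with assumed sizes of the pattern that SelfOutput and Output (see the complete code below) share: Linear -> Dropout -> LayerNorm(x + residual).

        import torch
        import torch.nn as nn

        hidden = 128
        dense = nn.Linear(hidden, hidden)
        norm = nn.LayerNorm(hidden)
        drop = nn.Dropout(0.1)

        x = torch.randn(4, 50, hidden)         # e.g. the self-attention output
        residual = torch.randn(4, 50, hidden)  # the sublayer input that is added back
        out = norm(drop(dense(x)) + residual)  # same structure as SelfOutput / Output below
        print(out.shape)                       # torch.Size([4, 50, 128])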

        Complete code

        import torch
        import torch.nn as nn
        import torch.nn.functional as F
        import copy, math

        # the forward() of the `transformer` class below expects a global `device`
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        class Embeddings(nn.Module):
            """Construct the embeddings from protein/target, position embeddings.
            """

            def __init__(self, vocab_size, hidden_size, max_position_size, dropout_rate):
                super(Embeddings, self).__init__()
                self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
                self.position_embeddings = nn.Embedding(max_position_size, hidden_size)

                self.LayerNorm = nn.LayerNorm(hidden_size)
                self.dropout = nn.Dropout(dropout_rate)

            def forward(self, input_ids):
                seq_length = input_ids.size(1)
                position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
                position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
                
                words_embeddings = self.word_embeddings(input_ids)
                position_embeddings = self.position_embeddings(position_ids)

                embeddings = words_embeddings + position_embeddings
                embeddings = self.LayerNorm(embeddings)
                embeddings = self.dropout(embeddings)
                return embeddings
        class Encoder_MultipleLayers(nn.Module):
            def __init__(self, n_layer, hidden_size, intermediate_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
                super(Encoder_MultipleLayers, self).__init__()
                layer = Encoder(hidden_size, intermediate_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob)
                self.layer = nn.ModuleList([copy.deepcopy(layer) for _ in range(n_layer)])    

            def forward(self, hidden_states, attention_mask, output_all_encoded_layers=True):
                all_encoder_layers = []
                for layer_module in self.layer:
                    hidden_states = layer_module(hidden_states, attention_mask)
                    #if output_all_encoded_layers:
                    #    all_encoder_layers.append(hidden_states)
                #if not output_all_encoded_layers:
                #    all_encoder_layers.append(hidden_states)
                return hidden_states
        class Encoder(nn.Module):
            def __init__(self, hidden_size, intermediate_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
                super(Encoder, self).__init__()
                self.attention = Attention(hidden_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob)
                self.intermediate = Intermediate(hidden_size, intermediate_size)
                self.output = Output(intermediate_size, hidden_size, hidden_dropout_prob)

            def forward(self, hidden_states, attention_mask):
                attention_output = self.attention(hidden_states, attention_mask)
                intermediate_output = self.intermediate(attention_output)
                layer_output = self.output(intermediate_output, attention_output)
                return layer_output    
        class Attention(nn.Module):
            def __init__(self, hidden_size, num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
                super(Attention, self).__init__()
                self.self = SelfAttention(hidden_size, num_attention_heads, attention_probs_dropout_prob)
                self.output = SelfOutput(hidden_size, hidden_dropout_prob)

            def forward(self, input_tensor, attention_mask):
                self_output = self.self(input_tensor, attention_mask)
                attention_output = self.output(self_output, input_tensor)
                return attention_output  

        class SelfAttention(nn.Module):
            def __init__(self, hidden_size, num_attention_heads, attention_probs_dropout_prob):
                super(SelfAttention, self).__init__()
                if hidden_size % num_attention_heads != 0:
                    raise ValueError(
                        "The hidden size (%d) is not a multiple of the number of attention "
                        "heads (%d)" % (hidden_size, num_attention_heads))
                self.num_attention_heads = num_attention_heads
                self.attention_head_size = int(hidden_size / num_attention_heads)
                self.all_head_size = self.num_attention_heads * self.attention_head_size

                self.query = nn.Linear(hidden_size, self.all_head_size)
                self.key = nn.Linear(hidden_size, self.all_head_size)
                self.value = nn.Linear(hidden_size, self.all_head_size)

                self.dropout = nn.Dropout(attention_probs_dropout_prob)

            def transpose_for_scores(self, x):
                new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
                x = x.view(*new_x_shape)
                return x.permute(0, 2, 1, 3)

            def forward(self, hidden_states, attention_mask):
                mixed_query_layer = self.query(hidden_states)
                mixed_key_layer = self.key(hidden_states)
                mixed_value_layer = self.value(hidden_states)

                query_layer = self.transpose_for_scores(mixed_query_layer)
                key_layer = self.transpose_for_scores(mixed_key_layer)
                value_layer = self.transpose_for_scores(mixed_value_layer)

                # Take the dot product between "query" and "key" to get the raw attention scores.
                attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
                attention_scores = attention_scores / math.sqrt(self.attention_head_size)

                attention_scores = attention_scores + attention_mask

                # Normalize the attention scores to probabilities.
                attention_probs = nn.Softmax(dim=-1)(attention_scores)

                # This is actually dropping out entire tokens to attend to, which might
                # seem a bit unusual, but is taken from the original Transformer paper.
                attention_probs = self.dropout(attention_probs)

                context_layer = torch.matmul(attention_probs, value_layer)
                context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
                new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
                context_layer = context_layer.view(*new_context_layer_shape)
                return context_layer
            
        class SelfOutput(nn.Module):
            def __init__(self, hidden_size, hidden_dropout_prob):
                super(SelfOutput, self).__init__()
                self.dense = nn.Linear(hidden_size, hidden_size)
                self.LayerNorm = nn.LayerNorm(hidden_size)
                self.dropout = nn.Dropout(hidden_dropout_prob)

            def forward(self, hidden_states, input_tensor):
                hidden_states = self.dense(hidden_states)
                hidden_states = self.dropout(hidden_states)
                hidden_states = self.LayerNorm(hidden_states + input_tensor)
                return hidden_states   
            
            
        class Intermediate(nn.Module):
            def __init__(self, hidden_size, intermediate_size):
                super(Intermediate, self).__init__()
                self.dense = nn.Linear(hidden_size, intermediate_size)

            def forward(self, hidden_states):
                hidden_states = self.dense(hidden_states)
                hidden_states = F.relu(hidden_states)
                return hidden_states
            
        class Output(nn.Module):
            def __init__(self, intermediate_size, hidden_size, hidden_dropout_prob):
                super(Output, self).__init__()
                self.dense = nn.Linear(intermediate_size, hidden_size)
                self.LayerNorm = nn.LayerNorm(hidden_size)
                self.dropout = nn.Dropout(hidden_dropout_prob)

            def forward(self, hidden_states, input_tensor):
                hidden_states = self.dense(hidden_states)
                hidden_states = self.dropout(hidden_states)
                hidden_states = self.LayerNorm(hidden_states + input_tensor)
                return hidden_states
        class transformer(nn.Sequential):
            def __init__(self, encoding, **config):
                super(transformer, self).__init__()
                if encoding == 'drug':
                    self.emb = Embeddings(config['input_dim_drug'], config['transformer_emb_size_drug'], 50, config['transformer_dropout_rate'])
                    self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_drug'], 
                                                            config['transformer_emb_size_drug'], 
                                                            config['transformer_intermediate_size_drug'], 
                                                            config['transformer_num_attention_heads_drug'],
                                                            config['transformer_attention_probs_dropout'],
                                                            config['transformer_hidden_dropout_rate'])
                elif encoding == 'protein':
                    self.emb = Embeddings(config['input_dim_protein'], config['transformer_emb_size_target'], 545, config['transformer_dropout_rate'])
                    self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_target'], 
                                                            config['transformer_emb_size_target'], 
                                                            config['transformer_intermediate_size_target'], 
                                                            config['transformer_num_attention_heads_target'],
                                                            config['transformer_attention_probs_dropout'],
                                                            config['transformer_hidden_dropout_rate'])

            ### parameter v (tuple of length 2) is from utils.drug2emb_encoder 
            def forward(self, v):
                e = v[0].long().to(device)
                e_mask = v[1].long().to(device)
                print(e.shape,e_mask.shape)
                ex_e_mask = e_mask.unsqueeze(1).unsqueeze(2)
                ex_e_mask = (1.0 - ex_e_mask) * -10000.0

                emb = self.emb(e)
                encoded_layers = self.encoder(emb.float(), ex_e_mask.float())
                return encoded_layers[:,0]