Welcoming Multimodal AI
By the 神櫻AI team, under the guidance of Prof. 高煥堂
*** This article is excerpted from books by 高煥堂 ***

OpenAI found a large, readily available source of supervision on the internet: image-text pairs. CLIP learns <visual> concepts effectively from <natural language> supervision. Given a new image, the model can detect which text (caption) in our dataset actually matches it; given an input sentence, it can retrieve the most relevant image for that sentence. Once trained on a dataset, the model can also be used as a classifier - for example, one trained on your own company's image library.

(Basic architecture)
The encoders extract the features of each input and map them to a point in a shared latent space. For a new input, the model extracts its features and maps them to a new point in that space; then, via matrix operations, it finds the points lying near the <new point> - those neighbours are the predicted matches. This appendix uses a CLIP for Chinese herbal medicine (中藥材) as the running example: a text encoder extracts each text's features, each text is (initially, arbitrarily) mapped to a point in the latent space, and training then pulls matching image points and text points together. Once training is complete, the learned knowledge is expressed in the model's parameters (i.e., the weights W and biases B).

Prediction (1): from an image, find the matching text.
Prediction (2): from a text, find the matching image.

We can also observe how the similarities change during training, presenting the latest similarity values as an array; for example, the similarity with <prompt-2> rises as training proceeds.

Step-1: Check the build environment (and the imported packages)

Parameter-definition code:

## myConfig.py
import torch

class CFG:
    image_path = "C:/Moein/AI/Datasets/Flicker-8k/Images"
    captions_path = "C:/Moein/AI/Datasets/Flicker-8k"
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    text_encoder_model = "distilbert-base-uncased"
    text_embedding = 768
    text_tokenizer = "distilbert-base-uncased"
    pretrained = False    # for both image encoder and text encoder
    trainable = False     # for both image encoder and text encoder
    temperature = 1.0
    image_embedding = 2048
    # for projection head; used for both image and text encoders
    num_projection_layers = 1
    projection_dim = 256  # the original value in this myConfig was 256
    dropout = 0.1
    lr = 1e-3             # referenced by the optimizer in later listings
    weight_decay = 1e-3
#END

## clip_ex_001.py
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from transformers import DistilBertModel, DistilBertConfig
from myConfig import CFG

class ImageEncoder(nn.Module):
    """Encode images to a fixed size vector"""
    def __init__(self):
        super().__init__()
        # load the pretrained ResNet50 model
        self.model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # transfer learning: no gradients needed (weights stay frozen)
        for param in self.model.parameters():
            param.requires_grad = False
        # attach our own head: ResNet50's output becomes this head's input
        fc_features = self.model.fc.in_features   # extracts 2048 features
        self.model.fc = nn.Sequential(
            nn.Linear(fc_features, CFG.image_embedding),
            # nn.Softmax(dim=1)
        )

    def forward(self, x):
        return self.model(x)

class TextEncoder(nn.Module):
    def __init__(self, model_name=CFG.text_encoder_model,
                 pretrained=CFG.pretrained, trainable=CFG.trainable):
        super().__init__()
        if pretrained:
            self.model = DistilBertModel.from_pretrained(model_name)
        else:
            self.model = DistilBertModel(config=DistilBertConfig())
        for p in self.model.parameters():
            p.requires_grad = trainable
        # we are using the CLS token hidden embedding as the sentence embedding
        self.target_token_idx = 0

    def forward(self, input_ids, attention_mask):
        output = self.model(input_ids=input_ids, attention_mask=attention_mask)
        last_hidden_state = output.last_hidden_state
        return last_hidden_state[:, self.target_token_idx, :]

class ProjectionHead(nn.Module):
    def __init__(self, embedding_dim,
                 projection_dim=CFG.projection_dim, dropout=CFG.dropout):
        super().__init__()
        self.projection = nn.Linear(embedding_dim, projection_dim)
        self.gelu = nn.GELU()
        self.fc = nn.Linear(projection_dim, projection_dim)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(projection_dim)

    def forward(self, x):
        projected = self.projection(x)
        x = self.gelu(projected)
        x = self.fc(x)
        x = self.dropout(x)
        x = x + projected          # residual connection
        return self.layer_norm(x)

class CLIPModel(nn.Module):
    def __init__(self, temperature=CFG.temperature,
                 image_embedding=CFG.image_embedding,
                 text_embedding=CFG.text_embedding):
        super().__init__()
        self.image_encoder = ImageEncoder()
        self.text_encoder = TextEncoder()
        self.image_projection = ProjectionHead(embedding_dim=image_embedding)
        self.text_projection = ProjectionHead(embedding_dim=text_embedding)
        self.temperature = temperature

    def forward(self, batch):
        # Getting Image and Text Features
        image_features = self.image_encoder(batch["image"])
        text_features = self.text_encoder(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"])
        # Getting Image and Text Embeddings (with same dimension)
        image_embeddings = self.image_projection(image_features)
        text_embeddings = self.text_projection(text_features)
        # symmetric contrastive loss
        logits = (text_embeddings @ image_embeddings.T) / self.temperature
        images_similarity = image_embeddings @ image_embeddings.T
        texts_similarity = text_embeddings @ text_embeddings.T
        targets = F.softmax(
            (images_similarity + texts_similarity) / 2 * self.temperature, dim=-1)
        texts_loss = cross_entropy(logits, targets, reduction='none')
        images_loss = cross_entropy(logits.T, targets.T, reduction='none')
        loss = (images_loss + texts_loss) / 2.0   # shape: (batch_size)
        return loss.mean()

def cross_entropy(preds, targets, reduction='none'):
    log_softmax = nn.LogSoftmax(dim=-1)
    loss = (-targets * log_softmax(preds)).sum(1)
    if reduction == 'none':
        return loss               # shape: (batch_size)
    return loss.mean()

model = CLIPModel()
#END
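Before preparing real data, it is worth a quick smoke test of the model definition. The following is a minimal sketch, not one of the book's listings: it assumes the classes from clip_ex_001.py were saved as a module named clip_model (a hypothetical name), feeds two random "images" plus two tokenized sentences through CLIPModel, and prints the scalar loss.

## smoke_test.py  (a sketch; the module name clip_model is assumed)
import torch
from transformers import AutoTokenizer
from clip_model import CLIPModel   # hypothetical module holding clip_ex_001.py's classes

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
texts = ["a ginseng root", "a plate of goji berries"]
tok = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

batch = {
    "image": torch.randn(2, 3, 224, 224),   # two fake RGB images, ResNet50-sized
    "input_ids": tok["input_ids"],
    "attention_mask": tok["attention_mask"],
}
model = CLIPModel()
loss = model(batch)
print(loss.item())   # a single finite scalar confirms the shapes line up end to end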
Step-2: Prepare the training data

For its image encoder, this CLIP uses the well-known pretrained model ResNet50. Next, we fine-tune the program a little. The code is as follows:

## clip_ex_002.py
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertModel, DistilBertConfig
from transformers import AutoTokenizer
from myConfig import CFG

# ImageEncoder, TextEncoder, ProjectionHead, CLIPModel and cross_entropy()
# are identical to clip_ex_001.py and are omitted here.

model = CLIPModel()

# collect the training images: four classes of herbal-medicine photos
root_path = 'c:/oopc/m_clip_data/train/'
im_list = [os.path.join(root_path, 'class_01', i)
           for i in os.listdir(root_path + 'class_01')]
im_list += [os.path.join(root_path, 'class_02', i)
            for i in os.listdir(root_path + 'class_02')]
im_list += [os.path.join(root_path, 'class_03', i)
            for i in os.listdir(root_path + 'class_03')]
im_list += [os.path.join(root_path, 'class_04', i)
            for i in os.listdir(root_path + 'class_04')]

# convert the images to tensors
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor()])
data_set = ImageFolder(root_path, transform=transform)
train_loader = DataLoader(data_set, batch_size=7, shuffle=False)

# one caption per training image
sequences = [
    "一棵人蔘",
    "一條韓國人蔘",
    "一盤枸杞",
    "這是銀川枸杞",
    "這是兩只靈芝",
    "許多靈芝",
    "玫瑰香菇",
]
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")

for batch_idx, (data, target) in enumerate(train_loader):
    batch["image"] = data    # the loop body is elided in the excerpt; attaching the
    print(data.shape)        # image batch and checking its shape fits this step
#END
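Note that ImageFolder pairs each image with a class index, not with a caption; the pairing with sequences works here only because batch_size=7 covers the whole folder and shuffle=False preserves ImageFolder's sorted file order, which must match the caption order. A quick sketch (a hypothetical helper, assuming root_path exists with the four class folders) to verify that alignment before training:

## check_alignment.py  (hypothetical helper, not in the book)
import os
from torchvision import transforms
from torchvision.datasets import ImageFolder

root_path = 'c:/oopc/m_clip_data/train/'     # same folder as in clip_ex_002.py
sequences = ["一棵人蔘", "一條韓國人蔘", "一盤枸杞", "這是銀川枸杞",
             "這是兩只靈芝", "許多靈芝", "玫瑰香菇"]

transform = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])
data_set = ImageFolder(root_path, transform=transform)

# ImageFolder sorts class folders and file names; caption i must describe sample i
for (path, class_idx), caption in zip(data_set.samples, sequences):
    print(caption, '<-', os.path.basename(path))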
Step-3: Verify the <training step> code; the goal is to train for just one round and print the loss once.

## clip_ex_003.py
# Imports and the model classes are identical to clip_ex_001.py; the data
# preparation (image folder, transform, train_loader, the seven captions and
# the tokenizer) is identical to clip_ex_002.py. Both parts are omitted here.

model = CLIPModel()
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=CFG.lr, weight_decay=CFG.weight_decay)

for batch_idx, (data, target) in enumerate(train_loader):
    batch["image"] = data        # attach the image batch to the tokenized texts
    optimizer.zero_grad()
    loss = model(batch)          # forward() returns the mean contrastive loss
    loss.backward()              # missing in the excerpt; required before step()
    optimizer.step()
    print('loss =', loss.item())
#END
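For comparison, the original CLIP paper expresses the same objective with hard targets: the i-th caption belongs to the i-th image, so the target for row i is simply the index i. A self-contained sketch with random embeddings (illustrative values only, not the book's code):

## loss_check.py  (illustrative; follows the CLIP paper's hard-target formulation)
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 7, 256
image_embeddings = F.normalize(torch.randn(n, d), dim=-1)
text_embeddings = F.normalize(torch.randn(n, d), dim=-1)
logits = text_embeddings @ image_embeddings.T   # (7, 7) text-to-image scores

# hard targets: caption i matches image i
labels = torch.arange(n)
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
print(loss.item())   # about ln(7) = 1.95 for random, untrained embeddings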
Step-4: Train for multiple rounds and observe whether the loss value keeps falling.

## clip_ex_004_train.py
# Identical to clip_ex_003.py except that the single training step is wrapped
# in an epoch loop (model classes and data preparation omitted as before).

model = CLIPModel()
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=CFG.lr, weight_decay=CFG.weight_decay)

epochs = 100   # the epoch count is not visible in the excerpt
for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        batch["image"] = data
        optimizer.zero_grad()
        loss = model(batch)
        loss.backward()
        optimizer.step()
    print('epoch', epoch, 'loss =', loss.item())
#END
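To actually see whether the loss keeps falling, it helps to collect loss.item() once per epoch into a list and plot the curve afterwards. A small hypothetical helper (not in the book; the sample values are made up for illustration):

## plot_losses.py  (hypothetical helper, not in the book)
import matplotlib.pyplot as plt

def plot_losses(losses):
    """Plot the per-epoch loss values collected during training."""
    plt.plot(range(1, len(losses) + 1), losses, marker='o')
    plt.xlabel('epoch')
    plt.ylabel('mean contrastive loss')
    plt.show()

if __name__ == '__main__':
    # inside the epoch loop, append loss.item() to a list, then:
    plot_losses([1.95, 1.52, 1.10, 0.83, 0.61])   # made-up values, illustration only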
So far the loss computation has lived inside CLIPModel's forward(); the next listing moves it outside the class definition, so forward() simply returns the two batches of embeddings and the training loop computes the loss itself.

## clip_ex_005_loss.py
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertModel, DistilBertConfig
from transformers import AutoTokenizer
from myConfig import CFG

# ImageEncoder, TextEncoder and ProjectionHead are identical to clip_ex_001.py (omitted).

class CLIPModel(nn.Module):
    def __init__(self, temperature=CFG.temperature,
                 image_embedding=CFG.image_embedding,
                 text_embedding=CFG.text_embedding):
        super().__init__()
        self.image_encoder = ImageEncoder()
        self.text_encoder = TextEncoder()
        self.image_projection = ProjectionHead(embedding_dim=image_embedding)
        self.text_projection = ProjectionHead(embedding_dim=text_embedding)
        self.temperature = temperature

    def forward(self, batch):
        # Getting Image and Text Features
        image_features = self.image_encoder(batch["image"])
        text_features = self.text_encoder(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"])
        # Getting Image and Text Embeddings (with same dimension)
        image_embeddings = self.image_projection(image_features)
        text_embeddings = self.text_projection(text_features)
        return image_embeddings, text_embeddings

def cross_entropy(preds, targets, reduction='none'):
    log_softmax = nn.LogSoftmax(dim=-1)
    loss = (-targets * log_softmax(preds)).sum(1)
    if reduction == 'none':
        return loss
    return loss.mean()

#================================================
model = CLIPModel()
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=CFG.lr, weight_decay=CFG.weight_decay)
#================================================
# Data preparation: identical to clip_ex_002.py (image folder, transform,
# ImageFolder data_set, train_loader, the seven captions and the tokenizer).

for batch_idx, (data, target) in enumerate(train_loader):
    batch["image"] = data
    #====================================================
    image_embeddings, text_embeddings = model(batch)
    temperature = CFG.temperature
    logits = (text_embeddings @ image_embeddings.T) / temperature
    images_similarity = image_embeddings @ image_embeddings.T
    texts_similarity = text_embeddings @ text_embeddings.T
    targets = F.softmax(
        (images_similarity + texts_similarity) / 2 * temperature, dim=-1)
    texts_loss = cross_entropy(logits, targets, reduction='none')
    images_loss = cross_entropy(logits.T, targets.T, reduction='none')
    loss_v = (images_loss + texts_loss) / 2.0   # shape: (batch_size)
    loss = loss_v.mean()
    print('loss =', loss.item())   # show the externally computed loss
    optimizer.zero_grad()
    loss.backward()                # missing in the excerpt; required before step()
    optimizer.step()
#END
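Why do these soft targets work? With L2-normalized embeddings, each vector's similarity to itself is exactly 1, while its similarity to any other vector is strictly smaller, so after the softmax the target matrix is diagonally dominant - a softened version of the hard identity targets shown earlier. A tiny self-contained check (illustrative only; the book's code does not normalize before the loss):

## targets_check.py  (illustrative; assumes L2-normalized embeddings)
import torch
import torch.nn.functional as F

torch.manual_seed(0)
torch.set_printoptions(precision=2)
img = F.normalize(torch.randn(4, 8), dim=-1)   # 4 image embeddings, unit length
txt = F.normalize(torch.randn(4, 8), dim=-1)   # 4 text embeddings, unit length

targets = F.softmax((img @ img.T + txt @ txt.T) / 2, dim=-1)
print(targets)             # the diagonal entry is the largest in every row
print(targets.sum(dim=-1)) # each row sums to 1, i.e. a proper distribution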
Training can be paused; before pausing, the current parameter values can be exported. The next listing adds a paragraph that saves the model's weights and biases (W&B) to a *.pt file, so that the program after it can pick up where we left off and continue training.

## clip_ex_006_pt_im.py
import glob
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertModel, DistilBertConfig
from transformers import AutoTokenizer
from PIL import Image
from myConfig import CFG
import myCLIP as CL   # module holding the model classes; its name is not shown in the excerpt

def cross_entropy(preds, targets, reduction='none'):
    log_softmax = nn.LogSoftmax(dim=-1)
    loss = (-targets * log_softmax(preds)).sum(1)
    if reduction == 'none':
        return loss
    return loss.mean()

#================================================
model = CL.CLIPModel()
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=CFG.lr, weight_decay=CFG.weight_decay)
#================================================
# convert the images to tensors
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor()])

path = 'c:/oopc/m_clip_data/train/'   # the same image folder as before
img_tensor_list = []
filenames = glob.glob(path + '**/*.jpg', recursive=True)
for fna in filenames:
    image = Image.open(fna).convert('RGB')
    img_tensor = transform(image)
    img_tensor_list.append(img_tensor)
print('\n', len(img_tensor_list))

class myDataset(Dataset):
    def __init__(self):
        pass
    def __getitem__(self, idx):
        return img_tensor_list[idx]
    def __len__(self):
        return len(img_tensor_list)

mydset = myDataset()
# put the data set into a DataLoader
train_loader = DataLoader(mydset, batch_size=7, shuffle=False)

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sequences = [
    "一棵人蔘",
    "一條韓國人蔘",
    "一盤枸杞",
    "這是銀川枸杞",
    "這是兩只靈芝",
    "許多靈芝",
    "玫瑰香菇",
]
batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")

FILE = 'c:/ox/clip_111.pt'   # per the text below: exported under c:/ox/
epochs = 100                 # the epoch count is not visible in the excerpt

for batch_idx, data in enumerate(train_loader):
    batch["image"] = data
    #====================================================
    print('訓練', epochs, '回合...')
    for epoch in range(epochs):   # the epoch loop is implied by the message above
        image_embeddings, text_embeddings = model(batch)
        temperature = CFG.temperature
        logits = (text_embeddings @ image_embeddings.T) / temperature
        images_similarity = image_embeddings @ image_embeddings.T
        texts_similarity = text_embeddings @ text_embeddings.T
        targets = F.softmax((images_similarity + texts_similarity) / 2, dim=-1)
        texts_loss = cross_entropy(logits, targets, reduction='none')
        images_loss = cross_entropy(logits.T, targets.T, reduction='none')
        loss_v = (images_loss + texts_loss) / 2.0   # shape: (batch_size)
        loss = loss_v.mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print('model saved')
torch.save(model, FILE)
#END

The parameters are exported for the time being and saved as the clip_111*.pt file under the c:/ox/ directory; the next program reloads that file and continues training on top of it.

## clip_ex_007_pt.py
# The imports, cross_entropy(), data preparation, tokenization and training loop
# are identical to clip_ex_006_pt_im.py (omitted); only the model construction differs:

model = CL.CLIPModel()
model = torch.load(FILE)    # restore the model exported by clip_ex_006_pt_im.py
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=CFG.lr, weight_decay=CFG.weight_decay)

# ... continue training exactly as before, then export again:
print('model saved')
torch.save(model, FILE)
#END
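Note that torch.save(model, FILE) pickles the whole model object, which ties the checkpoint to the exact class definitions in use. A commonly used, more robust alternative (a sketch, not the book's method) is to save the state_dicts of the model and the optimizer:

## checkpoint_state_dict.py  (a sketch of an alternative, not the book's method)
import torch

def save_checkpoint(model, optimizer, path):
    """Save weights and optimizer state instead of pickling the whole model."""
    torch.save({'model': model.state_dict(),
                'optimizer': optimizer.state_dict()}, path)

def load_checkpoint(model, optimizer, path):
    """Restore the states into freshly constructed model/optimizer objects."""
    ckpt = torch.load(path, map_location='cpu')
    model.load_state_dict(ckpt['model'])
    optimizer.load_state_dict(ckpt['optimizer'])

With this scheme, clip_ex_007_pt.py would first rebuild model = CL.CLIPModel() and then call load_checkpoint(model, optimizer, FILE), which keeps working even after the class code is refactored.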
"這是銀川枸杞","這是兩只靈芝","許多靈芝","玫瑰香菇",]batch=tokenizer(sequences,padding=True,truncation=True,return_tensors="pt")forbatch_idx,datainenumerate(train_loader):#====================================================print('訓練',epochs,'回合...')image_embeddings,text_embeddings=model(batch)temperature=CFG.temperaturelogits=(text_embeddings@image_embeddings.T)/temperature49images_similarityimages_similarity=image_embeddings@image_embeddings.Ttexts_similarity=text_embeddings@text_embeddings.T(images_similarity+texts_similarity)/2)texts_loss=cross_entropy(logits,targets,reduction='none')images_loss=cross_entropy(logits.T,targets.T,reduction='none')loss_v=(images_loss+texts_loss)/2.0#shape:(batch_soptimizer.zero_grad()optimizer.step()print('modelsaved')torch.save(model,FILE)#ENDA.4觀察相似度矩陣文本與圖像之間的相似度,並以矩陣表達之,如下圖。##clip_ex_008_simi.pyimporttorchvision.modelsasmodelsfromtorchvisionimporttransformsfromtorch.utils.dataimportDataset,DataLoaderfromtransformersimportDistilBertModel,DistilBertConfigfromtransformersimportAutoTokenizerdefcross_entropy(preds,targets,reduction='none'):log_softmax=nn.LogSoftmax(dim=-1)loss=(-targets*log_softmax(preds))returnloss.mean()#================================================model=CL.CLIPModel()model=torch.load(optimizer=torch.optim.AdamW(model.parameters(),lr=CFG.lr,weight_decay=CFG.weight_decay)#================================================#把圖片轉換成Tensortransform=transforms.Compose([transforms.Resize((224,224)),transforms.ToTensor()filenames=glob.glob(path+'**/*.jpg',recursive=True)forfnainfilenames:img_tensor=transform(image)img_tensor_list.append(img_tensor)print('\n',len(img_tensor_lisclassmyDataset(Dataset):def__init__(self):def__getitem__(self,idx):def__len__(self):returnlen(img_tensor_list)mydset=myDataset()#將data_set放入DataLoaader裡train_loader=DataLoader(mydset,batch_size=7,shuffle=False)checkpoint="bert-base-uncased"tokenizer=AutoTokenizer.from_pretrained(checkpoint)"一棵人蔘","一條韓國人蔘","一盤枸杞","這是銀川枸杞","這是兩只靈芝","這是兩只靈芝","許多靈芝","玫瑰香菇",]batch=tokenizer(sequences,padding=True,truncation=True,return_tensors="pt")forbatch_idx,datainenumerate(train_loader):#====================================================print('繼續訓練',epochs,'回合..image_embeddings,text_embeddings=model(batch)temperature=CFG.temperaturelogits=(text_embeddings@image_embeddings.T)/temperatureimages_similarity=image_embeddings@image_embeddings.Ttexts_similarity=text_embeddings@text_embeddings.T(images_similarity+texts_similarity)/2)texts_loss=cross_entropy(logits,targets,reduction='none')images_loss=cross_entropy(logits.T,targets.T,reduction='none')loss_v=(images_loss+texts_loss)/2.0#shape:(batch_optimizer.zero_grad()optimizeroptimizer.step()print('modelsaved')torch.save(model,FILE)#===========預測==================================test_sequences=sequences.copy()model.eval()withtorch.no_grad():t_batch=tokenizer(test_sequences,padding=True,truncation=True,return_tensors="pt")t_text_features=model.text_encoder(input_ids=t_batch["input_ids"],atte)t_text_embeddings=model.text_projection(t_text_features)t_image_features=model.image_encoder(data)t_image_embeddings=model.image_projection(t_image_features)t_image_embeddings_n=F.normalize(t_image_embeddings,p=2,dim=-1)t_text_embeddings_n=F.normalize(t_text_embeddings,p=2,dim=dot_similarity=t_text_embeddings_n@t_image_embsimi=F.softmax(dot_s#ENDA.5預測(一):輸入一張圖,找出最相似(接近)文句會輸出:玫瑰蘑菇。##clip_ex_009_pred.pyfromfromtorch.utils.dataimportDataset,DataLoaderfromtorchvisionimporttransformsfromtransformersimportAutoTokenizerdefcross_entropy(preds,targets,re