Auto-Tinder was created to train an AI, using TensorFlow and Python 3, that learns your interests and automatically plays the Tinder swiping game for you.

In this document, I am going to explain the following steps that were needed to create Auto-Tinder:
Auto-Tinder is a concept project created purely for fun and educational purposes. It shall never be abused to harm anybody or to spam the platform. The Auto-Tinder scripts should not be used with your Tinder profile, since they surely violate Tinder's terms of service.
I have written this piece of software mainly out of two reasons:
The first step was to find out how the Tinder app communicates with Tinder's backend server. Since Tinder offers a web version of its portal, this is as easy as going to tinder.com, opening Chrome DevTools, and having a quick look at the network protocol.
The content shown in the picture above is from a request to https://api.gotinder.com/v2/recs/core that is made when the tinder.com landing page loads. Clearly, Tinder has some sort of internal API that the frontend uses to communicate with the backend.
By analyzing the content of /v2/recs/core, it becomes clear that this API endpoint returns a list of user profiles of people nearby.
The data includes (among many other fields) the following:
```json
{
  "meta": {
    "status": 200
  },
  "data": {
    "results": [
      {
        "type": "user",
        "user": {
          "_id": "4adfwe547s8df64df",
          "bio": "19y.",
          "birth_date": "1997-17-06T18:21:44.654Z",
          "name": "Anna",
          "photos": [
            {
              "id": "879sdfert-lskdföj-8asdf879-987sdflkj",
              "crop_info": {
                "user": {
                  "width_pct": 1,
                  "x_offset_pct": 0,
                  "height_pct": 0.8,
                  "y_offset_pct": 0.08975463
                },
                "algo": {
                  "width_pct": 0.45674357,
                  "x_offset_pct": 0.984341657,
                  "height_pct": 0.234165403,
                  "y_offset_pct": 0.78902343
                },
                "processed_by_bullseye": true,
                "user_customized": false
              },
              "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/original_879sdfert-lskdföj-8asdf879-987sdflkj.jpeg",
              "processedFiles": [
                {
                  "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/640x800_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                  "height": 800,
                  "width": 640
                },
                {
                  "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/320x400_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                  "height": 400,
                  "width": 320
                },
                {
                  "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/172x216_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                  "height": 216,
                  "width": 172
                },
                {
                  "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/84x106_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                  "height": 106,
                  "width": 84
                }
              ],
              "last_update_time": "2019-10-03T16:18:30.532Z",
              "fileName": "879sdfert-lskdföj-8asdf879-987sdflkj.webp",
              "extension": "jpg,webp",
              "webp_qf": [
                75
              ]
            }
          ],
          "gender": 1,
          "jobs": [],
          "schools": [],
          "show_gender_on_profile": false
        },
        "facebook": {
          "common_connections": [],
          "connection_count": 0,
          "common_interests": []
        },
        "spotify": {
          "spotify_connected": false
        },
        "distance_mi": 1,
        "content_hash": "slkadjfiuwejsdfuzkejhrsdbfskdzufiuerwer",
        "s_number": 9876540657341,
        "teaser": {
          "string": ""
        },
        "teasers": [],
        "snap": {
          "snaps": []
        }
      }
    ]
  }
}
```
There are a few things here that are highly interesting (note that I changed all the data so as not to violate this person's privacy):
By analyzing the request headers, we quickly find our private API key: X-Auth-Token.
By copying this token and going over to Postman, we can verify that we can indeed freely communicate with the Tinder API using just the right URL and our auth token.
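The same check can be sketched with the Python requests library instead of Postman. The snippet below only *prepares* the request the web app makes (the token value is a placeholder; the endpoint is the profile endpoint observed above), so you can inspect exactly what would go over the wire before sending it:

```python
import requests

# Placeholder for the X-Auth-Token value copied from Chrome DevTools.
TOKEN = "YOUR-API-TOKEN"

# Build (without sending) the same request the web app makes.
req = requests.Request(
    "GET",
    "https://api.gotinder.com/v2/profile?include=account%2Cuser",
    headers={"X-Auth-Token": TOKEN},
).prepare()

print(req.method, req.url)
print(req.headers["X-Auth-Token"])
# Actually sending it is then requests.Session().send(req)
```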
By clicking through Tinder's web app, I quickly discovered all the relevant API endpoints:

| Method | URL | Description |
|---|---|---|
| GET | /v2/recs/core | Returns a list of people nearby |
| GET | /v2/profile?include=account%2Cuser | Returns all information about your own profile |
| GET | /v2/matches | Returns a list of all people you matched with |
| GET | /like/{user_id} | Likes the person with the given user_id |
| GET | /pass/{user_id} | Passes on the person with the given user_id |
So let's get into the code. For convenience, we will use the Python requests library to communicate with the API and write an API wrapper class around it.
Likewise, we write a small Person class that takes the API response from Tinder representing one person and offers a few basic interfaces to the Tinder API.
Let's start with the Person class. It shall receive the API data as well as the Tinder API object and save all relevant data as instance variables. It shall further offer some basic functionality like "like" or "dislike" that make a request to the Tinder API, which allows us to conveniently use "some_person.like()" in order to like a profile we find interesting.
```python
import datetime
from geopy.geocoders import Nominatim

TINDER_URL = "https://api.gotinder.com"
geolocator = Nominatim(user_agent="auto-tinder")
PROF_FILE = "./images/unclassified/profiles.txt"


class Person(object):

    def __init__(self, data, api):
        self._api = api
        self.id = data["_id"]
        self.name = data.get("name", "Unknown")
        self.bio = data.get("bio", "")
        self.distance = data.get("distance_mi", 0) * 1.60934  # convert miles to kilometers
        self.birth_date = datetime.datetime.strptime(data["birth_date"], '%Y-%m-%dT%H:%M:%S.%fZ') if data.get(
            "birth_date", False) else None
        self.gender = ["Male", "Female", "Unknown"][data.get("gender", 2)]
        self.images = list(map(lambda photo: photo["url"], data.get("photos", [])))
        self.jobs = list(
            map(lambda job: {"title": job.get("title", {}).get("name"),
                             "company": job.get("company", {}).get("name")}, data.get("jobs", [])))
        self.schools = list(map(lambda school: school["name"], data.get("schools", [])))
        if data.get("pos", False):
            self.location = geolocator.reverse(f'{data["pos"]["lat"]}, {data["pos"]["lon"]}')

    def __repr__(self):
        return f"{self.id} - {self.name} ({self.birth_date.strftime('%d.%m.%Y')})"

    def like(self):
        return self._api.like(self.id)

    def dislike(self):
        return self._api.dislike(self.id)
```

Our API wrapper is not much more than a fancy way of calling the Tinder API using a class:
```python
import requests

TINDER_URL = "https://api.gotinder.com"


class tinderAPI():

    def __init__(self, token):
        self._token = token

    def profile(self):
        data = requests.get(TINDER_URL + "/v2/profile?include=account%2Cuser",
                            headers={"X-Auth-Token": self._token}).json()
        return Profile(data["data"], self)

    def matches(self, limit=10):
        data = requests.get(TINDER_URL + f"/v2/matches?count={limit}",
                            headers={"X-Auth-Token": self._token}).json()
        return list(map(lambda match: Person(match["person"], self), data["data"]["matches"]))

    def like(self, user_id):
        data = requests.get(TINDER_URL + f"/like/{user_id}",
                            headers={"X-Auth-Token": self._token}).json()
        return {
            "is_match": data["match"],
            "liked_remaining": data["likes_remaining"]
        }

    def dislike(self, user_id):
        requests.get(TINDER_URL + f"/pass/{user_id}",
                     headers={"X-Auth-Token": self._token}).json()
        return True

    def nearby_persons(self):
        data = requests.get(TINDER_URL + "/v2/recs/core",
                            headers={"X-Auth-Token": self._token}).json()
        return list(map(lambda user: Person(user["user"], self), data["data"]["results"]))
```

Now we can use the API to find people nearby and look at their profiles, or even like all of them. Replace YOUR-API-TOKEN with the X-Auth-Token you found earlier in the Chrome DevTools console.
```python
if __name__ == "__main__":
    token = "YOUR-API-TOKEN"
    api = tinderAPI(token)

    while True:
        persons = api.nearby_persons()
        for person in persons:
            print(person)
            # person.like()
```

Next, we want to automatically download some images of people nearby that we can use for training our AI. With "some", I mean like 1500-2500 images.
First, let's extend our Person class with a function that allows us to download images.
```python
# At the top of auto_tinder.py
import requests
from random import random
from time import sleep

PROF_FILE = "./images/unclassified/profiles.txt"

# inside the Person class
def download_images(self, folder=".", sleep_max_for=0):
    with open(PROF_FILE, "r") as f:
        lines = [line.strip() for line in f.readlines()]
        if self.id in lines:  # skip people we already downloaded
            return
    with open(PROF_FILE, "a") as f:
        f.write(self.id + "\r\n")
    for index, image_url in enumerate(self.images):
        req = requests.get(image_url, stream=True)
        if req.status_code == 200:
            with open(f"{folder}/{self.id}_{self.name}_{index}.jpeg", "wb") as f:
                f.write(req.content)
        sleep(random() * sleep_max_for)
```

Note that I added some random sleeps here and there, simply because we would likely be blocked if we spam the Tinder CDN and download many pictures within just a few seconds.
We write all the people's profile IDs into a file called "profiles.txt". By first scanning the document for whether a person is already in there, we can skip people we already encountered, and we ensure that we don't classify people several times (you will see later why this is a risk).
We can now simply loop over people nearby and download their images into an "unclassified" folder:
```python
if __name__ == "__main__":
    token = "YOUR-API-TOKEN"
    api = tinderAPI(token)

    while True:
        persons = api.nearby_persons()
        for person in persons:
            person.download_images(folder="./images/unclassified", sleep_max_for=random() * 3)
            sleep(random() * 10)
        sleep(random() * 10)
```

We can now simply start this script and let it run for a few hours to get a few hundred profile images of people nearby. If you are a Tinder PRO user, update your location now and then to get new people.
Now that we have lots of images to work with, let's build a really simple and ugly classifier.
It just loops over all the images in our "unclassified" folder and opens each image in a GUI window. By right-clicking a person, we can mark the person as "dislike", while a left-click marks the person as "like". The label is later represented in the filename: the image 4tz3kjldfj3482.jpg will be renamed to 1_4tz3kjldfj3482.jpg if we mark it as "like", or to 0_4tz3kjldfj3482.jpg otherwise. The like/dislike label is encoded as a 1/0 at the beginning of the filename.
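Reading the label back out of a filename is then trivial; a tiny sketch (the filenames below are made-up examples):

```python
# Recover the label from the filename prefix: "1_" = like, "0_" = dislike,
# no prefix = not classified yet. Filenames are made-up examples.
def label_from_filename(filename):
    if filename.startswith("1_"):
        return "like"
    if filename.startswith("0_"):
        return "dislike"
    return None

print(label_from_filename("1_4tz3kjldfj3482.jpg"))  # like
print(label_from_filename("0_4tz3kjldfj3482.jpg"))  # dislike
print(label_from_filename("4tz3kjldfj3482.jpg"))    # None
```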
Let's quickly write this GUI using tkinter:
```python
from os import listdir, rename
from os.path import isfile, join
import tkinter as tk
from PIL import ImageTk, Image

IMAGE_FOLDER = "./images/unclassified"

images = [f for f in listdir(IMAGE_FOLDER) if isfile(join(IMAGE_FOLDER, f))]
unclassified_images = filter(lambda image: not (image.startswith("0_") or image.startswith("1_")), images)
current = None


def next_img():
    global current, unclassified_images
    try:
        current = next(unclassified_images)
    except StopIteration:
        root.quit()
        return  # no unclassified images left
    print(current)
    pil_img = Image.open(IMAGE_FOLDER + "/" + current)
    width, height = pil_img.size
    max_height = 1000
    if height > max_height:
        resize_factor = max_height / height
        pil_img = pil_img.resize((int(width * resize_factor), int(height * resize_factor)),
                                 resample=Image.LANCZOS)
    img_tk = ImageTk.PhotoImage(pil_img)
    img_label.img = img_tk
    img_label.config(image=img_label.img)


def positive(arg):
    global current
    rename(IMAGE_FOLDER + "/" + current, IMAGE_FOLDER + "/1_" + current)
    next_img()


def negative(arg):
    global current
    rename(IMAGE_FOLDER + "/" + current, IMAGE_FOLDER + "/0_" + current)
    next_img()


if __name__ == "__main__":
    root = tk.Tk()
    img_label = tk.Label(root)
    img_label.pack()
    img_label.bind("<Button-1>", positive)
    img_label.bind("<Button-3>", negative)
    btn = tk.Button(root, text='Next image', command=next_img)
    next_img()  # load first image
    root.mainloop()
```

We load all unclassified images into the "unclassified_images" list, open a tkinter window, pack the first image into it by calling next_img(), and resize the image to fit the screen. Then we register two click handlers, for the left and the right mouse button, that call the functions positive/negative, which rename the image according to its label and show the next image.
Ugly, but effective.
In the next step, we need to bring our image data into a format that allows us to do classification. Given our dataset, we have to take a few difficulties into account.
We tackle these challenges in the following way:

The first part is as easy as opening our image with Pillow and converting it to grayscale. For the second part, we use the TensorFlow Object Detection API with the MobileNet network architecture, pretrained on the COCO dataset, which also contains a label for "person".
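For reference, Pillow's grayscale mode "L" uses the ITU-R 601 luma transform under the hood; a minimal, Pillow-independent sketch of that conversion for a single pixel:

```python
# Pillow's RGB -> "L" (grayscale) conversion uses the ITU-R 601 luma
# transform: L = R * 299/1000 + G * 587/1000 + B * 114/1000.
def to_gray(r, g, b):
    return (r * 299 + g * 587 + b * 114) // 1000

print(to_gray(255, 255, 255))  # 255 (white stays white)
print(to_gray(0, 0, 0))        # 0 (black stays black)
```

In the scripts below, the actual conversion is simply `img.convert('L')`.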
Our person-detection script has four parts:
You find the .pb file of the pretrained TensorFlow MobileNet COCO graph in my GitHub repository. Let's open it as a TensorFlow graph:
```python
import tensorflow as tf


def open_graph():
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb', 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return detection_graph
```

We use Pillow for image manipulation. Since TensorFlow needs raw numpy arrays to work with the data, let's write a small function that converts a Pillow image into a numpy array:
```python
import numpy as np


def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
```

The next function takes an image and a TensorFlow graph, runs a TensorFlow session with it, and returns all information about the detected classes (object types), bounding boxes, and scores (certainty that the object was detected correctly).
```python
import numpy as np
from object_detection.utils import ops as utils_ops
import tensorflow as tf


def run_inference_for_single_image(image, sess):
    ops = tf.get_default_graph().get_operations()
    all_tensor_names = {output.name for op in ops for output in op.outputs}
    tensor_dict = {}
    for key in [
        'num_detections', 'detection_boxes', 'detection_scores',
        'detection_classes', 'detection_masks'
    ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
            tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
    if 'detection_masks' in tensor_dict:
        # The following processing is only for a single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframing is required to translate the mask from box coordinates to
        # image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[1], image.shape[2])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)
    image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

    # Run inference
    output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image})

    # All outputs are float32 numpy arrays, so convert types as appropriate
    output_dict['num_detections'] = int(output_dict['num_detections'][0])
    output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.int64)
    output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
    output_dict['detection_scores'] = output_dict['detection_scores'][0]
    if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict
```

The last step is to write a function that takes an image path, opens it with Pillow, calls the object-detection API interface, and crops the image to the bounding box of the detected person.
```python
import numpy as np
from PIL import Image

PERSON_CLASS = 1
SCORE_THRESHOLD = 0.5


def get_person(image_path, sess):
    img = Image.open(image_path)
    image_np = load_image_into_numpy_array(img)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    output_dict = run_inference_for_single_image(image_np_expanded, sess)
    persons_coordinates = []
    for i in range(len(output_dict["detection_boxes"])):
        score = output_dict["detection_scores"][i]
        classtype = output_dict["detection_classes"][i]
        if score > SCORE_THRESHOLD and classtype == PERSON_CLASS:
            persons_coordinates.append(output_dict["detection_boxes"][i])
    w, h = img.size
    for person_coordinate in persons_coordinates:
        cropped_img = img.crop((
            int(w * person_coordinate[1]),
            int(h * person_coordinate[0]),
            int(w * person_coordinate[3]),
            int(h * person_coordinate[2]),
        ))
        return cropped_img
    return None
```

As a final step, we write a script that loops over all images in the "unclassified" folder, checks whether they have an encoded label in their name, and copies the images to the "classified" folder with the previously developed preprocessing steps applied:
```python
import os
import person_detector
import tensorflow as tf

IMAGE_FOLDER = "./images/unclassified"
POS_FOLDER = "./images/classified/positive"
NEG_FOLDER = "./images/classified/negative"

if __name__ == "__main__":
    detection_graph = person_detector.open_graph()

    images = [f for f in os.listdir(IMAGE_FOLDER) if os.path.isfile(os.path.join(IMAGE_FOLDER, f))]
    positive_images = filter(lambda image: (image.startswith("1_")), images)
    negative_images = filter(lambda image: (image.startswith("0_")), images)

    with detection_graph.as_default():
        with tf.Session() as sess:

            for pos in positive_images:
                old_filename = IMAGE_FOLDER + "/" + pos
                new_filename = POS_FOLDER + "/" + pos[:-5] + ".jpg"
                if not os.path.isfile(new_filename):
                    img = person_detector.get_person(old_filename, sess)
                    if not img:
                        continue
                    img = img.convert('L')
                    img.save(new_filename, "jpeg")

            for neg in negative_images:
                old_filename = IMAGE_FOLDER + "/" + neg
                new_filename = NEG_FOLDER + "/" + neg[:-5] + ".jpg"
                if not os.path.isfile(new_filename):
                    img = person_detector.get_person(old_filename, sess)
                    if not img:
                        continue
                    img = img.convert('L')
                    img.save(new_filename, "jpeg")
```

When we run this script, all labeled images are preprocessed and moved to the according subfolders in the "classified" directory.
For the retraining part, we will simply use TensorFlow's retrain.py script together with the Inception v3 model.
Call the script in the project's root directory with the following parameters:

```shell
python retrain.py --bottleneck_dir=tf/training_data/bottlenecks --model_dir=tf/training_data/inception --summaries_dir=tf/training_data/summaries/basic --output_graph=tf/training_output/retrained_graph.pb --output_labels=tf/training_output/retrained_labels.txt --image_dir=./images/classified --how_many_training_steps=50000 --testing_percentage=20 --learning_rate=0.001
```

On a GTX 1080 Ti, training takes around 15 minutes. The final accuracy reached on my labeled dataset heavily depends on the quality of the input data and the labeling.
The result of the training process is a retrained Inception v3 model in the file "tf/training_output/retrained_graph.pb". We now have to write a classifier class that efficiently uses the new weights in the TensorFlow graph to make classification predictions.
Let's write a Classifier class that opens the graph as a session and offers a "classify" method that we can feed with an image file, and that returns a dict with the certainty values for our labels "positive" and "negative".
The class takes both the path to the graph and the path to the labels file as input, both located in our "tf/training_output/" folder. We develop helper functions for converting an image file into a tensor that we can feed into the graph, a helper function for loading the graph and the labels, and an important little function for closing the graph again once we are done with it.
```python
import numpy as np
import tensorflow as tf


class Classifier():

    def __init__(self, graph, labels):
        self._graph = self.load_graph(graph)
        self._labels = self.load_labels(labels)

        self._input_operation = self._graph.get_operation_by_name("import/Placeholder")
        self._output_operation = self._graph.get_operation_by_name("import/final_result")

        self._session = tf.Session(graph=self._graph)

    def classify(self, file_name):
        t = self.read_tensor_from_image_file(file_name)

        # Run the tensorflow session on the input tensor
        results = self._session.run(self._output_operation.outputs[0],
                                    {self._input_operation.outputs[0]: t})
        results = np.squeeze(results)

        # Sort the output predictions by prediction certainty
        top_k = results.argsort()[-5:][::-1]
        result = {}
        for i in top_k:
            result[self._labels[i]] = results[i]

        # Return sorted result tuples
        return result

    def close(self):
        self._session.close()

    @staticmethod
    def load_graph(model_file):
        graph = tf.Graph()
        graph_def = tf.GraphDef()
        with open(model_file, "rb") as f:
            graph_def.ParseFromString(f.read())
        with graph.as_default():
            tf.import_graph_def(graph_def)
        return graph

    @staticmethod
    def load_labels(label_file):
        label = []
        proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
        for l in proto_as_ascii_lines:
            label.append(l.rstrip())
        return label

    @staticmethod
    def read_tensor_from_image_file(file_name,
                                    input_height=299,
                                    input_width=299,
                                    input_mean=0,
                                    input_std=255):
        input_name = "file_reader"
        file_reader = tf.read_file(file_name, input_name)
        image_reader = tf.image.decode_jpeg(file_reader, channels=3, name="jpeg_reader")
        float_caster = tf.cast(image_reader, tf.float32)
        dims_expander = tf.expand_dims(float_caster, 0)
        resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
        normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
        sess = tf.Session()
        result = sess.run(normalized)
        return result
```

Now that we have our classifier in place, let's extend the Person class from earlier with a "predict_likeliness" function, which uses a classifier instance to verify whether a given person should be liked or not.
```python
# In the Person class
def predict_likeliness(self, classifier, sess):
    ratings = []
    for image in self.images:
        req = requests.get(image, stream=True)
        tmp_filename = "./images/tmp/run.jpg"
        if req.status_code == 200:
            with open(tmp_filename, "wb") as f:
                f.write(req.content)
        img = person_detector.get_person(tmp_filename, sess)
        if img:
            img = img.convert('L')
            img.save(tmp_filename, "jpeg")
            certainty = classifier.classify(tmp_filename)
            pos = certainty["positive"]
            ratings.append(pos)
    ratings.sort(reverse=True)
    ratings = ratings[:5]
    if len(ratings) == 0:
        return 0.001
    if len(ratings) == 1:
        return ratings[0]  # avoid dividing by zero when only one image was rated
    return ratings[0] * 0.6 + sum(ratings[1:]) / len(ratings[1:]) * 0.4
```

Now we have to put all the puzzle pieces together.
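Before doing so, the weighting rule in predict_likeliness can be checked in isolation. The sketch below uses made-up per-image "positive" certainties: the best image counts 60%, the mean of the remaining (up to four) images counts 40%:

```python
# Stand-alone sketch of the scoring rule: keep the five best per-image
# "positive" certainties, weight the best one 60% and the mean of the
# rest 40%. The input ratings are made-up illustration values.
def likeliness_score(ratings):
    ratings = sorted(ratings, reverse=True)[:5]
    if not ratings:
        return 0.001  # no usable image: practically never like
    if len(ratings) == 1:
        return ratings[0]
    return ratings[0] * 0.6 + sum(ratings[1:]) / len(ratings[1:]) * 0.4

print(likeliness_score([0.9, 0.7, 0.5]))  # 0.9*0.6 + 0.6*0.4 ≈ 0.78
```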
First, let's initialize the Tinder API with our API token. Then, we open the classification graph as a TensorFlow session with the retrained graph and labels. Then we fetch people nearby and make a likeliness prediction for each of them.
As a little bonus, I added a 1.2 likeliness multiplier if the person on Tinder goes to the same university as I do, so that I am more likely to match with local students.
For all people with a predicted likeliness score over 0.8, I call a like, for all others a dislike.
I developed the script to run automatically for the next two hours after it is started.
```python
from likeliness_classifier import Classifier
import person_detector
import tensorflow as tf
from time import time

if __name__ == "__main__":
    token = "YOUR-API-TOKEN"
    api = tinderAPI(token)

    detection_graph = person_detector.open_graph()
    with detection_graph.as_default():
        with tf.Session() as sess:
            classifier = Classifier(graph="./tf/training_output/retrained_graph.pb",
                                    labels="./tf/training_output/retrained_labels.txt")

            end_time = time() + 60 * 60 * 2
            while time() < end_time:
                try:
                    persons = api.nearby_persons()
                    pos_schools = ["Universität Zürich", "University of Zurich", "UZH"]

                    for person in persons:
                        score = person.predict_likeliness(classifier, sess)

                        for school in pos_schools:
                            if school in person.schools:
                                score *= 1.2

                        print("-------------------------")
                        print("ID: ", person.id)
                        print("Name: ", person.name)
                        print("Schools: ", person.schools)
                        print("Images: ", person.images)
                        print(score)

                        if score > 0.8:
                            res = person.like()
                            print("LIKE")
                        else:
                            res = person.dislike()
                            print("DISLIKE")
                except Exception:
                    pass

            classifier.close()
```

That's it! We can now let our script run for as long as we like and play Tinder without straining our thumbs!
If you have questions or find bugs, feel free to contribute to my GitHub repository.
MIT License
Copyright (c) 2018 Joel Barmettler
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Hire us: software development in Zurich!