實戰(zhàn):OpenVINO+OpenCV 文本檢測與識別
本文轉(zhuǎn)自|OpenCV學(xué)堂
Model Introduction
Text Detection Model
The scene text detection model supported by OpenVINO is a PixelLink model built on a MobileNet backbone. It has two outputs, a segmentation output and a bounding-box (link) output. The structure is shown below:

Below is the structure of PixelLink implemented with VGG16 as the backbone:

Input format: 1x3x768x1280, a BGR color image
Output format:
name: "model/link_logits_/add", [1x16x192x320] – the PixelLink link output
name: "model/segm_logits/add", [1x2x192x320] – per-pixel text/no-text classification
Text Recognition Model
The recognition model is based on VGG16 plus a bidirectional LSTM. It recognizes the digits 0~9 and 26 letters plus a blank symbol, and it is not case sensitive. The structure of this CNN+LSTM text recognition network is shown below:

Here the CNN uses a VGG16-like structure to extract features, while sequence prediction uses a bidirectional LSTM network.
Input format: 1x1x32x120
Output format: 30 x 1 x 37
The output is interpreted with CTC greedy decoding.
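The 37 output classes correspond to 36 symbols plus one blank. The demo code later indexes a table called digit_nums, but the article never shows its definition; the string below is therefore an assumption about the symbol set and its ordering:

# Assumed symbol table (not shown in the original article): digits, lowercase
# letters, and '#' used as the CTC blank. Adjust the ordering to match your model.
digit_nums = "0123456789abcdefghijklmnopqrstuvwxyz#"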
Code Demo
01
Text Detection
Text detection is done with PixelLink. The code that loads the model and retrieves the input and output layer names is shown below:
log.info("Creating Inference Engine")
ie = IECore()
dete_net = ie.read_network(model=dete_text_xml, weights=dete_text_bin)
reco_net = ie.read_network(model=reco_text_xml, weights=reco_text_bin)

# Text detection network: input and output formats
log.info("Loading the text detection network, parsing input and output formats...")
input_it = iter(dete_net.input_info)
input_det_blob = next(input_it)
print(input_det_blob)
output_it = iter(dete_net.outputs)
out_det_blob1 = next(output_it)
out_det_blob2 = next(output_it)

# Read and pre-process input images
print(dete_net.input_info[input_det_blob].input_data.shape)
dn, dc, dh, dw = dete_net.input_info[input_det_blob].input_data.shape

# Loading model to the plugin
det_exec_net = ie.load_network(network=dete_net, device_name="CPU")
print("out_det_blob1: ", out_det_blob1, "out_det_blob2: ", out_det_blob2)

The code that runs inference and parses the output is shown below:
image = cv.imread("D:/images/openvino_ocr.jpg")
# image = cv.imread("D:/facedb/tiaoma/1.png")
h, w, c = image.shape
cv.imshow("input", image)
img_blob = cv.resize(image, (dw, dh))
img_blob = img_blob.transpose(2, 0, 1)
# Start sync inference
log.info("Starting inference in synchronous mode")
inf_start1 = time.time()
res = det_exec_net.infer(inputs={input_det_blob: [img_blob]})
inf_end1 = time.time() - inf_start1
print("inference time(ms) : %.3f" % (inf_end1 * 1000))
link_logits_ = res[out_det_blob1][0]
segm_logits = res[out_det_blob2][0]
link_logits_ = link_logits_.transpose(1, 2, 0)
segm_logits = segm_logits.transpose(1, 2, 0)
pixel_mask = np.zeros((192, 320), dtype=np.uint8)
print(link_logits_.shape, segm_logits.shape)
# Build the 192x320 text mask from the segmentation logits
for row in range(192):
    for col in range(320):
        pv1 = segm_logits[row, col, 0]   # no-text logit
        pv2 = segm_logits[row, col, 1]   # text logit
        if pv2 > 1.0:
            pixel_mask[row, col] = 255

mask = cv.resize(pixel_mask, (w, h))
cv.imshow("mask", mask)

The result is as follows:
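If rectangles are needed rather than just a mask, a short follow-up sketch (an addition, not part of the original demo) can extract axis-aligned boxes from mask with cv.findContours; the area threshold here is arbitrary:

# Sketch: turn the binary text mask into bounding boxes drawn on the input image
contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, bw, bh = cv.boundingRect(cnt)
    if bw * bh > 100:   # ignore tiny noise blobs; this threshold is an assumption
        cv.rectangle(image, (x, y), (x + bw, y + bh), (0, 0, 255), 2)
cv.imshow("text boxes", image)
cv.waitKey(0)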


02
Text Recognition
The text recognition code follows the same flow as detection: first load the model, then query the input and output layer formats and properties. The implementation is shown below:
ie = IECore()
reco_net = ie.read_network(model=reco_text_xml, weights=reco_text_bin)

# Text recognition network
log.info("Loading the text recognition network, parsing input and output formats...")
input_rec_it = iter(reco_net.input_info)
input_rec_blob = next(input_rec_it)
print(input_rec_blob)
output_rec_it = iter(reco_net.outputs)
out_rec_blob = next(output_rec_it)

# Read and pre-process input images
print(reco_net.input_info[input_rec_blob].input_data.shape)
rn, rc, rh, rw = reco_net.input_info[input_rec_blob].input_data.shape

# Loading model to the plugin
rec_exec_net = ie.load_network(network=reco_net, device_name="CPU")
print("out_rec_blob1: ", out_rec_blob)

# Text recognition
image = cv.imread("D:/images/zsxq/ocr3.png")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
ret, binary = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV | cv.THRESH_OTSU)
se = cv.getStructuringElement(cv.MORPH_RECT, (5, 1))
binary = cv.dilate(binary, se)
cv.imshow("binary", binary)
cv.waitKey(0)
contours, hireachy = cv.findContours(binary, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
for cnt in range(len(contours)):
    x, y, iw, ih = cv.boundingRect(contours[cnt])
    roi = gray[y:y + ih, x:x + iw]
    rec_roi = cv.resize(roi, (rw, rh))
    rec_roi_blob = np.expand_dims(rec_roi, 0)

    # Start sync inference
    log.info("Starting inference in synchronous mode")
    inf_start1 = time.time()
    res = rec_exec_net.infer(inputs={input_rec_blob: [rec_roi_blob]})
    inf_end1 = time.time() - inf_start1
    print("inference time(ms) : %.3f" % (inf_end1 * 1000))
    res = res[out_rec_blob]
    txt = greedy_prase_text(res)
    cv.putText(image, txt, (x, y), cv.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 255), 1, 8)
cv.imshow("recognition text demo", image)
cv.waitKey(0)
cv.destroyAllWindows()

The result is as follows:

Running detection and recognition together gives the result below:
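For reference, here is a rough sketch (an addition, not the article's code) of how the two stages could be chained: crop each detected region from the original image and feed it to the recognizer. It assumes image, mask, rh, rw, rec_exec_net, input_rec_blob, out_rec_blob, and greedy_prase_text (defined in the next section) are available as above:

# Sketch: detection mask -> region crops -> recognition -> annotated image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, bw, bh = cv.boundingRect(cnt)
    roi = gray[y:y + bh, x:x + bw]
    rec_roi = cv.resize(roi, (rw, rh))   # recognizer expects a 120x32 grayscale patch
    res = rec_exec_net.infer(inputs={input_rec_blob: [np.expand_dims(rec_roi, 0)]})
    txt = greedy_prase_text(res[out_rec_blob])
    cv.putText(image, txt, (x, y), cv.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 255), 1, 8)
cv.imshow("detection + recognition", image)
cv.waitKey(0)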

CTC Greedy Decoding
I have reorganized the CTC greedy decoding functions. Don't bother with the formulas; reading them will only make your head spin, and you still won't end up with working code. The implementation is as follows:
def ctc_soft_max(data):
    # softmax probability of the arg-max class, computed in a numerically stable way
    total = 0
    max_val = max(data)
    index = np.argmax(data)
    for i in range(len(data)):
        total += np.exp(data[i] - max_val)
    prob = 1.0 / total
    return index, prob


def greedy_prase_text(res):
    # CTC greedy decode from here
    print(res.shape)
    # parse the output text
    ocrstr = ""
    prev_pad = False
    for i in range(res.shape[0]):
        ctc = res[i]  # 1x37
        ctc = np.squeeze(ctc, 0)
        index, prob = ctc_soft_max(ctc)
        if digit_nums[index] == '#':
            prev_pad = True
        else:
            # append a symbol only if it is not a repeat of the previous one
            # (unless a blank separated the two occurrences)
            if len(ocrstr) == 0 or prev_pad or (len(ocrstr) > 0 and digit_nums[index] != ocrstr[-1]):
                prev_pad = False
                ocrstr += digit_nums[index]
    print(ocrstr)
    return ocrstr
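A quick way to sanity-check the decoder (a made-up smoke test, assuming digit_nums is defined as above) is to feed it random logits of the expected 30x1x37 shape and confirm it returns a short, de-duplicated string:

import numpy as np

# Hypothetical smoke test: random logits with the recognizer's output shape
dummy = np.random.randn(30, 1, 37).astype(np.float32)
print(greedy_prase_text(dummy))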