
Image Recognition Principles of a UI Automation Testing Framework

Updated: 2022-03-18 09:40:24 · Author: 多测师

  AirtestIDE is a cross-platform UI automation test editor for games and apps.

  · Record automation scripts, replay them with one click, and view reports, making the automation workflow effortless

  · Supports the image-recognition-based Airtest framework, which works with all Android and Windows games

  · Supports the UI-control-search-based Poco framework, which works with Unity3D, Cocos2d, and Android apps

  In one sentence: we offer two Python-based UI automation test frameworks, Airtest (write scripts with screenshots) and Poco (write scripts with UI elements), and you can use our AirtestIDE to write your automation scripts quickly.
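  To make that concrete, here is a minimal sketch of a screenshot-driven Airtest script. The device URI and the image filename are illustrative placeholders, not from the original article:

  # Minimal Airtest script sketch: connect to a device, then click by screenshot.
  # "button.png" is a hypothetical template image captured beforehand.
  from airtest.core.api import auto_setup, touch, Template

  auto_setup(__file__, devices=["Android:///"])  # connect to the first adb device
  touch(Template("button.png"))                  # find the image on screen and tap it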

  This article walks through the image-recognition code in Airtest to deepen understanding of how image recognition works.


  1. Preparation: download the source code from https://github.com/AirtestProject/Airtest

  2. Start with the simplest method, touch, which clicks the image passed in. The source is in api.py:

  @logwrap
  def touch(v, times=1, **kwargs):
      """
      Perform the touch action on the device screen

      :param v: target to touch, either a Template instance or absolute coordinates (x, y)
      :param times: how many touches to be performed
      :param kwargs: platform specific `kwargs`, please refer to corresponding docs
      :return: final position to be clicked
      :platforms: Android, Windows, iOS
      """
      if isinstance(v, Template):
          pos = loop_find(v, timeout=ST.FIND_TIMEOUT)
      else:
          try_log_screen()
          pos = v
      for _ in range(times):
          G.DEVICE.touch(pos, **kwargs)
          time.sleep(0.05)
      delay_after_operation()
      return pos

  click = touch  # click is an alias of touch

  The actual click in this function is performed by G.DEVICE.touch(pos, **kwargs), where pos is the coordinate returned by image matching. The key function is loop_find:

  @logwrap
  def loop_find(query, timeout=ST.FIND_TIMEOUT, threshold=None, interval=0.5, intervalfunc=None):
      """
      Search for image template in the screen until timeout

      Args:
          query: image template to be found in screenshot
          timeout: how long to keep looking for the image template
          threshold: default is None
          interval: sleep interval before the next attempt to find the image template
          intervalfunc: function executed after an unsuccessful attempt to find the image template

      Raises:
          TargetNotFoundError: when the image template is not found in the screenshot

      Returns:
          the position where the image template has been found in the screenshot
      """
      G.LOGGING.info("Try finding:\n%s", query)
      start_time = time.time()
      while True:
          screen = G.DEVICE.snapshot(filename=None)
          if screen is None:
              G.LOGGING.warning("Screen is None, may be locked")
          else:
              if threshold:
                  query.threshold = threshold
              match_pos = query.match_in(screen)
              if match_pos:
                  try_log_screen(screen)
                  return match_pos
          if intervalfunc is not None:
              intervalfunc()
          # Raise on timeout; otherwise sleep and run the next iteration:
          if (time.time() - start_time) > timeout:
              try_log_screen(screen)
              raise TargetNotFoundError('Picture %s not found in screen' % query)
          else:
              time.sleep(interval)

  It first takes a screenshot of the device screen, then matches it against the image passed in by the script to get the matched position:

  match_pos = query.match_in(screen)

  The match_in method of the Template class is in cv.py:

  def match_in(self, screen):
      match_result = self._cv_match(screen)
      G.LOGGING.debug("match result: %s", match_result)
      if not match_result:
          return None
      focus_pos = TargetPos().getXY(match_result, self.target_pos)
      return focus_pos
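  A quick note on TargetPos().getXY: a Template's target_pos attribute selects which point of the matched rectangle is returned, using a numeric-keypad convention where 5 (the default) is the center. For example, with a hypothetical image name:

  # Tap the top-left corner (keypad position 1) of the matched region
  # rather than its center (the default, 5). "button.png" is hypothetical.
  from airtest.core.api import touch, Template

  touch(Template("button.png", target_pos=1))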

  The important call is self._cv_match(screen):

  @logwrap
  def _cv_match(self, screen):
      # in case the image file does not exist in the current directory:
      image = self._imread()
      image = self._resize_image(image, screen, ST.RESIZE_METHOD)
      ret = None
      for method in ST.CVSTRATEGY:
          if method == "tpl":
              ret = self._try_match(self._find_template, image, screen)
          elif method == "sift":
              ret = self._try_match(self._find_sift_in_predict_area, image, screen)
              if not ret:
                  ret = self._try_match(self._find_sift, image, screen)
          else:
              G.LOGGING.warning("Undefined method in CV_STRATEGY: %s", method)
          if ret:
              break
      return ret

  The template image passed in must first be rescaled: the screenshot taken when the test case was written is transformed to the resolution at which the case runs, which raises the match success rate:

  image = self._resize_image(image, screen, ST.RESIZE_METHOD)
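  Airtest's default RESIZE_METHOD implements its own resolution-adaptation strategy; as a rough illustration of the idea only (not Airtest's actual code), a proportional rescale might look like this:

  import cv2

  def resize_by_resolution(image, record_resolution, screen_resolution):
      """Simplified sketch: scale a template recorded at one device resolution
      to the resolution of the device running the test."""
      rw, rh = record_resolution      # resolution when the screenshot was taken
      sw, sh = screen_resolution      # resolution of the current device
      scale = min(sw / rw, sh / rh)   # keep aspect ratio; scale by the smaller edge ratio
      h, w = image.shape[:2]
      return cv2.resize(image, (max(1, int(w * scale)), max(1, int(h * scale))))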

  The matching loop iterates over the methods in ST.CVSTRATEGY. This is defined in Setting.py and contains two methods by default:

  CVSTRATEGY = ["tpl", "sift"]

  As soon as one method matches, its result is returned, so the next step is to understand how these methods are implemented.
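  Incidentally, this list can be overridden from a test script to control which methods run and in what order. A small sketch, assuming a recent Airtest where these settings are exposed on airtest.core.settings.Settings:

  # Sketch: restrict Airtest to template matching only, skipping SIFT.
  from airtest.core.settings import Settings as ST

  ST.CVSTRATEGY = ["tpl"]   # matching methods are tried in this order
  ST.THRESHOLD = 0.8        # global confidence threshold for matches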

  _find_sift_in_predict_area also ends up calling _find_sift, so the focus from here on is these two methods.

  In cv.py, _find_template and _find_sift are:

  def _find_template(self, image, screen):
      return aircv.find_template(screen, image, threshold=self.threshold, rgb=self.rgb)

  def _find_sift(self, image, screen):
      return aircv.find_sift(screen, image, threshold=self.threshold, rgb=self.rgb)

  3. First, aircv.find_template. The implementation is in template.py:

  def find_template(im_source, im_search, threshold=0.8, rgb=False):
      """Find the best match result."""
      # Step 1: validate the input images
      check_source_larger_than_search(im_source, im_search)
      # Step 2: compute the template-matching result matrix res
      res = _get_template_result_matrix(im_source, im_search)
      # Step 3: extract the match results
      min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
      h, w = im_search.shape[:2]
      # Compute the confidence:
      confidence = _get_confidence_from_matrix(im_source, im_search, max_loc, max_val, w, h, rgb)
      # Compute the match position: target center + target rectangle:
      middle_point, rectangle = _get_target_rectangle(max_loc, w, h)
      best_match = generate_result(middle_point, rectangle, confidence)
      LOGGING.debug("threshold=%s, result=%s" % (threshold, best_match))
      return best_match if confidence >= threshold else None

  The key part is _get_template_result_matrix:

  def _get_template_result_matrix(im_source, im_search):
      """Compute the template-matching result matrix."""
      # Grayscale matching: cv2.matchTemplate() only accepts grayscale images
      s_gray, i_gray = img_mat_rgb_2_gray(im_search), img_mat_rgb_2_gray(im_source)
      return cv2.matchTemplate(i_gray, s_gray, cv2.TM_CCOEFF_NORMED)

  As you can see, Airtest did not invent a clever algorithm of its own here; it uses OpenCV's template matching directly.
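  Stripped of Airtest's wrappers, the core of this step can be reproduced in a few lines of plain OpenCV; the filenames below are hypothetical:

  import cv2

  # Load a screen capture and the template to search for (hypothetical files).
  screen = cv2.imread("screen.png", cv2.IMREAD_GRAYSCALE)
  template = cv2.imread("button.png", cv2.IMREAD_GRAYSCALE)

  # Normalized cross-correlation, the same method Airtest uses (TM_CCOEFF_NORMED).
  res = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
  min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

  h, w = template.shape[:2]
  center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
  print("confidence=%.3f, center=%s" % (max_val, center))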

  4. Now the other method, in sift.py:

  def find_sift(im_source, im_search, threshold=0.8, rgb=True, good_ratio=FILTER_RATIO):
      """SIFT-based image matching; keeps only the best region."""
      # Step 1: check that the input images are valid:
      if not check_image_valid(im_source, im_search):
          return None
      # Step 2: extract the keypoint sets and match them into point pairs: returns good, pypts, kp_sch, kp_src
      kp_sch, kp_src, good = _get_key_points(im_source, im_search, good_ratio)
      # Step 3: derive the matched region from the good point pairs:
      if len(good) == 0:
          # 0 point pairs: no region can be derived:
          return None
      elif len(good) == 1:
          # 1 point pair: assign the preset confidence and return directly:
          return _handle_one_good_points(kp_src, good, threshold) if ONE_POINT_CONFI >= threshold else None
      elif len(good) == 2:
          # 2 point pairs: derive the target region from them and compute confidence from it:
          origin_result = _handle_two_good_points(im_source, im_search, kp_src, kp_sch, good)
          if isinstance(origin_result, dict):
              return origin_result if ONE_POINT_CONFI >= threshold else None
          else:
              middle_point, pypts, w_h_range = _handle_two_good_points(im_source, im_search, kp_src, kp_sch, good)
      elif len(good) == 3:
          # 3 point pairs: derive the target region from them and compute confidence from it:
          origin_result = _handle_three_good_points(im_source, im_search, kp_src, kp_sch, good)
          if isinstance(origin_result, dict):
              return origin_result if ONE_POINT_CONFI >= threshold else None
          else:
              middle_point, pypts, w_h_range = _handle_three_good_points(im_source, im_search, kp_src, kp_sch, good)
      else:
          # >= 4 point pairs: derive the target region via a homography mapping and compute confidence:
          middle_point, pypts, w_h_range = _many_good_pts(im_source, im_search, kp_sch, kp_src, good)
      # Step 4: compute the confidence of the matched region and return the result:
      # Sanity-check the region: smaller than 5 pixels, or scaled by more than 5x, is invalid and raises.
      _target_error_check(w_h_range)
      # Resize the matched region to the template's size before computing confidence
      x_min, x_max, y_min, y_max, w, h = w_h_range
      target_img = im_source[y_min:y_max, x_min:x_max]
      resize_img = cv2.resize(target_img, (w, h))
      confidence = _cal_sift_confidence(im_search, resize_img, rgb=rgb)
      best_match = generate_result(middle_point, pypts, confidence)
      print("[aircv][sift] threshold=%s, result=%s" % (threshold, best_match))
      return best_match if confidence >= threshold else None
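  For the >= 4-point branch, the comment mentions a homography mapping. A simplified sketch of what _many_good_pts does with OpenCV (not Airtest's exact code):

  import cv2
  import numpy as np

  def project_template_corners(kp_sch, kp_src, good, w, h):
      """Sketch: map the template's corners into the screenshot via a homography
      estimated from >= 4 matched keypoint pairs."""
      sch_pts = np.float32([kp_sch[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      src_pts = np.float32([kp_src[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      # RANSAC rejects outlier pairs while estimating the 3x3 homography matrix.
      M, mask = cv2.findHomography(sch_pts, src_pts, cv2.RANSAC, 5.0)
      corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
      return cv2.perspectiveTransform(corners, M)  # template corners in screen coordinates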

  The key part is how the keypoint sets are found:

  def _get_key_points(im_source, im_search, good_ratio):
      """Compute all keypoints of both images and match them into point pairs."""
      # Preparation: initialize the SIFT detector
      sift = _init_sift()
      # Step 1: extract the keypoint sets and match them: returns good, pypts, kp_sch, kp_src
      kp_sch, des_sch = sift.detectAndCompute(im_search, None)
      kp_src, des_src = sift.detectAndCompute(im_source, None)
      # When applying knnMatch, make sure the number of features in both test and
      # query image is greater than or equal to the number of nearest neighbors in knn match.
      if len(kp_sch) < 2 or len(kp_src) < 2:
          raise NoSiftMatchPointError("Not enough feature points in input images !")
      # Match the two keypoint sets; k=2 returns the 2 best candidate matches per keypoint:
      matches = FLANN.knnMatch(des_sch, des_src, k=2)
      good = []
      # good is the first-pass filter: keypoints whose top-two candidates are too close
      # are discarded, keeping only distinctive ones (so this is unsuitable for multi-target recognition)
      for m, n in matches:
          if m.distance < good_ratio * n.distance:
              good.append(m)
      # Deduplicate good matches: duplicates are found on the source-image side
      # Policy: one template keypoint may map to several source points, but one source
      # point must not be matched by multiple template keypoints
      good_diff, diff_good_point = [], [[]]
      for m in good:
          diff_point = [int(kp_src[m.trainIdx].pt[0]), int(kp_src[m.trainIdx].pt[1])]
          if diff_point not in diff_good_point:
              good_diff.append(m)
              diff_good_point.append(diff_point)
      good = good_diff
      return kp_sch, kp_src, good

  As for what this sift object is:

  def _init_sift():
      """Make sure that there is a SIFT module in OpenCV."""
      if cv2.__version__.startswith("3."):
          # In OpenCV 3.x, SIFT lives in the contrib module, which must be compiled separately.
          try:
              sift = cv2.xfeatures2d.SIFT_create(edgeThreshold=10)
          except:
              print("to use SIFT, you should build contrib with opencv3.0")
              raise NoSIFTModuleError("There is no SIFT module in your OpenCV environment !")
      else:
          # OpenCV 2.x: just use it.
          sift = cv2.SIFT(edgeThreshold=10)
      return sift

  Again this relies on OpenCV: under OpenCV 3 the call that extracts the image's keypoint set is

  cv2.xfeatures2d.SIFT_create(edgeThreshold=10).detectAndCompute()
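  The whole feature-matching pipeline can likewise be reproduced in plain OpenCV. The sketch below assumes OpenCV >= 4.4, where SIFT moved back into the main module as cv2.SIFT_create; the filenames are hypothetical:

  import cv2

  template = cv2.imread("button.png", cv2.IMREAD_GRAYSCALE)
  screen = cv2.imread("screen.png", cv2.IMREAD_GRAYSCALE)

  sift = cv2.SIFT_create(edgeThreshold=10)
  kp_sch, des_sch = sift.detectAndCompute(template, None)
  kp_src, des_src = sift.detectAndCompute(screen, None)

  # FLANN matcher with a KD-tree index, as in Airtest; algorithm=0 is FLANN_INDEX_KDTREE.
  flann = cv2.FlannBasedMatcher(dict(algorithm=0, trees=5), dict(checks=50))
  matches = flann.knnMatch(des_sch, des_src, k=2)

  # Lowe's ratio test, the same filtering idea as Airtest's good_ratio.
  good = [m for m, n in matches if m.distance < 0.75 * n.distance]
  print("%d good matches" % len(good))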

  5. To sum up, everything reduces to two OpenCV methods: template matching and feature matching.

  Template matching: cv2.matchTemplate

  Feature matching: cv2.FlannBasedMatcher(index_params, search_params).knnMatch(des1, des2, k=2)

  Whichever matches first returns the result directly.

  6. Summary

  Image recognition can locate elements that UI-control locators cannot reach; custom widgets, H5 pages, mini programs, and games are all supported.

  It also spans platforms: with image recognition a single set of scripts can cover both Android and iOS, whereas UI-control locators require per-platform adaptation.

  Drawback: buttons or controls with transparent backgrounds are noticeably harder to recognize.

  The above introduced the image-recognition principles behind a UI automation testing framework. This article was written by 多测师; we hope you find it helpful. Learn more about automated testing at: https://www.aichudan.com/xwzx/
