1. Feature Description
Enable graph kernel fusion to explore network performance tuning.
2. Feature Introduction
When tuning the performance of a network, you can try turning on the graph kernel fusion switch. However, simply setting graph kernel fusion to True does not necessarily exploit all of its optimization potential: internally, graph kernel fusion grades its optimizations into levels, generally the following four (roughly: level 0 disables graph kernel fusion, level 1 enables only the basic operator fusion, level 2, the default, adds further optimizations on top of level 1, and level 3 additionally enables optimizations that are still experimental, such as stitch fusion).
3. Solution
As mentioned above, the optimization applied by graph kernel fusion is graded into levels, and the level can be controlled with the opt_level flag. In practice, you set it through the context before formal training starts, for example by adding one line before a function such as run_train() to enable the highest level of graph kernel optimization:
context.set_context(enable_graph_kernel=True,graph_kernel_flags="--opt_level=3")
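For placement, here is a minimal sketch of a training script that enables this before the training entry point; the model, loss, optimizer and dataset names in the comments are placeholders rather than part of the original example:

import mindspore.context as context

# Configure the context once, before the training entry point (e.g. before run_train()).
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
# Turn on graph kernel fusion and raise it to the highest (experimental) optimization level.
context.set_context(enable_graph_kernel=True, graph_kernel_flags="--opt_level=3")

# ... then build the network and start training as usual, for example:
# model = Model(net, loss_fn=loss, optimizer=opt)
# model.train(epoch_size, train_dataset)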
This also brings in the graph kernel optimizations that are still experimental, making it easy to observe whether performance improves further. The code below shows both forms: graph kernel fusion enabled with default settings, and graph kernel fusion enabled with the highest optimization level:
# Copyright 2021 Huawei Technologies Co., Ltd. Licensed under the Apache License, Version 2.0.
import numpy as np
import mindspore.context as context
from mindspore import Tensor
import mindspore.nn as nn
from mindspore.nn import Cell
from mindspore.ops import operations as P
import mindspore.ops.functional as F

context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
# enable graph kernel optimization.
context.set_context(enable_graph_kernel=True)


class BertAttentionPiece(Cell):
    """A slice of the BERT attention block: mask add, softmax and dropout."""
    def __init__(self):
        super(BertAttentionPiece, self).__init__()
        self.add = P.Add()
        self.dropout = nn.Dropout(1 - 0.1)  # keep_prob = 0.9 in the 1.x Dropout API
        self.softmax = nn.Softmax()
        self.multiply_data = -10000.0
        self.sub = P.Sub()
        self.multiply = P.Mul()
        self.get_dtype = P.DType()
        self.cast = P.Cast()

    def construct(self, attention_mask, attention_scores):
        multiply_out = self.sub(self.cast(F.tuple_to_array((1.0,)), self.get_dtype(attention_scores)),
                                self.cast(attention_mask, self.get_dtype(attention_scores)))
        adder = self.multiply(multiply_out, self.multiply_data)
        attention_scores = self.add(adder, attention_scores)
        attention_probs = self.softmax(attention_scores)
        attention_probs = self.dropout(attention_probs)
        return attention_probs


def get_rtol_atol(dtype):
    if dtype == np.float16:
        return 1.e-3, 1.e-3
    return 1.e-4, 1.e-4


def compare_result(expect, output, dtype):
    rtol, atol = get_rtol_atol(dtype)
    if isinstance(expect, (list, tuple)):
        assert isinstance(output, (list, tuple)) and len(expect) == len(output)
        expect_list = list(expect)
        output_list = list(output)
        for e, o in zip(expect_list, output_list):
            assert np.allclose(e.asnumpy(), o.asnumpy(), rtol, atol, equal_nan=True)
    else:
        assert np.allclose(expect.asnumpy(), output.asnumpy(), rtol, atol, equal_nan=True)


def get_softmax_output(x, y, use_experimental_features):
    # use experimental features such as stitch fusion.
    if use_experimental_features:
        context.set_context(graph_kernel_flags="--opt_level=3")
    net = BertAttentionPiece()
    result = net(x, y)
    return result


def test_softmax(shape, dtype):
    np.random.seed(0)
    x = Tensor(np.random.normal(0, 1, shape).astype(dtype))
    y = Tensor(np.random.normal(0, 1, shape).astype(dtype))
    expect = get_softmax_output(x, y, False)  # graph kernel fusion with default settings
    output = get_softmax_output(x, y, True)   # graph kernel fusion with --opt_level=3
    compare_result(expect, output, dtype)


def test_softmax_gpu():
    context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
    test_softmax([64, 12, 128, 128], np.float16)


if __name__ == '__main__':
    test_softmax_gpu()
Both runs produce correct results, but with opt_level=3 enabled the graph kernel additionally applies a stitch fusion optimization to this case in order to obtain better performance.

The figure above shows the fusion patterns generated when graph kernel fusion is simply enabled (three of them), while the figure below shows that with the higher-level optimization enabled, graph kernel fusion generates one more fusion pattern, increasing the fusion opportunities and thus achieving better performance.
4. Suggestions and Summary
On the GPU backend, you can generally just turn on graph kernel fusion and try the optimization directly, i.e., run with optimization level 2. On the Ascend backend, however, enabling graph kernel fusion at optimization level 2 may hit potential bugs; if you want to try graph kernel optimization but run into a bug, set opt_level to 1, i.e., turn on only the most basic fusion optimizations and then check whether performance improves. Finally, although the GPU backend can be pushed to as high an optimization level as possible, the highest level is, as mentioned above, still at an internal experimental stage, so turning it on does not necessarily bring an improvement; you need to experiment for yourself.
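As a minimal sketch of that fallback, assuming an Ascend environment (only the device_target and the opt_level value come from the advice above):

import mindspore.context as context

# Conservative configuration for the Ascend backend: keep graph kernel fusion enabled,
# but limit it to the most basic fusion optimizations.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
context.set_context(enable_graph_kernel=True, graph_kernel_flags="--opt_level=1")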
5. Related Reference Documents
For the internal flag settings of graph kernel fusion, see the description in the graph kernel flags definition document; following that description lets you use graph kernel fusion more flexibly and try to obtain better network performance.