Exception cannot be caught #235
Comments
Hi @LYS86, can you provide an exported Android Studio project? Also, can you let me know where you got your Yolo10/11 model from?
Related discussion of this issue: yolov11 export to tflite does not work on Android GPU. It is the model export: yolov8n and yolov9t models do not trigger the crash.
Hi @LYS86, thank you for your understanding and patience.
Hi @LYS86, I believe you are applying a GPU delegate or something like that? Can you show me the code and the context around where/how you apply the delegate? Preferably, the more of your project you can share, the faster the turnaround will be for digging into this. Thanks.
@gaikwadrahul8 Yes, same problem: the YOLO11 model conversion is at fault. One effective workaround is to skip YOLO's own export code and convert the model manually.

A model exported with this code triggers the exception and crashes the app (yolo11n_saved_model/yolo11n_float16.tflite):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="tflite")
```

A model exported this way does not trigger the exception (saved_model/yolo11n_float16.tflite). Both models run on CPU without problems:

```shell
onnx2tf -i yolo11n.onnx
```
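The two-step workaround above (export to ONNX with Ultralytics, then convert with onnx2tf instead of the built-in TFLite exporter) can be sketched end to end. This is a sketch, not a verified build recipe: it assumes the `ultralytics` and `onnx2tf` packages are installed, and the file names follow the thread.

```python
# Sketch of the workaround: Ultralytics -> ONNX, then onnx2tf -> TFLite.
# Assumes `ultralytics` and `onnx2tf` are installed (per the thread).
import subprocess


def export_via_onnx2tf(weights: str = "yolo11n.pt") -> None:
    """Export a YOLO checkpoint to TFLite via ONNX + onnx2tf,
    bypassing Ultralytics' built-in TFLite export path."""
    from ultralytics import YOLO  # third-party package used in the thread

    onnx_path = weights.replace(".pt", ".onnx")
    YOLO(weights).export(format="onnx")  # writes yolo11n.onnx
    # onnx2tf writes saved_model/yolo11n_float16.tflite among its outputs
    subprocess.run(["onnx2tf", "-i", onnx_path], check=True)


if __name__ == "__main__":
    export_via_onnx2tf()
```

The resulting saved_model/yolo11n_float16.tflite is the variant the thread reports as not crashing under the GPU delegate.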
```kotlin
import android.util.Log
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import java.nio.MappedByteBuffer

class test {
    private var gpuDelegate: GpuDelegate? = null
    protected lateinit var options: InterpreterApi.Options
    private lateinit var interpreter: InterpreterApi

    init {
        CompatibilityList().use { compatibilityList ->
            if (compatibilityList.isDelegateSupportedOnThisDevice) {
                // val gpu = compatibilityList.bestOptionsForThisDevice
                // gpuDelegate = GpuDelegate(gpu)
                gpuDelegate = GpuDelegate()
            }
        }
    }

    private fun getOptions(isGPU: Boolean): InterpreterApi.Options {
        val opts = InterpreterApi.Options().apply {
            useNNAPI = true
        }
        if (!isGPU) return opts
        if (gpuDelegate == null) {
            Log.w("test", "GPU not supported")
            return opts
        }
        return opts.apply { addDelegate(gpuDelegate) }
    }

    /**
     * Load a model.
     * @param path model file path (on external storage)
     * @param isGPU whether to use the GPU
     */
    fun loadModel(path: String, isGPU: Boolean) {
        options = getOptions(isGPU)
        loadModel(path, options)
    }

    private fun loadModel(path: String, options: InterpreterApi.Options) {
        try {
            // FileUtil is a project helper (not shown here)
            val modelBuffer: MappedByteBuffer = FileUtil.loadModel(path)
            interpreter = InterpreterApi.create(modelBuffer, options)
        } catch (e: Exception) {
            throw RuntimeException(e.message, e)
        }
    }
}
```

Is this example sufficient, or do you need more information?
Hi @LYS86, thanks for the extra information; I think that's good for now. I tried adjusting one of our examples to naively use a GPU delegate, and it seems to be failing for a different reason, so there does seem to be a bug independent of the specific model.

The example I used:

```shell
git clone https://github.com/google-ai-edge/litert-samples.git
# open litert-samples/examples/image_classification/android
```

Modify android/app/build.gradle.kts (add two lines to dependencies):

```kotlin
implementation("com.google.ai.edge.litert:litert-gpu:1.0.1")
implementation("com.google.ai.edge.litert:litert-gpu-api:1.0.1")
```

In ImageClassificationHelper.kt, after options is defined (~line 95), add:

```kotlin
val gpuDelegate = GpuDelegate()
options.addDelegate(gpuDelegate)
```

Then execute. This error occurs with the default models in that example. I should note I'm using an emulator (Pixel 8 Pro, API 34-ext12, ARM). My logcat:
Hello, I've run into some problems using YOLO models: when deploying YOLO10 or YOLO11 models with the GPU, the app crashes, and the exception cannot be caught.