
An Approach to Loading Huge Images on Android with BitmapRegionDecoder

 

BitmapRegionDecoder is used to decode and display a rectangular region of an image. If you only need to show a specific area of a picture, this class is a perfect fit.

Its usage is very simple. Since the goal is to display a region of an image, at minimum you need one method to set the image and one method to pass in the region to display. In detail:

  • BitmapRegionDecoder provides a family of newInstance methods for constructing an instance, accepting a file path, a file descriptor, an InputStream, and so on.

    For example:

    BitmapRegionDecoder bitmapRegionDecoder =
      BitmapRegionDecoder.newInstance(inputStream, false);

  • That takes care of supplying the image we want to work with; next comes displaying the specified region.

    bitmapRegionDecoder.decodeRegion(rect, options);

    The first parameter is obviously a Rect, and the second is a BitmapFactory.Options through which you can control inSampleSize, inPreferredConfig, and so on (see the sketch right after this list).
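
As a quick illustration of what the options parameter buys you, here is a minimal sketch; it assumes `decoder` is an already-created BitmapRegionDecoder whose image is at least 400x400 pixels:

// Sketch only: decode a 400x400 region at half resolution.
// `decoder` is assumed to have been created via BitmapRegionDecoder.newInstance(...).
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 2;                           // decode at 1/2 width and 1/2 height
opts.inPreferredConfig = Bitmap.Config.RGB_565;  // 2 bytes per pixel instead of 4
Bitmap region = decoder.decodeRegion(new Rect(0, 0, 400, 400), opts);
// The resulting bitmap is roughly 200x200 pixels.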

Now let's look at a really simple example:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.ImageView;


import java.io.IOException;
import java.io.InputStream;

public class LargeImageViewActivity extends AppCompatActivity
{
    private ImageView mImageView;

    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_large_image_view);

        mImageView = (ImageView) findViewById(R.id.id_imageview);
        try
        {
            InputStream inputStream = getAssets().open("tangyan.jpg");

            // Read only the image bounds (no pixel data is allocated)
            BitmapFactory.Options tmpOptions = new BitmapFactory.Options();
            tmpOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeStream(inputStream, null, tmpOptions);
            int width = tmpOptions.outWidth;
            int height = tmpOptions.outHeight;
            inputStream.close();

            // Re-open the stream for the region decoder: the bounds decode above
            // has already consumed it
            inputStream = getAssets().open("tangyan.jpg");
            BitmapRegionDecoder bitmapRegionDecoder = BitmapRegionDecoder.newInstance(inputStream, false);

            // Decode only the 200x200 region at the center of the image
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565;
            Bitmap bitmap = bitmapRegionDecoder.decodeRegion(
                    new Rect(width / 2 - 100, height / 2 - 100, width / 2 + 100, height / 2 + 100), options);
            mImageView.setImageBitmap(bitmap);

        } catch (IOException e)
        {
            e.printStackTrace();
        }


    }

}

The code above uses BitmapRegionDecoder to load an image from assets, calls bitmapRegionDecoder.decodeRegion to decode the rectangular region at the center of the image, gets back a Bitmap, and finally displays it in the ImageView.

With the analysis above, the idea behind our custom view becomes very clear:

  • Provide an entry point for setting the image.
  • Override onTouchEvent and, based on the user's move gesture, update the parameters of the display region.
  • After every update of the region parameters, call invalidate; in onDraw, call regionDecoder.decodeRegion to get the bitmap and draw it.

The code is as follows:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Canvas;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

import java.io.IOException;
import java.io.InputStream;

public class LargeImageView extends View
{
    private BitmapRegionDecoder mDecoder;
    /**
     * Width and height of the image
     */
    private int mImageWidth, mImageHeight;
    /**
     * The region to draw
     */
    private volatile Rect mRect = new Rect();

    private MoveGestureDetector mDetector;


    private static final BitmapFactory.Options options = new BitmapFactory.Options();

    static
    {
        options.inPreferredConfig = Bitmap.Config.RGB_565;
    }

    public void setInputStream(InputStream is)
    {
        try
        {
            mDecoder = BitmapRegionDecoder.newInstance(is, false);
            // Read the image dimensions from the decoder itself; newInstance has
            // already consumed the stream, so decoding it again just for the
            // bounds would not be reliable.
            mImageWidth = mDecoder.getWidth();
            mImageHeight = mDecoder.getHeight();

            requestLayout();
            invalidate();
        } catch (IOException e)
        {
            e.printStackTrace();
        } finally
        {

            try
            {
                if (is != null) is.close();
            } catch (Exception e)
            {
            }
        }
    }


    public void init()
    {
        mDetector = new MoveGestureDetector(getContext(), new MoveGestureDetector.SimpleMoveGestureDetector()
        {
            @Override
            public boolean onMove(MoveGestureDetector detector)
            {
                int moveX = (int) detector.getMoveX();
                int moveY = (int) detector.getMoveY();

                if (mImageWidth > getWidth())
                {
                    mRect.offset(-moveX, 0);
                    checkWidth();
                    invalidate();
                }
                if (mImageHeight > getHeight())
                {
                    mRect.offset(0, -moveY);
                    checkHeight();
                    invalidate();
                }

                return true;
            }
        });
    }


    private void checkWidth()
    {
        Rect rect = mRect;
        int imageWidth = mImageWidth;

        if (rect.right > imageWidth)
        {
            rect.right = imageWidth;
            rect.left = imageWidth - getWidth();
        }

        if (rect.left < 0)
        {
            rect.left = 0;
            rect.right = getWidth();
        }
    }


    private void checkHeight()
    {
        Rect rect = mRect;
        int imageHeight = mImageHeight;

        if (rect.bottom > imageHeight)
        {
            rect.bottom = imageHeight;
            rect.top = imageHeight - getHeight();
        }

        if (rect.top < 0)
        {
            rect.top = 0;
            rect.bottom = getHeight();
        }
    }


    public LargeImageView(Context context, AttributeSet attrs)
    {
        super(context, attrs);
        init();
    }

    @Override
    public boolean onTouchEvent(MotionEvent event)
    {
        mDetector.onTouchEvent(event);
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        // Nothing to draw until setInputStream has created the decoder
        if (mDecoder == null) return;
        Bitmap bm = mDecoder.decodeRegion(mRect, options);
        canvas.drawBitmap(bm, 0, 0, null);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec)
    {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);

        int width = getMeasuredWidth();
        int height = getMeasuredHeight();

        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        // By default, show the center region of the image; adjust as needed
        mRect.left = imageWidth / 2 - width / 2;
        mRect.top = imageHeight / 2 - height / 2;
        mRect.right = mRect.left + width;
        mRect.bottom = mRect.top + height;

    }


}

Walking through the source above:

  1. setInputStream obtains the real width and height of the image and initializes our mDecoder.
  2. onMeasure assigns values to the rect of our display region, sized to the view's dimensions.
  3. In onTouchEvent we listen for the move gesture and, in its callback, adjust the rect's parameters, perform boundary checks, and finally invalidate.
  4. In onDraw we simply decode the bitmap for the current rect and draw it.
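
For completeness, here is a minimal sketch of how LargeImageView could be wired into an Activity. The layout name, view id, asset file, and activity name below are hypothetical, not taken from the original demo:

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import java.io.IOException;

public class LargeImageViewDemoActivity extends AppCompatActivity
{
    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        // activity_large_image is assumed to contain a single LargeImageView
        // (declared by its fully-qualified class name) with match_parent dimensions
        setContentView(R.layout.activity_large_image);

        LargeImageView largeImageView = (LargeImageView) findViewById(R.id.id_large_imageview);
        try
        {
            // Hand the asset stream to the view; setInputStream creates the
            // BitmapRegionDecoder, reads the bounds, and closes the stream itself.
            largeImageView.setInputStream(getAssets().open("world_map.jpg"));
        } catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}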

OK, none of that is complicated. But did you notice that the code listening for the user's move gesture looks a bit unusual? That's because it imitates the system's ScaleGestureDetector: a MoveGestureDetector was written in the same style. The code is as follows:

  • MoveGestureDetector

    import android.content.Context;
    import android.graphics.PointF;
    import android.view.MotionEvent;
    
    public class MoveGestureDetector extends BaseGestureDetector
    {
    
        private PointF mCurrentPointer;
        private PointF mPrePointer;
        // Kept only to avoid allocating a new object on every event
        private PointF mDeltaPointer = new PointF();
    
        // Stores the final result that is returned to the caller
        private PointF mExtenalPointer = new PointF();
    
        private OnMoveGestureListener mListenter;
    
    
        public MoveGestureDetector(Context context, OnMoveGestureListener listener)
        {
            super(context);
            mListenter = listener;
        }
    
        @Override
        protected void handleInProgressEvent(MotionEvent event)
        {
            int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
            switch (actionCode)
            {
                case MotionEvent.ACTION_CANCEL:
                case MotionEvent.ACTION_UP:
                    mListenter.onMoveEnd(this);
                    resetState();
                    break;
                case MotionEvent.ACTION_MOVE:
                    updateStateByEvent(event);
                    boolean update = mListenter.onMove(this);
                    if (update)
                    {
                        mPreMotionEvent.recycle();
                        mPreMotionEvent = MotionEvent.obtain(event);
                    }
                    break;
    
            }
        }
    
        @Override
        protected void handleStartProgressEvent(MotionEvent event)
        {
            int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
            switch (actionCode)
            {
                case MotionEvent.ACTION_DOWN:
                    resetState();// In case we never received CANCEL or UP; just to be safe
                    mPreMotionEvent = MotionEvent.obtain(event);
                    updateStateByEvent(event);
                    break;
                case MotionEvent.ACTION_MOVE:
                    mGestureInProgress = mListenter.onMoveBegin(this);
                    break;
            }
    
        }
    
        protected void updateStateByEvent(MotionEvent event)
        {
            final MotionEvent prev = mPreMotionEvent;
    
            mPrePointer = caculateFocalPointer(prev);
            mCurrentPointer = caculateFocalPointer(event);
    
            //Log.e("TAG", mPrePointer.toString() + " ,  " + mCurrentPointer);
    
            boolean mSkipThisMoveEvent = prev.getPointerCount() != event.getPointerCount();
    
            //Log.e("TAG", "mSkipThisMoveEvent = " + mSkipThisMoveEvent);
            mExtenalPointer.x = mSkipThisMoveEvent ? 0 : mCurrentPointer.x - mPrePointer.x;
            mExtenalPointer.y = mSkipThisMoveEvent ? 0 : mCurrentPointer.y - mPrePointer.y;
    
        }
    
        /**
         * Compute the focal (center) point of all pointers from the event
         *
         * @param event
         * @return
         */
        private PointF caculateFocalPointer(MotionEvent event)
        {
            final int count = event.getPointerCount();
            float x = 0, y = 0;
            for (int i = 0; i < count; i++)
            {
                x += event.getX(i);
                y += event.getY(i);
            }
    
            x /= count;
            y /= count;
    
            return new PointF(x, y);
        }
    
    
        public float getMoveX()
        {
            return mExtenalPointer.x;
    
        }
    
        public float getMoveY()
        {
            return mExtenalPointer.y;
        }
    
    
        public interface OnMoveGestureListener
        {
            public boolean onMoveBegin(MoveGestureDetector detector);
    
            public boolean onMove(MoveGestureDetector detector);
    
            public void onMoveEnd(MoveGestureDetector detector);
        }
    
        public static class SimpleMoveGestureDetector implements OnMoveGestureListener
        {
    
            @Override
            public boolean onMoveBegin(MoveGestureDetector detector)
            {
                return true;
            }
    
            @Override
            public boolean onMove(MoveGestureDetector detector)
            {
                return false;
            }
    
            @Override
            public void onMoveEnd(MoveGestureDetector detector)
            {
            }
        }
    
    }

  • BaseGestureDetector

    import android.content.Context;
    import android.view.MotionEvent;
    
    
    public abstract class BaseGestureDetector
    {
    
        protected boolean mGestureInProgress;
    
        protected MotionEvent mPreMotionEvent;
        protected MotionEvent mCurrentMotionEvent;
    
        protected Context mContext;
    
        public BaseGestureDetector(Context context)
        {
            mContext = context;
        }
    
    
        public boolean onTouchEvent(MotionEvent event)
        {
    
            if (!mGestureInProgress)
            {
                handleStartProgressEvent(event);
            } else
            {
                handleInProgressEvent(event);
            }
    
            return true;
    
        }
    
        protected abstract void handleInProgressEvent(MotionEvent event);
    
        protected abstract void handleStartProgressEvent(MotionEvent event);
    
        protected abstract void updateStateByEvent(MotionEvent event);
    
        protected void resetState()
        {
            if (mPreMotionEvent != null)
            {
                mPreMotionEvent.recycle();
                mPreMotionEvent = null;
            }
            if (mCurrentMotionEvent != null)
            {
                mCurrentMotionEvent.recycle();
                mCurrentMotionEvent = null;
            }
            mGestureInProgress = false;
        }
    
    
    }

    You might object that this is an awful lot of code for a simple move gesture, and that's true: detecting a move on its own is trivial. The reason for writing it this way is reusability. Imagine a whole family of XXXGestureDetector classes; whenever you need to listen for some gesture, you just grab the matching detector, which is very convenient. I'm sure many of you have also been annoyed with Google for shipping a ScaleGestureDetector but no RotateGestureDetector. (A short sketch of reusing MoveGestureDetector in another view follows after this list.)

  • You can download the test code to take a closer look.
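
To make the reuse argument concrete, here is a minimal sketch of dropping the same MoveGestureDetector into a completely different custom view. The view itself (DraggableSquareView) is illustrative only and not part of the original article:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

// Illustrative only: a trivial view that reuses MoveGestureDetector to drag a square around.
public class DraggableSquareView extends View
{
    private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float mOffsetX, mOffsetY;

    private final MoveGestureDetector mDetector;

    public DraggableSquareView(Context context, AttributeSet attrs)
    {
        super(context, attrs);
        mPaint.setColor(Color.RED);
        mDetector = new MoveGestureDetector(context, new MoveGestureDetector.SimpleMoveGestureDetector()
        {
            @Override
            public boolean onMove(MoveGestureDetector detector)
            {
                // Accumulate the move delta reported by the detector and redraw
                mOffsetX += detector.getMoveX();
                mOffsetY += detector.getMoveY();
                invalidate();
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event)
    {
        return mDetector.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        // Draw a 200x200 square at the current offset
        canvas.drawRect(mOffsetX, mOffsetY, mOffsetX + 200, mOffsetY + 200, mPaint);
    }
}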
