
# Learn Segment Anything Model (SAM) for Remote Sensing Image Segmentation in 1 Hour | Part 3

2023-07-18 00:55 | Author: GIS数据栈

## Install dependencies


Uncomment and run the following cell to install the required dependencies.
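A minimal install cell, assuming the packages are available on PyPI as `segment-geospatial` (which provides `samgeo`) and `leafmap`, might look like this:

```python
# Uncomment to install the packages used in this notebook
# %pip install segment-geospatial leafmap
```

The next cell sets two environment variables to work around common issues on Windows: a duplicate OpenMP runtime error and a missing PROJ data directory for `pyproj`. Adjust the `PROJ_LIB` path to match your own environment.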


```python

import os

# Work around "OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized"
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# Point pyproj to the PROJ data directory; adjust this path to match your own environment
os.environ["PROJ_LIB"] = r"F:\Anaconda3\envs\samgeo\Lib\site-packages\pyproj\proj_dir\share\proj"

```


```python

import leafmap
from samgeo import tms_to_geotiff  # download basemap tiles as a GeoTIFF
from samgeo.text_sam import LangSAM  # text-prompted SAM (Grounding DINO + SAM)

```


## Create an interactive map


```python

m = leafmap.Map(center=[-22.17615, -51.253043], zoom=18, height="800px")

m.add_basemap("Esri.WorldImagery")

m

```


    Map(center=[-22.17615, -51.253043], controls=(ZoomControl(options=['position', 'zoom_in_text', 'zoom_in_title'…


## Download a sample image


Pan and zoom the map to select the area of interest, then use the draw tools to draw a polygon or rectangle on the map.


```python

bbox = m.user_roi_bounds()
if bbox is None:
    bbox = [-51.2565, -22.1777, -51.2512, -22.175]

```


```python

image = "Image.tif"
# Uncomment to download the selected area as a GeoTIFF from satellite basemap tiles
# tms_to_geotiff(output=image, bbox=bbox, zoom=19, source="Satellite", overwrite=True)

```


You can also use your own image. Uncomment and run the following cell to use your own image.
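A minimal cell for that, with a hypothetical placeholder path to replace with your own GeoTIFF, could be:

```python
# Uncomment and set the path to your own GeoTIFF
# image = '/path/to/your/own/image.tif'
```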


Display the downloaded image on the map.


```python

# Hide the basemap layer and display the local GeoTIFF instead
m.layers[-1].visible = False
m.add_raster(image, layer_name="Image")
m

```


    Map(bottom=18898354.0, center=[-22.17615, -51.253043], controls=(ZoomControl(options=['position', 'zoom_in_tex…


## Initialize LangSAM class


The initialization of the LangSAM class might take a few minutes. The initialization downloads the model weights and sets up the model for inference.
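Inference is considerably faster on a GPU. A quick optional check, assuming PyTorch is installed as part of the samgeo dependency stack:

```python
import torch

# LangSAM falls back to CPU if no CUDA device is available (much slower)
print("CUDA available:", torch.cuda.is_available())
```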


```python

# Uncomment to update samgeo to the latest version
# import samgeo
# samgeo.update_package()

```


```python

sam = LangSAM()

```


    final text_encoder_type: bert-base-uncased

    Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias']
    - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
    - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).


## Specify text prompts


```python

text_prompt = "tree"

```


## Segment the image


Part of the model prediction includes setting appropriate thresholds for object detection and text association with the detected objects. These threshold values range from 0 to 1 and are set while calling the predict method of the LangSAM class.


`box_threshold`: This value is used for object detection in the image. A higher value makes the model more selective, identifying only the most confident object instances, leading to fewer overall detections. A lower value, conversely, makes the model more tolerant, leading to increased detections, including potentially less confident ones.


`text_threshold`: This value is used to associate the detected objects with the provided text prompt. A higher value requires a stronger association between the object and the text prompt, leading to more precise but potentially fewer associations. A lower value allows for looser associations, which could increase the number of associations but also introduce less precise matches.


Remember to test different threshold values on your specific data. The optimal threshold can vary depending on the quality and nature of your images, as well as the specificity of your text prompts. Make sure to choose a balance that suits your requirements, whether that's precision or recall.
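One simple way to compare settings is a small sweep over a few candidate values. The loop below is an illustrative sketch; the specific value pairs are arbitrary examples, not recommendations:

```python
# Try a few (box_threshold, text_threshold) pairs and plot each result for visual comparison
for box_th, text_th in [(0.20, 0.20), (0.24, 0.24), (0.30, 0.30)]:
    sam.predict(image, text_prompt, box_threshold=box_th, text_threshold=text_th)
    sam.show_anns(
        cmap='Greens',
        title=f'box_threshold={box_th}, text_threshold={text_th}',
        blend=True,
    )
```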


```python

sam.predict(image, text_prompt, box_threshold=0.24, text_threshold=0.24)

```


## Visualize the results


Show the result with bounding boxes on the map.


```python

sam.show_anns(
    cmap='Greens',
    box_color='red',
    title='Automatic Segmentation of Trees',
    blend=True,
)

```


![png](output_19_0.png)


Show the result without bounding boxes on the map.


```python

sam.show_anns(
    cmap='Greens',
    add_boxes=False,
    alpha=0.5,
    title='Automatic Segmentation of Trees',
)

```


![png](output_21_0.png)


```python

sam.show_anns(
    cmap='Greys_r',
    add_boxes=False,
    alpha=1,
    title='Automatic Segmentation of Trees',
    blend=False,
    output='trees.tif',
)

```


![png](output_22_0.png)


Convert the result to a vector format.


```python

sam.raster_to_vector("trees.tif", "trees.shp")

```
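If you want to inspect or post-process the resulting polygons, they can be loaded with GeoPandas (installed as part of the samgeo dependency stack). A quick sketch:

```python
import geopandas as gpd

# Load the vectorized tree masks and print a quick summary
gdf = gpd.read_file("trees.shp")
print(len(gdf), "polygons, CRS:", gdf.crs)
```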


Show the results on the interactive map.


```python

m.add_raster("trees.tif", layer_name="Trees", palette="Greens", opacity=0.5, nodata=0)
style = {
    "color": "#3388ff",
    "weight": 2,
    "fillColor": "#7c4185",
    "fillOpacity": 0.5,
}
m.add_vector("trees.shp", layer_name="Vector", style=style)
m

```


    Map(bottom=1209461600.0, center=[-22.176349999999996, -51.25385], controls=(ZoomControl(options=['position', '…


#### Interactive segmentation


```python

# Uncomment for interactive segmentation on the map
# sam.show_map()

```


# Another example: segmenting roads (2023-07-11 11:25)


```python

from samgeo import SamGeo

# clear_cuda_cache() releases cached GPU memory before running another prediction
sam1 = SamGeo()
sam1.clear_cuda_cache()

```


```python

sam.predict(image, "roads", box_threshold=0.24, text_threshold=0.24)
sam.show_anns(
    cmap='Reds',
    add_boxes=False,
    alpha=0.5,
    title='Automatic Segmentation of roads',
)

```


```python

sam.show_anns(
    cmap='Greys_r',
    add_boxes=False,
    alpha=1,
    title='Automatic Segmentation of roads',
    blend=False,
    output='roads.tif',
)

```


```python

sam.raster_to_vector("roads.tif", "roads.shp")

```


```python

m.add_raster("roads.tif", layer_name="Roads", palette="Reds", opacity=0.5, nodata=0)
style = {
    "color": "#3388ff",
    "weight": 2,
    "fillColor": "#7c4185",
    "fillOpacity": 0.5,
}
m.add_vector("roads.shp", layer_name="Vector", style=style)
m

```


