Dynamic Fourier ptychography with deep spatiotemporal priors
Fourier ptychography (FP) involves the acquisition of several low-resolution intensity images of a sample under varying illumination angles. These images are then combined into a high-resolution complex-valued image by solving a phase-retrieval problem. The objective in dynamic FP is to obtain a sequence of high-resolution images of a moving sample.
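The reconstruction idea described in this record's abstract, a shared deep convolutional generator that maps time-encoding latent vectors on a one-dimensional manifold to complex-valued frames, with the network parameters fitted to the acquired intensity measurements, can be illustrated with a minimal sketch. The code below is a toy example under stated assumptions (network architecture, grid sizes, illumination shifts, and placeholder measurements are all invented for illustration); it is not the authors' implementation.

```python
# Toy sketch (not the authors' code): a shared CNN turns latent codes that lie
# on a 1-D manifold (here, a line segment) into complex high-resolution frames;
# the network, the latent endpoints and a pupil estimate are fitted jointly to
# Fourier-ptychography intensity measurements. Sizes and data are placeholders.
import torch
import torch.nn as nn
import torch.fft as fft

H = W = 64          # high-resolution grid (assumed)
h = w = 32          # low-resolution sensor grid (assumed)
T, K = 4, 9         # frames in the sequence, illumination angles (assumed)

class Generator(nn.Module):
    """Shared CNN: latent code -> complex high-resolution image (2 channels = real/imag)."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.fc = nn.Linear(z_dim, 32 * (H // 4) * (W // 4))
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 32, H // 4, W // 4)
        x = self.net(x)
        return torch.complex(x[:, 0], x[:, 1])        # (batch, H, W), complex

def fp_forward(obj, pupil, shifts):
    """Simplified FP operator: crop a shifted, pupil-weighted window of the
    object spectrum and return the corresponding low-resolution intensity."""
    spectrum = fft.fftshift(fft.fft2(obj), dim=(-2, -1))
    ims = []
    for dy, dx in shifts:
        cy, cx = H // 2 + dy, W // 2 + dx
        patch = spectrum[..., cy - h // 2:cy + h // 2, cx - w // 2:cx + w // 2]
        ims.append(torch.abs(fft.ifft2(fft.ifftshift(patch * pupil, dim=(-2, -1)))) ** 2)
    return torch.stack(ims, dim=-3)                   # (..., K, h, w)

shifts = [(dy, dx) for dy in (-8, 0, 8) for dx in (-8, 0, 8)]   # assumed 3x3 LED grid

gen = Generator()
pupil = torch.ones(h, w, requires_grad=True)          # real-valued pupil estimate (simplified)
z_ends = nn.Parameter(torch.randn(2, 16))             # endpoints of the 1-D latent manifold
y_meas = torch.rand(T, K, h, w)                       # placeholder for measured intensities

opt = torch.optim.Adam(list(gen.parameters()) + [pupil, z_ends], lr=1e-3)
for _ in range(200):
    alphas = torch.linspace(0.0, 1.0, T).unsqueeze(1)
    z = (1 - alphas) * z_ends[0] + alphas * z_ends[1]  # codes along the manifold encode time
    frames = gen(z)                                    # (T, H, W) complex image sequence
    y_pred = fp_forward(frames, pupil, shifts)         # (T, K, h, w) predicted intensities
    loss = ((y_pred - y_meas) ** 2).mean()             # data fidelity; the network acts as the prior
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual method, the forward model would include the complex pupil transmission and the true illumination wavevectors, and the latent codes would follow the paper's specific manifold construction; the sketch only conveys how the shared network and the constrained inputs impose the spatiotemporal prior.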
Published in: | Inverse problems, 2023-06, Vol. 39 (6), p. 64005 |
---|---|
Main Authors: | Bohra, Pakshal; Pham, Thanh-an; Long, Yuxuan; Yoo, Jaejun; Unser, Michael |
Format: | Article |
Language: | English |
Subjects: | dynamic imaging; Fourier ptychography; neural networks; regularization |
container_end_page | |
container_issue | 6 |
container_start_page | 64005 |
container_title | Inverse problems |
container_volume | 39 |
creator | Bohra, Pakshal; Pham, Thanh-an; Long, Yuxuan; Yoo, Jaejun; Unser, Michael |
description | Fourier ptychography (FP) involves the acquisition of several low-resolution intensity images of a sample under varying illumination angles. These images are then combined into a high-resolution complex-valued image by solving a phase-retrieval problem. The objective in dynamic FP is to obtain a sequence of high-resolution images of a moving sample. There, the application of standard frame-by-frame reconstruction methods limits the temporal resolution due to the large number of measurements that must be acquired for each frame. In this work, we instead propose a neural-network-based reconstruction framework for dynamic FP. Specifically, each reconstructed image in the sequence is the output of a shared deep convolutional network fed with an input vector that lies on a one-dimensional manifold that encodes time. We then optimize the parameters of the network to fit the acquired measurements. The architecture of the network and the constraints on the input vectors impose a spatiotemporal regularization on the sequence of images. This enables our method to achieve high temporal resolution without compromising the spatial resolution. The proposed framework does not require training data. It also recovers the pupil function of the microscope. Through numerical experiments, we show that our framework paves the way for high-quality ultrafast FP. |
doi_str_mv | 10.1088/1361-6420/acca72 |
format | article |
eissn | 1361-6420 |
coden | INPEEY |
publisher | IOP Publishing |
rights | 2023 The Author(s). Published by IOP Publishing Ltd |
orcidid | 0000-0002-2611-3834; 0000-0001-6231-2569 |
fulltext | fulltext |
identifier | ISSN: 0266-5611 |
ispartof | Inverse problems, 2023-06, Vol.39 (6), p.64005 |
issn | 0266-5611; 1361-6420 |
language | eng |
recordid | cdi_iop_journals_10_1088_1361_6420_acca72 |
source | Institute of Physics:Jisc Collections:IOP Publishing Read and Publish 2024-2025 (Reading List) |
subjects | dynamic imaging; Fourier ptychography; neural networks; regularization |
title | Dynamic Fourier ptychography with deep spatiotemporal priors |