diff --git a/competing_methods/my_KPConv/.gitignore b/competing_methods/my_KPConv/.gitignore
new file mode 100644
index 00000000..b6e47617
--- /dev/null
+++ b/competing_methods/my_KPConv/.gitignore
@@ -0,0 +1,129 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
diff --git a/competing_methods/my_KPConv/INSTALL.md b/competing_methods/my_KPConv/INSTALL.md
new file mode 100644
index 00000000..369fcb9c
--- /dev/null
+++ b/competing_methods/my_KPConv/INSTALL.md
@@ -0,0 +1,56 @@
+
+# Installation instructions
+
+## Ubuntu 18.04
+
+* Make sure CUDA and cuDNN are installed. One configuration has been tested:
+     - PyTorch 1.4.0, CUDA 10.1 and cuDNN 7.6
+
+* Ensure all Python packages are installed:
+
+        sudo apt update
+        sudo apt install python3-dev python3-pip python3-tk
+
+* Follow the PyTorch installation procedure.
+
+* Install the other dependencies with pip:
+     - numpy
+     - scikit-learn
+     - PyYAML
+     - matplotlib (for visualization)
+     - mayavi (for visualization)
+     - PyQt5 (for visualization)
+
+* Compile the C++ extension modules for Python located in `cpp_wrappers`. Open a terminal in this folder, and run:
+
+        sh compile_wrappers.sh
+
+You should now be able to train Kernel-Point Convolution models.
+
+## Windows 10
+
+* Make sure CUDA and cuDNN are installed. One configuration has been tested:
+     - PyTorch 1.4.0, CUDA 10.1 and cuDNN 7.5
+
+* Follow the PyTorch installation procedure.
+
+* We used the PyCharm IDE to pip install all Python dependencies (including PyTorch) in a venv:
+     - torch
+     - torchvision
+     - numpy
+     - scikit-learn
+     - PyYAML
+     - matplotlib (for visualization)
+     - mayavi (for visualization)
+     - PyQt5 (for visualization)
+
+* Compile the C++ extension modules for Python located in `cpp_wrappers`. You just have to execute two .bat files:
+
+        cpp_wrappers/cpp_neighbors/build.bat
+
+    and
+
+        cpp_wrappers/cpp_subsampling/build.bat
+
+You should now be able to train Kernel-Point Convolution models.
+
diff --git a/competing_methods/my_KPConv/LICENSE b/competing_methods/my_KPConv/LICENSE
new file mode 100644
index 00000000..f288702d
--- /dev/null
+++ b/competing_methods/my_KPConv/LICENSE
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7. This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy. This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged. This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source. This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge. You need not require recipients to copy the
+    Corresponding Source along with the object code. If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source. Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+  Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+  9. Acceptance Not Required for Having Copies.
+
+  You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+  10. Automatic Licensing of Downstream Recipients.
+
+  Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+  An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+  You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+  11. Patents.
+
+  A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+  A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+  Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+  In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+  If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+  If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+  A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+  Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+  12. No Surrender of Others' Freedom.
+
+  If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+  13. Use with the GNU Affero General Public License.
+
+  Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+  14. Revised Versions of this License.
+
+  The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+  Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+  If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+  Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+  15. Disclaimer of Warranty.
+
+  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. Limitation of Liability.
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/competing_methods/my_KPConv/README.md b/competing_methods/my_KPConv/README.md
new file mode 100644
index 00000000..0512c027
--- /dev/null
+++ b/competing_methods/my_KPConv/README.md
@@ -0,0 +1,58 @@
+
+![Intro figure](https://github.com/HuguesTHOMAS/KPConv-PyTorch/blob/master/doc/Github_intro.png)
+
+Created by Hugues THOMAS
+
+## Introduction
+
+This repository contains the implementation of **Kernel Point Convolution** (KPConv) in [PyTorch](https://pytorch.org/).
+
+KPConv is also available in [TensorFlow](https://github.com/HuguesTHOMAS/KPConv) (the original but older implementation).
+
+Another implementation of KPConv is available in [PyTorch-Points-3D](https://github.com/nicolas-chaulet/torch-points3d).
+
+KPConv is a point convolution operator presented in our ICCV 2019 paper ([arXiv](https://arxiv.org/abs/1904.08889)). If you find our work useful in your
+research, please consider citing:
+
+```
+@article{thomas2019KPConv,
+    Author = {Thomas, Hugues and Qi, Charles R. and Deschaud, Jean-Emmanuel and Marcotegui, Beatriz and Goulette, Fran{\c{c}}ois and Guibas, Leonidas J.},
+    Title = {KPConv: Flexible and Deformable Convolution for Point Clouds},
+    Journal = {Proceedings of the IEEE International Conference on Computer Vision},
+    Year = {2019}
+}
+```
+
+## Installation
+
+This implementation has been tested on Ubuntu 18.04 and Windows 10. Details are provided in [INSTALL.md](./INSTALL.md).
+
+
+## Experiments
+
+We provide scripts for three experiments: ModelNet40, S3DIS and SemanticKITTI. The instructions to run these
+experiments are in the [doc](./doc) folder.
+
+* [Object Classification](./doc/object_classification_guide.md): Instructions to train KP-CNN on an object classification
+  task (ModelNet40).
+
+* [Scene Segmentation](./doc/scene_segmentation_guide.md): Instructions to train KP-FCNN on a scene segmentation
+  task (S3DIS).
+
+* [SLAM Segmentation](./doc/slam_segmentation_guide.md): Instructions to train KP-FCNN on a SLAM segmentation
+  task (SemanticKITTI).
+
+* [Pretrained models](./doc/pretrained_models_guide.md): We provide pretrained weights and instructions to load them.
+
+* [Visualization scripts](./doc/visualization_guide.md): For now, only one visualization script has been implemented:
+the kernel deformations display.
+
+## Acknowledgment
+
+Our code uses the nanoflann library.
+
+## License
+Our code is released under the MIT License (see LICENSE file for details).
+
+## Updates
+* 27/04/2020: Initial release.
diff --git a/competing_methods/my_KPConv/cpp_wrappers/compile_wrappers.sh b/competing_methods/my_KPConv/cpp_wrappers/compile_wrappers.sh
new file mode 100644
index 00000000..8ba35b0b
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/compile_wrappers.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+# Compile cpp subsampling
+cd cpp_subsampling
+python setup.py build_ext --inplace
+cd ..
+
+# Compile cpp neighbors
+cd cpp_neighbors
+python setup.py build_ext --inplace
+cd ..
\ No newline at end of file
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/build.bat b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/build.bat
new file mode 100644
index 00000000..8679a29a
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/build.bat
@@ -0,0 +1,5 @@
+@echo off
+py setup.py build_ext --inplace
+
+
+pause
\ No newline at end of file
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/neighbors/neighbors.cpp b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/neighbors/neighbors.cpp
new file mode 100644
index 00000000..bf22af8f
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/neighbors/neighbors.cpp
@@ -0,0 +1,333 @@
+
+#include "neighbors.h"
+
+
+void brute_neighbors(vector<PointXYZ>& queries, vector<PointXYZ>& supports, vector<int>& neighbors_indices, float radius, int verbose)
+{
+
+    // Initialize variables
+    // ********************
+
+    // square radius
+    float r2 = radius * radius;
+
+    // indices
+    int i0 = 0;
+
+    // Counting vector
+    int max_count = 0;
+    vector<vector<int>> tmp(queries.size());
+
+    // Search neighbors indices
+    // ************************
+
+    for (auto& p0 : queries)
+    {
+        int i = 0;
+        for (auto& p : supports)
+        {
+            if ((p0 - p).sq_norm() < r2)
+            {
+                tmp[i0].push_back(i);
+                if ((int)tmp[i0].size() > max_count)
+                    max_count = tmp[i0].size();
+            }
+            i++;
+        }
+        i0++;
+    }
+
+    // Reserve the memory
+    neighbors_indices.resize(queries.size() * max_count);
+    i0 = 0;
+    for (auto& inds : tmp)
+    {
+        for (int j = 0; j < max_count; j++)
+        {
+            if (j < (int)inds.size())
+                neighbors_indices[i0 * max_count + j] = inds[j];
+            else
+                neighbors_indices[i0 * max_count + j] = -1;
+        }
+        i0++;
+    }
+
+    return;
+}
+
+void ordered_neighbors(vector<PointXYZ>& queries,
+                       vector<PointXYZ>& supports,
+                       vector<int>& neighbors_indices,
+                       float radius)
+{
+
+    // Initialize variables
+    // ********************
+
+    // square radius
+    float r2 = radius * radius;
+
+    // indices
+    int i0 = 0;
+
+    // Counting vector
+    int max_count = 0;
+    float d2;
+    vector<vector<int>> tmp(queries.size());
+    vector<vector<float>> dists(queries.size());
+
+    // Search neighbors indices
+    // ************************
+
+    for (auto& p0 : queries)
+    {
+        int i = 0;
+        for (auto& p : supports)
+        {
+            d2 = (p0 - p).sq_norm();
+            if (d2 < r2)
+            {
+                // Find order of the new point
+                auto it = std::upper_bound(dists[i0].begin(), dists[i0].end(), d2);
+                int index = std::distance(dists[i0].begin(), it);
+
+                // Insert element
+                dists[i0].insert(it, d2);
+                tmp[i0].insert(tmp[i0].begin() + index, i);
+
+                // Update max count
+                if ((int)tmp[i0].size() > max_count)
+                    max_count = tmp[i0].size();
+            }
+            i++;
+        }
+        i0++;
+    }
+
+    // Reserve the memory
+    neighbors_indices.resize(queries.size() * max_count);
+    i0 = 0;
+    for (auto& inds : tmp)
+    {
+        for (int j = 0; j < max_count; j++)
+        {
+            if (j < (int)inds.size())
+                neighbors_indices[i0 * max_count + j] = inds[j];
+            else
+                neighbors_indices[i0 * max_count + j] = -1;
+        }
+        i0++;
+    }
+
+    return;
+}
+
+void batch_ordered_neighbors(vector<PointXYZ>& queries,
+                             vector<PointXYZ>& supports,
+                             vector<int>& q_batches,
+                             vector<int>& s_batches,
+                             vector<int>& neighbors_indices,
+                             float radius)
+{
+
+    // Initialize variables
+    // ********************
+
+    // square radius
+    float r2 = radius * radius;
+
+    // indices
+    int i0 = 0;
+
+    // Counting vector
+    int max_count = 0;
+    float d2;
+    vector<vector<int>> tmp(queries.size());
+    vector<vector<float>> dists(queries.size());
+
+    // batch index
+    int b = 0;
+    int sum_qb = 0;
+    int sum_sb = 0;
+
+
+    // Search neighbors indices
+    // ************************
+
+    for (auto& p0 : queries)
+    {
+        // Check if we changed batch
+        if (i0 == sum_qb + q_batches[b])
+        {
+            sum_qb += q_batches[b];
+            sum_sb += s_batches[b];
+            b++;
+        }
+
+        // Loop only over the supports of current batch
+        vector<PointXYZ>::iterator p_it;
+        int i = 0;
+        for (p_it = supports.begin() + sum_sb; p_it < supports.begin() + sum_sb + s_batches[b]; p_it++)
+        {
+            d2 = (p0 - *p_it).sq_norm();
+            if (d2 < r2)
+            {
+                // Find order of the new point
+                auto it = std::upper_bound(dists[i0].begin(), dists[i0].end(), d2);
+                int index = std::distance(dists[i0].begin(), it);
+
+                // Insert element
+                dists[i0].insert(it, d2);
+                tmp[i0].insert(tmp[i0].begin() + index, sum_sb + i);
+
+                // Update max count
+                if ((int)tmp[i0].size() > max_count)
+                    max_count = tmp[i0].size();
+            }
+            i++;
+        }
+        i0++;
+    }
+
+    // Reserve the memory
+    neighbors_indices.resize(queries.size() * max_count);
+    i0 = 0;
+    for (auto& inds : tmp)
+    {
+        for (int j = 0; j < max_count; j++)
+        {
+            if (j < (int)inds.size())
+                neighbors_indices[i0 * max_count + j] = inds[j];
+            else
+                neighbors_indices[i0 * max_count + j] = supports.size();
+        }
+        i0++;
+    }
+
+    return;
+}
+
+
+void batch_nanoflann_neighbors(vector<PointXYZ>& queries,
+                               vector<PointXYZ>& supports,
+                               vector<int>& q_batches,
+                               vector<int>& s_batches,
+                               vector<int>& neighbors_indices,
+                               float radius)
+{
+
+    // Initialize variables
+    // ********************
+
+    // indices
+    int i0 = 0;
+
+    // Square radius
+    float r2 = radius * radius;
+
+    // Counting vector
+    int max_count = 0;
+    vector<vector<pair<size_t, float>>> all_inds_dists(queries.size());
+
+    // batch index
+    int b = 0;
+    int sum_qb = 0;
+    int sum_sb = 0;
+
+    // Nanoflann related variables
+    // ***************************
+
+    // Cloud variable
+    PointCloud current_cloud;
+
+    // Tree parameters
+    nanoflann::KDTreeSingleIndexAdaptorParams tree_params(10 /* max leaf */);
+
+    // KDTree type definition
+    typedef nanoflann::KDTreeSingleIndexAdaptor<nanoflann::L2_Simple_Adaptor<float, PointCloud>,
+                                                PointCloud,
+                                                3> my_kd_tree_t;
+
+    // Pointer to trees
+    my_kd_tree_t* index;
+
+    // Build KDTree for the first batch element
+    current_cloud.pts = vector<PointXYZ>(supports.begin() + sum_sb, supports.begin() + sum_sb + s_batches[b]);
+    index = new my_kd_tree_t(3, current_cloud, tree_params);
+    index->buildIndex();
+
+
+    // Search neighbors indices
+    // ************************
+
+    // Search params
+    nanoflann::SearchParams search_params;
+    search_params.sorted = true;
+
+    for (auto& p0 : queries)
+    {
+
+        // Check if we changed batch
+        if (i0 == sum_qb + q_batches[b])
+        {
+            sum_qb += q_batches[b];
+            sum_sb += s_batches[b];
+            b++;
+
+            // Change the points
+            current_cloud.pts.clear();
+            current_cloud.pts = vector<PointXYZ>(supports.begin() + sum_sb, supports.begin() + sum_sb + s_batches[b]);
+
+            // Build KDTree of the current element of the batch
+            delete index;
+            index = new my_kd_tree_t(3, current_cloud, tree_params);
+            index->buildIndex();
+        }
+
+        // Initial guess of neighbors size
+        all_inds_dists[i0].reserve(max_count);
+
+        // Find neighbors (nanoflann expects the squared radius for L2 adaptors)
+        float query_pt[3] = { p0.x, p0.y, p0.z };
+        size_t nMatches = index->radiusSearch(query_pt, r2, all_inds_dists[i0], search_params);
+
+        // Update max count
+        if ((int)nMatches > max_count)
+            max_count = nMatches;
+
+        // Increment query idx
+        i0++;
+    }
+
+    // Reserve the memory
+    neighbors_indices.resize(queries.size() * max_count);
+    i0 = 0;
+    sum_sb = 0;
+    sum_qb = 0;
+    b = 0;
+    for (auto& inds_dists : all_inds_dists)
+    {
+        // Check if we changed batch
+        if (i0 == sum_qb + q_batches[b])
+        {
+            sum_qb += q_batches[b];
+            sum_sb += s_batches[b];
+            b++;
+        }
+
+        for (int j = 0; j < max_count; j++)
+        {
+            if (j < (int)inds_dists.size())
+                neighbors_indices[i0 * max_count + j] = inds_dists[j].first + sum_sb;
+            else
+                neighbors_indices[i0 * max_count + j] = supports.size();
+        }
+        i0++;
+    }
+
+    delete index;
+
+    return;
+}
+
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/neighbors/neighbors.h b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/neighbors/neighbors.h
new file mode 100644
index 00000000..ff612b0f
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/neighbors/neighbors.h
@@ -0,0 +1,29 @@
+
+
+#include "../../cpp_utils/cloud/cloud.h"
+#include "../../cpp_utils/nanoflann/nanoflann.hpp"
+
+#include <set>
+#include <cstdint>
+
+using namespace std;
+
+
+void ordered_neighbors(vector<PointXYZ>& queries,
+                       vector<PointXYZ>& supports,
+                       vector<int>& neighbors_indices,
+                       float radius);
+
+void batch_ordered_neighbors(vector<PointXYZ>& queries,
+                             vector<PointXYZ>& supports,
+                             vector<int>& q_batches,
+                             vector<int>& s_batches,
+                             vector<int>& neighbors_indices,
+                             float radius);
+
+void batch_nanoflann_neighbors(vector<PointXYZ>& queries,
+                               vector<PointXYZ>& supports,
+                               vector<int>& q_batches,
+                               vector<int>& s_batches,
+                               vector<int>& neighbors_indices,
+                               float radius);
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/setup.py b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/setup.py
new file mode 100644
index 00000000..8f53a9c3
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/setup.py
@@ -0,0 +1,28 @@
+from distutils.core import setup, Extension
+import numpy.distutils.misc_util
+
+# Sources of the project
+# **********************
+
+SOURCES = ["../cpp_utils/cloud/cloud.cpp",
+           "neighbors/neighbors.cpp",
+           "wrapper.cpp"]
+
+module = Extension(name="radius_neighbors",
+                   sources=SOURCES,
+                   extra_compile_args=['-std=c++11',
+                                       '-D_GLIBCXX_USE_CXX11_ABI=0'])
+
+
+setup(ext_modules=[module], include_dirs=numpy.distutils.misc_util.get_numpy_include_dirs())
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/wrapper.cpp b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/wrapper.cpp
new file mode 100644
index 00000000..a4e28090
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_neighbors/wrapper.cpp
@@ -0,0 +1,238 @@
+#include <Python.h>
+#include <numpy/arrayobject.h>
+#include "neighbors/neighbors.h"
+#include <string>
+
+
+
+// docstrings for our module
+// *************************
+
+static char module_docstring[] = "This module provides a method to compute radius neighbors from pointclouds or batches of pointclouds";
+
+static char batch_query_docstring[] = "Method to get radius neighbors in a batch of stacked pointclouds";
+
+
+// Declare the functions
+// *********************
+
+static PyObject *batch_neighbors(PyObject *self, PyObject *args, PyObject *keywds);
+
+
+// Specify the members of the module
+// *********************************
+
+static PyMethodDef module_methods[] =
+{
+    { "batch_query", (PyCFunction)batch_neighbors, METH_VARARGS | METH_KEYWORDS, batch_query_docstring },
+    { NULL, NULL, 0, NULL }
+};
+
+
+// Initialize the module
+// *********************
+
+static struct PyModuleDef moduledef =
+{
+    PyModuleDef_HEAD_INIT,
+    "radius_neighbors",     // m_name
+    module_docstring,       // m_doc
+    -1,                     // m_size
+    module_methods,         // m_methods
+    NULL,                   // m_reload
+    NULL,                   // m_traverse
+    NULL,                   // m_clear
+    NULL,                   // m_free
+};
+
+PyMODINIT_FUNC PyInit_radius_neighbors(void)
+{
+    import_array();
+    return PyModule_Create(&moduledef);
+}
+
+
+// Definition of the batch_neighbors method
+// ****************************************
+
+static PyObject* batch_neighbors(PyObject* self, PyObject* args, PyObject* keywds)
+{
+
+    // Manage inputs
+    // *************
+
+    // Args containers
+    PyObject* queries_obj = NULL;
+    PyObject* supports_obj = NULL;
+    PyObject* q_batches_obj = NULL;
+    PyObject* s_batches_obj = NULL;
+
+    // Keywords containers
+    static char* kwlist[] = { "queries", "supports", "q_batches", "s_batches", "radius", NULL };
+    float radius = 0.1;
+
+    // Parse the input
+    if (!PyArg_ParseTupleAndKeywords(args, keywds, "OOOO|$f", kwlist, &queries_obj, &supports_obj, &q_batches_obj, &s_batches_obj, &radius))
+    {
+        PyErr_SetString(PyExc_RuntimeError, "Error parsing arguments");
+        return NULL;
+    }
+
+
+    // Interpret the input objects as numpy arrays.
+    PyObject* queries_array = PyArray_FROM_OTF(queries_obj, NPY_FLOAT, NPY_IN_ARRAY);
+    PyObject* supports_array = PyArray_FROM_OTF(supports_obj, NPY_FLOAT, NPY_IN_ARRAY);
+    PyObject* q_batches_array = PyArray_FROM_OTF(q_batches_obj, NPY_INT, NPY_IN_ARRAY);
+    PyObject* s_batches_array = PyArray_FROM_OTF(s_batches_obj, NPY_INT, NPY_IN_ARRAY);
+
+    // Verify data was loaded correctly.
+ if (queries_array == NULL) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting query points to numpy arrays of type float32"); + return NULL; + } + if (supports_array == NULL) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting support points to numpy arrays of type float32"); + return NULL; + } + if (q_batches_array == NULL) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting query batches to numpy arrays of type int32"); + return NULL; + } + if (s_batches_array == NULL) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting support batches to numpy arrays of type int32"); + return NULL; + } + + // Check that the input array respect the dims + if ((int)PyArray_NDIM(queries_array) != 2 || (int)PyArray_DIM(queries_array, 1) != 3) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : query.shape is not (N, 3)"); + return NULL; + } + if ((int)PyArray_NDIM(supports_array) != 2 || (int)PyArray_DIM(supports_array, 1) != 3) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : support.shape is not (N, 3)"); + return NULL; + } + if ((int)PyArray_NDIM(q_batches_array) > 1) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + 
PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : queries_batches.shape is not (B,) "); + return NULL; + } + if ((int)PyArray_NDIM(s_batches_array) > 1) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : supports_batches.shape is not (B,) "); + return NULL; + } + if ((int)PyArray_DIM(q_batches_array, 0) != (int)PyArray_DIM(s_batches_array, 0)) + { + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong number of batch elements: different for queries and supports "); + return NULL; + } + + // Number of points + int Nq = (int)PyArray_DIM(queries_array, 0); + int Ns= (int)PyArray_DIM(supports_array, 0); + + // Number of batches + int Nb = (int)PyArray_DIM(q_batches_array, 0); + + // Call the C++ function + // ********************* + + // Convert PyArray to Cloud C++ class + vector queries; + vector supports; + vector q_batches; + vector s_batches; + queries = vector((PointXYZ*)PyArray_DATA(queries_array), (PointXYZ*)PyArray_DATA(queries_array) + Nq); + supports = vector((PointXYZ*)PyArray_DATA(supports_array), (PointXYZ*)PyArray_DATA(supports_array) + Ns); + q_batches = vector((int*)PyArray_DATA(q_batches_array), (int*)PyArray_DATA(q_batches_array) + Nb); + s_batches = vector((int*)PyArray_DATA(s_batches_array), (int*)PyArray_DATA(s_batches_array) + Nb); + + // Create result containers + vector neighbors_indices; + + // Compute results + //batch_ordered_neighbors(queries, supports, q_batches, s_batches, neighbors_indices, radius); + batch_nanoflann_neighbors(queries, supports, q_batches, s_batches, neighbors_indices, radius); + + // Check result + if (neighbors_indices.size() < 1) + { + PyErr_SetString(PyExc_RuntimeError, "Error"); + return NULL; + } + + // Manage outputs + // ************** + + // Maximal number of 
neighbors + int max_neighbors = neighbors_indices.size() / Nq; + + // Dimension of output containers + npy_intp* neighbors_dims = new npy_intp[2]; + neighbors_dims[0] = Nq; + neighbors_dims[1] = max_neighbors; + + // Create output array + PyObject* res_obj = PyArray_SimpleNew(2, neighbors_dims, NPY_INT); + PyObject* ret = NULL; + + // Fill output array with values + size_t size_in_bytes = Nq * max_neighbors * sizeof(int); + memcpy(PyArray_DATA(res_obj), neighbors_indices.data(), size_in_bytes); + + // Merge results + ret = Py_BuildValue("N", res_obj); + + // Clean up + // ******** + + Py_XDECREF(queries_array); + Py_XDECREF(supports_array); + Py_XDECREF(q_batches_array); + Py_XDECREF(s_batches_array); + + return ret; +} diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/build.bat b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/build.bat new file mode 100644 index 00000000..8679a29a --- /dev/null +++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/build.bat @@ -0,0 +1,5 @@ +@echo off +py setup.py build_ext --inplace + + +pause \ No newline at end of file diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.cpp b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.cpp new file mode 100644 index 00000000..24276bb3 --- /dev/null +++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.cpp @@ -0,0 +1,211 @@ + +#include "grid_subsampling.h" + + +void grid_subsampling(vector& original_points, + vector& subsampled_points, + vector& original_features, + vector& subsampled_features, + vector& original_classes, + vector& subsampled_classes, + float sampleDl, + int verbose) { + + // Initialize variables + // ****************** + + // Number of points in the cloud + size_t N = original_points.size(); + + // Dimension of the features + size_t fdim = original_features.size() / N; + size_t ldim = 
original_classes.size() / N; + + // Limits of the cloud + PointXYZ minCorner = min_point(original_points); + PointXYZ maxCorner = max_point(original_points); + PointXYZ originCorner = floor(minCorner * (1/sampleDl)) * sampleDl; + + // Dimensions of the grid + size_t sampleNX = (size_t)floor((maxCorner.x - originCorner.x) / sampleDl) + 1; + size_t sampleNY = (size_t)floor((maxCorner.y - originCorner.y) / sampleDl) + 1; + //size_t sampleNZ = (size_t)floor((maxCorner.z - originCorner.z) / sampleDl) + 1; + + // Check if features and classes need to be processed + bool use_feature = original_features.size() > 0; + bool use_classes = original_classes.size() > 0; + + + // Create the sampled map + // ********************** + + // Verbose parameters + int i = 0; + int nDisp = N / 100; + + // Initialize variables + size_t iX, iY, iZ, mapIdx; + unordered_map data; + + for (auto& p : original_points) + { + // Position of point in sample map + iX = (size_t)floor((p.x - originCorner.x) / sampleDl); + iY = (size_t)floor((p.y - originCorner.y) / sampleDl); + iZ = (size_t)floor((p.z - originCorner.z) / sampleDl); + mapIdx = iX + sampleNX*iY + sampleNX*sampleNY*iZ; + + // If not already created, create key + if (data.count(mapIdx) < 1) + data.emplace(mapIdx, SampledData(fdim, ldim)); + + // Fill the sample map + if (use_feature && use_classes) + data[mapIdx].update_all(p, original_features.begin() + i * fdim, original_classes.begin() + i * ldim); + else if (use_feature) + data[mapIdx].update_features(p, original_features.begin() + i * fdim); + else if (use_classes) + data[mapIdx].update_classes(p, original_classes.begin() + i * ldim); + else + data[mapIdx].update_points(p); + + // Display + i++; + if (verbose > 1 && i%nDisp == 0) + std::cout << "\rSampled Map : " << std::setw(3) << i / nDisp << "%"; + + } + + // Divide for barycentre and transfer to a vector + subsampled_points.reserve(data.size()); + if (use_feature) + subsampled_features.reserve(data.size() * fdim); + if 
(use_classes) + subsampled_classes.reserve(data.size() * ldim); + for (auto& v : data) + { + subsampled_points.push_back(v.second.point * (1.0 / v.second.count)); + if (use_feature) + { + float count = (float)v.second.count; + transform(v.second.features.begin(), + v.second.features.end(), + v.second.features.begin(), + [count](float f) { return f / count;}); + subsampled_features.insert(subsampled_features.end(),v.second.features.begin(),v.second.features.end()); + } + if (use_classes) + { + for (int i = 0; i < ldim; i++) + subsampled_classes.push_back(max_element(v.second.labels[i].begin(), v.second.labels[i].end(), + [](const pair&a, const pair&b){return a.second < b.second;})->first); + } + } + + return; +} + + +void batch_grid_subsampling(vector& original_points, + vector& subsampled_points, + vector& original_features, + vector& subsampled_features, + vector& original_classes, + vector& subsampled_classes, + vector& original_batches, + vector& subsampled_batches, + float sampleDl, + int max_p) +{ + // Initialize variables + // ****************** + + int b = 0; + int sum_b = 0; + + // Number of points in the cloud + size_t N = original_points.size(); + + // Dimension of the features + size_t fdim = original_features.size() / N; + size_t ldim = original_classes.size() / N; + + // Handle max_p = 0 + if (max_p < 1) + max_p = N; + + // Loop over batches + // ***************** + + for (b = 0; b < original_batches.size(); b++) + { + + // Extract batch points features and labels + vector b_o_points = vector(original_points.begin () + sum_b, + original_points.begin () + sum_b + original_batches[b]); + + vector b_o_features; + if (original_features.size() > 0) + { + b_o_features = vector(original_features.begin () + sum_b * fdim, + original_features.begin () + (sum_b + original_batches[b]) * fdim); + } + + vector b_o_classes; + if (original_classes.size() > 0) + { + b_o_classes = vector(original_classes.begin () + sum_b * ldim, + original_classes.begin () + sum_b + 
original_batches[b] * ldim); + } + + + // Create result containers + vector b_s_points; + vector b_s_features; + vector b_s_classes; + + // Compute subsampling on current batch + grid_subsampling(b_o_points, + b_s_points, + b_o_features, + b_s_features, + b_o_classes, + b_s_classes, + sampleDl, + 0); + + // Stack batches points features and labels + // **************************************** + + // If too many points remove some + if (b_s_points.size() <= max_p) + { + subsampled_points.insert(subsampled_points.end(), b_s_points.begin(), b_s_points.end()); + + if (original_features.size() > 0) + subsampled_features.insert(subsampled_features.end(), b_s_features.begin(), b_s_features.end()); + + if (original_classes.size() > 0) + subsampled_classes.insert(subsampled_classes.end(), b_s_classes.begin(), b_s_classes.end()); + + subsampled_batches.push_back(b_s_points.size()); + } + else + { + subsampled_points.insert(subsampled_points.end(), b_s_points.begin(), b_s_points.begin() + max_p); + + if (original_features.size() > 0) + subsampled_features.insert(subsampled_features.end(), b_s_features.begin(), b_s_features.begin() + max_p * fdim); + + if (original_classes.size() > 0) + subsampled_classes.insert(subsampled_classes.end(), b_s_classes.begin(), b_s_classes.begin() + max_p * ldim); + + subsampled_batches.push_back(max_p); + } + + // Stack new batch lengths + sum_b += original_batches[b]; + } + + return; +} diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h new file mode 100644 index 00000000..37f775d8 --- /dev/null +++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h @@ -0,0 +1,101 @@ + + +#include "../../cpp_utils/cloud/cloud.h" + +#include +#include + +using namespace std; + +class SampledData +{ +public: + + // Elements + // ******** + + int count; + PointXYZ point; + 
+    vector<float> features;
+    vector<unordered_map<int, int>> labels;
+
+
+    // Methods
+    // *******
+
+    // Constructor
+    SampledData()
+    {
+        count = 0;
+        point = PointXYZ();
+    }
+
+    SampledData(const size_t fdim, const size_t ldim)
+    {
+        count = 0;
+        point = PointXYZ();
+        features = vector<float>(fdim);
+        labels = vector<unordered_map<int, int>>(ldim);
+    }
+
+    // Method Update
+    void update_all(const PointXYZ p, vector<float>::iterator f_begin, vector<int>::iterator l_begin)
+    {
+        count += 1;
+        point += p;
+        transform (features.begin(), features.end(), f_begin, features.begin(), plus<float>());
+        int i = 0;
+        for(vector<int>::iterator it = l_begin; it != l_begin + labels.size(); ++it)
+        {
+            labels[i][*it] += 1;
+            i++;
+        }
+        return;
+    }
+    void update_features(const PointXYZ p, vector<float>::iterator f_begin)
+    {
+        count += 1;
+        point += p;
+        transform (features.begin(), features.end(), f_begin, features.begin(), plus<float>());
+        return;
+    }
+    void update_classes(const PointXYZ p, vector<int>::iterator l_begin)
+    {
+        count += 1;
+        point += p;
+        int i = 0;
+        for(vector<int>::iterator it = l_begin; it != l_begin + labels.size(); ++it)
+        {
+            labels[i][*it] += 1;
+            i++;
+        }
+        return;
+    }
+    void update_points(const PointXYZ p)
+    {
+        count += 1;
+        point += p;
+        return;
+    }
+};
+
+void grid_subsampling(vector<PointXYZ>& original_points,
+                      vector<PointXYZ>& subsampled_points,
+                      vector<float>& original_features,
+                      vector<float>& subsampled_features,
+                      vector<int>& original_classes,
+                      vector<int>& subsampled_classes,
+                      float sampleDl,
+                      int verbose);
+
+void batch_grid_subsampling(vector<PointXYZ>& original_points,
+                            vector<PointXYZ>& subsampled_points,
+                            vector<float>& original_features,
+                            vector<float>& subsampled_features,
+                            vector<int>& original_classes,
+                            vector<int>& subsampled_classes,
+                            vector<int>& original_batches,
+                            vector<int>& subsampled_batches,
+                            float sampleDl,
+                            int max_p);
+
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/setup.py b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/setup.py
new file mode 100644
index 00000000..32062999
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/setup.py
@@ -0,0 +1,28 @@
+from distutils.core import setup, Extension
+import numpy.distutils.misc_util
+
+# Adding sources of the project
+# *****************************
+
+SOURCES = ["../cpp_utils/cloud/cloud.cpp",
+           "grid_subsampling/grid_subsampling.cpp",
+           "wrapper.cpp"]
+
+module = Extension(name="grid_subsampling",
+                   sources=SOURCES,
+                   extra_compile_args=['-std=c++11',
+                                       '-D_GLIBCXX_USE_CXX11_ABI=0'])
+
+
+setup(ext_modules=[module], include_dirs=numpy.distutils.misc_util.get_numpy_include_dirs())
+
+
+
+
+
+
+
+
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/wrapper.cpp b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/wrapper.cpp
new file mode 100644
index 00000000..8a92aaab
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_subsampling/wrapper.cpp
@@ -0,0 +1,566 @@
+#include <Python.h>
+#include <numpy/arrayobject.h>
+#include "grid_subsampling/grid_subsampling.h"
+#include <string>
+
+
+
+// docstrings for our module
+// *************************
+
+static char module_docstring[] = "This module provides an interface for the subsampling of a batch of stacked pointclouds";
+
+static char subsample_docstring[] = "function subsampling a pointcloud";
+
+static char subsample_batch_docstring[] = "function subsampling a batch of stacked pointclouds";
+
+
+// Declare the functions
+// *********************
+
+static PyObject *cloud_subsampling(PyObject* self, PyObject* args, PyObject* keywds);
+static PyObject *batch_subsampling(PyObject *self, PyObject *args, PyObject *keywds);
+
+
+// Specify the members of the module
+// *********************************
+
+static PyMethodDef module_methods[] =
+{
+    { "subsample", (PyCFunction)cloud_subsampling, METH_VARARGS | METH_KEYWORDS, subsample_docstring },
+    { "subsample_batch", (PyCFunction)batch_subsampling, METH_VARARGS | METH_KEYWORDS, subsample_batch_docstring },
+    {NULL, NULL, 0, NULL}
+};
+
+
+// Initialize the module
+// *********************
+
+static struct PyModuleDef
moduledef = +{ + PyModuleDef_HEAD_INIT, + "grid_subsampling", // m_name + module_docstring, // m_doc + -1, // m_size + module_methods, // m_methods + NULL, // m_reload + NULL, // m_traverse + NULL, // m_clear + NULL, // m_free +}; + +PyMODINIT_FUNC PyInit_grid_subsampling(void) +{ + import_array(); + return PyModule_Create(&moduledef); +} + + +// Definition of the batch_subsample method +// ********************************** + +static PyObject* batch_subsampling(PyObject* self, PyObject* args, PyObject* keywds) +{ + + // Manage inputs + // ************* + + // Args containers + PyObject* points_obj = NULL; + PyObject* features_obj = NULL; + PyObject* classes_obj = NULL; + PyObject* batches_obj = NULL; + + // Keywords containers + static char* kwlist[] = { "points", "batches", "features", "classes", "sampleDl", "method", "max_p", "verbose", NULL }; + float sampleDl = 0.1; + const char* method_buffer = "barycenters"; + int verbose = 0; + int max_p = 0; + + // Parse the input + if (!PyArg_ParseTupleAndKeywords(args, keywds, "OO|$OOfsii", kwlist, &points_obj, &batches_obj, &features_obj, &classes_obj, &sampleDl, &method_buffer, &max_p, &verbose)) + { + PyErr_SetString(PyExc_RuntimeError, "Error parsing arguments"); + return NULL; + } + + // Get the method argument + string method(method_buffer); + + // Interpret method + if (method.compare("barycenters") && method.compare("voxelcenters")) + { + PyErr_SetString(PyExc_RuntimeError, "Error parsing method. Valid method names are \"barycenters\" and \"voxelcenters\" "); + return NULL; + } + + // Check if using features or classes + bool use_feature = true, use_classes = true; + if (features_obj == NULL) + use_feature = false; + if (classes_obj == NULL) + use_classes = false; + + // Interpret the input objects as numpy arrays. 
+ PyObject* points_array = PyArray_FROM_OTF(points_obj, NPY_FLOAT, NPY_IN_ARRAY); + PyObject* batches_array = PyArray_FROM_OTF(batches_obj, NPY_INT, NPY_IN_ARRAY); + PyObject* features_array = NULL; + PyObject* classes_array = NULL; + if (use_feature) + features_array = PyArray_FROM_OTF(features_obj, NPY_FLOAT, NPY_IN_ARRAY); + if (use_classes) + classes_array = PyArray_FROM_OTF(classes_obj, NPY_INT, NPY_IN_ARRAY); + + // Verify data was load correctly. + if (points_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input points to numpy arrays of type float32"); + return NULL; + } + if (batches_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input batches to numpy arrays of type int32"); + return NULL; + } + if (use_feature && features_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input features to numpy arrays of type float32"); + return NULL; + } + if (use_classes && classes_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input classes to numpy arrays of type int32"); + return NULL; + } + + // Check that the input array respect the dims + if ((int)PyArray_NDIM(points_array) != 2 || (int)PyArray_DIM(points_array, 1) != 3) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : points.shape is not (N, 3)"); + return NULL; + } + if ((int)PyArray_NDIM(batches_array) > 1) + { + 
Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : batches.shape is not (B,) "); + return NULL; + } + if (use_feature && ((int)PyArray_NDIM(features_array) != 2)) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : features.shape is not (N, d)"); + return NULL; + } + + if (use_classes && (int)PyArray_NDIM(classes_array) > 2) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : classes.shape is not (N,) or (N, d)"); + return NULL; + } + + // Number of points + int N = (int)PyArray_DIM(points_array, 0); + + // Number of batches + int Nb = (int)PyArray_DIM(batches_array, 0); + + // Dimension of the features + int fdim = 0; + if (use_feature) + fdim = (int)PyArray_DIM(features_array, 1); + + //Dimension of labels + int ldim = 1; + if (use_classes && (int)PyArray_NDIM(classes_array) == 2) + ldim = (int)PyArray_DIM(classes_array, 1); + + // Check that the input array respect the number of points + if (use_feature && (int)PyArray_DIM(features_array, 0) != N) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : features.shape is not (N, d)"); + return NULL; + } + if (use_classes && (int)PyArray_DIM(classes_array, 0) != N) + { + Py_XDECREF(points_array); + Py_XDECREF(batches_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : classes.shape is not (N,) or (N, d)"); + return NULL; + } + + + // Call the C++ function + // ********************* + + // Create pyramid + if (verbose > 0) + cout << "Computing cloud pyramid 
with support points: " << endl;
+
+
+    // Convert PyArray to Cloud C++ class
+    vector<PointXYZ> original_points;
+    vector<int> original_batches;
+    vector<float> original_features;
+    vector<int> original_classes;
+    original_points = vector<PointXYZ>((PointXYZ*)PyArray_DATA(points_array), (PointXYZ*)PyArray_DATA(points_array) + N);
+    original_batches = vector<int>((int*)PyArray_DATA(batches_array), (int*)PyArray_DATA(batches_array) + Nb);
+    if (use_feature)
+        original_features = vector<float>((float*)PyArray_DATA(features_array), (float*)PyArray_DATA(features_array) + N * fdim);
+    if (use_classes)
+        original_classes = vector<int>((int*)PyArray_DATA(classes_array), (int*)PyArray_DATA(classes_array) + N * ldim);
+
+    // Subsample
+    vector<PointXYZ> subsampled_points;
+    vector<float> subsampled_features;
+    vector<int> subsampled_classes;
+    vector<int> subsampled_batches;
+    batch_grid_subsampling(original_points,
+                           subsampled_points,
+                           original_features,
+                           subsampled_features,
+                           original_classes,
+                           subsampled_classes,
+                           original_batches,
+                           subsampled_batches,
+                           sampleDl,
+                           max_p);
+
+    // Check result
+    if (subsampled_points.size() < 1)
+    {
+        PyErr_SetString(PyExc_RuntimeError, "Error");
+        return NULL;
+    }
+
+    // Manage outputs
+    // **************
+
+    // Dimension of input containers
+    npy_intp* point_dims = new npy_intp[2];
+    point_dims[0] = subsampled_points.size();
+    point_dims[1] = 3;
+    npy_intp* feature_dims = new npy_intp[2];
+    feature_dims[0] = subsampled_points.size();
+    feature_dims[1] = fdim;
+    npy_intp* classes_dims = new npy_intp[2];
+    classes_dims[0] = subsampled_points.size();
+    classes_dims[1] = ldim;
+    npy_intp* batches_dims = new npy_intp[1];
+    batches_dims[0] = Nb;
+
+    // Create output array
+    PyObject* res_points_obj = PyArray_SimpleNew(2, point_dims, NPY_FLOAT);
+    PyObject* res_batches_obj = PyArray_SimpleNew(1, batches_dims, NPY_INT);
+    PyObject* res_features_obj = NULL;
+    PyObject* res_classes_obj = NULL;
+    PyObject* ret = NULL;
+
+    // Fill output array with values
+    size_t size_in_bytes = subsampled_points.size() * 3 *
sizeof(float); + memcpy(PyArray_DATA(res_points_obj), subsampled_points.data(), size_in_bytes); + size_in_bytes = Nb * sizeof(int); + memcpy(PyArray_DATA(res_batches_obj), subsampled_batches.data(), size_in_bytes); + if (use_feature) + { + size_in_bytes = subsampled_points.size() * fdim * sizeof(float); + res_features_obj = PyArray_SimpleNew(2, feature_dims, NPY_FLOAT); + memcpy(PyArray_DATA(res_features_obj), subsampled_features.data(), size_in_bytes); + } + if (use_classes) + { + size_in_bytes = subsampled_points.size() * ldim * sizeof(int); + res_classes_obj = PyArray_SimpleNew(2, classes_dims, NPY_INT); + memcpy(PyArray_DATA(res_classes_obj), subsampled_classes.data(), size_in_bytes); + } + + + // Merge results + if (use_feature && use_classes) + ret = Py_BuildValue("NNNN", res_points_obj, res_batches_obj, res_features_obj, res_classes_obj); + else if (use_feature) + ret = Py_BuildValue("NNN", res_points_obj, res_batches_obj, res_features_obj); + else if (use_classes) + ret = Py_BuildValue("NNN", res_points_obj, res_batches_obj, res_classes_obj); + else + ret = Py_BuildValue("NN", res_points_obj, res_batches_obj); + + // Clean up + // ******** + + Py_DECREF(points_array); + Py_DECREF(batches_array); + Py_XDECREF(features_array); + Py_XDECREF(classes_array); + + return ret; +} + +// Definition of the subsample method +// **************************************** + +static PyObject* cloud_subsampling(PyObject* self, PyObject* args, PyObject* keywds) +{ + + // Manage inputs + // ************* + + // Args containers + PyObject* points_obj = NULL; + PyObject* features_obj = NULL; + PyObject* classes_obj = NULL; + + // Keywords containers + static char* kwlist[] = { "points", "features", "classes", "sampleDl", "method", "verbose", NULL }; + float sampleDl = 0.1; + const char* method_buffer = "barycenters"; + int verbose = 0; + + // Parse the input + if (!PyArg_ParseTupleAndKeywords(args, keywds, "O|$OOfsi", kwlist, &points_obj, &features_obj, &classes_obj, &sampleDl, 
&method_buffer, &verbose)) + { + PyErr_SetString(PyExc_RuntimeError, "Error parsing arguments"); + return NULL; + } + + // Get the method argument + string method(method_buffer); + + // Interpret method + if (method.compare("barycenters") && method.compare("voxelcenters")) + { + PyErr_SetString(PyExc_RuntimeError, "Error parsing method. Valid method names are \"barycenters\" and \"voxelcenters\" "); + return NULL; + } + + // Check if using features or classes + bool use_feature = true, use_classes = true; + if (features_obj == NULL) + use_feature = false; + if (classes_obj == NULL) + use_classes = false; + + // Interpret the input objects as numpy arrays. + PyObject* points_array = PyArray_FROM_OTF(points_obj, NPY_FLOAT, NPY_IN_ARRAY); + PyObject* features_array = NULL; + PyObject* classes_array = NULL; + if (use_feature) + features_array = PyArray_FROM_OTF(features_obj, NPY_FLOAT, NPY_IN_ARRAY); + if (use_classes) + classes_array = PyArray_FROM_OTF(classes_obj, NPY_INT, NPY_IN_ARRAY); + + // Verify data was load correctly. 
+ if (points_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input points to numpy arrays of type float32"); + return NULL; + } + if (use_feature && features_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input features to numpy arrays of type float32"); + return NULL; + } + if (use_classes && classes_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input classes to numpy arrays of type int32"); + return NULL; + } + + // Check that the input array respect the dims + if ((int)PyArray_NDIM(points_array) != 2 || (int)PyArray_DIM(points_array, 1) != 3) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : points.shape is not (N, 3)"); + return NULL; + } + if (use_feature && ((int)PyArray_NDIM(features_array) != 2)) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : features.shape is not (N, d)"); + return NULL; + } + + if (use_classes && (int)PyArray_NDIM(classes_array) > 2) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : classes.shape is not (N,) or (N, d)"); + return NULL; + } + + // Number of points + int N = (int)PyArray_DIM(points_array, 0); + + // Dimension of the features + int fdim = 0; + if (use_feature) + fdim = (int)PyArray_DIM(features_array, 1); + + //Dimension of labels + int ldim = 1; + if (use_classes && (int)PyArray_NDIM(classes_array) == 2) + ldim = (int)PyArray_DIM(classes_array, 1); + + // Check that the input array respect 
the number of points
+    if (use_feature && (int)PyArray_DIM(features_array, 0) != N)
+    {
+        Py_XDECREF(points_array);
+        Py_XDECREF(classes_array);
+        Py_XDECREF(features_array);
+        PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : features.shape is not (N, d)");
+        return NULL;
+    }
+    if (use_classes && (int)PyArray_DIM(classes_array, 0) != N)
+    {
+        Py_XDECREF(points_array);
+        Py_XDECREF(classes_array);
+        Py_XDECREF(features_array);
+        PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : classes.shape is not (N,) or (N, d)");
+        return NULL;
+    }
+
+
+    // Call the C++ function
+    // *********************
+
+    // Create pyramid
+    if (verbose > 0)
+        cout << "Computing cloud pyramid with support points: " << endl;
+
+
+    // Convert PyArray to Cloud C++ class
+    vector<PointXYZ> original_points;
+    vector<float> original_features;
+    vector<int> original_classes;
+    original_points = vector<PointXYZ>((PointXYZ*)PyArray_DATA(points_array), (PointXYZ*)PyArray_DATA(points_array) + N);
+    if (use_feature)
+        original_features = vector<float>((float*)PyArray_DATA(features_array), (float*)PyArray_DATA(features_array) + N * fdim);
+    if (use_classes)
+        original_classes = vector<int>((int*)PyArray_DATA(classes_array), (int*)PyArray_DATA(classes_array) + N * ldim);
+
+    // Subsample
+    vector<PointXYZ> subsampled_points;
+    vector<float> subsampled_features;
+    vector<int> subsampled_classes;
+    grid_subsampling(original_points,
+                     subsampled_points,
+                     original_features,
+                     subsampled_features,
+                     original_classes,
+                     subsampled_classes,
+                     sampleDl,
+                     verbose);
+
+    // Check result
+    if (subsampled_points.size() < 1)
+    {
+        PyErr_SetString(PyExc_RuntimeError, "Error");
+        return NULL;
+    }
+
+    // Manage outputs
+    // **************
+
+    // Dimension of input containers
+    npy_intp* point_dims = new npy_intp[2];
+    point_dims[0] = subsampled_points.size();
+    point_dims[1] = 3;
+    npy_intp* feature_dims = new npy_intp[2];
+    feature_dims[0] = subsampled_points.size();
+    feature_dims[1] = fdim;
+    npy_intp* classes_dims = new npy_intp[2];
+    classes_dims[0] =
subsampled_points.size();
+    classes_dims[1] = ldim;
+
+    // Create output array
+    PyObject* res_points_obj = PyArray_SimpleNew(2, point_dims, NPY_FLOAT);
+    PyObject* res_features_obj = NULL;
+    PyObject* res_classes_obj = NULL;
+    PyObject* ret = NULL;
+
+    // Fill output array with values
+    size_t size_in_bytes = subsampled_points.size() * 3 * sizeof(float);
+    memcpy(PyArray_DATA(res_points_obj), subsampled_points.data(), size_in_bytes);
+    if (use_feature)
+    {
+        size_in_bytes = subsampled_points.size() * fdim * sizeof(float);
+        res_features_obj = PyArray_SimpleNew(2, feature_dims, NPY_FLOAT);
+        memcpy(PyArray_DATA(res_features_obj), subsampled_features.data(), size_in_bytes);
+    }
+    if (use_classes)
+    {
+        size_in_bytes = subsampled_points.size() * ldim * sizeof(int);
+        res_classes_obj = PyArray_SimpleNew(2, classes_dims, NPY_INT);
+        memcpy(PyArray_DATA(res_classes_obj), subsampled_classes.data(), size_in_bytes);
+    }
+
+
+    // Merge results
+    if (use_feature && use_classes)
+        ret = Py_BuildValue("NNN", res_points_obj, res_features_obj, res_classes_obj);
+    else if (use_feature)
+        ret = Py_BuildValue("NN", res_points_obj, res_features_obj);
+    else if (use_classes)
+        ret = Py_BuildValue("NN", res_points_obj, res_classes_obj);
+    else
+        ret = Py_BuildValue("N", res_points_obj);
+
+    // Clean up
+    // ********
+
+    Py_DECREF(points_array);
+    Py_XDECREF(features_array);
+    Py_XDECREF(classes_array);
+
+    return ret;
+}
\ No newline at end of file
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/cloud/cloud.cpp b/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/cloud/cloud.cpp
new file mode 100644
index 00000000..c285140d
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/cloud/cloud.cpp
@@ -0,0 +1,67 @@
+//
+//
+//		0==========================0
+//		|    Local feature test    |
+//		0==========================0
+//
+//		version 1.0 :
+//			>
+//
+//---------------------------------------------------
+//
+//		Cloud source :
+//		Define useful Functions/Methods
+//
+//----------------------------------------------------
+//
+//		Hugues THOMAS - 10/02/2017
+//
+
+
+#include "cloud.h"
+
+
+// Getters
+// *******
+
+PointXYZ max_point(std::vector<PointXYZ> points)
+{
+    // Initialize limits
+    PointXYZ maxP(points[0]);
+
+    // Loop over all points
+    for (auto p : points)
+    {
+        if (p.x > maxP.x)
+            maxP.x = p.x;
+
+        if (p.y > maxP.y)
+            maxP.y = p.y;
+
+        if (p.z > maxP.z)
+            maxP.z = p.z;
+    }
+
+    return maxP;
+}
+
+PointXYZ min_point(std::vector<PointXYZ> points)
+{
+    // Initialize limits
+    PointXYZ minP(points[0]);
+
+    // Loop over all points
+    for (auto p : points)
+    {
+        if (p.x < minP.x)
+            minP.x = p.x;
+
+        if (p.y < minP.y)
+            minP.y = p.y;
+
+        if (p.z < minP.z)
+            minP.z = p.z;
+    }
+
+    return minP;
+}
\ No newline at end of file
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/cloud/cloud.h b/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/cloud/cloud.h
new file mode 100644
index 00000000..99d4e194
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/cloud/cloud.h
@@ -0,0 +1,185 @@
+//
+//
+//		0==========================0
+//		|    Local feature test    |
+//		0==========================0
+//
+//		version 1.0 :
+//			>
+//
+//---------------------------------------------------
+//
+//		Cloud header
+//
+//----------------------------------------------------
+//
+//		Hugues THOMAS - 10/02/2017
+//
+
+
+# pragma once
+
+#include <vector>
+#include <unordered_map>
+#include <map>
+#include <algorithm>
+#include <numeric>
+#include <iostream>
+#include <iomanip>
+#include <cmath>
+
+#include <time.h>
+
+
+
+
+// Point class
+// ***********
+
+
+class PointXYZ
+{
+public:
+
+    // Elements
+    // ********
+
+    float x, y, z;
+
+
+    // Methods
+    // *******
+
+    // Constructor
+    PointXYZ() { x = 0; y = 0; z = 0; }
+    PointXYZ(float x0, float y0, float z0) { x = x0; y = y0; z = z0; }
+
+    // array type accessor
+    float operator [] (int i) const
+    {
+        if (i == 0) return x;
+        else if (i == 1) return y;
+        else return z;
+    }
+
+    // operations
+    float dot(const PointXYZ P) const
+    {
+        return x * P.x + y * P.y + z * P.z;
+    }
+
+    float
sq_norm()
+    {
+        return x*x + y*y + z*z;
+    }
+
+    PointXYZ cross(const PointXYZ P) const
+    {
+        return PointXYZ(y*P.z - z*P.y, z*P.x - x*P.z, x*P.y - y*P.x);
+    }
+
+    PointXYZ& operator+=(const PointXYZ& P)
+    {
+        x += P.x;
+        y += P.y;
+        z += P.z;
+        return *this;
+    }
+
+    PointXYZ& operator-=(const PointXYZ& P)
+    {
+        x -= P.x;
+        y -= P.y;
+        z -= P.z;
+        return *this;
+    }
+
+    PointXYZ& operator*=(const float& a)
+    {
+        x *= a;
+        y *= a;
+        z *= a;
+        return *this;
+    }
+};
+
+
+// Point Operations
+// ****************
+
+inline PointXYZ operator + (const PointXYZ A, const PointXYZ B)
+{
+    return PointXYZ(A.x + B.x, A.y + B.y, A.z + B.z);
+}
+
+inline PointXYZ operator - (const PointXYZ A, const PointXYZ B)
+{
+    return PointXYZ(A.x - B.x, A.y - B.y, A.z - B.z);
+}
+
+inline PointXYZ operator * (const PointXYZ P, const float a)
+{
+    return PointXYZ(P.x * a, P.y * a, P.z * a);
+}
+
+inline PointXYZ operator * (const float a, const PointXYZ P)
+{
+    return PointXYZ(P.x * a, P.y * a, P.z * a);
+}
+
+inline std::ostream& operator << (std::ostream& os, const PointXYZ P)
+{
+    return os << "[" << P.x << ", " << P.y << ", " << P.z << "]";
+}
+
+inline bool operator == (const PointXYZ A, const PointXYZ B)
+{
+    return A.x == B.x && A.y == B.y && A.z == B.z;
+}
+
+inline PointXYZ floor(const PointXYZ P)
+{
+    return PointXYZ(std::floor(P.x), std::floor(P.y), std::floor(P.z));
+}
+
+
+PointXYZ max_point(std::vector<PointXYZ> points);
+PointXYZ min_point(std::vector<PointXYZ> points);
+
+
+struct PointCloud
+{
+
+    std::vector<PointXYZ> pts;
+
+    // Must return the number of data points
+    inline size_t kdtree_get_point_count() const { return pts.size(); }
+
+    // Returns the dim'th component of the idx'th point in the class:
+    // Since this is inlined and the "dim" argument is typically an immediate value, the
+    //  "if/else's" are actually solved at compile time.
+    inline float kdtree_get_pt(const size_t idx, const size_t dim) const
+    {
+        if (dim == 0) return pts[idx].x;
+        else if (dim == 1) return pts[idx].y;
+        else return pts[idx].z;
+    }
+
+    // Optional bounding-box computation: return false to default to a standard bbox computation loop.
+    //   Return true if the BBOX was already computed by the class and returned in "bb" so it can be avoided to redo it again.
+    //   Look at bb.size() to find out the expected dimensionality (e.g. 2 or 3 for point clouds)
+    template <class BBOX>
+    bool kdtree_get_bbox(BBOX& /* bb */) const { return false; }
+
+};
+
+
+
+
+
+
+
+
+
+
+
diff --git a/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/nanoflann/nanoflann.hpp b/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/nanoflann/nanoflann.hpp
new file mode 100644
index 00000000..8d2ab6cc
--- /dev/null
+++ b/competing_methods/my_KPConv/cpp_wrappers/cpp_utils/nanoflann/nanoflann.hpp
@@ -0,0 +1,2043 @@
+/***********************************************************************
+ * Software License Agreement (BSD License)
+ *
+ * Copyright 2008-2009  Marius Muja (mariusm@cs.ubc.ca). All rights reserved.
+ * Copyright 2008-2009  David G. Lowe (lowe@cs.ubc.ca). All rights reserved.
+ * Copyright 2011-2016  Jose Luis Blanco (joseluisblancoc@gmail.com).
+ *   All rights reserved.
+ *
+ * THE BSD LICENSE
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *************************************************************************/
+
+/** \mainpage nanoflann C++ API documentation
+ *  nanoflann is a C++ header-only library for building KD-Trees, mostly
+ *  optimized for 2D or 3D point clouds.
+ *
+ *  nanoflann does not require compiling or installing, just an
+ *  #include <nanoflann.hpp> in your code.
+ *
+ *  See:
+ *   - C++ API organized by modules
+ *   - Online README
+ *   - Doxygen documentation
+ */
+
+#ifndef NANOFLANN_HPP_
+#define NANOFLANN_HPP_
+
+#include <algorithm>
+#include <array>
+#include <cassert>
+#include <cmath>   // for abs()
+#include <cstdio>  // for fwrite()
+#include <cstdlib> // for abs()
+#include <functional>
+#include <limits>  // std::reference_wrapper
+#include <stdexcept>
+#include <vector>
+
+/** Library version: 0xMmP (M=Major,m=minor,P=patch) */
+#define NANOFLANN_VERSION 0x130
+
+// Avoid conflicting declaration of min/max macros in windows headers
+#if !defined(NOMINMAX) && \
+    (defined(_WIN32) || defined(_WIN32_) || defined(WIN32) || defined(_WIN64))
+#define NOMINMAX
+#ifdef max
+#undef max
+#undef min
+#endif
+#endif
+
+namespace nanoflann {
+/** @addtogroup nanoflann_grp nanoflann C++ library for ANN
+ *  @{ */
+
+/** the PI constant (required to avoid MSVC missing symbols) */
+template <typename T> T pi_const() {
+  return static_cast<T>(3.14159265358979323846);
+}
+
+/**
+ * Traits if object is resizable and assignable (typically has a resize | assign
+ * method)
+ */
+template <typename T, typename = int> struct has_resize : std::false_type {};
+
+template <typename T>
+struct has_resize<T, decltype((void)std::declval<T>().resize(1), 0)>
+    : std::true_type {};
+
+template <typename T, typename = int> struct has_assign : std::false_type {};
+
+template <typename T>
+struct has_assign<T, decltype((void)std::declval<T>().assign(1, 0), 0)>
+    : std::true_type {};
+
+/**
+ * Free function to resize a resizable object
+ */
+template <typename Container>
+inline typename std::enable_if<has_resize<Container>::value, void>::type
+resize(Container &c, const size_t nElements) {
+  c.resize(nElements);
+}
+
+/**
+ * Free function that has no effects on non resizable containers (e.g.
+ * std::array) It raises an exception if the expected size does not match
+ */
+template <typename Container>
+inline typename std::enable_if<!has_resize<Container>::value, void>::type
+resize(Container &c, const size_t nElements) {
+  if (nElements != c.size())
+    throw std::logic_error("Try to change the size of a std::array.");
+}
+
+/**
+ * Free function to assign to a container
+ */
+template <typename Container, typename T>
+inline typename std::enable_if<has_assign<Container>::value, void>::type
+assign(Container &c, const size_t nElements, const T &value) {
+  c.assign(nElements, value);
+}
+
+/**
+ * Free function to assign to a std::array
+ */
+template <typename Container, typename T>
+inline typename std::enable_if<!has_assign<Container>::value, void>::type
+assign(Container &c, const size_t nElements, const T &value) {
+  for (size_t i = 0; i < nElements; i++)
+    c[i] = value;
+}
+
+/** @addtogroup result_sets_grp Result set classes
+ *  @{ */
+template <typename _DistanceType, typename _IndexType = size_t,
+          typename _CountType = size_t>
+class KNNResultSet {
+public:
+  typedef _DistanceType DistanceType;
+  typedef _IndexType IndexType;
+  typedef _CountType CountType;
+
+private:
+  IndexType *indices;
+  DistanceType *dists;
+  CountType capacity;
+  CountType count;
+
+public:
+  inline KNNResultSet(CountType capacity_)
+      : indices(0), dists(0), capacity(capacity_), count(0) {}
+
+  inline void init(IndexType *indices_, DistanceType *dists_) {
+    indices = indices_;
+    dists = dists_;
+    count = 0;
+    if (capacity)
+      dists[capacity - 1] = (std::numeric_limits<DistanceType>::max)();
+  }
+
+  inline CountType size() const { return count; }
+
+  inline bool full() const { return count == capacity; }
+
+  /**
+   * Called during search to add an element matching the criteria.
+   * @return true if the search should be continued, false if the results are
+   * sufficient
+   */
+  inline bool addPoint(DistanceType dist, IndexType index) {
+    CountType i;
+    for (i = count; i > 0; --i) {
+#ifdef NANOFLANN_FIRST_MATCH // If defined and two points have the same
+                             // distance, the one with the lowest-index will be
+                             // returned first.
+      if ((dists[i - 1] > dist) ||
+          ((dist == dists[i - 1]) && (indices[i - 1] > index))) {
+#else
+      if (dists[i - 1] > dist) {
+#endif
+        if (i < capacity) {
+          dists[i] = dists[i - 1];
+          indices[i] = indices[i - 1];
+        }
+      } else
+        break;
+    }
+    if (i < capacity) {
+      dists[i] = dist;
+      indices[i] = index;
+    }
+    if (count < capacity)
+      count++;
+
+    // tell caller that the search shall continue
+    return true;
+  }
+
+  inline DistanceType worstDist() const { return dists[capacity - 1]; }
+};
+
+/** operator "<" for std::sort() */
+struct IndexDist_Sorter {
+  /** PairType will be typically: std::pair<IndexType, DistanceType> */
+  template <typename PairType>
+  inline bool operator()(const PairType &p1, const PairType &p2) const {
+    return p1.second < p2.second;
+  }
+};
+
+/**
+ * A result-set class used when performing a radius based search.
+ */
+template <typename _DistanceType, typename _IndexType = size_t>
+class RadiusResultSet {
+public:
+  typedef _DistanceType DistanceType;
+  typedef _IndexType IndexType;
+
+public:
+  const DistanceType radius;
+
+  std::vector<std::pair<IndexType, DistanceType>> &m_indices_dists;
+
+  inline RadiusResultSet(
+      DistanceType radius_,
+      std::vector<std::pair<IndexType, DistanceType>> &indices_dists)
+      : radius(radius_), m_indices_dists(indices_dists) {
+    init();
+  }
+
+  inline void init() { clear(); }
+  inline void clear() { m_indices_dists.clear(); }
+
+  inline size_t size() const { return m_indices_dists.size(); }
+
+  inline bool full() const { return true; }
+
+  /**
+   * Called during search to add an element matching the criteria.
+ * @return true if the search should be continued, false if the results are + * sufficient + */ + inline bool addPoint(DistanceType dist, IndexType index) { + if (dist < radius) + m_indices_dists.push_back(std::make_pair(index, dist)); + return true; + } + + inline DistanceType worstDist() const { return radius; } + + /** + * Find the worst result (furtherest neighbor) without copying or sorting + * Pre-conditions: size() > 0 + */ + std::pair worst_item() const { + if (m_indices_dists.empty()) + throw std::runtime_error("Cannot invoke RadiusResultSet::worst_item() on " + "an empty list of results."); + typedef + typename std::vector>::const_iterator + DistIt; + DistIt it = std::max_element(m_indices_dists.begin(), m_indices_dists.end(), + IndexDist_Sorter()); + return *it; + } +}; + +/** @} */ + +/** @addtogroup loadsave_grp Load/save auxiliary functions + * @{ */ +template +void save_value(FILE *stream, const T &value, size_t count = 1) { + fwrite(&value, sizeof(value), count, stream); +} + +template +void save_value(FILE *stream, const std::vector &value) { + size_t size = value.size(); + fwrite(&size, sizeof(size_t), 1, stream); + fwrite(&value[0], sizeof(T), size, stream); +} + +template +void load_value(FILE *stream, T &value, size_t count = 1) { + size_t read_cnt = fread(&value, sizeof(value), count, stream); + if (read_cnt != count) { + throw std::runtime_error("Cannot read from file"); + } +} + +template void load_value(FILE *stream, std::vector &value) { + size_t size; + size_t read_cnt = fread(&size, sizeof(size_t), 1, stream); + if (read_cnt != 1) { + throw std::runtime_error("Cannot read from file"); + } + value.resize(size); + read_cnt = fread(&value[0], sizeof(T), size, stream); + if (read_cnt != size) { + throw std::runtime_error("Cannot read from file"); + } +} +/** @} */ + +/** @addtogroup metric_grp Metric (distance) classes + * @{ */ + +struct Metric {}; + +/** Manhattan distance functor (generic version, optimized for + * high-dimensionality 
+ * data sets). Corresponding distance traits: nanoflann::metric_L1
+ * \tparam T Type of the elements (e.g. double, float, uint8_t)
+ * \tparam _DistanceType Type of distance variables (must be signed)
+ * (e.g. float, double, int64_t)
+ */
+template <class T, class DataSource, typename _DistanceType = T>
+struct L1_Adaptor {
+  typedef T ElementType;
+  typedef _DistanceType DistanceType;
+
+  const DataSource &data_source;
+
+  L1_Adaptor(const DataSource &_data_source) : data_source(_data_source) {}
+
+  inline DistanceType evalMetric(const T *a, const size_t b_idx, size_t size,
+                                 DistanceType worst_dist = -1) const {
+    DistanceType result = DistanceType();
+    const T *last = a + size;
+    const T *lastgroup = last - 3;
+    size_t d = 0;
+
+    /* Process 4 items with each loop for efficiency. */
+    while (a < lastgroup) {
+      const DistanceType diff0 =
+          std::abs(a[0] - data_source.kdtree_get_pt(b_idx, d++));
+      const DistanceType diff1 =
+          std::abs(a[1] - data_source.kdtree_get_pt(b_idx, d++));
+      const DistanceType diff2 =
+          std::abs(a[2] - data_source.kdtree_get_pt(b_idx, d++));
+      const DistanceType diff3 =
+          std::abs(a[3] - data_source.kdtree_get_pt(b_idx, d++));
+      result += diff0 + diff1 + diff2 + diff3;
+      a += 4;
+      if ((worst_dist > 0) && (result > worst_dist)) {
+        return result;
+      }
+    }
+    /* Process last 0-3 components. Not needed for standard vector lengths. */
+    while (a < last) {
+      result += std::abs(*a++ - data_source.kdtree_get_pt(b_idx, d++));
+    }
+    return result;
+  }
+
+  template <typename U, typename V>
+  inline DistanceType accum_dist(const U a, const V b, const size_t) const {
+    return std::abs(a - b);
+  }
+};
+
+/** Squared Euclidean distance functor (generic version, optimized for
+ * high-dimensionality data sets). Corresponding distance traits:
+ * nanoflann::metric_L2
+ * \tparam T Type of the elements (e.g. double, float, uint8_t)
+ * \tparam _DistanceType Type of distance variables (must be signed)
+ * (e.g. float, double, int64_t)
+ */
+template <class T, class DataSource, typename _DistanceType = T>
+struct L2_Adaptor {
+  typedef T ElementType;
+  typedef _DistanceType DistanceType;
+
+  const DataSource &data_source;
+
+  L2_Adaptor(const DataSource &_data_source) : data_source(_data_source) {}
+
+  inline DistanceType evalMetric(const T *a, const size_t b_idx, size_t size,
+                                 DistanceType worst_dist = -1) const {
+    DistanceType result = DistanceType();
+    const T *last = a + size;
+    const T *lastgroup = last - 3;
+    size_t d = 0;
+
+    /* Process 4 items with each loop for efficiency. */
+    while (a < lastgroup) {
+      const DistanceType diff0 = a[0] - data_source.kdtree_get_pt(b_idx, d++);
+      const DistanceType diff1 = a[1] - data_source.kdtree_get_pt(b_idx, d++);
+      const DistanceType diff2 = a[2] - data_source.kdtree_get_pt(b_idx, d++);
+      const DistanceType diff3 = a[3] - data_source.kdtree_get_pt(b_idx, d++);
+      result += diff0 * diff0 + diff1 * diff1 + diff2 * diff2 + diff3 * diff3;
+      a += 4;
+      if ((worst_dist > 0) && (result > worst_dist)) {
+        return result;
+      }
+    }
+    /* Process last 0-3 components. Not needed for standard vector lengths. */
+    while (a < last) {
+      const DistanceType diff0 = *a++ - data_source.kdtree_get_pt(b_idx, d++);
+      result += diff0 * diff0;
+    }
+    return result;
+  }
+
+  template <typename U, typename V>
+  inline DistanceType accum_dist(const U a, const V b, const size_t) const {
+    return (a - b) * (a - b);
+  }
+};
+
+/** Squared Euclidean (L2) distance functor (suitable for low-dimensionality
+ * datasets, like 2D or 3D point clouds). Corresponding distance traits:
+ * nanoflann::metric_L2_Simple
+ * \tparam T Type of the elements (e.g. double, float, uint8_t)
+ * \tparam _DistanceType Type of distance variables (must be signed)
+ * (e.g. float, double, int64_t)
+ */
+template <class T, class DataSource, typename _DistanceType = T>
+struct L2_Simple_Adaptor {
+  typedef T ElementType;
+  typedef _DistanceType DistanceType;
+
+  const DataSource &data_source;
+
+  L2_Simple_Adaptor(const DataSource &_data_source)
+      : data_source(_data_source) {}
+
+  inline DistanceType evalMetric(const T *a, const size_t b_idx,
+                                 size_t size) const {
+    DistanceType result = DistanceType();
+    for (size_t i = 0; i < size; ++i) {
+      const DistanceType diff = a[i] - data_source.kdtree_get_pt(b_idx, i);
+      result += diff * diff;
+    }
+    return result;
+  }
+
+  template <typename U, typename V>
+  inline DistanceType accum_dist(const U a, const V b, const size_t) const {
+    return (a - b) * (a - b);
+  }
+};
+
+/** SO2 distance functor
+ * Corresponding distance traits: nanoflann::metric_SO2
+ * \tparam T Type of the elements (e.g. double, float)
+ * \tparam _DistanceType Type of distance variables (must be signed)
+ * (e.g. float, double). The orientation is constrained to be in [-pi, pi]
+ */
+template <class T, class DataSource, typename _DistanceType = T>
+struct SO2_Adaptor {
+  typedef T ElementType;
+  typedef _DistanceType DistanceType;
+
+  const DataSource &data_source;
+
+  SO2_Adaptor(const DataSource &_data_source) : data_source(_data_source) {}
+
+  inline DistanceType evalMetric(const T *a, const size_t b_idx,
+                                 size_t size) const {
+    return accum_dist(a[size - 1], data_source.kdtree_get_pt(b_idx, size - 1),
+                      size - 1);
+  }
+
+  /** Note: this assumes that input angles are already in the range [-pi,pi] */
+  template <typename U, typename V>
+  inline DistanceType accum_dist(const U a, const V b, const size_t) const {
+    DistanceType result = DistanceType(), PI = pi_const<DistanceType>();
+    result = b - a;
+    if (result > PI)
+      result -= 2 * PI;
+    else if (result < -PI)
+      result += 2 * PI;
+    return result;
+  }
+};
+
+/** SO3 distance functor (uses L2_Simple)
+ * Corresponding distance traits: nanoflann::metric_SO3
+ * \tparam T Type of the elements (e.g. double, float)
+ * \tparam _DistanceType Type of distance variables (must be signed)
+ * (e.g. float, double)
+ */
+template <class T, class DataSource, typename _DistanceType = T>
+struct SO3_Adaptor {
+  typedef T ElementType;
+  typedef _DistanceType DistanceType;
+
+  L2_Simple_Adaptor<T, DataSource> distance_L2_Simple;
+
+  SO3_Adaptor(const DataSource &_data_source)
+      : distance_L2_Simple(_data_source) {}
+
+  inline DistanceType evalMetric(const T *a, const size_t b_idx,
+                                 size_t size) const {
+    return distance_L2_Simple.evalMetric(a, b_idx, size);
+  }
+
+  template <typename U, typename V>
+  inline DistanceType accum_dist(const U a, const V b, const size_t idx) const {
+    return distance_L2_Simple.accum_dist(a, b, idx);
+  }
+};
+
+/** Metaprogramming helper traits class for the L1 (Manhattan) metric */
+struct metric_L1 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef L1_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the L2 (Euclidean) metric */
+struct metric_L2 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef L2_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the L2_simple (Euclidean) metric */
+struct metric_L2_Simple : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef L2_Simple_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the SO2 metric */
+struct metric_SO2 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef SO2_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the SO3 metric */
+struct metric_SO3 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef SO3_Adaptor<T, DataSource> distance_t;
+  };
+};
+
+/** @} */
+
+/** @addtogroup param_grp Parameter structs
+ * @{ */
+
+/** Parameters (see README.md) */
+struct KDTreeSingleIndexAdaptorParams {
+  KDTreeSingleIndexAdaptorParams(size_t _leaf_max_size = 10)
+      : leaf_max_size(_leaf_max_size) {}
+
+  size_t leaf_max_size;
+};
+
+/** Search options for KDTreeSingleIndexAdaptor::findNeighbors() */
+struct SearchParams {
+  /** Note: The first argument (checks_IGNORED_) is ignored, but kept for
+   * compatibility with the FLANN interface */
+  SearchParams(int
+      checks_IGNORED_ = 32, float eps_ = 0, bool sorted_ = true)
+      : checks(checks_IGNORED_), eps(eps_), sorted(sorted_) {}
+
+  int checks;  //!< Ignored parameter (kept for compatibility with the FLANN
+               //!< interface).
+  float eps;   //!< search for eps-approximate neighbours (default: 0)
+  bool sorted; //!< only for radius search, require neighbours sorted by
+               //!< distance (default: true)
+};
+/** @} */
+
+/** @addtogroup memalloc_grp Memory allocation
+ * @{ */
+
+/**
+ * Allocates (using C's malloc) a generic type T.
+ *
+ * Params:
+ *     count = number of instances to allocate.
+ * Returns: pointer (of type T*) to memory buffer
+ */
+template <typename T> inline T *allocate(size_t count = 1) {
+  T *mem = static_cast<T *>(::malloc(sizeof(T) * count));
+  return mem;
+}
+
+/**
+ * Pooled storage allocator
+ *
+ * The following routines allow for the efficient allocation of storage in
+ * small chunks from a specified pool. Rather than allowing each structure
+ * to be freed individually, an entire pool of storage is freed at once.
+ * This method has two advantages over just using malloc() and free(). First,
+ * it is far more efficient for allocating small objects, as there is
+ * no overhead for remembering all the information needed to free each
+ * object or consolidating fragmented memory. Second, the decision about
+ * how long to keep an object is made at the time of allocation, and there
+ * is no need to track down all the objects to free them.
+ *
+ */
+
+const size_t WORDSIZE = 16;
+const size_t BLOCKSIZE = 8192;
+
+class PooledAllocator {
+  /* We maintain memory alignment to word boundaries by requiring that all
+     allocations be in multiples of the machine wordsize. */
+  /* Size of machine word in bytes. Must be power of 2. */
+  /* Minimum number of bytes requested at a time from the system. Must be
+   * multiple of WORDSIZE. */
+
+  size_t remaining; /* Number of bytes left in current block of storage. */
+  void *base;       /* Pointer to base of current block of storage. */
+  void *loc;        /* Current location in block to next allocate memory. */
+
+  void internal_init() {
+    remaining = 0;
+    base = NULL;
+    usedMemory = 0;
+    wastedMemory = 0;
+  }
+
+public:
+  size_t usedMemory;
+  size_t wastedMemory;
+
+  /**
+   * Default constructor. Initializes a new pool.
+   */
+  PooledAllocator() { internal_init(); }
+
+  /**
+   * Destructor. Frees all the memory allocated in this pool.
+   */
+  ~PooledAllocator() { free_all(); }
+
+  /** Frees all allocated memory chunks */
+  void free_all() {
+    while (base != NULL) {
+      void *prev =
+          *(static_cast<void **>(base)); /* Get pointer to prev block. */
+      ::free(base);
+      base = prev;
+    }
+    internal_init();
+  }
+
+  /**
+   * Returns a pointer to a piece of new memory of the given size in bytes
+   * allocated from the pool.
+   */
+  void *malloc(const size_t req_size) {
+    /* Round size up to a multiple of wordsize. The following expression
+       only works for WORDSIZE that is a power of 2, by masking last bits of
+       incremented size to zero.
+     */
+    const size_t size = (req_size + (WORDSIZE - 1)) & ~(WORDSIZE - 1);
+
+    /* Check whether a new block must be allocated. Note that the first word
+       of a block is reserved for a pointer to the previous block.
+     */
+    if (size > remaining) {
+
+      wastedMemory += remaining;
+
+      /* Allocate new storage. */
+      const size_t blocksize =
+          (size + sizeof(void *) + (WORDSIZE - 1) > BLOCKSIZE)
+              ? size + sizeof(void *) + (WORDSIZE - 1)
+              : BLOCKSIZE;
+
+      // use the standard C malloc to allocate memory
+      void *m = ::malloc(blocksize);
+      if (!m) {
+        fprintf(stderr, "Failed to allocate memory.\n");
+        return NULL;
+      }
+
+      /* Fill first word of new block with pointer to previous block.
+       */
+      static_cast<void **>(m)[0] = base;
+      base = m;
+
+      size_t shift = 0;
+      // int shift = (WORDSIZE - ( (((size_t)m) + sizeof(void*)) &
+      // (WORDSIZE-1))) & (WORDSIZE-1);
+
+      remaining = blocksize - sizeof(void *) - shift;
+      loc = (static_cast<char *>(m) + sizeof(void *) + shift);
+    }
+    void *rloc = loc;
+    loc = static_cast<char *>(loc) + size;
+    remaining -= size;
+
+    usedMemory += size;
+
+    return rloc;
+  }
+
+  /**
+   * Allocates (using this pool) a generic type T.
+   *
+   * Params:
+   *     count = number of instances to allocate.
+   * Returns: pointer (of type T*) to memory buffer
+   */
+  template <typename T> T *allocate(const size_t count = 1) {
+    T *mem = static_cast<T *>(this->malloc(sizeof(T) * count));
+    return mem;
+  }
+};
+/** @} */
+
+/** @addtogroup nanoflann_metaprog_grp Auxiliary metaprogramming stuff
+ * @{ */
+
+/** Used to declare fixed-size arrays when DIM>0, dynamically-allocated vectors
+ * when DIM=-1. Fixed size version for a generic DIM:
+ */
+template <int DIM, typename T> struct array_or_vector_selector {
+  typedef std::array<T, DIM> container_t;
+};
+/** Dynamic size version */
+template <typename T> struct array_or_vector_selector<-1, T> {
+  typedef std::vector<T> container_t;
+};
+
+/** @} */
+
+/** kd-tree base-class
+ *
+ * Contains the member functions common to the classes KDTreeSingleIndexAdaptor
+ * and KDTreeSingleIndexDynamicAdaptor_.
+ *
+ * \tparam Derived The name of the class which inherits this class.
+ * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+ * \tparam Distance The distance metric to use, these are all classes derived
+ * from nanoflann::Metric
+ * \tparam DIM Dimensionality of data points (e.g. 3 for 3D points)
+ * \tparam IndexType Will be typically size_t or int
+ */
+
+template <class Derived, typename Distance, class DatasetAdaptor, int DIM = -1,
+          typename IndexType = size_t>
+class KDTreeBaseClass {
+
+public:
+  /** Frees the previously-built index. Automatically called within
+   * buildIndex().
+   */
+  void freeIndex(Derived &obj) {
+    obj.pool.free_all();
+    obj.root_node = NULL;
+    obj.m_size_at_index_build = 0;
+  }
+
+  typedef typename Distance::ElementType ElementType;
+  typedef typename Distance::DistanceType DistanceType;
+
+  /*--------------------- Internal Data Structures --------------------------*/
+  struct Node {
+    /** Union used because a node can be either a LEAF node or a non-leaf node,
+     * so both data fields are never used simultaneously */
+    union {
+      struct leaf {
+        IndexType left, right; //!< Indices of points in leaf node
+      } lr;
+      struct nonleaf {
+        int divfeat;                  //!< Dimension used for subdivision.
+        DistanceType divlow, divhigh; //!< The values used for subdivision.
+      } sub;
+    } node_type;
+    Node *child1, *child2; //!< Child nodes (both=NULL mean it's a leaf node)
+  };
+
+  typedef Node *NodePtr;
+
+  struct Interval {
+    ElementType low, high;
+  };
+
+  /**
+   * Array of indices to vectors in the dataset.
+   */
+  std::vector<IndexType> vind;
+
+  NodePtr root_node;
+
+  size_t m_leaf_max_size;
+
+  size_t m_size;                //!< Number of current points in the dataset
+  size_t m_size_at_index_build; //!< Number of points in the dataset when the
+                                //!< index was built
+  int dim;                      //!< Dimensionality of each data point
+
+  /** Define "BoundingBox" as a fixed-size or variable-size container depending
+   * on "DIM" */
+  typedef
+      typename array_or_vector_selector<DIM, Interval>::container_t BoundingBox;
+
+  /** Define "distance_vector_t" as a fixed-size or variable-size container
+   * depending on "DIM" */
+  typedef typename array_or_vector_selector<DIM, DistanceType>::container_t
+      distance_vector_t;
+
+  /** The KD-tree used to find neighbours */
+
+  BoundingBox root_bbox;
+
+  /**
+   * Pooled memory allocator.
+   *
+   * Using a pooled memory allocator is more efficient
+   * than allocating memory directly when there is a large
+   * number of small memory allocations.
+   */
+  PooledAllocator pool;
+
+  /** Returns number of points in dataset */
+  size_t size(const Derived &obj) const { return obj.m_size; }
+
+  /** Returns the length of each point in the dataset */
+  size_t veclen(const Derived &obj) {
+    return static_cast<size_t>(DIM > 0 ? DIM : obj.dim);
+  }
+
+  /// Helper accessor to the dataset points:
+  inline ElementType dataset_get(const Derived &obj, size_t idx,
+                                 int component) const {
+    return obj.dataset.kdtree_get_pt(idx, component);
+  }
+
+  /**
+   * Computes the index memory usage.
+   * Returns: memory used by the index
+   */
+  size_t usedMemory(Derived &obj) {
+    return obj.pool.usedMemory + obj.pool.wastedMemory +
+           obj.dataset.kdtree_get_point_count() *
+               sizeof(IndexType); // pool memory and vind array memory
+  }
+
+  void computeMinMax(const Derived &obj, IndexType *ind, IndexType count,
+                     int element, ElementType &min_elem,
+                     ElementType &max_elem) {
+    min_elem = dataset_get(obj, ind[0], element);
+    max_elem = dataset_get(obj, ind[0], element);
+    for (IndexType i = 1; i < count; ++i) {
+      ElementType val = dataset_get(obj, ind[i], element);
+      if (val < min_elem)
+        min_elem = val;
+      if (val > max_elem)
+        max_elem = val;
+    }
+  }
+
+  /**
+   * Create a tree node that subdivides the list of vecs from vind[first]
+   * to vind[last]. The routine is called recursively on each sublist.
+   *
+   * @param left index of the first vector
+   * @param right index of the last vector
+   */
+  NodePtr divideTree(Derived &obj, const IndexType left, const IndexType right,
+                     BoundingBox &bbox) {
+    NodePtr node = obj.pool.template allocate<Node>(); // allocate memory
+
+    /* If too few exemplars remain, then make this a leaf node. */
+    if ((right - left) <= static_cast<IndexType>(obj.m_leaf_max_size)) {
+      node->child1 = node->child2 = NULL; /* Mark as leaf node. */
+      node->node_type.lr.left = left;
+      node->node_type.lr.right = right;
+
+      // compute bounding-box of leaf points
+      for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+        bbox[i].low = dataset_get(obj, obj.vind[left], i);
+        bbox[i].high = dataset_get(obj, obj.vind[left], i);
+      }
+      for (IndexType k = left + 1; k < right; ++k) {
+        for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+          if (bbox[i].low > dataset_get(obj, obj.vind[k], i))
+            bbox[i].low = dataset_get(obj, obj.vind[k], i);
+          if (bbox[i].high < dataset_get(obj, obj.vind[k], i))
+            bbox[i].high = dataset_get(obj, obj.vind[k], i);
+        }
+      }
+    } else {
+      IndexType idx;
+      int cutfeat;
+      DistanceType cutval;
+      middleSplit_(obj, &obj.vind[0] + left, right - left, idx, cutfeat, cutval,
+                   bbox);
+
+      node->node_type.sub.divfeat = cutfeat;
+
+      BoundingBox left_bbox(bbox);
+      left_bbox[cutfeat].high = cutval;
+      node->child1 = divideTree(obj, left, left + idx, left_bbox);
+
+      BoundingBox right_bbox(bbox);
+      right_bbox[cutfeat].low = cutval;
+      node->child2 = divideTree(obj, left + idx, right, right_bbox);
+
+      node->node_type.sub.divlow = left_bbox[cutfeat].high;
+      node->node_type.sub.divhigh = right_bbox[cutfeat].low;
+
+      for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+        bbox[i].low = std::min(left_bbox[i].low, right_bbox[i].low);
+        bbox[i].high = std::max(left_bbox[i].high, right_bbox[i].high);
+      }
+    }
+
+    return node;
+  }
+
+  void middleSplit_(Derived &obj, IndexType *ind, IndexType count,
+                    IndexType &index, int &cutfeat, DistanceType &cutval,
+                    const BoundingBox &bbox) {
+    const DistanceType EPS = static_cast<DistanceType>(0.00001);
+    ElementType max_span = bbox[0].high - bbox[0].low;
+    for (int i = 1; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+      ElementType span = bbox[i].high - bbox[i].low;
+      if (span > max_span) {
+        max_span = span;
+      }
+    }
+    ElementType max_spread = -1;
+    cutfeat = 0;
+    for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+      ElementType span = bbox[i].high - bbox[i].low;
+      if (span > (1 - EPS) * max_span) {
+        ElementType min_elem, max_elem;
+        computeMinMax(obj, ind, count, i, min_elem, max_elem);
+        ElementType spread = max_elem - min_elem;
+        if (spread > max_spread) {
+          cutfeat = i;
+          max_spread = spread;
+        }
+      }
+    }
+    // split in the middle
+    DistanceType split_val = (bbox[cutfeat].low + bbox[cutfeat].high) / 2;
+    ElementType min_elem, max_elem;
+    computeMinMax(obj, ind, count, cutfeat, min_elem, max_elem);
+
+    if (split_val < min_elem)
+      cutval = min_elem;
+    else if (split_val > max_elem)
+      cutval = max_elem;
+    else
+      cutval = split_val;
+
+    IndexType lim1, lim2;
+    planeSplit(obj, ind, count, cutfeat, cutval, lim1, lim2);
+
+    if (lim1 > count / 2)
+      index = lim1;
+    else if (lim2 < count / 2)
+      index = lim2;
+    else
+      index = count / 2;
+  }
+
+  /**
+   * Subdivide the list of points by a plane perpendicular to the axis
+   * corresponding to the 'cutfeat' dimension at 'cutval' position.
+   *
+   * On return:
+   * dataset[ind[0..lim1-1]][cutfeat] < cutval
+   * dataset[ind[lim1..lim2-1]][cutfeat] == cutval
+   * dataset[ind[lim2..count]][cutfeat] > cutval
+   */
+  void planeSplit(Derived &obj, IndexType *ind, const IndexType count,
+                  int cutfeat, DistanceType &cutval, IndexType &lim1,
+                  IndexType &lim2) {
+    /* Move vector indices for left subtree to front of list. */
+    IndexType left = 0;
+    IndexType right = count - 1;
+    for (;;) {
+      while (left <= right && dataset_get(obj, ind[left], cutfeat) < cutval)
+        ++left;
+      while (right && left <= right &&
+             dataset_get(obj, ind[right], cutfeat) >= cutval)
+        --right;
+      if (left > right || !right)
+        break; // "!right" was added to support unsigned Index types
+      std::swap(ind[left], ind[right]);
+      ++left;
+      --right;
+    }
+    /* If either list is empty, it means that all remaining features
+     * are identical. Split in the middle to maintain a balanced tree.
+     */
+    lim1 = left;
+    right = count - 1;
+    for (;;) {
+      while (left <= right && dataset_get(obj, ind[left], cutfeat) <= cutval)
+        ++left;
+      while (right && left <= right &&
+             dataset_get(obj, ind[right], cutfeat) > cutval)
+        --right;
+      if (left > right || !right)
+        break; // "!right" was added to support unsigned Index types
+      std::swap(ind[left], ind[right]);
+      ++left;
+      --right;
+    }
+    lim2 = left;
+  }
+
+  DistanceType computeInitialDistances(const Derived &obj,
+                                       const ElementType *vec,
+                                       distance_vector_t &dists) const {
+    assert(vec);
+    DistanceType distsq = DistanceType();
+
+    for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+      if (vec[i] < obj.root_bbox[i].low) {
+        dists[i] = obj.distance.accum_dist(vec[i], obj.root_bbox[i].low, i);
+        distsq += dists[i];
+      }
+      if (vec[i] > obj.root_bbox[i].high) {
+        dists[i] = obj.distance.accum_dist(vec[i], obj.root_bbox[i].high, i);
+        distsq += dists[i];
+      }
+    }
+    return distsq;
+  }
+
+  void save_tree(Derived &obj, FILE *stream, NodePtr tree) {
+    save_value(stream, *tree);
+    if (tree->child1 != NULL) {
+      save_tree(obj, stream, tree->child1);
+    }
+    if (tree->child2 != NULL) {
+      save_tree(obj, stream, tree->child2);
+    }
+  }
+
+  void load_tree(Derived &obj, FILE *stream, NodePtr &tree) {
+    tree = obj.pool.template allocate<Node>();
+    load_value(stream, *tree);
+    if (tree->child1 != NULL) {
+      load_tree(obj, stream, tree->child1);
+    }
+    if (tree->child2 != NULL) {
+      load_tree(obj, stream, tree->child2);
+    }
+  }
+
+  /** Stores the index in a binary file.
+   * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when
+   * loading the index object it must be constructed associated to the same
+   * source of data points used while building it.
+   * See the example: examples/saveload_example.cpp
+   * \sa loadIndex */
+  void saveIndex_(Derived &obj, FILE *stream) {
+    save_value(stream, obj.m_size);
+    save_value(stream, obj.dim);
+    save_value(stream, obj.root_bbox);
+    save_value(stream, obj.m_leaf_max_size);
+    save_value(stream, obj.vind);
+    save_tree(obj, stream, obj.root_node);
+  }
+
+  /** Loads a previous index from a binary file.
+   * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the
+   * index object must be constructed associated to the same source of data
+   * points used while building the index. See the example:
+   * examples/saveload_example.cpp
+   * \sa loadIndex */
+  void loadIndex_(Derived &obj, FILE *stream) {
+    load_value(stream, obj.m_size);
+    load_value(stream, obj.dim);
+    load_value(stream, obj.root_bbox);
+    load_value(stream, obj.m_leaf_max_size);
+    load_value(stream, obj.vind);
+    load_tree(obj, stream, obj.root_node);
+  }
+};
+
+/** @addtogroup kdtrees_grp KD-tree classes and adaptors
+ * @{ */
+
+/** kd-tree static index
+ *
+ * Contains the k-d trees and other information for indexing a set of points
+ * for nearest-neighbor matching.
+ *
+ * The class "DatasetAdaptor" must provide the following interface (can be
+ * non-virtual, inlined methods):
+ *
+ *  \code
+ *   // Must return the number of data points
+ *   inline size_t kdtree_get_point_count() const { ... }
+ *
+ *   // Must return the dim'th component of the idx'th point in the class:
+ *   inline T kdtree_get_pt(const size_t idx, const size_t dim) const { ... }
+ *
+ *   // Optional bounding-box computation: return false to default to a
+ *   // standard bbox computation loop.
+ *   // Return true if the BBOX was already computed by the class and returned
+ *   // in "bb" so it can be avoided to redo it again.
+ *   // Look at bb.size() to find out the expected dimensionality (e.g. 2 or 3
+ *   // for point clouds)
+ *   template <class BBOX>
+ *   bool kdtree_get_bbox(BBOX &bb) const {
+ *     bb[0].low = ...; bb[0].high = ...; // 0th dimension limits
+ *     bb[1].low = ...; bb[1].high = ...; // 1st dimension limits
+ *     ...
+ *     return true;
+ *   }
+ *
+ *  \endcode
+ *
+ * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+ * \tparam Distance The distance metric to use: nanoflann::metric_L1,
+ * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc.
+ * \tparam DIM Dimensionality of data points (e.g. 3 for 3D points)
+ * \tparam IndexType Will be typically size_t or int
+ */
+template <typename Distance, class DatasetAdaptor, int DIM = -1,
+          typename IndexType = size_t>
+class KDTreeSingleIndexAdaptor
+    : public KDTreeBaseClass<
+          KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>,
+          Distance, DatasetAdaptor, DIM, IndexType> {
+public:
+  /** Deleted copy constructor */
+  KDTreeSingleIndexAdaptor(
+      const KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>
+          &) = delete;
+
+  /**
+   * The dataset used by this index
+   */
+  const DatasetAdaptor &dataset; //!< The source of our data
+
+  const KDTreeSingleIndexAdaptorParams index_params;
+
+  Distance distance;
+
+  typedef typename nanoflann::KDTreeBaseClass<
+      nanoflann::KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM,
+                                          IndexType>,
+      Distance, DatasetAdaptor, DIM, IndexType>
+      BaseClassRef;
+
+  typedef typename BaseClassRef::ElementType ElementType;
+  typedef typename BaseClassRef::DistanceType DistanceType;
+
+  typedef typename BaseClassRef::Node Node;
+  typedef Node *NodePtr;
+
+  typedef typename BaseClassRef::Interval Interval;
+  /** Define "BoundingBox" as a fixed-size or variable-size container depending
+   * on "DIM" */
+  typedef typename BaseClassRef::BoundingBox BoundingBox;
+
+  /** Define "distance_vector_t" as a fixed-size or variable-size container
+   * depending on "DIM" */
+  typedef typename BaseClassRef::distance_vector_t distance_vector_t;
+
+  /**
+   * KDTree constructor
+   *
+   * Refer to docs in README.md or online in
+   * https://github.com/jlblancoc/nanoflann
+   *
+   * The KD-Tree point dimension (the length of each point in the dataset, e.g.
+   * 3 for 3D points) is determined by means of:
+   *  - The \a DIM template parameter if >0 (highest priority)
+   *  - Otherwise, the \a dimensionality parameter of this constructor.
+   *
+   * @param inputData Dataset with the input features
+   * @param params Basically, the maximum leaf node size
+   */
+  KDTreeSingleIndexAdaptor(const int dimensionality,
+                           const DatasetAdaptor &inputData,
+                           const KDTreeSingleIndexAdaptorParams &params =
+                               KDTreeSingleIndexAdaptorParams())
+      : dataset(inputData), index_params(params), distance(inputData) {
+    BaseClassRef::root_node = NULL;
+    BaseClassRef::m_size = dataset.kdtree_get_point_count();
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    BaseClassRef::dim = dimensionality;
+    if (DIM > 0)
+      BaseClassRef::dim = DIM;
+    BaseClassRef::m_leaf_max_size = params.leaf_max_size;
+
+    // Create a permutable array of indices to the input vectors.
+    init_vind();
+  }
+
+  /**
+   * Builds the index
+   */
+  void buildIndex() {
+    BaseClassRef::m_size = dataset.kdtree_get_point_count();
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    init_vind();
+    this->freeIndex(*this);
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    if (BaseClassRef::m_size == 0)
+      return;
+    computeBoundingBox(BaseClassRef::root_bbox);
+    BaseClassRef::root_node =
+        this->divideTree(*this, 0, BaseClassRef::m_size,
+                         BaseClassRef::root_bbox); // construct the tree
+  }
+
+  /** \name Query methods
+   * @{ */
+
+  /**
+   * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored
+   * inside the result object.
+   *
+   * Params:
+   *     result = the result object in which the indices of the
+   *              nearest-neighbors are stored
+   *     vec = the vector for which to search the nearest neighbors
+   *
+   * \tparam RESULTSET Should be any ResultSet<DistanceType>
+   * \return True if the requested neighbors could be found.
+   * \sa knnSearch, radiusSearch
+   */
+  template <typename RESULTSET>
+  bool findNeighbors(RESULTSET &result, const ElementType *vec,
+                     const SearchParams &searchParams) const {
+    assert(vec);
+    if (this->size(*this) == 0)
+      return false;
+    if (!BaseClassRef::root_node)
+      throw std::runtime_error(
+          "[nanoflann] findNeighbors() called before building the index.");
+    float epsError = 1 + searchParams.eps;
+
+    distance_vector_t
+        dists; // fixed or variable-sized container (depending on DIM)
+    auto zero = static_cast<DistanceType>(0);
+    assign(dists, (DIM > 0 ? DIM : BaseClassRef::dim),
+           zero); // Fill it with zeros.
+    DistanceType distsq = this->computeInitialDistances(*this, vec, dists);
+
+    searchLevel(result, vec, BaseClassRef::root_node, distsq, dists,
+                epsError); // "count_leaf" parameter removed since it was
+                           // neither used nor returned to the user.
+
+    return result.full();
+  }
+
+  /**
+   * Find the "num_closest" nearest neighbors to the \a query_point[0:dim-1].
+   * Their indices are stored inside the result object.
+   * \sa radiusSearch, findNeighbors
+   * \note nChecks_IGNORED is ignored but kept for compatibility with
+   * the original FLANN interface.
+   * \return Number `N` of valid points in the result set. Only the first `N`
+   * entries in `out_indices` and `out_distances_sq` will be valid. Return may
+   * be less than `num_closest` only if the number of elements in the tree is
+   * less than `num_closest`.
+   */
+  size_t knnSearch(const ElementType *query_point, const size_t num_closest,
+                   IndexType *out_indices, DistanceType *out_distances_sq,
+                   const int /* nChecks_IGNORED */ = 10) const {
+    nanoflann::KNNResultSet<DistanceType, IndexType> resultSet(num_closest);
+    resultSet.init(out_indices, out_distances_sq);
+    this->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+    return resultSet.size();
+  }
+
+  /**
+   * Find all the neighbors to \a query_point[0:dim-1] within a maximum radius.
+   * The output is given as a vector of pairs, of which the first element is a
+   * point index and the second the corresponding distance. Previous contents
+   * of \a IndicesDists are cleared.
+   *
+   * If searchParams.sorted==true, the output list is sorted by ascending
+   * distances.
+   *
+   * For better performance, it is advisable to do a .reserve() on the vector
+   * if you have any wild guess about the number of expected matches.
+   *
+   * \sa knnSearch, findNeighbors, radiusSearchCustomCallback
+   * \return The number of points within the given radius (i.e. indices.size()
+   * or dists.size() )
+   */
+  size_t
+  radiusSearch(const ElementType *query_point, const DistanceType &radius,
+               std::vector<std::pair<IndexType, DistanceType>> &IndicesDists,
+               const SearchParams &searchParams) const {
+    RadiusResultSet<DistanceType, IndexType> resultSet(radius, IndicesDists);
+    const size_t nFound =
+        radiusSearchCustomCallback(query_point, resultSet, searchParams);
+    if (searchParams.sorted)
+      std::sort(IndicesDists.begin(), IndicesDists.end(), IndexDist_Sorter());
+    return nFound;
+  }
+
+  /**
+   * Just like radiusSearch() but with a custom callback class for each point
+   * found in the radius of the query. See the source of RadiusResultSet<> as a
+   * start point for your own classes.
+   * \sa radiusSearch
+   */
+  template <class SEARCH_CALLBACK>
+  size_t radiusSearchCustomCallback(
+      const ElementType *query_point, SEARCH_CALLBACK &resultSet,
+      const SearchParams &searchParams = SearchParams()) const {
+    this->findNeighbors(resultSet, query_point, searchParams);
+    return resultSet.size();
+  }
+
+  /** @} */
+
+public:
+  /** Make sure the auxiliary list \a vind has the same size as the current
+   * dataset, and re-generate it if the size has changed. */
+  void init_vind() {
+    // Create a permutable array of indices to the input vectors.
+    BaseClassRef::m_size = dataset.kdtree_get_point_count();
+    if (BaseClassRef::vind.size() != BaseClassRef::m_size)
+      BaseClassRef::vind.resize(BaseClassRef::m_size);
+    for (size_t i = 0; i < BaseClassRef::m_size; i++)
+      BaseClassRef::vind[i] = i;
+  }
+
+  void computeBoundingBox(BoundingBox &bbox) {
+    resize(bbox, (DIM > 0 ? DIM : BaseClassRef::dim));
+    if (dataset.kdtree_get_bbox(bbox)) {
+      // Done! It was implemented in derived class
+    } else {
+      const size_t N = dataset.kdtree_get_point_count();
+      if (!N)
+        throw std::runtime_error("[nanoflann] computeBoundingBox() called but "
+                                 "no data points found.");
+      for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+        bbox[i].low = bbox[i].high = this->dataset_get(*this, 0, i);
+      }
+      for (size_t k = 1; k < N; ++k) {
+        for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+          if (this->dataset_get(*this, k, i) < bbox[i].low)
+            bbox[i].low = this->dataset_get(*this, k, i);
+          if (this->dataset_get(*this, k, i) > bbox[i].high)
+            bbox[i].high = this->dataset_get(*this, k, i);
+        }
+      }
+    }
+  }
+
+  /**
+   * Performs an exact search in the tree starting from a node.
+   * \tparam RESULTSET Should be any ResultSet<DistanceType>
+   * \return true if the search should be continued, false if the results are
+   * sufficient
+   */
+  template <class RESULTSET>
+  bool searchLevel(RESULTSET &result_set, const ElementType *vec,
+                   const NodePtr node, DistanceType mindistsq,
+                   distance_vector_t &dists, const float epsError) const {
+    /* If this is a leaf node, then do check and return. */
+    if ((node->child1 == NULL) && (node->child2 == NULL)) {
+      // count_leaf += (node->lr.right-node->lr.left); // Removed since it was
+      // neither used nor returned to the user.
+      DistanceType worst_dist = result_set.worstDist();
+      for (IndexType i = node->node_type.lr.left; i < node->node_type.lr.right;
+           ++i) {
+        const IndexType index = BaseClassRef::vind[i]; // reorder... : i;
+        DistanceType dist = distance.evalMetric(
+            vec, index, (DIM > 0 ? DIM : BaseClassRef::dim));
+        if (dist < worst_dist) {
+          if (!result_set.addPoint(dist, BaseClassRef::vind[i])) {
+            // the resultset doesn't want to receive any more points, we're done
+            // searching!
+            return false;
+          }
+        }
+      }
+      return true;
+    }
+
+    /* Which child branch should be taken first? */
+    int idx = node->node_type.sub.divfeat;
+    ElementType val = vec[idx];
+    DistanceType diff1 = val - node->node_type.sub.divlow;
+    DistanceType diff2 = val - node->node_type.sub.divhigh;
+
+    NodePtr bestChild;
+    NodePtr otherChild;
+    DistanceType cut_dist;
+    if ((diff1 + diff2) < 0) {
+      bestChild = node->child1;
+      otherChild = node->child2;
+      cut_dist = distance.accum_dist(val, node->node_type.sub.divhigh, idx);
+    } else {
+      bestChild = node->child2;
+      otherChild = node->child1;
+      cut_dist = distance.accum_dist(val, node->node_type.sub.divlow, idx);
+    }
+
+    /* Call recursively to search next level down. */
+    if (!searchLevel(result_set, vec, bestChild, mindistsq, dists, epsError)) {
+      // the resultset doesn't want to receive any more points, we're done
+      // searching!
+      return false;
+    }
+
+    DistanceType dst = dists[idx];
+    mindistsq = mindistsq + cut_dist - dst;
+    dists[idx] = cut_dist;
+    if (mindistsq * epsError <= result_set.worstDist()) {
+      if (!searchLevel(result_set, vec, otherChild, mindistsq, dists,
+                       epsError)) {
+        // the resultset doesn't want to receive any more points, we're done
+        // searching!
+        return false;
+      }
+    }
+    dists[idx] = dst;
+    return true;
+  }
+
+public:
+  /** Stores the index in a binary file.
+   * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when
+   * loading the index object it must be constructed associated to the same
+   * source of data points used while building it. See the example:
+   * examples/saveload_example.cpp
+   * \sa loadIndex */
+  void saveIndex(FILE *stream) { this->saveIndex_(*this, stream); }
+
+  /** Loads a previous index from a binary file.
+ * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the + * index object must be constructed associated to the same source of data + * points used while building the index. See the example: + * examples/saveload_example.cpp \sa loadIndex */ + void loadIndex(FILE *stream) { this->loadIndex_(*this, stream); } + +}; // class KDTree + +/** kd-tree dynamic index + * + * Contains the k-d trees and other information for indexing a set of points + * for nearest-neighbor matching. + * + * The class "DatasetAdaptor" must provide the following interface (can be + * non-virtual, inlined methods): + * + * \code + * // Must return the number of data points + * inline size_t kdtree_get_point_count() const { ... } + * + * // Must return the dim'th component of the idx'th point in the class: + * inline T kdtree_get_pt(const size_t idx, const size_t dim) const { ... } + * + * // Optional bounding-box computation: return false to default to a standard + * bbox computation loop. + * // Return true if the BBOX was already computed by the class and returned + * in "bb" so it can be avoided to redo it again. + * // Look at bb.size() to find out the expected dimensionality (e.g. 2 or 3 + * for point clouds) template <class BBOX> bool kdtree_get_bbox(BBOX &bb) const + * { + * bb[0].low = ...; bb[0].high = ...; // 0th dimension limits + * bb[1].low = ...; bb[1].high = ...; // 1st dimension limits + * ... + * return true; + * } + * + * \endcode + * + * \tparam DatasetAdaptor The user-provided adaptor (see comments above). + * \tparam Distance The distance metric to use: nanoflann::metric_L1, + * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. \tparam DIM + * Dimensionality of data points (e.g. 
3 for 3D points) \tparam IndexType Will + * be typically size_t or int + */ +template <typename Distance, class DatasetAdaptor, int DIM = -1, + typename IndexType = size_t> +class KDTreeSingleIndexDynamicAdaptor_ + : public KDTreeBaseClass<KDTreeSingleIndexDynamicAdaptor_< + Distance, DatasetAdaptor, DIM, IndexType>, + Distance, DatasetAdaptor, DIM, IndexType> { +public: + /** + * The dataset used by this index + */ + const DatasetAdaptor &dataset; //!< The source of our data + + KDTreeSingleIndexAdaptorParams index_params; + + std::vector<int> &treeIndex; + + Distance distance; + + typedef typename nanoflann::KDTreeBaseClass< + nanoflann::KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM, IndexType>, + Distance, DatasetAdaptor, DIM, IndexType> + BaseClassRef; + + typedef typename BaseClassRef::ElementType ElementType; + typedef typename BaseClassRef::DistanceType DistanceType; + + typedef typename BaseClassRef::Node Node; + typedef Node *NodePtr; + + typedef typename BaseClassRef::Interval Interval; + /** Define "BoundingBox" as a fixed-size or variable-size container depending + * on "DIM" */ + typedef typename BaseClassRef::BoundingBox BoundingBox; + + /** Define "distance_vector_t" as a fixed-size or variable-size container + * depending on "DIM" */ + typedef typename BaseClassRef::distance_vector_t distance_vector_t; + + /** + * KDTree constructor + * + * Refer to docs in README.md or online in + * https://github.com/jlblancoc/nanoflann + * + * The KD-Tree point dimension (the length of each point in the dataset, e.g. 3 + * for 3D points) is determined by means of: + * - The \a DIM template parameter if >0 (highest priority) + * - Otherwise, the \a dimensionality parameter of this constructor. 
+ * + * @param inputData Dataset with the input features + * @param params Basically, the maximum leaf node size + */ + KDTreeSingleIndexDynamicAdaptor_( + const int dimensionality, const DatasetAdaptor &inputData, + std::vector<int> &treeIndex_, + const KDTreeSingleIndexAdaptorParams &params = + KDTreeSingleIndexAdaptorParams()) + : dataset(inputData), index_params(params), treeIndex(treeIndex_), + distance(inputData) { + BaseClassRef::root_node = NULL; + BaseClassRef::m_size = 0; + BaseClassRef::m_size_at_index_build = 0; + BaseClassRef::dim = dimensionality; + if (DIM > 0) + BaseClassRef::dim = DIM; + BaseClassRef::m_leaf_max_size = params.leaf_max_size; + } + + /** Assignment operator definition */ + KDTreeSingleIndexDynamicAdaptor_ + operator=(const KDTreeSingleIndexDynamicAdaptor_ &rhs) { + KDTreeSingleIndexDynamicAdaptor_ tmp(rhs); + std::swap(BaseClassRef::vind, tmp.BaseClassRef::vind); + std::swap(BaseClassRef::m_leaf_max_size, tmp.BaseClassRef::m_leaf_max_size); + std::swap(index_params, tmp.index_params); + std::swap(treeIndex, tmp.treeIndex); + std::swap(BaseClassRef::m_size, tmp.BaseClassRef::m_size); + std::swap(BaseClassRef::m_size_at_index_build, + tmp.BaseClassRef::m_size_at_index_build); + std::swap(BaseClassRef::root_node, tmp.BaseClassRef::root_node); + std::swap(BaseClassRef::root_bbox, tmp.BaseClassRef::root_bbox); + std::swap(BaseClassRef::pool, tmp.BaseClassRef::pool); + return *this; + } + + /** + * Builds the index + */ + void buildIndex() { + BaseClassRef::m_size = BaseClassRef::vind.size(); + this->freeIndex(*this); + BaseClassRef::m_size_at_index_build = BaseClassRef::m_size; + if (BaseClassRef::m_size == 0) + return; + computeBoundingBox(BaseClassRef::root_bbox); + BaseClassRef::root_node = + this->divideTree(*this, 0, BaseClassRef::m_size, + BaseClassRef::root_bbox); // construct the tree + } + + /** \name Query methods + * @{ */ + + /** + * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored + * inside the result object. 
+ * + * Params: + * result = the result object in which the indices of the + * nearest-neighbors are stored vec = the vector for which to search the + * nearest neighbors + * + * \tparam RESULTSET Should be any ResultSet + * \return True if the requested neighbors could be found. + * \sa knnSearch, radiusSearch + */ + template + bool findNeighbors(RESULTSET &result, const ElementType *vec, + const SearchParams &searchParams) const { + assert(vec); + if (this->size(*this) == 0) + return false; + if (!BaseClassRef::root_node) + return false; + float epsError = 1 + searchParams.eps; + + // fixed or variable-sized container (depending on DIM) + distance_vector_t dists; + // Fill it with zeros. + assign(dists, (DIM > 0 ? DIM : BaseClassRef::dim), + static_cast(0)); + DistanceType distsq = this->computeInitialDistances(*this, vec, dists); + + searchLevel(result, vec, BaseClassRef::root_node, distsq, dists, + epsError); // "count_leaf" parameter removed since was neither + // used nor returned to the user. + + return result.full(); + } + + /** + * Find the "num_closest" nearest neighbors to the \a query_point[0:dim-1]. + * Their indices are stored inside the result object. \sa radiusSearch, + * findNeighbors \note nChecks_IGNORED is ignored but kept for compatibility + * with the original FLANN interface. \return Number `N` of valid points in + * the result set. Only the first `N` entries in `out_indices` and + * `out_distances_sq` will be valid. Return may be less than `num_closest` + * only if the number of elements in the tree is less than `num_closest`. 
+ */ + size_t knnSearch(const ElementType *query_point, const size_t num_closest, + IndexType *out_indices, DistanceType *out_distances_sq, + const int /* nChecks_IGNORED */ = 10) const { + nanoflann::KNNResultSet resultSet(num_closest); + resultSet.init(out_indices, out_distances_sq); + this->findNeighbors(resultSet, query_point, nanoflann::SearchParams()); + return resultSet.size(); + } + + /** + * Find all the neighbors to \a query_point[0:dim-1] within a maximum radius. + * The output is given as a vector of pairs, of which the first element is a + * point index and the second the corresponding distance. Previous contents of + * \a IndicesDists are cleared. + * + * If searchParams.sorted==true, the output list is sorted by ascending + * distances. + * + * For a better performance, it is advisable to do a .reserve() on the vector + * if you have any wild guess about the number of expected matches. + * + * \sa knnSearch, findNeighbors, radiusSearchCustomCallback + * \return The number of points within the given radius (i.e. indices.size() + * or dists.size() ) + */ + size_t + radiusSearch(const ElementType *query_point, const DistanceType &radius, + std::vector> &IndicesDists, + const SearchParams &searchParams) const { + RadiusResultSet resultSet(radius, IndicesDists); + const size_t nFound = + radiusSearchCustomCallback(query_point, resultSet, searchParams); + if (searchParams.sorted) + std::sort(IndicesDists.begin(), IndicesDists.end(), IndexDist_Sorter()); + return nFound; + } + + /** + * Just like radiusSearch() but with a custom callback class for each point + * found in the radius of the query. See the source of RadiusResultSet<> as a + * start point for your own classes. 
\sa radiusSearch + */ + template + size_t radiusSearchCustomCallback( + const ElementType *query_point, SEARCH_CALLBACK &resultSet, + const SearchParams &searchParams = SearchParams()) const { + this->findNeighbors(resultSet, query_point, searchParams); + return resultSet.size(); + } + + /** @} */ + +public: + void computeBoundingBox(BoundingBox &bbox) { + resize(bbox, (DIM > 0 ? DIM : BaseClassRef::dim)); + + if (dataset.kdtree_get_bbox(bbox)) { + // Done! It was implemented in derived class + } else { + const size_t N = BaseClassRef::m_size; + if (!N) + throw std::runtime_error("[nanoflann] computeBoundingBox() called but " + "no data points found."); + for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) { + bbox[i].low = bbox[i].high = + this->dataset_get(*this, BaseClassRef::vind[0], i); + } + for (size_t k = 1; k < N; ++k) { + for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) { + if (this->dataset_get(*this, BaseClassRef::vind[k], i) < bbox[i].low) + bbox[i].low = this->dataset_get(*this, BaseClassRef::vind[k], i); + if (this->dataset_get(*this, BaseClassRef::vind[k], i) > bbox[i].high) + bbox[i].high = this->dataset_get(*this, BaseClassRef::vind[k], i); + } + } + } + } + + /** + * Performs an exact search in the tree starting from a node. + * \tparam RESULTSET Should be any ResultSet + */ + template + void searchLevel(RESULTSET &result_set, const ElementType *vec, + const NodePtr node, DistanceType mindistsq, + distance_vector_t &dists, const float epsError) const { + /* If this is a leaf node, then do check and return. */ + if ((node->child1 == NULL) && (node->child2 == NULL)) { + // count_leaf += (node->lr.right-node->lr.left); // Removed since was + // neither used nor returned to the user. + DistanceType worst_dist = result_set.worstDist(); + for (IndexType i = node->node_type.lr.left; i < node->node_type.lr.right; + ++i) { + const IndexType index = BaseClassRef::vind[i]; // reorder... 
: i; + if (treeIndex[index] == -1) + continue; + DistanceType dist = distance.evalMetric( + vec, index, (DIM > 0 ? DIM : BaseClassRef::dim)); + if (dist < worst_dist) { + if (!result_set.addPoint( + static_cast(dist), + static_cast( + BaseClassRef::vind[i]))) { + // the resultset doesn't want to receive any more points, we're done + // searching! + return; // false; + } + } + } + return; + } + + /* Which child branch should be taken first? */ + int idx = node->node_type.sub.divfeat; + ElementType val = vec[idx]; + DistanceType diff1 = val - node->node_type.sub.divlow; + DistanceType diff2 = val - node->node_type.sub.divhigh; + + NodePtr bestChild; + NodePtr otherChild; + DistanceType cut_dist; + if ((diff1 + diff2) < 0) { + bestChild = node->child1; + otherChild = node->child2; + cut_dist = distance.accum_dist(val, node->node_type.sub.divhigh, idx); + } else { + bestChild = node->child2; + otherChild = node->child1; + cut_dist = distance.accum_dist(val, node->node_type.sub.divlow, idx); + } + + /* Call recursively to search next level down. */ + searchLevel(result_set, vec, bestChild, mindistsq, dists, epsError); + + DistanceType dst = dists[idx]; + mindistsq = mindistsq + cut_dist - dst; + dists[idx] = cut_dist; + if (mindistsq * epsError <= result_set.worstDist()) { + searchLevel(result_set, vec, otherChild, mindistsq, dists, epsError); + } + dists[idx] = dst; + } + +public: + /** Stores the index in a binary file. + * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when + * loading the index object it must be constructed associated to the same + * source of data points used while building it. See the example: + * examples/saveload_example.cpp \sa loadIndex */ + void saveIndex(FILE *stream) { this->saveIndex_(*this, stream); } + + /** Loads a previous index from a binary file. 
+ * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the + * index object must be constructed associated to the same source of data + * points used while building the index. See the example: + * examples/saveload_example.cpp \sa loadIndex */ + void loadIndex(FILE *stream) { this->loadIndex_(*this, stream); } +}; + +/** kd-tree dynamic index + * + * Class to create multiple static indices and merge their results so they + * behave as a single dynamic index, following the logarithmic approach. + * + * Example of usage: + * examples/dynamic_pointcloud_example.cpp + * + * \tparam DatasetAdaptor The user-provided adaptor (see comments above). + * \tparam Distance The distance metric to use: nanoflann::metric_L1, + * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. \tparam DIM + * Dimensionality of data points (e.g. 3 for 3D points) \tparam IndexType Will + * be typically size_t or int + */ +template <typename Distance, class DatasetAdaptor, int DIM = -1, + typename IndexType = size_t> +class KDTreeSingleIndexDynamicAdaptor { +public: + typedef typename Distance::ElementType ElementType; + typedef typename Distance::DistanceType DistanceType; + +protected: + size_t m_leaf_max_size; + size_t treeCount; + size_t pointCount; + + /** + * The dataset used by this index + */ + const DatasetAdaptor &dataset; //!< The source of our data + + std::vector<int> treeIndex; //!< treeIndex[idx] is the index of tree in which + //!< point at idx is stored. treeIndex[idx]=-1 + //!< means that point has been removed. + + KDTreeSingleIndexAdaptorParams index_params; + + int dim; //!< Dimensionality of each data point + + typedef KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM> + index_container_t; + std::vector<index_container_t> index; + +public: + /** Get a const ref to the internal list of indices; the number of indices is + * adapted dynamically as the dataset grows in size. 
*/ + const std::vector<index_container_t> &getAllIndices() const { return index; } + +private: + /** finds position of least significant unset bit */ + int First0Bit(IndexType num) { + int pos = 0; + while (num & 1) { + num = num >> 1; + pos++; + } + return pos; + } + + /** Creates multiple empty trees to handle dynamic support */ + void init() { + typedef KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM> + my_kd_tree_t; + std::vector<my_kd_tree_t> index_( + treeCount, my_kd_tree_t(dim /*dim*/, dataset, treeIndex, index_params)); + index = index_; + } + +public: + Distance distance; + + /** + * KDTree constructor + * + * Refer to docs in README.md or online in + * https://github.com/jlblancoc/nanoflann + * + * The KD-Tree point dimension (the length of each point in the dataset, e.g. 3 + * for 3D points) is determined by means of: + * - The \a DIM template parameter if >0 (highest priority) + * - Otherwise, the \a dimensionality parameter of this constructor. + * + * @param inputData Dataset with the input features + * @param params Basically, the maximum leaf node size + */ + KDTreeSingleIndexDynamicAdaptor(const int dimensionality, + const DatasetAdaptor &inputData, + const KDTreeSingleIndexAdaptorParams &params = + KDTreeSingleIndexAdaptorParams(), + const size_t maximumPointCount = 1000000000U) + : dataset(inputData), index_params(params), distance(inputData) { + treeCount = static_cast<size_t>(std::log2(maximumPointCount)); + pointCount = 0U; + dim = dimensionality; + treeIndex.clear(); + if (DIM > 0) + dim = DIM; + m_leaf_max_size = params.leaf_max_size; + init(); + const size_t num_initial_points = dataset.kdtree_get_point_count(); + if (num_initial_points > 0) { + addPoints(0, num_initial_points - 1); + } + } + + /** Deleted copy constructor*/ + KDTreeSingleIndexDynamicAdaptor( + const KDTreeSingleIndexDynamicAdaptor &) = delete; + + /** Add points to the set; inserts all points from [start, end] */ + void addPoints(IndexType start, IndexType end) { + size_t count = end - start + 1; + treeIndex.resize(treeIndex.size() 
+ count); + for (IndexType idx = start; idx <= end; idx++) { + int pos = First0Bit(pointCount); + index[pos].vind.clear(); + treeIndex[pointCount] = pos; + for (int i = 0; i < pos; i++) { + for (int j = 0; j < static_cast(index[i].vind.size()); j++) { + index[pos].vind.push_back(index[i].vind[j]); + if (treeIndex[index[i].vind[j]] != -1) + treeIndex[index[i].vind[j]] = pos; + } + index[i].vind.clear(); + index[i].freeIndex(index[i]); + } + index[pos].vind.push_back(idx); + index[pos].buildIndex(); + pointCount++; + } + } + + /** Remove a point from the set (Lazy Deletion) */ + void removePoint(size_t idx) { + if (idx >= pointCount) + return; + treeIndex[idx] = -1; + } + + /** + * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored + * inside the result object. + * + * Params: + * result = the result object in which the indices of the + * nearest-neighbors are stored vec = the vector for which to search the + * nearest neighbors + * + * \tparam RESULTSET Should be any ResultSet + * \return True if the requested neighbors could be found. + * \sa knnSearch, radiusSearch + */ + template + bool findNeighbors(RESULTSET &result, const ElementType *vec, + const SearchParams &searchParams) const { + for (size_t i = 0; i < treeCount; i++) { + index[i].findNeighbors(result, &vec[0], searchParams); + } + return result.full(); + } +}; + +/** An L2-metric KD-tree adaptor for working with data directly stored in an + * Eigen Matrix, without duplicating the data storage. Each row in the matrix + * represents a point in the state space. + * + * Example of usage: + * \code + * Eigen::Matrix mat; + * // Fill out "mat"... + * + * typedef KDTreeEigenMatrixAdaptor< Eigen::Matrix > + * my_kd_tree_t; const int max_leaf = 10; my_kd_tree_t mat_index(mat, max_leaf + * ); mat_index.index->buildIndex(); mat_index.index->... 
\endcode + * + * \tparam DIM If set to >0, it specifies a compile-time fixed dimensionality + * for the points in the data set, allowing more compiler optimizations. \tparam + * Distance The distance metric to use: nanoflann::metric_L1, + * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. + */ +template +struct KDTreeEigenMatrixAdaptor { + typedef KDTreeEigenMatrixAdaptor self_t; + typedef typename MatrixType::Scalar num_t; + typedef typename MatrixType::Index IndexType; + typedef + typename Distance::template traits::distance_t metric_t; + typedef KDTreeSingleIndexAdaptor + index_t; + + index_t *index; //! The kd-tree index for the user to call its methods as + //! usual with any other FLANN index. + + /// Constructor: takes a const ref to the matrix object with the data points + KDTreeEigenMatrixAdaptor(const size_t dimensionality, + const std::reference_wrapper &mat, + const int leaf_max_size = 10) + : m_data_matrix(mat) { + const auto dims = mat.get().cols(); + if (size_t(dims) != dimensionality) + throw std::runtime_error( + "Error: 'dimensionality' must match column count in data matrix"); + if (DIM > 0 && int(dims) != DIM) + throw std::runtime_error( + "Data set dimensionality does not match the 'DIM' template argument"); + index = + new index_t(static_cast(dims), *this /* adaptor */, + nanoflann::KDTreeSingleIndexAdaptorParams(leaf_max_size)); + index->buildIndex(); + } + +public: + /** Deleted copy constructor */ + KDTreeEigenMatrixAdaptor(const self_t &) = delete; + + ~KDTreeEigenMatrixAdaptor() { delete index; } + + const std::reference_wrapper m_data_matrix; + + /** Query for the \a num_closest closest points to a given point (entered as + * query_point[0:dim-1]). Note that this is a short-cut method for + * index->findNeighbors(). The user can also call index->... methods as + * desired. \note nChecks_IGNORED is ignored but kept for compatibility with + * the original FLANN interface. 
+ */ + inline void query(const num_t *query_point, const size_t num_closest, + IndexType *out_indices, num_t *out_distances_sq, + const int /* nChecks_IGNORED */ = 10) const { + nanoflann::KNNResultSet resultSet(num_closest); + resultSet.init(out_indices, out_distances_sq); + index->findNeighbors(resultSet, query_point, nanoflann::SearchParams()); + } + + /** @name Interface expected by KDTreeSingleIndexAdaptor + * @{ */ + + const self_t &derived() const { return *this; } + self_t &derived() { return *this; } + + // Must return the number of data points + inline size_t kdtree_get_point_count() const { + return m_data_matrix.get().rows(); + } + + // Returns the dim'th component of the idx'th point in the class: + inline num_t kdtree_get_pt(const IndexType idx, size_t dim) const { + return m_data_matrix.get().coeff(idx, IndexType(dim)); + } + + // Optional bounding-box computation: return false to default to a standard + // bbox computation loop. + // Return true if the BBOX was already computed by the class and returned in + // "bb" so it can be avoided to redo it again. Look at bb.size() to find out + // the expected dimensionality (e.g. 2 or 3 for point clouds) + template bool kdtree_get_bbox(BBOX & /*bb*/) const { + return false; + } + + /** @} */ + +}; // end of KDTreeEigenMatrixAdaptor + /** @} */ + +/** @} */ // end of grouping +} // namespace nanoflann + +#endif /* NANOFLANN_HPP_ */ diff --git a/competing_methods/my_KPConv/datasets/Hessigsim3D.py b/competing_methods/my_KPConv/datasets/Hessigsim3D.py new file mode 100644 index 00000000..afedf18d --- /dev/null +++ b/competing_methods/my_KPConv/datasets/Hessigsim3D.py @@ -0,0 +1,1625 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling Hessigsim3D dataset. 
+# Implements a Dataset, a Sampler, and a collate_fn +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import time +import numpy as np +import pickle +import torch +import math +from multiprocessing import Lock + + +# OS functions +from os import listdir +from os.path import exists, join, isdir + +# Dataset parent class +from os.path import join, exists, dirname, abspath +from datasets.common import PointCloudDataset +from torch.utils.data import Sampler, get_worker_info +from utils.mayavi_visu import * + +from datasets.common import grid_subsampling +from utils.config import bcolors +from laspy.file import File +import laspy +import glob, os + +################################### UTILS Functions ####################################### +COLOR_MAP = np.asarray( + [ + [178, 203, 47], #C00 Low Vegetation + [183, 178, 170], #C01 Impervious Surface + [32, 151, 163], #C02 Vehicle + [168, 33, 107], #C03 Urban Furniture + [255, 122, 89], #C04 Roof + [255, 215, 136], #C05 Facade + [89, 125, 53], #C06 Shrub + [0, 128, 65], #C07 Tree + [170, 85, 0], #C08 Soil/Gravel + [252, 225, 5], #C09 Vertical Surface + [128, 0, 0], #C10 Chimney + ] +) + +def transfer_16bit_to_8bit(array_in): + min_16bit = np.min(array_in) + max_16bit = np.max(array_in) + array_8bit = np.array(np.rint(255 * ((array_in - min_16bit) / (max_16bit - min_16bit))), dtype=np.uint8) + return array_8bit + +def read_las(filename, has_label = True, add_las_feas = True): + """convert from a las file with no rgb""" + # ---read the ply file-------- + try: + inFile = File(filename, mode='r') + except NameError: + raise ValueError( + "laspy package not found. 
uncomment import in /partition/provider and make sure it is installed in your environment") + + N_points = len(inFile) + x = np.reshape(inFile.x, (N_points, 1)) + y = np.reshape(inFile.y, (N_points, 1)) + z = np.reshape(inFile.z, (N_points, 1)) + # apply global shift (-513852.00, -5426490.00, -224.51) + x += -513852.00 + y += -5426490.00 + z += -224.51 + + xyz = np.hstack((x, y, z)).astype('f8') + + r = transfer_16bit_to_8bit(np.reshape(inFile.get_red(), (N_points, 1))) + g = transfer_16bit_to_8bit(np.reshape(inFile.get_green(), (N_points, 1))) + b = transfer_16bit_to_8bit(np.reshape(inFile.get_blue(), (N_points, 1))) + rgb = np.hstack((r, g, b)) + + if add_las_feas: + return_num = np.reshape(inFile.num_returns, (N_points, 1)) + num_of_return = np.reshape(inFile.return_num, (N_points, 1)) + reflec = np.reshape(inFile.Reflectance, (N_points, 1)) + feas = np.stack((return_num, num_of_return, reflec), axis=1).astype('f8') + feas = np.transpose(np.concatenate(feas, axis=-1)) + if has_label: + labels = np.reshape(inFile.Classification, (N_points, 1)) + labels = np.concatenate(labels, axis=-1) + if add_las_feas: + return xyz, rgb, labels, feas + else: + return xyz, rgb, labels + else: + if add_las_feas: + return xyz, rgb, feas + else: + return xyz, rgb + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Dataset class definition +# \******************************/ +BASE_DIR = dirname(abspath(__file__)) +ROOT_DIR = dirname(BASE_DIR) +sys.path.append(ROOT_DIR) + +class Hessigsim3DDataset(PointCloudDataset): + """Class to handle Hessigsim3D dataset.""" + + def __init__(self, config, set='training', use_potentials=True, load_data=True): + """ + This dataset is small enough to be stored in-memory, so load all point clouds here + """ + PointCloudDataset.__init__(self, 'Hessigsim3D') + + ############ + # Parameters + ############ + + # Dict from labels to names + self.label_to_names = {0: "Low 
Vegetation", + 1: "Impervious Surface", + 2: "Vehicle", + 3: "Urban Furniture", + 4: "Roof", + 5: "Facade", + 6: "Shrub", + 7: "Tree", + 8: "Soil/Gravel", + 9: "Vertical Surface", + 10: "Chimney"} + + # Initialize a bunch of variables concerning class labels + self.init_labels() + + # List of classes ignored during training (can be empty) + self.ignored_labels = [] #np.sort([0]) + + # Dataset folder + self.root = ROOT_DIR + '/' + self.path = ROOT_DIR + '/data' + + # Type of task conducted on this dataset + self.dataset_task = 'cloud_segmentation' + + # Update number of classes and dataset task in configuration + config.num_classes = self.num_classes + config.dataset_task = self.dataset_task + + # Parameters from config + self.config = config + + # Training or test set + self.set = set + + # Using potential or random epoch generation + self.use_potentials = use_potentials + + # Proportion of validation scenes + #self.cloud_names = ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_6'] + #self.all_splits = [0, 1, 2, 3, 4, 5] + #self.validation_split = 1 + + # Number of models used per epoch + if self.set == 'training': + self.epoch_n = config.epoch_steps * config.batch_num + elif self.set in ['validation', 'test']: + self.epoch_n = config.validation_size * config.batch_num + else: + raise ValueError('Unknown set for Hessigsim3D data: ', self.set) + + # Stop if data is not needed + if not load_data: + return + + ################ + # Load las files + ################ + + # List of training files + folders = ["train/", "test/", "validate/"] + self.files = [] + self.cloud_names = [] + for folder in folders: + data_folder = self.path + "/raw/" + folder + files = glob.glob(data_folder + "*.las") + if self.set == 'training' and folder == "train/": + self.files += files + for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + self.cloud_names.append(file_name) + break + elif self.set == 'validation' and folder == "validate/": + self.files += files + for 
file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + self.cloud_names.append(file_name) + break + elif self.set == 'test' and folder == "test/": + self.files += files + for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + self.cloud_names.append(file_name) + break + + if len(self.files) == 0: + raise ValueError('Unknown set for Hessigsim3D data: ', self.set) + + if 0 < self.config.first_subsampling_dl <= 0.01: + raise ValueError('subsampling_parameter too low (should be over 1 cm)') + + # Initiate containers + self.input_trees = [] + self.input_colors = [] + self.input_labels = [] + self.pot_trees = [] + self.num_clouds = 0 + self.test_proj = [] + self.validation_labels = [] + + # Start loading + self.load_subsampled_clouds() + + ############################ + # Batch selection parameters + ############################ + + # Initialize value for batch limit (max number of points per batch). + self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + + # Initialize potentials + if use_potentials: + self.potentials = [] + self.min_potentials = [] + self.argmin_potentials = [] + for i, tree in enumerate(self.pot_trees): + self.potentials += [torch.from_numpy(np.random.rand(tree.data.shape[0]) * 1e-3)] + min_ind = int(torch.argmin(self.potentials[-1])) + self.argmin_potentials += [min_ind] + self.min_potentials += [float(self.potentials[-1][min_ind])] + + # Share potential memory + self.argmin_potentials = torch.from_numpy(np.array(self.argmin_potentials, dtype=np.int64)) + self.min_potentials = torch.from_numpy(np.array(self.min_potentials, dtype=np.float64)) + self.argmin_potentials.share_memory_() + self.min_potentials.share_memory_() + for i, _ in enumerate(self.pot_trees): + self.potentials[i].share_memory_() + + self.worker_waiting = torch.tensor([0 for _ in range(config.input_threads)], dtype=torch.int32) + self.worker_waiting.share_memory_() + self.epoch_inds = None + 
self.epoch_i = 0 + + else: + self.potentials = None + self.min_potentials = None + self.argmin_potentials = None + N = config.epoch_steps * config.batch_num + self.epoch_inds = torch.from_numpy(np.zeros((2, N), dtype=np.int64)) + self.epoch_i = torch.from_numpy(np.zeros((1,), dtype=np.int64)) + self.epoch_i.share_memory_() + self.epoch_inds.share_memory_() + + self.worker_lock = Lock() + + # For ERF visualization, we want only one cloud per batch and no randomness + if self.set == 'ERF': + self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + np.random.seed(42) + + return + + def __len__(self): + """ + Return the length of data here + """ + return len(self.cloud_names) + + def __getitem__(self, batch_i): + """ + The main thread gives a list of indices to load a batch. Each worker is going to work in parallel to load a + different list of indices. + """ + + if self.use_potentials: + return self.potential_item(batch_i) + else: + return self.random_item(batch_i) + + def potential_item(self, batch_i, debug_workers=False): + + t = [time.time()] + + # Initiate concatanation lists + p_list = [] + f_list = [] + l_list = [] + i_list = [] + pi_list = [] + ci_list = [] + s_list = [] + R_list = [] + batch_n = 0 + + info = get_worker_info() + if info is not None: + wid = info.id + else: + wid = None + + while True: + + t += [time.time()] + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}X{:} '.format(bcolors.FAIL, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 0 + + with self.worker_lock: + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}v{:} '.format(bcolors.OKGREEN, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif 
self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 1 + + # Get potential minimum + cloud_ind = int(torch.argmin(self.min_potentials)) + point_ind = int(self.argmin_potentials[cloud_ind]) + + # Get potential points from tree structure + pot_points = np.array(self.pot_trees[cloud_ind].data, copy=False) + + # Center point of input region + center_point = pot_points[point_ind, :].reshape(1, -1) + + # Add a small noise to center point + if self.set != 'ERF': + center_point += np.random.normal(scale=self.config.in_radius / 10, size=center_point.shape) + + # Indices of points in input region + pot_inds, dists = self.pot_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius, + return_distance=True) + + d2s = np.square(dists[0]) + pot_inds = pot_inds[0] + + # Update potentials (Tukey weights) + if self.set != 'ERF': + tukeys = np.square(1 - d2s / np.square(self.config.in_radius)) + tukeys[d2s > np.square(self.config.in_radius)] = 0 + self.potentials[cloud_ind][pot_inds] += tukeys + min_ind = torch.argmin(self.potentials[cloud_ind]) + self.min_potentials[[cloud_ind]] = self.potentials[cloud_ind][min_ind] + self.argmin_potentials[[cloud_ind]] = min_ind + + t += [time.time()] + + # Get points from tree structure + points = np.array(self.input_trees[cloud_ind].data, copy=False) + + + # Indices of points in input region + input_inds = self.input_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius)[0] + + t += [time.time()] + + # Number collected + n = input_inds.shape[0] + + # Collect labels and colors + input_points = (points[input_inds] - center_point).astype(np.float32) + input_colors = self.input_colors[cloud_ind][input_inds] + if self.set in ['test', 'ERF']: + input_labels = np.zeros(input_points.shape[0]) + else: + input_labels = self.input_labels[cloud_ind][input_inds] + input_labels = np.array([self.label_to_idx[l] for l in input_labels]) 
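The Tukey update above is the core of the potential-based sampling: every point inside the freshly picked sphere gets its potential raised by a biweight profile that is 1 at the center and falls to 0 at the sphere radius, so the next center (the global potential minimum) is steered toward rarely visited regions. A minimal standalone sketch of that update, with a hypothetical point set and radius (not the dataset's actual values):

```python
import numpy as np

def update_potentials(potentials, points, center, in_radius):
    """Raise potentials of points inside the picked sphere with a Tukey profile."""
    d2 = np.sum((points - center) ** 2, axis=1)            # squared distance to center
    inside = d2 < in_radius ** 2                           # points within the input sphere
    potentials[inside] += np.square(1 - d2[inside] / in_radius ** 2)
    return potentials

rng = np.random.default_rng(0)
points = rng.random((1000, 3)) * 10.0
potentials = rng.random(1000) * 1e-3                       # small random init, as above
center = points[int(np.argmin(potentials))]                # next center = lowest potential
potentials = update_potentials(potentials, points, center, in_radius=2.0)
```

Because the increment vanishes exactly at the sphere boundary, repeated picks trace a low-discrepancy cover of the cloud rather than a uniform random one.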
+ + t += [time.time()] + + # Data augmentation + input_points, scale, R = self.augmentation_transform(input_points) + + # Color augmentation + if np.random.rand() > self.config.augment_color: + input_colors *= 0 + + # Get original height as additional feature + input_features = np.hstack((input_colors, input_points[:, 2:] + center_point[:, 2:])).astype(np.float32) + + t += [time.time()] + + # Stack batch + p_list += [input_points] + f_list += [input_features] + l_list += [input_labels] + pi_list += [input_inds] + i_list += [point_ind] + ci_list += [cloud_ind] + s_list += [scale] + R_list += [R] + + # Update batch size + batch_n += n + + # In case batch is full, stop + if batch_n > int(self.batch_limit): + break + + # Randomly drop some points (acts as an augmentation process and a safety for GPU memory consumption) + # if n > int(self.batch_limit): + # input_inds = np.random.choice(input_inds, size=int(self.batch_limit) - 1, replace=False) + # n = input_inds.shape[0] + + ################### + # Concatenate batch + ################### + + stacked_points = np.concatenate(p_list, axis=0) + features = np.concatenate(f_list, axis=0) + labels = np.concatenate(l_list, axis=0) + point_inds = np.array(i_list, dtype=np.int32) + cloud_inds = np.array(ci_list, dtype=np.int32) + input_inds = np.concatenate(pi_list, axis=0) + stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32) + scales = np.array(s_list, dtype=np.float32) + rots = np.stack(R_list, axis=0) + + # Input features + stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32) + if self.config.in_features_dim == 1: + pass + elif self.config.in_features_dim == 4: + stacked_features = np.hstack((stacked_features, features[:, :3])) + elif self.config.in_features_dim == 5 or self.config.in_features_dim == 8: + stacked_features = np.hstack((stacked_features, features)) + else: + raise ValueError('Only accepted input dimensions are 1, 4, 5 and 8') + + 
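The `in_features_dim` branch above decides what the network actually sees: a constant-1 feature alone (pure geometry), the constant plus RGB, or the constant plus the full feature stack. A small standalone sketch of that composition logic (hypothetical helper and inputs; in this dataset the extra features are the six LAS attributes plus absolute height):

```python
import numpy as np

def stack_input_features(points, extra_feats, in_features_dim):
    """Sketch of the branch above: constant 1, optionally + RGB or all features."""
    ones = np.ones_like(points[:, :1], dtype=np.float32)   # bias-like constant feature
    if in_features_dim == 1:
        return ones                                        # geometry only
    if in_features_dim == 4:
        return np.hstack((ones, extra_feats[:, :3]))       # 1 + RGB
    if in_features_dim in (5, 8):
        return np.hstack((ones, extra_feats))              # 1 + every extra feature
    raise ValueError('Only accepted input dimensions are 1, 4, 5 and 8')

pts = np.zeros((10, 3), dtype=np.float32)
feats = np.zeros((10, 7), dtype=np.float32)                # e.g. RGB + LAS attrs + height
print(stack_input_features(pts, feats, 4).shape)           # (10, 4)
```

The always-present constant feature means even the `in_features_dim == 1` configuration feeds the convolutions a non-empty feature tensor.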
####################### + # Create network inputs + ####################### + # + # Points, neighbors, pooling indices for each layers + # + + t += [time.time()] + + # Get the whole input list + input_list = self.segmentation_inputs(stacked_points, + stacked_features, + labels, + stack_lengths) + + t += [time.time()] + + # Add scale and rotation for testing + input_list += [scales, rots, cloud_inds, point_inds, input_inds] + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}0{:} '.format(bcolors.OKBLUE, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 2 + + t += [time.time()] + + # Display timings + debugT = False + if debugT: + print('\n************************\n') + print('Timings:') + ti = 0 + N = 5 + mess = 'Init ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Pots ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Sphere .... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Collect ... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Augment ... 
{:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += N * (len(stack_lengths) - 1) + 1 + print('concat .... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('input ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('stack ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('\n************************\n') + return input_list + + def random_item(self, batch_i): + + # Initiate concatanation lists + p_list = [] + f_list = [] + l_list = [] + i_list = [] + pi_list = [] + ci_list = [] + s_list = [] + R_list = [] + batch_n = 0 + + while True: + + with self.worker_lock: + + # Get potential minimum + cloud_ind = int(self.epoch_inds[0, self.epoch_i]) + point_ind = int(self.epoch_inds[1, self.epoch_i]) + + # Update epoch indice + self.epoch_i += 1 + + # Get points from tree structure + points = np.array(self.input_trees[cloud_ind].data, copy=False) + + # Center point of input region + center_point = points[point_ind, :].reshape(1, -1) + + # Add a small noise to center point + if self.set != 'ERF': + center_point += np.random.normal(scale=self.config.in_radius / 10, size=center_point.shape) + + # Indices of points in input region + input_inds = self.input_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius)[0] + + # Number collected + n = input_inds.shape[0] + + # Collect labels and colors + input_points = (points[input_inds] - center_point).astype(np.float32) + input_colors = self.input_colors[cloud_ind][input_inds] + if self.set in ['test', 'ERF']: + input_labels = np.zeros(input_points.shape[0]) + else: + input_labels = self.input_labels[cloud_ind][input_inds] + input_labels = np.array([self.label_to_idx[l] for l in input_labels]) + + # Data augmentation + input_points, scale, R = self.augmentation_transform(input_points) + + # Color augmentation + if 
np.random.rand() > self.config.augment_color: + input_colors *= 0 + + # Get original height as additional feature + input_features = np.hstack((input_colors, input_points[:, 2:] + center_point[:, 2:])).astype(np.float32) + + # Stack batch + p_list += [input_points] + f_list += [input_features] + l_list += [input_labels] + pi_list += [input_inds] + i_list += [point_ind] + ci_list += [cloud_ind] + s_list += [scale] + R_list += [R] + + # Update batch size + batch_n += n + + # In case batch is full, stop + if batch_n > int(self.batch_limit): + break + + # Randomly drop some points (acts as an augmentation process and a safety for GPU memory consumption) + # if n > int(self.batch_limit): + # input_inds = np.random.choice(input_inds, size=int(self.batch_limit) - 1, replace=False) + # n = input_inds.shape[0] + + ################### + # Concatenate batch + ################### + + stacked_points = np.concatenate(p_list, axis=0) + features = np.concatenate(f_list, axis=0) + labels = np.concatenate(l_list, axis=0) + point_inds = np.array(i_list, dtype=np.int32) + cloud_inds = np.array(ci_list, dtype=np.int32) + input_inds = np.concatenate(pi_list, axis=0) + stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32) + scales = np.array(s_list, dtype=np.float32) + rots = np.stack(R_list, axis=0) + + # Input features + stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32) + if self.config.in_features_dim == 1: + pass + elif self.config.in_features_dim == 4: + stacked_features = np.hstack((stacked_features, features[:, :3])) + elif self.config.in_features_dim == 5 or self.config.in_features_dim == 8: + stacked_features = np.hstack((stacked_features, features)) + else: + raise ValueError('Only accepted input dimensions are 1, 4, 5 and 8') + + ####################### + # Create network inputs + ####################### + # + # Points, neighbors, pooling indices for each layer + # + + # Get the whole input list + input_list = 
self.segmentation_inputs(stacked_points, + stacked_features, + labels, + stack_lengths) + + # Add scale and rotation for testing + input_list += [scales, rots, cloud_inds, point_inds, input_inds] + + return input_list + + def load_subsampled_clouds(self): + + # Parameter + dl = self.config.first_subsampling_dl + + # Create path for files + tree_path = join(self.path, 'input_{:.3f}'.format(dl)) + if not exists(tree_path): + makedirs(tree_path) + + ############## + # Load KDTrees + ############## + + for i, file_path in enumerate(self.files): + + # Restart timer + t0 = time.time() + + # Get cloud name + cloud_name = self.cloud_names[i] + + # Name of the input files + KDTree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name)) + sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name)) + + # Check if inputs have already been computed + if exists(KDTree_file): + print('\nFound KDTree for cloud {:s}, subsampled at {:.3f}'.format(cloud_name, dl)) + + # read ply with data + data = read_ply(sub_ply_file) + sub_feas = np.vstack((data['red'], data['green'], data['blue'], + data['ReturnNumber'], data['NumberOfReturns'], data['Reflectance'])).T + sub_labels = data['class'] + + # Read pkl with search tree + with open(KDTree_file, 'rb') as f: + search_tree = pickle.load(f) + + else: + print('\nPreparing KDTree for cloud {:s}, subsampled at {:.3f}'.format(cloud_name, dl)) + + # Read ply file + if self.set != 'test': + xyz, rgb, labels, las_feas = read_las(file_path) + labels = np.array(labels) + else: + xyz, rgb, las_feas = read_las(file_path, False, True) + labels = np.zeros(len(rgb)) + + sub_feas = np.hstack((rgb, las_feas)) + xyz = xyz.astype(np.float32) + sub_feas = sub_feas.astype(np.float32) + labels = labels.astype(np.int32) + + # Subsample cloud + sub_points, sub_feas, sub_labels = grid_subsampling(xyz, + features=sub_feas, + labels=labels, + sampleDl=dl) + + # Rescale float color and squeeze label + sub_feas[:,:3] /= 255 + sub_labels = np.squeeze(sub_labels) 
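`load_subsampled_clouds` avoids redoing the expensive grid subsampling and KDTree construction on every run by pickling the tree next to a subsampled `.ply`, keyed by the grid size `dl`. The same compute-once / load-afterwards pattern in isolation (hypothetical cache file name and random points, not the dataset's actual paths):

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.neighbors import KDTree

def cached_kdtree(points, cache_file):
    """Load a pickled KDTree if it exists, otherwise build it once and cache it."""
    if os.path.exists(cache_file):
        with open(cache_file, 'rb') as f:
            return pickle.load(f)
    tree = KDTree(points, leaf_size=10)                    # the expensive step
    with open(cache_file, 'wb') as f:
        pickle.dump(tree, f)
    return tree

points = np.random.rand(100, 3)
cache = os.path.join(tempfile.mkdtemp(), 'cloud_KDTree.pkl')
tree = cached_kdtree(points, cache)                        # built and written
tree = cached_kdtree(points, cache)                        # second call: loaded from disk
```

Keying the cache directory on `dl` (as `input_{:.3f}` does above) keeps caches for different subsampling resolutions from colliding.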
+ + # Get chosen neighborhoods + search_tree = KDTree(sub_points, leaf_size=10) + #search_tree = nnfln.KDTree(n_neighbors=1, metric='L2', leaf_size=10) + #search_tree.fit(sub_points) + + # Save KDTree + with open(KDTree_file, 'wb') as f: + pickle.dump(search_tree, f) + + # Save ply + write_ply(sub_ply_file, + [sub_points, sub_feas, sub_labels], + ['x', 'y', 'z', 'red', 'green', 'blue', 'ReturnNumber', 'NumberOfReturns', 'Reflectance', 'class']) + + # Fill data containers + self.input_trees += [search_tree] + self.input_colors += [sub_feas] + self.input_labels += [sub_labels] + + size = sub_feas.shape[0] * 4 * 7 + print('{:.1f} MB loaded in {:.1f}s'.format(size * 1e-6, time.time() - t0)) + + ############################ + # Coarse potential locations + ############################ + + # Only necessary for validation and test sets + if self.use_potentials: + print('\nPreparing potentials') + + # Restart timer + t0 = time.time() + + pot_dl = self.config.in_radius / 10 + cloud_ind = 0 + + for i, file_path in enumerate(self.files): + + # Get cloud name + cloud_name = self.cloud_names[i] + + # Name of the input files + coarse_KDTree_file = join(tree_path, '{:s}_coarse_KDTree.pkl'.format(cloud_name)) + + # Check if inputs have already been computed + if exists(coarse_KDTree_file): + # Read pkl with search tree + with open(coarse_KDTree_file, 'rb') as f: + search_tree = pickle.load(f) + + else: + # Subsample cloud + sub_points = np.array(self.input_trees[cloud_ind].data, copy=False) + coarse_points = grid_subsampling(sub_points.astype(np.float32), sampleDl=pot_dl) + + # Get chosen neighborhoods + search_tree = KDTree(coarse_points, leaf_size=10) + + # Save KDTree + with open(coarse_KDTree_file, 'wb') as f: + pickle.dump(search_tree, f) + + # Fill data containers + self.pot_trees += [search_tree] + cloud_ind += 1 + + print('Done in {:.1f}s'.format(time.time() - t0)) + + ###################### + # Reprojection indices + ###################### + + # Get number of clouds + 
self.num_clouds = len(self.input_trees) + + # Only necessary for validation and test sets + if self.set in ['validation', 'test']: + + print('\nPreparing reprojection indices for testing') + + # Get validation/test reprojection indices + for i, file_path in enumerate(self.files): + + # Restart timer + t0 = time.time() + + # Get info on this cloud + cloud_name = self.cloud_names[i] + + # File name for saving + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + + # Try to load previous indices + if exists(proj_file): + with open(proj_file, 'rb') as f: + proj_inds, labels = pickle.load(f) + else: + # Read ply file + # Read ply file + if self.set != 'test': + xyz, rgb, labels, las_feas = read_las(file_path) + labels = np.array(labels) + else: + xyz, rgb, las_feas = read_las(file_path, False, True) + labels = np.zeros(len(rgb)) + + sub_feas = np.hstack((rgb, las_feas)) + xyz = xyz.astype(np.float32) + sub_feas = sub_feas.astype(np.float32) + + # Compute projection inds + idxs = self.input_trees[i].query(xyz, return_distance=False) + #dists, idxs = self.input_trees[i_cloud].kneighbors(points) + proj_inds = np.squeeze(idxs).astype(np.int32) + + # Save + with open(proj_file, 'wb') as f: + pickle.dump([proj_inds, labels], f) + + self.test_proj += [proj_inds] + self.validation_labels += [labels] + print('{:s} done in {:.1f}s'.format(cloud_name, time.time() - t0)) + + print() + return + + def load_evaluation_points(self, file_path): + """ + Load points (from test or validation split) on which the metrics should be evaluated + """ + + # Get original points + # data = read_ply(file_path) + # return np.vstack((data['x'], data['y'], data['z'])).T + + xyz, rgb, rawlabels = read_las(file_path, False, True) + xyz = xyz.astype(np.float32) + return xyz + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility classes definition +# \********************************/ + + +class 
Hessigsim3DSampler(Sampler): + """Sampler for Hessigsim3D""" + + def __init__(self, dataset: Hessigsim3DDataset): + Sampler.__init__(self, dataset) + + # Dataset used by the sampler (no copy is made in memory) + self.dataset = dataset + + # Number of steps per epoch + if dataset.set == 'training': + self.N = dataset.config.epoch_steps + else: + self.N = dataset.config.validation_size + + return + + def __iter__(self): + """ + Yield next batch indices here. In this dataset, this is a dummy sampler that yields the index of the batch element + (input sphere) in the epoch instead of the list of point indices + """ + + if not self.dataset.use_potentials: + + # Initiate current epoch ind + self.dataset.epoch_i *= 0 + self.dataset.epoch_inds *= 0 + + # Initiate container for indices + all_epoch_inds = np.zeros((2, 0), dtype=np.int32) + + # Number of sphere centers taken per class in each cloud + num_centers = self.N * self.dataset.config.batch_num + random_pick_n = int(np.ceil(num_centers / (self.dataset.num_clouds * self.dataset.config.num_classes))) + + # Choose random points of each class for each cloud + for cloud_ind, cloud_labels in enumerate(self.dataset.input_labels): + epoch_indices = np.empty((0,), dtype=np.int32) + for label_ind, label in enumerate(self.dataset.label_values): + if label not in self.dataset.ignored_labels: + label_indices = np.where(np.equal(cloud_labels, label))[0] + if len(label_indices) <= random_pick_n: + epoch_indices = np.hstack((epoch_indices, label_indices)) + elif len(label_indices) < 50 * random_pick_n: + new_randoms = np.random.choice(label_indices, size=random_pick_n, replace=False) + epoch_indices = np.hstack((epoch_indices, new_randoms.astype(np.int32))) + else: + rand_inds = [] + while len(rand_inds) < random_pick_n: + rand_inds = np.unique(np.random.choice(label_indices, size=5 * random_pick_n, replace=True)) + epoch_indices = np.hstack((epoch_indices, rand_inds[:random_pick_n].astype(np.int32))) + + # Stack those indices with the cloud 
index + epoch_indices = np.vstack((np.full(epoch_indices.shape, cloud_ind, dtype=np.int32), epoch_indices)) + + # Update the global index container + all_epoch_inds = np.hstack((all_epoch_inds, epoch_indices)) + + # Random permutation of the indices + random_order = np.random.permutation(all_epoch_inds.shape[1]) + all_epoch_inds = all_epoch_inds[:, random_order].astype(np.int64) + + # Update epoch inds + self.dataset.epoch_inds += torch.from_numpy(all_epoch_inds[:, :num_centers]) + + # Generator loop + for i in range(self.N): + yield i + + def __len__(self): + """ + The number of yielded samples is variable + """ + return self.N + + def fast_calib(self): + """ + This method calibrates the batch sizes while ensuring the potentials are well initialized. Indeed, on a dataset + like Semantic3D, before potentials have been updated over the dataset, there are chances that all the dense areas + are picked in the beginning, and in the end we will have very large batches of small point clouds + :return: + """ + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.config.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + breaking = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + + for epoch in range(10): + for i, test in enumerate(self): + + # New time + t = t[-1:] + t += [time.time()] + + # batch length + b = len(test) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergence check + smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # Update batch limit with P controller + self.dataset.batch_limit += Kp * error + + # finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + 
finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + # Average timing + t += [time.time()] + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}, // {:.1f}ms {:.1f}ms' + print(message.format(i, + estim_b, + int(self.dataset.batch_limit), + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + if breaking: + break + + def calibration(self, dataloader, untouched_ratio=0.9, verbose=False, force_redo=False): + """ + Method performing batch and neighbors calibration. + Batch calibration: Set "batch_limit" (the maximum number of points allowed in every batch) so that the + average batch size (number of stacked pointclouds) is the one asked. + Neighbors calibration: Set the "neighborhood_limits" (the maximum number of neighbors allowed in convolutions) + so that 90% of the neighborhoods remain untouched. There is a limit for each layer. 
+ """ + + ############################## + # Previously saved calibration + ############################## + + print('\nStarting Calibration (use verbose=True for more details)') + t0 = time.time() + + redo = force_redo + + # Batch limit + # *********** + + # Load batch_limit dictionary + batch_lim_file = join(self.dataset.path, 'batch_limits.pkl') + if exists(batch_lim_file): + with open(batch_lim_file, 'rb') as file: + batch_lim_dict = pickle.load(file) + else: + batch_lim_dict = {} + + # Check if the batch limit associated with current parameters exists + if self.dataset.use_potentials: + sampler_method = 'potentials' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}'.format(sampler_method, + self.dataset.config.in_radius, + self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + if not redo and key in batch_lim_dict: + self.dataset.batch_limit[0] = batch_lim_dict[key] + else: + redo = True + + if verbose: + print('\nPrevious calibration found:') + print('Check batch limit dictionary') + if key in batch_lim_dict: + color = bcolors.OKGREEN + v = str(int(batch_lim_dict[key])) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + # Neighbors limit + # *************** + + # Load neighb_limits dictionary + neighb_lim_file = join(self.dataset.path, 'neighbors_limits.pkl') + if exists(neighb_lim_file): + with open(neighb_lim_file, 'rb') as file: + neighb_lim_dict = pickle.load(file) + else: + neighb_lim_dict = {} + + # Check if the limit associated with current parameters exists (for each layer) + neighb_limits = [] + for layer_ind in range(self.dataset.config.num_layers): + + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + + key = '{:.3f}_{:.3f}'.format(dl, r) + if key in neighb_lim_dict: + neighb_limits += [neighb_lim_dict[key]] + + if not redo and len(neighb_limits) == self.dataset.config.num_layers: + self.dataset.neighborhood_limits = neighb_limits + else: + redo = True + + if verbose: + print('Check neighbors limit dictionary') + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + + if key in neighb_lim_dict: + color = bcolors.OKGREEN + v = str(neighb_lim_dict[key]) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + if redo: + + ############################ + # Neighbors calib parameters + ############################ + + # From config parameter, compute higher bound of neighbors number in a neighborhood + hist_n = int(np.ceil(4 / 3 * np.pi * (self.dataset.config.deform_radius + 1) ** 3)) + + # Histogram of neighborhood sizes + neighb_hists = np.zeros((self.dataset.config.num_layers, hist_n), dtype=np.int32) + + ######################## + # Batch calib parameters + ######################## + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.config.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + # Loop parameters + last_display = time.time() + i = 0 + breaking = False + + ##################### + # Perform calibration + ##################### + + for epoch in range(10): + for batch_i, batch in enumerate(dataloader): + + # Update neighborhood histogram + counts = [np.sum(neighb_mat.numpy() < neighb_mat.shape[0], axis=1) for neighb_mat in batch.neighbors] + hists = [np.bincount(c, minlength=hist_n)[:hist_n] for c in counts] + neighb_hists += np.vstack(hists) + + # batch length + b = len(batch.cloud_inds) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergene check + smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # Update batch limit with P controller + self.dataset.batch_limit += Kp * error + + # finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + i += 1 + t = time.time() + + # Console display 
(only one per second) + if verbose and (t - last_display) > 1.0: + last_display = t + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}' + print(message.format(i, + estim_b, + int(self.dataset.batch_limit))) + + if breaking: + break + + # Use collected neighbor histogram to get neighbors limit + cumsum = np.cumsum(neighb_hists.T, axis=0) + percentiles = np.sum(cumsum < (untouched_ratio * cumsum[hist_n - 1, :]), axis=0) + self.dataset.neighborhood_limits = percentiles + + if verbose: + + # Crop histogram + while np.sum(neighb_hists[:, -1]) == 0: + neighb_hists = neighb_hists[:, :-1] + hist_n = neighb_hists.shape[1] + + print('\n**************************************************\n') + line0 = 'neighbors_num ' + for layer in range(neighb_hists.shape[0]): + line0 += '| layer {:2d} '.format(layer) + print(line0) + for neighb_size in range(hist_n): + line0 = ' {:4d} '.format(neighb_size) + for layer in range(neighb_hists.shape[0]): + if neighb_size > percentiles[layer]: + color = bcolors.FAIL + else: + color = bcolors.OKGREEN + line0 += '|{:}{:10d}{:} '.format(color, + neighb_hists[layer, neighb_size], + bcolors.ENDC) + + print(line0) + + print('\n**************************************************\n') + print('\nchosen neighbors limits: ', percentiles) + print() + + # Save batch_limit dictionary + if self.dataset.use_potentials: + sampler_method = 'potentials' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}'.format(sampler_method, + self.dataset.config.in_radius, + self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + batch_lim_dict[key] = float(self.dataset.batch_limit) + with open(batch_lim_file, 'wb') as file: + pickle.dump(batch_lim_dict, file) + + # Save neighb_limit dictionary + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2 ** layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * 
self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + neighb_lim_dict[key] = self.dataset.neighborhood_limits[layer_ind] + with open(neighb_lim_file, 'wb') as file: + pickle.dump(neighb_lim_dict, file) + + + print('Calibration done in {:.1f}s\n'.format(time.time() - t0)) + return + + +class Hessigsim3DCustomBatch: + """Custom batch definition with memory pinning for Hessigsim3D""" + + def __init__(self, input_list): + + # Get rid of batch dimension + input_list = input_list[0] + + # Number of layers + L = (len(input_list) - 7) // 5 + + # Extract input tensors from the list of numpy array + ind = 0 + self.points = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.neighbors = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.pools = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.upsamples = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.lengths = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.features = torch.from_numpy(input_list[ind]) + ind += 1 + self.labels = torch.from_numpy(input_list[ind]).long() # torch.from_numpy(input_list[ind]) + ind += 1 + self.scales = torch.from_numpy(input_list[ind]) + ind += 1 + self.rots = torch.from_numpy(input_list[ind]) + ind += 1 + self.cloud_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.center_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.input_inds = torch.from_numpy(input_list[ind]) + + return + + def pin_memory(self): + """ + Manual pinning of the memory + """ + + self.points = [in_tensor.pin_memory() for in_tensor in self.points] + self.neighbors = [in_tensor.pin_memory() for in_tensor in self.neighbors] + self.pools = [in_tensor.pin_memory() for in_tensor in self.pools] + self.upsamples = [in_tensor.pin_memory() for in_tensor in self.upsamples] + self.lengths = [in_tensor.pin_memory() for in_tensor in 
self.lengths] + self.features = self.features.pin_memory() + self.labels = self.labels.pin_memory() + self.scales = self.scales.pin_memory() + self.rots = self.rots.pin_memory() + self.cloud_inds = self.cloud_inds.pin_memory() + self.center_inds = self.center_inds.pin_memory() + self.input_inds = self.input_inds.pin_memory() + + return self + + def to(self, device): + + self.points = [in_tensor.to(device) for in_tensor in self.points] + self.neighbors = [in_tensor.to(device) for in_tensor in self.neighbors] + self.pools = [in_tensor.to(device) for in_tensor in self.pools] + self.upsamples = [in_tensor.to(device) for in_tensor in self.upsamples] + self.lengths = [in_tensor.to(device) for in_tensor in self.lengths] + self.features = self.features.to(device) + self.labels = self.labels.to(device) + self.scales = self.scales.to(device) + self.rots = self.rots.to(device) + self.cloud_inds = self.cloud_inds.to(device) + self.center_inds = self.center_inds.to(device) + self.input_inds = self.input_inds.to(device) + + return self + + def unstack_points(self, layer=None): + """Unstack the points""" + return self.unstack_elements('points', layer) + + def unstack_neighbors(self, layer=None): + """Unstack the neighbors indices""" + return self.unstack_elements('neighbors', layer) + + def unstack_pools(self, layer=None): + """Unstack the pooling indices""" + return self.unstack_elements('pools', layer) + + def unstack_elements(self, element_name, layer=None, to_numpy=True): + """ + Return a list of the stacked elements in the batch at a certain layer. 
If no layer is given, then return all + layers + """ + + if element_name == 'points': + elements = self.points + elif element_name == 'neighbors': + elements = self.neighbors + elif element_name == 'pools': + elements = self.pools[:-1] + else: + raise ValueError('Unknown element name: {:s}'.format(element_name)) + + all_p_list = [] + for layer_i, layer_elems in enumerate(elements): + + if layer is None or layer == layer_i: + + i0 = 0 + p_list = [] + if element_name == 'pools': + lengths = self.lengths[layer_i+1] + else: + lengths = self.lengths[layer_i] + + for b_i, length in enumerate(lengths): + + elem = layer_elems[i0:i0 + length] + if element_name == 'neighbors': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= i0 + elif element_name == 'pools': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= torch.sum(self.lengths[layer_i][:b_i]) + i0 += length + + if to_numpy: + p_list.append(elem.numpy()) + else: + p_list.append(elem) + + if layer == layer_i: + return p_list + + all_p_list.append(p_list) + + return all_p_list + + +def Hessigsim3DCollate(batch_data): + return Hessigsim3DCustomBatch(batch_data) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Debug functions +# \*********************/ + + +def debug_upsampling(dataset, loader): + """Shows the upsampling neighbors found between two consecutive layers""" + + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + + pc1 = batch.points[1].numpy() + pc2 = batch.points[2].numpy() + up1 = batch.upsamples[1].numpy() + + print(pc1.shape, '=>', pc2.shape) + print(up1.shape, np.max(up1)) + + pc2 = np.vstack((pc2, np.zeros_like(pc2[:1, :]))) + + # Get neighbors distance + p0 = pc1[10, :] + neighbs0 = up1[10, :] + neighbs0 = pc2[neighbs0, :] - p0 + d2 = np.sum(neighbs0 ** 2, axis=1) + + print(neighbs0.shape) + print(neighbs0[:5]) + print(d2[:5]) + + print('******************') +
print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_timing(dataset, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + estim_b = dataset.config.batch_num + estim_N = 0 + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Update estim_b (low pass filter) + estim_b += (len(batch.cloud_inds) - estim_b) / 100 + estim_N += (batch.features.shape[0] - estim_N) / 10 + + # Pause simulating computations + time.sleep(0.05) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> (ms/batch) {:8.2f} {:8.2f} / batch = {:.2f} - {:.0f}' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1], + estim_b, + estim_N)) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_show_clouds(dataset, loader): + + + for epoch in range(10): + + clouds = [] + cloud_normals = [] + cloud_labels = [] + + L = dataset.config.num_layers + + for batch_i, batch in enumerate(loader): + + # Print characteristics of input tensors + print('\nPoints tensors') + for i in range(L): + print(batch.points[i].dtype, batch.points[i].shape) + print('\nNeighbors tensors') + for i in range(L): + print(batch.neighbors[i].dtype, batch.neighbors[i].shape) + print('\nPools tensors') + for i in range(L): + print(batch.pools[i].dtype, batch.pools[i].shape) + print('\nStack lengths') + for i in range(L): + print(batch.lengths[i].dtype, batch.lengths[i].shape) + print('\nFeatures') +
print(batch.features.dtype, batch.features.shape) + print('\nLabels') + print(batch.labels.dtype, batch.labels.shape) + print('\nAugment Scales') + print(batch.scales.dtype, batch.scales.shape) + print('\nAugment Rotations') + print(batch.rots.dtype, batch.rots.shape) + print('\nCloud indices') + print(batch.cloud_inds.dtype, batch.cloud_inds.shape) + + print('\nAre input tensors pinned') + print(batch.neighbors[0].is_pinned()) + print(batch.neighbors[-1].is_pinned()) + print(batch.points[0].is_pinned()) + print(batch.points[-1].is_pinned()) + print(batch.labels.is_pinned()) + print(batch.scales.is_pinned()) + print(batch.rots.is_pinned()) + print(batch.cloud_inds.is_pinned()) + + show_input_batch(batch) + + print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_batch_and_neighbors_calib(dataset, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + + for epoch in range(10): + + for batch_i, input_list in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Pause simulating computations + time.sleep(0.01) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> Average timings (ms/batch) {:8.2f} {:8.2f} ' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) diff --git a/competing_methods/my_KPConv/datasets/ModelNet40.py b/competing_methods/my_KPConv/datasets/ModelNet40.py new file mode 100644 index 00000000..b00630b4 --- /dev/null +++
b/competing_methods/my_KPConv/datasets/ModelNet40.py @@ -0,0 +1,995 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling ModelNet40 dataset. +# Implements a Dataset, a Sampler, and a collate_fn +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import time +import numpy as np +import pickle +import torch +import math + + +# OS functions +from os import listdir +from os.path import exists, join + +# Dataset parent class +from datasets.common import PointCloudDataset +from torch.utils.data import Sampler, get_worker_info +from utils.mayavi_visu import * + +from datasets.common import grid_subsampling +from utils.config import bcolors + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Dataset class definition +# \******************************/ + + +class ModelNet40Dataset(PointCloudDataset): + """Class to handle Modelnet 40 dataset.""" + + def __init__(self, config, train=True, orient_correction=True): + """ + This dataset is small enough to be stored in-memory, so load all point clouds here + """ + PointCloudDataset.__init__(self, 'ModelNet40') + + ############ + # Parameters + ############ + + # Dict from labels to names + self.label_to_names = {0: 'airplane', + 1: 'bathtub', + 2: 'bed', + 3: 'bench', + 4: 'bookshelf', + 5: 'bottle', + 6: 'bowl', + 7: 'car', + 8: 'chair', + 9: 'cone', + 10: 'cup', + 11: 'curtain', + 12: 'desk', + 13: 'door', + 14: 
'dresser', + 15: 'flower_pot', + 16: 'glass_box', + 17: 'guitar', + 18: 'keyboard', + 19: 'lamp', + 20: 'laptop', + 21: 'mantel', + 22: 'monitor', + 23: 'night_stand', + 24: 'person', + 25: 'piano', + 26: 'plant', + 27: 'radio', + 28: 'range_hood', + 29: 'sink', + 30: 'sofa', + 31: 'stairs', + 32: 'stool', + 33: 'table', + 34: 'tent', + 35: 'toilet', + 36: 'tv_stand', + 37: 'vase', + 38: 'wardrobe', + 39: 'xbox'} + + # Initialize a bunch of variables concerning class labels + self.init_labels() + + # List of classes ignored during training (can be empty) + self.ignored_labels = np.array([]) + + # Dataset folder + self.path = '../../Data/ModelNet40' + + # Type of task conducted on this dataset + self.dataset_task = 'classification' + + # Update number of classes and the dataset task in configuration + config.num_classes = self.num_classes + config.dataset_task = self.dataset_task + + # Parameters from config + self.config = config + + # Training or test set + self.train = train + + # Number of models and models used per epoch + if self.train: + self.num_models = 9843 + if config.epoch_steps and config.epoch_steps * config.batch_num < self.num_models: + self.epoch_n = config.epoch_steps * config.batch_num + else: + self.epoch_n = self.num_models + else: + self.num_models = 2468 + self.epoch_n = min(self.num_models, config.validation_size * config.batch_num) + + ############# + # Load models + ############# + + if 0 < self.config.first_subsampling_dl <= 0.01: + raise ValueError('subsampling_parameter too low (should be over 1 cm)') + + self.input_points, self.input_normals, self.input_labels = self.load_subsampled_clouds(orient_correction) + + return + + def __len__(self): + """ + Return the number of models in the dataset + """ + return self.num_models + + def __getitem__(self, idx_list): + """ + The main thread gives a list of indices to load a batch. Each worker is going to work in parallel to load a + different list of indices.
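The stacking convention used by this `__getitem__` can be sketched in isolation (illustrative only; the toy cloud sizes below are invented, but the names mirror `stacked_points` and `stack_lengths` in the code):

```python
# Minimal sketch (not part of the dataset code): variable-size clouds are
# concatenated along axis 0, and their per-cloud lengths are kept so the
# batch can be split back apart later.
import numpy as np

clouds = [np.ones((n, 3), dtype=np.float32) * i for i, n in enumerate((5, 3, 7))]
stacked_points = np.concatenate(clouds, axis=0)
stack_lengths = np.array([c.shape[0] for c in clouds], dtype=np.int32)

def unstack(stacked, lengths):
    """Inverse operation: split the stacked array back into clouds."""
    i0, out = 0, []
    for length in lengths:
        out.append(stacked[i0:i0 + length])
        i0 += length
    return out
```

This is why the network inputs carry `stack_lengths` alongside the concatenated tensors: batches have no fixed batch dimension, only offsets.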
+ """ + + ################### + # Gather batch data + ################### + + tp_list = [] + tn_list = [] + tl_list = [] + ti_list = [] + s_list = [] + R_list = [] + + for p_i in idx_list: + + # Get points and labels + points = self.input_points[p_i].astype(np.float32) + normals = self.input_normals[p_i].astype(np.float32) + label = self.label_to_idx[self.input_labels[p_i]] + + # Data augmentation + points, normals, scale, R = self.augmentation_transform(points, normals) + + # Stack batch + tp_list += [points] + tn_list += [normals] + tl_list += [label] + ti_list += [p_i] + s_list += [scale] + R_list += [R] + + ################### + # Concatenate batch + ################### + + #show_ModelNet_examples(tp_list, cloud_normals=tn_list) + + stacked_points = np.concatenate(tp_list, axis=0) + stacked_normals = np.concatenate(tn_list, axis=0) + labels = np.array(tl_list, dtype=np.int64) + model_inds = np.array(ti_list, dtype=np.int32) + stack_lengths = np.array([tp.shape[0] for tp in tp_list], dtype=np.int32) + scales = np.array(s_list, dtype=np.float32) + rots = np.stack(R_list, axis=0) + + # Input features + stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32) + if self.config.in_features_dim == 1: + pass + elif self.config.in_features_dim == 4: + stacked_features = np.hstack((stacked_features, stacked_normals)) + else: + raise ValueError('Only accepted input dimensions are 1 and 4 (without and with normals)') + + ####################### + # Create network inputs + ####################### + # + # Points, neighbors, pooling indices for each layer + # + + # Get the whole input list + input_list = self.classification_inputs(stacked_points, + stacked_features, + labels, + stack_lengths) + + # Add scale and rotation for testing + input_list += [scales, rots, model_inds] + + return input_list + + def load_subsampled_clouds(self, orient_correction): + + # Restart timer + t0 = time.time() + + # Load wanted points if possible + if self.train: + split
='training' + else: + split = 'test' + + print('\nLoading {:s} points subsampled at {:.3f}'.format(split, self.config.first_subsampling_dl)) + filename = join(self.path, '{:s}_{:.3f}_record.pkl'.format(split, self.config.first_subsampling_dl)) + + if exists(filename): + with open(filename, 'rb') as file: + input_points, input_normals, input_labels = pickle.load(file) + + # Else compute them from original points + else: + + # Collect training file names + if self.train: + names = np.loadtxt(join(self.path, 'modelnet40_train.txt'), dtype=str) + else: + names = np.loadtxt(join(self.path, 'modelnet40_test.txt'), dtype=str) + + # Initialize containers + input_points = [] + input_normals = [] + + # Advanced display + N = len(names) + progress_n = 30 + fmt_str = '[{:<' + str(progress_n) + '}] {:5.1f}%' + + # Collect point clouds + for i, cloud_name in enumerate(names): + + # Read points + class_folder = '_'.join(cloud_name.split('_')[:-1]) + txt_file = join(self.path, class_folder, cloud_name) + '.txt' + data = np.loadtxt(txt_file, delimiter=',', dtype=np.float32) + + # Subsample them + if self.config.first_subsampling_dl > 0: + points, normals = grid_subsampling(data[:, :3], + features=data[:, 3:], + sampleDl=self.config.first_subsampling_dl) + else: + points = data[:, :3] + normals = data[:, 3:] + + print('', end='\r') + print(fmt_str.format('#' * ((i * progress_n) // N), 100 * i / N), end='', flush=True) + + # Add to list + input_points += [points] + input_normals += [normals] + + print('', end='\r') + print(fmt_str.format('#' * progress_n, 100), end='', flush=True) + print() + + # Get labels + label_names = ['_'.join(name.split('_')[:-1]) for name in names] + input_labels = np.array([self.name_to_label[name] for name in label_names]) + + # Save for later use + with open(filename, 'wb') as file: + pickle.dump((input_points, + input_normals, + input_labels), file) + + lengths = [p.shape[0] for p in input_points] + sizes = [l * 4 * 6 for l in lengths] +
print('{:.1f} MB loaded in {:.1f}s'.format(np.sum(sizes) * 1e-6, time.time() - t0)) + + if orient_correction: + input_points = [pp[:, [0, 2, 1]] for pp in input_points] + input_normals = [nn[:, [0, 2, 1]] for nn in input_normals] + + return input_points, input_normals, input_labels + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility classes definition +# \********************************/ + + +class ModelNet40Sampler(Sampler): + """Sampler for ModelNet40""" + + def __init__(self, dataset: ModelNet40Dataset, use_potential=True, balance_labels=False): + Sampler.__init__(self, dataset) + + # Does the sampler use potential for regular sampling + self.use_potential = use_potential + + # Should we balance the classes when sampling + self.balance_labels = balance_labels + + # Dataset used by the sampler (no copy is made in memory) + self.dataset = dataset + + # Create potentials + if self.use_potential: + self.potentials = np.random.rand(len(dataset.input_labels)) * 0.1 + 0.1 + else: + self.potentials = None + + # Initialize value for batch limit (max number of points per batch).
+ self.batch_limit = 10000 + + return + + def __iter__(self): + """ + Yield next batch indices here + """ + + ########################################## + # Initialize the list of generated indices + ########################################## + + if self.use_potential: + if self.balance_labels: + + gen_indices = [] + pick_n = self.dataset.epoch_n // self.dataset.num_classes + 1 + for i, l in enumerate(self.dataset.label_values): + + # Get the potentials of the objects of this class + label_inds = np.where(np.equal(self.dataset.input_labels, l))[0] + class_potentials = self.potentials[label_inds] + + # Get the indices to generate thanks to potentials + if pick_n < class_potentials.shape[0]: + pick_indices = np.argpartition(class_potentials, pick_n)[:pick_n] + else: + pick_indices = np.random.permutation(class_potentials.shape[0]) + class_indices = label_inds[pick_indices] + gen_indices.append(class_indices) + + # Stack the chosen indices of all classes + gen_indices = np.random.permutation(np.hstack(gen_indices)) + + else: + + # Get indices with the minimum potential + if self.dataset.epoch_n < self.potentials.shape[0]: + gen_indices = np.argpartition(self.potentials, self.dataset.epoch_n)[:self.dataset.epoch_n] + else: + gen_indices = np.random.permutation(self.potentials.shape[0]) + gen_indices = np.random.permutation(gen_indices) + + # Update potentials (Change the order for the next epoch) + self.potentials[gen_indices] = np.ceil(self.potentials[gen_indices]) + self.potentials[gen_indices] += np.random.rand(gen_indices.shape[0]) * 0.1 + 0.1 + + else: + if self.balance_labels: + pick_n = self.dataset.epoch_n // self.dataset.num_classes + 1 + gen_indices = [] + for l in self.dataset.label_values: + label_inds = np.where(np.equal(self.dataset.input_labels, l))[0] + rand_inds = np.random.choice(label_inds, size=pick_n, replace=True) + gen_indices += [rand_inds] + gen_indices = np.random.permutation(np.hstack(gen_indices)) + else: + gen_indices = 
np.random.permutation(self.dataset.num_models)[:self.dataset.epoch_n] + + ################ + # Generator loop + ################ + + # Initialize concatenation lists + ti_list = [] + batch_n = 0 + + # Generator loop + for p_i in gen_indices: + + # Size of picked cloud + n = self.dataset.input_points[p_i].shape[0] + + # In case batch is full, yield it and reset it + if batch_n + n > self.batch_limit and batch_n > 0: + yield np.array(ti_list, dtype=np.int32) + ti_list = [] + batch_n = 0 + + # Add data to current batch + ti_list += [p_i] + + # Update batch size + batch_n += n + + yield np.array(ti_list, dtype=np.int32) + + return 0 + + def __len__(self): + """ + The number of yielded samples is variable + """ + return None + + def calibration(self, dataloader, untouched_ratio=0.9, verbose=False): + """ + Method performing batch and neighbors calibration. + Batch calibration: Set "batch_limit" (the maximum number of points allowed in every batch) so that the + average batch size (number of stacked pointclouds) is the one asked. + Neighbors calibration: Set the "neighborhood_limits" (the maximum number of neighbors allowed in convolutions) + so that 90% of the neighborhoods remain untouched. There is a limit for each layer. 
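The batch-calibration loop described here is, at its core, a proportional controller on `batch_limit`. A minimal standalone sketch (illustrative only; `avg_points`, the noise model, and all constants are invented stand-ins for the real dataloader, but the names `target_b`, `estim_b`, and `Kp` mirror the code below):

```python
# Minimal sketch of the P-controller used for batch calibration (not part
# of the sampler): the limit on points per batch is nudged by Kp * error
# until the low-pass-filtered batch size matches the target.
import random

def calibrate_batch_limit(target_b, avg_points=1000, Kp=100.0, steps=2000, seed=0):
    rng = random.Random(seed)
    batch_limit = 10 * avg_points      # initial guess for max points per batch
    estim_b = 0.0
    for _ in range(steps):
        # A batch holds roughly batch_limit / avg_points clouds (noisy).
        b = max(1, round(batch_limit / avg_points + rng.uniform(-0.5, 0.5)))
        estim_b += (b - estim_b) / 10          # low-pass filter on batch size
        batch_limit += Kp * (target_b - b)     # P controller on the error
    return estim_b
```

The real method adds a convergence check on smoothed errors and switches to a slower low-pass filter once the estimate gets close, but the feedback loop is the same.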
+ """ + + ############################## + # Previously saved calibration + ############################## + + print('\nStarting Calibration (use verbose=True for more details)') + t0 = time.time() + + redo = False + + # Batch limit + # *********** + + # Load batch_limit dictionary + batch_lim_file = join(self.dataset.path, 'batch_limits.pkl') + if exists(batch_lim_file): + with open(batch_lim_file, 'rb') as file: + batch_lim_dict = pickle.load(file) + else: + batch_lim_dict = {} + + # Check if the batch limit associated with current parameters exists + key = '{:.3f}_{:d}'.format(self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + if key in batch_lim_dict: + self.batch_limit = batch_lim_dict[key] + else: + redo = True + + if verbose: + print('\nPrevious calibration found:') + print('Check batch limit dictionary') + if key in batch_lim_dict: + color = bcolors.OKGREEN + v = str(int(batch_lim_dict[key])) + else: + color = bcolors.FAIL + v = '?' + print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + # Neighbors limit + # *************** + + # Load neighb_limits dictionary + neighb_lim_file = join(self.dataset.path, 'neighbors_limits.pkl') + if exists(neighb_lim_file): + with open(neighb_lim_file, 'rb') as file: + neighb_lim_dict = pickle.load(file) + else: + neighb_lim_dict = {} + + # Check if the limit associated with current parameters exists (for each layer) + neighb_limits = [] + for layer_ind in range(self.dataset.config.num_layers): + + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + + key = '{:.3f}_{:.3f}'.format(dl, r) + if key in neighb_lim_dict: + neighb_limits += [neighb_lim_dict[key]] + + if len(neighb_limits) == self.dataset.config.num_layers: + self.dataset.neighborhood_limits = neighb_limits + else: + redo = True + + if verbose: + print('Check 
neighbors limit dictionary') + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + + if key in neighb_lim_dict: + color = bcolors.OKGREEN + v = str(neighb_lim_dict[key]) + else: + color = bcolors.FAIL + v = '?' + print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + if redo: + + ############################ + # Neighbors calib parameters + ############################ + + # From the config parameters, compute an upper bound on the number of neighbors in a neighborhood + hist_n = int(np.ceil(4 / 3 * np.pi * (self.dataset.config.conv_radius + 1) ** 3)) + + # Histogram of neighborhood sizes + neighb_hists = np.zeros((self.dataset.config.num_layers, hist_n), dtype=np.int32) + + ######################## + # Batch calib parameters + ######################## + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.config.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + # Loop parameters + last_display = time.time() + i = 0 + breaking = False + + ##################### + # Perform calibration + ##################### + + for epoch in range(10): + for batch_i, batch in enumerate(dataloader): + + # Update neighborhood histogram + counts = [np.sum(neighb_mat.numpy() < neighb_mat.shape[0], axis=1) for neighb_mat in batch.neighbors] + hists = [np.bincount(c, minlength=hist_n)[:hist_n] for c in counts] + neighb_hists += np.vstack(hists) + + # batch length + b = len(batch.labels) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergence check +
smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # Update batch limit with P controller + self.batch_limit += Kp * error + + # finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + i += 1 + t = time.time() + + # Console display (only one per second) + if verbose and (t - last_display) > 1.0: + last_display = t + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}' + print(message.format(i, + estim_b, + int(self.batch_limit))) + + if breaking: + break + + # Use collected neighbor histogram to get neighbors limit + cumsum = np.cumsum(neighb_hists.T, axis=0) + percentiles = np.sum(cumsum < (untouched_ratio * cumsum[hist_n - 1, :]), axis=0) + self.dataset.neighborhood_limits = percentiles + + if verbose: + + # Crop histogram + while np.sum(neighb_hists[:, -1]) == 0: + neighb_hists = neighb_hists[:, :-1] + hist_n = neighb_hists.shape[1] + + print('\n**************************************************\n') + line0 = 'neighbors_num ' + for layer in range(neighb_hists.shape[0]): + line0 += '| layer {:2d} '.format(layer) + print(line0) + for neighb_size in range(hist_n): + line0 = ' {:4d} '.format(neighb_size) + for layer in range(neighb_hists.shape[0]): + if neighb_size > percentiles[layer]: + color = bcolors.FAIL + else: + color = bcolors.OKGREEN + line0 += '|{:}{:10d}{:} '.format(color, + neighb_hists[layer, neighb_size], + bcolors.ENDC) + + print(line0) + + print('\n**************************************************\n') + print('\nchosen neighbors limits: ', percentiles) + print() + + # Save batch_limit dictionary + key = '{:.3f}_{:d}'.format(self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + batch_lim_dict[key] = self.batch_limit + with open(batch_lim_file, 'wb') as file: + 
pickle.dump(batch_lim_dict, file) + + # Save neighb_limit dictionary + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2 ** layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + neighb_lim_dict[key] = self.dataset.neighborhood_limits[layer_ind] + with open(neighb_lim_file, 'wb') as file: + pickle.dump(neighb_lim_dict, file) + + + print('Calibration done in {:.1f}s\n'.format(time.time() - t0)) + return + + +class ModelNet40CustomBatch: + """Custom batch definition with memory pinning for ModelNet40""" + + def __init__(self, input_list): + + # Get rid of batch dimension + input_list = input_list[0] + + # Number of layers + L = (len(input_list) - 5) // 4 + + # Extract input tensors from the list of numpy array + ind = 0 + self.points = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.neighbors = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.pools = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.lengths = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.features = torch.from_numpy(input_list[ind]) + ind += 1 + self.labels = torch.from_numpy(input_list[ind]) + ind += 1 + self.scales = torch.from_numpy(input_list[ind]) + ind += 1 + self.rots = torch.from_numpy(input_list[ind]) + ind += 1 + self.model_inds = torch.from_numpy(input_list[ind]) + + return + + def pin_memory(self): + """ + Manual pinning of the memory + """ + + self.points = [in_tensor.pin_memory() for in_tensor in self.points] + self.neighbors = [in_tensor.pin_memory() for in_tensor in self.neighbors] + self.pools = [in_tensor.pin_memory() for in_tensor in self.pools] + self.lengths = [in_tensor.pin_memory() for in_tensor in self.lengths] + self.features = 
self.features.pin_memory() + self.labels = self.labels.pin_memory() + self.scales = self.scales.pin_memory() + self.rots = self.rots.pin_memory() + self.model_inds = self.model_inds.pin_memory() + + return self + + def to(self, device): + + self.points = [in_tensor.to(device) for in_tensor in self.points] + self.neighbors = [in_tensor.to(device) for in_tensor in self.neighbors] + self.pools = [in_tensor.to(device) for in_tensor in self.pools] + self.lengths = [in_tensor.to(device) for in_tensor in self.lengths] + self.features = self.features.to(device) + self.labels = self.labels.to(device) + self.scales = self.scales.to(device) + self.rots = self.rots.to(device) + self.model_inds = self.model_inds.to(device) + + return self + + def unstack_points(self, layer=None): + """Unstack the points""" + return self.unstack_elements('points', layer) + + def unstack_neighbors(self, layer=None): + """Unstack the neighbors indices""" + return self.unstack_elements('neighbors', layer) + + def unstack_pools(self, layer=None): + """Unstack the pooling indices""" + return self.unstack_elements('pools', layer) + + def unstack_elements(self, element_name, layer=None, to_numpy=True): + """ + Return a list of the stacked elements in the batch at a certain layer. 
If no layer is given, then return all + layers + """ + + if element_name == 'points': + elements = self.points + elif element_name == 'neighbors': + elements = self.neighbors + elif element_name == 'pools': + elements = self.pools[:-1] + else: + raise ValueError('Unknown element name: {:s}'.format(element_name)) + + all_p_list = [] + for layer_i, layer_elems in enumerate(elements): + + if layer is None or layer == layer_i: + + i0 = 0 + p_list = [] + if element_name == 'pools': + lengths = self.lengths[layer_i+1] + else: + lengths = self.lengths[layer_i] + + for b_i, length in enumerate(lengths): + + elem = layer_elems[i0:i0 + length] + if element_name == 'neighbors': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= i0 + elif element_name == 'pools': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= torch.sum(self.lengths[layer_i][:b_i]) + i0 += length + + if to_numpy: + p_list.append(elem.numpy()) + else: + p_list.append(elem) + + if layer == layer_i: + return p_list + + all_p_list.append(p_list) + + return all_p_list + + +def ModelNet40Collate(batch_data): + return ModelNet40CustomBatch(batch_data) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Debug functions +# \*********************/ + + +def debug_sampling(dataset, sampler, loader): + """Shows which labels are sampled according to strategy chosen""" + label_sum = np.zeros((dataset.num_classes), dtype=np.int32) + for epoch in range(10): + + for batch_i, (points, normals, labels, indices, in_sizes) in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + label_sum += np.bincount(labels.numpy(), minlength=dataset.num_classes) + print(label_sum) + #print(sampler.potentials[:6]) + + print('******************') + print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) 
+ print(counts) + + +def debug_timing(dataset, sampler, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + estim_b = dataset.config.batch_num + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Update estim_b (low pass filter) + estim_b += (len(batch.labels) - estim_b) / 100 + + # Pause simulating computations + time.sleep(0.050) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> (ms/batch) {:8.2f} {:8.2f} / batch = {:.2f}' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1], + estim_b)) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_show_clouds(dataset, sampler, loader): + + + for epoch in range(10): + + clouds = [] + cloud_normals = [] + cloud_labels = [] + + L = dataset.config.num_layers + + for batch_i, batch in enumerate(loader): + + # Print characteristics of input tensors + print('\nPoints tensors') + for i in range(L): + print(batch.points[i].dtype, batch.points[i].shape) + print('\nNeighbors tensors') + for i in range(L): + print(batch.neighbors[i].dtype, batch.neighbors[i].shape) + print('\nPools tensors') + for i in range(L): + print(batch.pools[i].dtype, batch.pools[i].shape) + print('\nStack lengths') + for i in range(L): + print(batch.lengths[i].dtype, batch.lengths[i].shape) + print('\nFeatures') + print(batch.features.dtype, batch.features.shape) + print('\nLabels') + print(batch.labels.dtype, batch.labels.shape) + print('\nAugment Scales') + print(batch.scales.dtype, batch.scales.shape) +
print('\nAugment Rotations') + print(batch.rots.dtype, batch.rots.shape) + print('\nModel indices') + print(batch.model_inds.dtype, batch.model_inds.shape) + + print('\nAre input tensors pinned') + print(batch.neighbors[0].is_pinned()) + print(batch.neighbors[-1].is_pinned()) + print(batch.points[0].is_pinned()) + print(batch.points[-1].is_pinned()) + print(batch.labels.is_pinned()) + print(batch.scales.is_pinned()) + print(batch.rots.is_pinned()) + print(batch.model_inds.is_pinned()) + + show_input_batch(batch) + + print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_batch_and_neighbors_calib(dataset, sampler, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + + for epoch in range(10): + + for batch_i, input_list in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Pause simulating computations + time.sleep(0.01) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> Average timings (ms/batch) {:8.2f} {:8.2f} ' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +class ModelNet40WorkerInitDebug: + """Callable class that Initializes workers.""" + + def __init__(self, dataset): + self.dataset = dataset + return + + def __call__(self, worker_id): + + # Print workers info + worker_info = get_worker_info() + print(worker_info) + + # Get associated dataset + dataset = worker_info.dataset # the dataset copy in this worker process + + # In windows, 
each worker has its own copy of the dataset. In Linux, this is shared in memory + print(dataset.input_labels.__array_interface__['data']) + print(worker_info.dataset.input_labels.__array_interface__['data']) + print(self.dataset.input_labels.__array_interface__['data']) + + # configure the dataset to only process the split workload + + return + diff --git a/competing_methods/my_KPConv/datasets/S3DIS.py b/competing_methods/my_KPConv/datasets/S3DIS.py new file mode 100644 index 00000000..dfbcf513 --- /dev/null +++ b/competing_methods/my_KPConv/datasets/S3DIS.py @@ -0,0 +1,1608 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling S3DIS dataset. +# Implements a Dataset, a Sampler, and a collate_fn +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import time +import numpy as np +import pickle +import torch +import math +from multiprocessing import Lock + + +# OS functions +from os import listdir +from os.path import exists, join, isdir + +# Dataset parent class +from datasets.common import PointCloudDataset +from torch.utils.data import Sampler, get_worker_info +from utils.mayavi_visu import * + +from datasets.common import grid_subsampling +from utils.config import bcolors + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Dataset class definition +# \******************************/ + + +class S3DISDataset(PointCloudDataset): + """Class to handle 
S3DIS dataset.""" + + def __init__(self, config, set='training', use_potentials=True, load_data=True): + """ + This dataset is small enough to be stored in-memory, so load all point clouds here + """ + PointCloudDataset.__init__(self, 'S3DIS') + + ############ + # Parameters + ############ + + # Dict from labels to names + self.label_to_names = {0: 'ceiling', + 1: 'floor', + 2: 'wall', + 3: 'beam', + 4: 'column', + 5: 'window', + 6: 'door', + 7: 'chair', + 8: 'table', + 9: 'bookcase', + 10: 'sofa', + 11: 'board', + 12: 'clutter'} + + # Initialize a bunch of variables concerning class labels + self.init_labels() + + # List of classes ignored during training (can be empty) + self.ignored_labels = np.array([]) + + # Dataset folder + self.path = '../../Data/S3DIS' + + # Type of task conducted on this dataset + self.dataset_task = 'cloud_segmentation' + + # Update number of classes and dataset task in configuration + config.num_classes = self.num_classes + config.dataset_task = self.dataset_task + + # Parameters from config + self.config = config + + # Training or test set + self.set = set + + # Using potential or random epoch generation + self.use_potentials = use_potentials + + # Path of the training files + self.train_path = 'original_ply' + + # List of files to process + ply_path = join(self.path, self.train_path) + + # Areas and their train/validation split assignment + self.cloud_names = ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_6'] + self.all_splits = [0, 1, 2, 3, 4, 5] + self.validation_split = 4 + + # Number of spheres used per epoch + if self.set == 'training': + self.epoch_n = config.epoch_steps * config.batch_num + elif self.set in ['validation', 'test', 'ERF']: + self.epoch_n = config.validation_size * config.batch_num + else: + raise ValueError('Unknown set for S3DIS data: ', self.set) + + # Stop here if data is not needed + if not load_data: + return + + ################### + # Prepare ply files + ################### + + self.prepare_S3DIS_ply() + + ################ + # 
Load ply files + ################ + + # List of training files + self.files = [] + for i, f in enumerate(self.cloud_names): + if self.set == 'training': + if self.all_splits[i] != self.validation_split: + self.files += [join(ply_path, f + '.ply')] + elif self.set in ['validation', 'test', 'ERF']: + if self.all_splits[i] == self.validation_split: + self.files += [join(ply_path, f + '.ply')] + else: + raise ValueError('Unknown set for S3DIS data: ', self.set) + + if self.set == 'training': + self.cloud_names = [f for i, f in enumerate(self.cloud_names) + if self.all_splits[i] != self.validation_split] + elif self.set in ['validation', 'test', 'ERF']: + self.cloud_names = [f for i, f in enumerate(self.cloud_names) + if self.all_splits[i] == self.validation_split] + + if 0 < self.config.first_subsampling_dl <= 0.01: + raise ValueError('subsampling_parameter too low (should be over 1 cm)') + + # Initiate containers + self.input_trees = [] + self.input_colors = [] + self.input_labels = [] + self.pot_trees = [] + self.num_clouds = 0 + self.test_proj = [] + self.validation_labels = [] + + # Start loading + self.load_subsampled_clouds() + + ############################ + # Batch selection parameters + ############################ + + # Initialize value for batch limit (max number of points per batch). 
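The `batch_limit` tensor below is placed in shared memory with `share_memory_()`, so the value tuned during calibration in the main process is visible to every DataLoader worker. A stdlib stand-in for the same mechanism, using `multiprocessing.Value` (a hypothetical sketch, not the code's actual shared torch tensor):

```python
from multiprocessing import Value

# A double in shared memory: forked DataLoader workers inherit the same
# underlying buffer, so an update made during calibration is seen everywhere.
batch_limit = Value('d', 1.0)

def raise_limit(limit, amount):
    with limit.get_lock():      # guard concurrent updates from several processes
        limit.value += amount

raise_limit(batch_limit, 500.0)
assert batch_limit.value == 501.0
```

The torch equivalent keeps the value as a one-element float tensor instead, which lets the dataset update it in place with ordinary tensor arithmetic.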
+ self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + + # Initialize potentials + if use_potentials: + self.potentials = [] + self.min_potentials = [] + self.argmin_potentials = [] + for i, tree in enumerate(self.pot_trees): + self.potentials += [torch.from_numpy(np.random.rand(tree.data.shape[0]) * 1e-3)] + min_ind = int(torch.argmin(self.potentials[-1])) + self.argmin_potentials += [min_ind] + self.min_potentials += [float(self.potentials[-1][min_ind])] + + # Share potential memory + self.argmin_potentials = torch.from_numpy(np.array(self.argmin_potentials, dtype=np.int64)) + self.min_potentials = torch.from_numpy(np.array(self.min_potentials, dtype=np.float64)) + self.argmin_potentials.share_memory_() + self.min_potentials.share_memory_() + for i, _ in enumerate(self.pot_trees): + self.potentials[i].share_memory_() + + self.worker_waiting = torch.tensor([0 for _ in range(config.input_threads)], dtype=torch.int32) + self.worker_waiting.share_memory_() + self.epoch_inds = None + self.epoch_i = 0 + + else: + self.potentials = None + self.min_potentials = None + self.argmin_potentials = None + N = config.epoch_steps * config.batch_num + self.epoch_inds = torch.from_numpy(np.zeros((2, N), dtype=np.int64)) + self.epoch_i = torch.from_numpy(np.zeros((1,), dtype=np.int64)) + self.epoch_i.share_memory_() + self.epoch_inds.share_memory_() + + self.worker_lock = Lock() + + # For ERF visualization, we want only one cloud per batch and no randomness + if self.set == 'ERF': + self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + np.random.seed(42) + + return + + def __len__(self): + """ + Return the length of data here + """ + return len(self.cloud_names) + + def __getitem__(self, batch_i): + """ + The main thread gives a list of indices to load a batch. Each worker is going to work in parallel to load a + different list of indices. 
+ """ + + if self.use_potentials: + return self.potential_item(batch_i) + else: + return self.random_item(batch_i) + + def potential_item(self, batch_i, debug_workers=False): + + t = [time.time()] + + # Initiate concatanation lists + p_list = [] + f_list = [] + l_list = [] + i_list = [] + pi_list = [] + ci_list = [] + s_list = [] + R_list = [] + batch_n = 0 + + info = get_worker_info() + if info is not None: + wid = info.id + else: + wid = None + + while True: + + t += [time.time()] + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}X{:} '.format(bcolors.FAIL, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 0 + + with self.worker_lock: + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}v{:} '.format(bcolors.OKGREEN, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 1 + + # Get potential minimum + cloud_ind = int(torch.argmin(self.min_potentials)) + point_ind = int(self.argmin_potentials[cloud_ind]) + + # Get potential points from tree structure + pot_points = np.array(self.pot_trees[cloud_ind].data, copy=False) + + # Center point of input region + center_point = pot_points[point_ind, :].reshape(1, -1) + + # Add a small noise to center point + if self.set != 'ERF': + center_point += np.random.normal(scale=self.config.in_radius / 10, size=center_point.shape) + + # Indices of points in input region + pot_inds, dists = self.pot_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius, + return_distance=True) + + d2s = np.square(dists[0]) + pot_inds = pot_inds[0] + + # Update potentials (Tukey 
weights) + if self.set != 'ERF': + tukeys = np.square(1 - d2s / np.square(self.config.in_radius)) + tukeys[d2s > np.square(self.config.in_radius)] = 0 + self.potentials[cloud_ind][pot_inds] += tukeys + min_ind = torch.argmin(self.potentials[cloud_ind]) + self.min_potentials[[cloud_ind]] = self.potentials[cloud_ind][min_ind] + self.argmin_potentials[[cloud_ind]] = min_ind + + t += [time.time()] + + # Get points from tree structure + points = np.array(self.input_trees[cloud_ind].data, copy=False) + + + # Indices of points in input region + input_inds = self.input_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius)[0] + + t += [time.time()] + + # Number collected + n = input_inds.shape[0] + + # Collect labels and colors + input_points = (points[input_inds] - center_point).astype(np.float32) + input_colors = self.input_colors[cloud_ind][input_inds] + if self.set in ['test', 'ERF']: + input_labels = np.zeros(input_points.shape[0]) + else: + input_labels = self.input_labels[cloud_ind][input_inds] + input_labels = np.array([self.label_to_idx[l] for l in input_labels]) + + t += [time.time()] + + # Data augmentation + input_points, scale, R = self.augmentation_transform(input_points) + + # Color augmentation + if np.random.rand() > self.config.augment_color: + input_colors *= 0 + + # Get original height as additional feature + input_features = np.hstack((input_colors, input_points[:, 2:] + center_point[:, 2:])).astype(np.float32) + + t += [time.time()] + + # Stack batch + p_list += [input_points] + f_list += [input_features] + l_list += [input_labels] + pi_list += [input_inds] + i_list += [point_ind] + ci_list += [cloud_ind] + s_list += [scale] + R_list += [R] + + # Update batch size + batch_n += n + + # In case batch is full, stop + if batch_n > int(self.batch_limit): + break + + # Randomly drop some points (act as an augmentation process and a safety for GPU memory consumption) + # if n > int(self.batch_limit): + # input_inds = 
np.random.choice(input_inds, size=int(self.batch_limit) - 1, replace=False) + # n = input_inds.shape[0] + + ################### + # Concatenate batch + ################### + + stacked_points = np.concatenate(p_list, axis=0) + features = np.concatenate(f_list, axis=0) + labels = np.concatenate(l_list, axis=0) + point_inds = np.array(i_list, dtype=np.int32) + cloud_inds = np.array(ci_list, dtype=np.int32) + input_inds = np.concatenate(pi_list, axis=0) + stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32) + scales = np.array(s_list, dtype=np.float32) + rots = np.stack(R_list, axis=0) + + # Input features + stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32) + if self.config.in_features_dim == 1: + pass + elif self.config.in_features_dim == 4: + stacked_features = np.hstack((stacked_features, features[:, :3])) + elif self.config.in_features_dim == 5: + stacked_features = np.hstack((stacked_features, features)) + else: + raise ValueError('Only accepted input dimensions are 1, 4 and 5 (without and with absolute height)') + + ####################### + # Create network inputs + ####################### + # + # Points, neighbors, pooling indices for each layer + # + + t += [time.time()] + + # Get the whole input list + input_list = self.segmentation_inputs(stacked_points, + stacked_features, + labels, + stack_lengths) + + t += [time.time()] + + # Add scale and rotation for testing + input_list += [scales, rots, cloud_inds, point_inds, input_inds] + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}0{:} '.format(bcolors.OKBLUE, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 2 + + t += [time.time()] + + # Display timings + debugT = False + if debugT: + print('\n************************\n') + print('Timings:') + ti = 
0 + N = 5 + mess = 'Init ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Pots ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Sphere .... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Collect ... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Augment ... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += N * (len(stack_lengths) - 1) + 1 + print('concat .... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('input ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('stack ..... 
{:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('\n************************\n') + return input_list + + def random_item(self, batch_i): + + # Initiate concatenation lists + p_list = [] + f_list = [] + l_list = [] + i_list = [] + pi_list = [] + ci_list = [] + s_list = [] + R_list = [] + batch_n = 0 + + while True: + + with self.worker_lock: + + # Get next cloud and point indices for this epoch + cloud_ind = int(self.epoch_inds[0, self.epoch_i]) + point_ind = int(self.epoch_inds[1, self.epoch_i]) + + # Update epoch index + self.epoch_i += 1 + + # Get points from tree structure + points = np.array(self.input_trees[cloud_ind].data, copy=False) + + # Center point of input region + center_point = points[point_ind, :].reshape(1, -1) + + # Add a small noise to center point + if self.set != 'ERF': + center_point += np.random.normal(scale=self.config.in_radius / 10, size=center_point.shape) + + # Indices of points in input region + input_inds = self.input_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius)[0] + + # Number collected + n = input_inds.shape[0] + + # Collect labels and colors + input_points = (points[input_inds] - center_point).astype(np.float32) + input_colors = self.input_colors[cloud_ind][input_inds] + if self.set in ['test', 'ERF']: + input_labels = np.zeros(input_points.shape[0]) + else: + input_labels = self.input_labels[cloud_ind][input_inds] + input_labels = np.array([self.label_to_idx[l] for l in input_labels]) + + # Data augmentation + input_points, scale, R = self.augmentation_transform(input_points) + + # Color augmentation + if np.random.rand() > self.config.augment_color: + input_colors *= 0 + + # Get original height as additional feature + input_features = np.hstack((input_colors, input_points[:, 2:] + center_point[:, 2:])).astype(np.float32) + + # Stack batch + p_list += [input_points] + f_list += [input_features] + l_list += [input_labels] + pi_list += [input_inds] + i_list += [point_ind] + ci_list += [cloud_ind] + s_list += [scale] + 
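`augmentation_transform` returns the applied scale and rotation alongside the transformed points, so the transform can be undone when mapping test-time predictions back. A hypothetical sketch of that contract (random z-rotation plus isotropic scale; names and ranges are illustrative, not the repo's actual implementation):

```python
import numpy as np

def augment(points, rng):
    # Random rotation around the vertical axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s, c, 0.0],
                  [0.0, 0.0, 1.0]], dtype=np.float32)
    # Random isotropic scale.
    scale = rng.uniform(0.9, 1.1)
    # Return the transform parameters so they can be inverted later.
    return (points @ R.T) * scale, scale, R

rng = np.random.default_rng(0)
pts = rng.random((100, 3), dtype=np.float32)
aug, scale, R = augment(pts, rng)

# The transform is invertible: undo the scale, then the rotation.
recovered = (aug / scale) @ R
assert np.allclose(recovered, pts, atol=1e-4)
```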
R_list += [R] + + # Update batch size + batch_n += n + + # In case batch is full, stop + if batch_n > int(self.batch_limit): + break + + # Randomly drop some points (act as an augmentation process and a safety for GPU memory consumption) + # if n > int(self.batch_limit): + # input_inds = np.random.choice(input_inds, size=int(self.batch_limit) - 1, replace=False) + # n = input_inds.shape[0] + + ################### + # Concatenate batch + ################### + + stacked_points = np.concatenate(p_list, axis=0) + features = np.concatenate(f_list, axis=0) + labels = np.concatenate(l_list, axis=0) + point_inds = np.array(i_list, dtype=np.int32) + cloud_inds = np.array(ci_list, dtype=np.int32) + input_inds = np.concatenate(pi_list, axis=0) + stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32) + scales = np.array(s_list, dtype=np.float32) + rots = np.stack(R_list, axis=0) + + # Input features + stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32) + if self.config.in_features_dim == 1: + pass + elif self.config.in_features_dim == 4: + stacked_features = np.hstack((stacked_features, features[:, :3])) + elif self.config.in_features_dim == 5: + stacked_features = np.hstack((stacked_features, features)) + else: + raise ValueError('Only accepted input dimensions are 1, 4 and 5 (without and with absolute height)') + + ####################### + # Create network inputs + ####################### + # + # Points, neighbors, pooling indices for each layer + # + + # Get the whole input list + input_list = self.segmentation_inputs(stacked_points, + stacked_features, + labels, + stack_lengths) + + # Add scale and rotation for testing + input_list += [scales, rots, cloud_inds, point_inds, input_inds] + + return input_list + + def prepare_S3DIS_ply(self): + + print('\nPreparing ply files') + t0 = time.time() + + # Folder for the ply files + ply_path = join(self.path, self.train_path) + if not exists(ply_path): + makedirs(ply_path) + + for cloud_name in 
self.cloud_names: + + # Pass if the cloud has already been computed + cloud_file = join(ply_path, cloud_name + '.ply') + if exists(cloud_file): + continue + + # Get rooms of the current cloud + cloud_folder = join(self.path, cloud_name) + room_folders = [join(cloud_folder, room) for room in listdir(cloud_folder) if isdir(join(cloud_folder, room))] + + # Initiate containers + cloud_points = np.empty((0, 3), dtype=np.float32) + cloud_colors = np.empty((0, 3), dtype=np.uint8) + cloud_classes = np.empty((0, 1), dtype=np.int32) + + # Loop over rooms + for i, room_folder in enumerate(room_folders): + + print('Cloud %s - Room %d/%d : %s' % (cloud_name, i+1, len(room_folders), room_folder.split('/')[-1])) + + for object_name in listdir(join(room_folder, 'Annotations')): + + if object_name[-4:] == '.txt': + + # Text file containing point of the object + object_file = join(room_folder, 'Annotations', object_name) + + # Object class and ID + tmp = object_name[:-4].split('_')[0] + if tmp in self.name_to_label: + object_class = self.name_to_label[tmp] + elif tmp in ['stairs']: + object_class = self.name_to_label['clutter'] + else: + raise ValueError('Unknown object name: ' + str(tmp)) + + # Correct bug in S3DIS dataset + if object_name == 'ceiling_1.txt': + with open(object_file, 'r') as f: + lines = f.readlines() + for l_i, line in enumerate(lines): + if '103.0\x100000' in line: + lines[l_i] = line.replace('103.0\x100000', '103.000000') + with open(object_file, 'w') as f: + f.writelines(lines) + + # Read object points and colors + object_data = np.loadtxt(object_file, dtype=np.float32) + + # Stack all data + cloud_points = np.vstack((cloud_points, object_data[:, 0:3].astype(np.float32))) + cloud_colors = np.vstack((cloud_colors, object_data[:, 3:6].astype(np.uint8))) + object_classes = np.full((object_data.shape[0], 1), object_class, dtype=np.int32) + cloud_classes = np.vstack((cloud_classes, object_classes)) + + # Save as ply + write_ply(cloud_file, + (cloud_points, 
cloud_colors, cloud_classes), + ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + + print('Done in {:.1f}s'.format(time.time() - t0)) + return + + def load_subsampled_clouds(self): + + # Parameter + dl = self.config.first_subsampling_dl + + # Create path for files + tree_path = join(self.path, 'input_{:.3f}'.format(dl)) + if not exists(tree_path): + makedirs(tree_path) + + ############## + # Load KDTrees + ############## + + for i, file_path in enumerate(self.files): + + # Restart timer + t0 = time.time() + + # Get cloud name + cloud_name = self.cloud_names[i] + + # Name of the input files + KDTree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name)) + sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name)) + + # Check if inputs have already been computed + if exists(KDTree_file): + print('\nFound KDTree for cloud {:s}, subsampled at {:.3f}'.format(cloud_name, dl)) + + # read ply with data + data = read_ply(sub_ply_file) + sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T + sub_labels = data['class'] + + # Read pkl with search tree + with open(KDTree_file, 'rb') as f: + search_tree = pickle.load(f) + + else: + print('\nPreparing KDTree for cloud {:s}, subsampled at {:.3f}'.format(cloud_name, dl)) + + # Read ply file + data = read_ply(file_path) + points = np.vstack((data['x'], data['y'], data['z'])).T + colors = np.vstack((data['red'], data['green'], data['blue'])).T + labels = data['class'] + + # Subsample cloud + sub_points, sub_colors, sub_labels = grid_subsampling(points, + features=colors, + labels=labels, + sampleDl=dl) + + # Rescale float color and squeeze label + sub_colors = sub_colors / 255 + sub_labels = np.squeeze(sub_labels) + + # Get chosen neighborhoods + search_tree = KDTree(sub_points, leaf_size=10) + #search_tree = nnfln.KDTree(n_neighbors=1, metric='L2', leaf_size=10) + #search_tree.fit(sub_points) + + # Save KDTree + with open(KDTree_file, 'wb') as f: + pickle.dump(search_tree, f) + + # Save ply + 
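`grid_subsampling` above is a compiled C++ wrapper. As a rough illustration of what it computes, here is a hypothetical pure-NumPy voxel-grid subsampling that keeps one barycenter per occupied voxel (the real wrapper also averages features and votes on labels):

```python
import numpy as np

def grid_subsample(points, dl):
    # Assign each point to a cubic voxel of side dl, keep one barycenter per voxel.
    voxels = np.floor(points / dl).astype(np.int64)
    _, inv = np.unique(voxels, axis=0, return_inverse=True)
    inv = inv.ravel()                          # guard against NumPy version quirks
    counts = np.bincount(inv).astype(np.float64)
    sub = np.zeros((counts.shape[0], points.shape[1]))
    for d in range(points.shape[1]):
        sub[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return sub

pts = np.array([[0.0, 0.0, 0.0],
                [0.01, 0.0, 0.0],   # falls in the same 4 cm voxel as the first point
                [1.0, 1.0, 1.0]])
sub = grid_subsample(pts, 0.04)
assert sub.shape == (2, 3)          # the two nearby points merged into one voxel
```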
write_ply(sub_ply_file, + [sub_points, sub_colors, sub_labels], + ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + + # Fill data containers + self.input_trees += [search_tree] + self.input_colors += [sub_colors] + self.input_labels += [sub_labels] + + size = sub_colors.shape[0] * 4 * 7 + print('{:.1f} MB loaded in {:.1f}s'.format(size * 1e-6, time.time() - t0)) + + ############################ + # Coarse potential locations + ############################ + + # Only necessary for validation and test sets + if self.use_potentials: + print('\nPreparing potentials') + + # Restart timer + t0 = time.time() + + pot_dl = self.config.in_radius / 10 + cloud_ind = 0 + + for i, file_path in enumerate(self.files): + + # Get cloud name + cloud_name = self.cloud_names[i] + + # Name of the input files + coarse_KDTree_file = join(tree_path, '{:s}_coarse_KDTree.pkl'.format(cloud_name)) + + # Check if inputs have already been computed + if exists(coarse_KDTree_file): + # Read pkl with search tree + with open(coarse_KDTree_file, 'rb') as f: + search_tree = pickle.load(f) + + else: + # Subsample cloud + sub_points = np.array(self.input_trees[cloud_ind].data, copy=False) + coarse_points = grid_subsampling(sub_points.astype(np.float32), sampleDl=pot_dl) + + # Get chosen neighborhoods + search_tree = KDTree(coarse_points, leaf_size=10) + + # Save KDTree + with open(coarse_KDTree_file, 'wb') as f: + pickle.dump(search_tree, f) + + # Fill data containers + self.pot_trees += [search_tree] + cloud_ind += 1 + + print('Done in {:.1f}s'.format(time.time() - t0)) + + ###################### + # Reprojection indices + ###################### + + # Get number of clouds + self.num_clouds = len(self.input_trees) + + # Only necessary for validation and test sets + if self.set in ['validation', 'test']: + + print('\nPreparing reprojection indices for testing') + + # Get validation/test reprojection indices + for i, file_path in enumerate(self.files): + + # Restart timer + t0 = time.time() + + # Get 
info on this cloud + cloud_name = self.cloud_names[i] + + # File name for saving + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + + # Try to load previous indices + if exists(proj_file): + with open(proj_file, 'rb') as f: + proj_inds, labels = pickle.load(f) + else: + data = read_ply(file_path) + points = np.vstack((data['x'], data['y'], data['z'])).T + labels = data['class'] + + # Compute projection inds + idxs = self.input_trees[i].query(points, return_distance=False) + #dists, idxs = self.input_trees[i_cloud].kneighbors(points) + proj_inds = np.squeeze(idxs).astype(np.int32) + + # Save + with open(proj_file, 'wb') as f: + pickle.dump([proj_inds, labels], f) + + self.test_proj += [proj_inds] + self.validation_labels += [labels] + print('{:s} done in {:.1f}s'.format(cloud_name, time.time() - t0)) + + print() + return + + def load_evaluation_points(self, file_path): + """ + Load points (from test or validation split) on which the metrics should be evaluated + """ + + # Get original points + data = read_ply(file_path) + return np.vstack((data['x'], data['y'], data['z'])).T + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility classes definition +# \********************************/ + + +class S3DISSampler(Sampler): + """Sampler for S3DIS""" + + def __init__(self, dataset: S3DISDataset): + Sampler.__init__(self, dataset) + + # Dataset used by the sampler (no copy is made in memory) + self.dataset = dataset + + # Number of step per epoch + if dataset.set == 'training': + self.N = dataset.config.epoch_steps + else: + self.N = dataset.config.validation_size + + return + + def __iter__(self): + """ + Yield next batch indices here. 
In this dataset, this is a dummy sampler that yields the index of a batch element (an input sphere) in the epoch, instead of a list of point indices + """ + + if not self.dataset.use_potentials: + + # Reset current epoch index + self.dataset.epoch_i *= 0 + self.dataset.epoch_inds *= 0 + + # Initiate container for indices + all_epoch_inds = np.zeros((2, 0), dtype=np.int32) + + # Number of sphere centers taken per class in each cloud + num_centers = self.N * self.dataset.config.batch_num + random_pick_n = int(np.ceil(num_centers / (self.dataset.num_clouds * self.dataset.config.num_classes))) + + # Choose random points of each class for each cloud + for cloud_ind, cloud_labels in enumerate(self.dataset.input_labels): + epoch_indices = np.empty((0,), dtype=np.int32) + for label_ind, label in enumerate(self.dataset.label_values): + if label not in self.dataset.ignored_labels: + label_indices = np.where(np.equal(cloud_labels, label))[0] + if len(label_indices) <= random_pick_n: + epoch_indices = np.hstack((epoch_indices, label_indices)) + elif len(label_indices) < 50 * random_pick_n: + new_randoms = np.random.choice(label_indices, size=random_pick_n, replace=False) + epoch_indices = np.hstack((epoch_indices, new_randoms.astype(np.int32))) + else: + rand_inds = [] + while len(rand_inds) < random_pick_n: + rand_inds = np.unique(np.random.choice(label_indices, size=5 * random_pick_n, replace=True)) + epoch_indices = np.hstack((epoch_indices, rand_inds[:random_pick_n].astype(np.int32))) + + # Stack those indices with the cloud index + epoch_indices = np.vstack((np.full(epoch_indices.shape, cloud_ind, dtype=np.int32), epoch_indices)) + + # Update the global index container + all_epoch_inds = np.hstack((all_epoch_inds, epoch_indices)) + + # Random permutation of the indices + random_order = np.random.permutation(all_epoch_inds.shape[1]) + all_epoch_inds = all_epoch_inds[:, random_order].astype(np.int64) + + # Update epoch inds + self.dataset.epoch_inds += 
torch.from_numpy(all_epoch_inds[:, :num_centers]) + + # Generator loop + for i in range(self.N): + yield i + + def __len__(self): + """ + The number of yielded samples is variable + """ + return self.N + + def fast_calib(self): + """ + This method calibrates the batch sizes while ensuring the potentials are well initialized. Indeed, on a dataset + like Semantic3D, before the potentials have been updated over the whole dataset, there is a chance that all the + dense areas are picked at the beginning, and we would end up with very large batches of small point clouds + :return: + """ + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.config.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + breaking = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + + for epoch in range(10): + for i, test in enumerate(self): + + # New time + t = t[-1:] + t += [time.time()] + + # batch length + b = len(test) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergence check + smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # Update batch limit with P controller + self.dataset.batch_limit += Kp * error + + # Finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + # Average timing + t += [time.time()] + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}, // {:.1f}ms {:.1f}ms' + 
print(message.format(i, + estim_b, + int(self.dataset.batch_limit), + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + if breaking: + break + + def calibration(self, dataloader, untouched_ratio=0.9, verbose=False, force_redo=False): + """ + Method performing batch and neighbors calibration. + Batch calibration: Set "batch_limit" (the maximum number of points allowed in every batch) so that the + average batch size (number of stacked pointclouds) is the one asked. + Neighbors calibration: Set the "neighborhood_limits" (the maximum number of neighbors allowed in convolutions) + so that 90% of the neighborhoods remain untouched. There is a limit for each layer. + """ + + ############################## + # Previously saved calibration + ############################## + + print('\nStarting Calibration (use verbose=True for more details)') + t0 = time.time() + + redo = force_redo + + # Batch limit + # *********** + + # Load batch_limit dictionary + batch_lim_file = join(self.dataset.path, 'batch_limits.pkl') + if exists(batch_lim_file): + with open(batch_lim_file, 'rb') as file: + batch_lim_dict = pickle.load(file) + else: + batch_lim_dict = {} + + # Check if the batch limit associated with current parameters exists + if self.dataset.use_potentials: + sampler_method = 'potentials' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}'.format(sampler_method, + self.dataset.config.in_radius, + self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + if not redo and key in batch_lim_dict: + self.dataset.batch_limit[0] = batch_lim_dict[key] + else: + redo = True + + if verbose: + print('\nPrevious calibration found:') + print('Check batch limit dictionary') + if key in batch_lim_dict: + color = bcolors.OKGREEN + v = str(int(batch_lim_dict[key])) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + # Neighbors limit + # *************** + + # Load neighb_limits dictionary + neighb_lim_file = join(self.dataset.path, 'neighbors_limits.pkl') + if exists(neighb_lim_file): + with open(neighb_lim_file, 'rb') as file: + neighb_lim_dict = pickle.load(file) + else: + neighb_lim_dict = {} + + # Check if the limit associated with current parameters exists (for each layer) + neighb_limits = [] + for layer_ind in range(self.dataset.config.num_layers): + + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + + key = '{:.3f}_{:.3f}'.format(dl, r) + if key in neighb_lim_dict: + neighb_limits += [neighb_lim_dict[key]] + + if not redo and len(neighb_limits) == self.dataset.config.num_layers: + self.dataset.neighborhood_limits = neighb_limits + else: + redo = True + + if verbose: + print('Check neighbors limit dictionary') + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + + if key in neighb_lim_dict: + color = bcolors.OKGREEN + v = str(neighb_lim_dict[key]) + else: + color = bcolors.FAIL + v = '?' 
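The batch-limit search in `calibration` (and in `fast_calib` above) is a proportional controller: each step nudges `batch_limit` by `Kp` times the noisy error `target_b - b`, while a low-pass filter of the observed batch size decides when it has converged. A self-contained sketch of that loop, using a hypothetical `simulate_batch` in place of the real dataloader (the linear-plus-noise toy model is an assumption for illustration, not the real batching behavior):

```python
import numpy as np

def calibrate_batch_limit(simulate_batch, target_b, Kp=100.0, max_steps=10000):
    """Tune a point-count limit so the average batch size reaches target_b."""
    batch_limit = 1.0
    estim_b = 0.0              # low-pass filtered batch size
    low_pass_T = 10.0
    finer = False
    smooth_errors = []
    for _ in range(max_steps):
        b = simulate_batch(batch_limit)
        estim_b += (b - estim_b) / low_pass_T       # low pass filter
        smooth_errors = (smooth_errors + [target_b - estim_b])[-10:]
        batch_limit += Kp * (target_b - b)          # P-controller update on noisy error
        if not finer and abs(estim_b - target_b) < 1:
            low_pass_T, finer = 100.0, True         # finer filter when closing in
        if finer and max(np.abs(smooth_errors)) < 0.1:
            break                                   # smooth error small: converged
    return batch_limit

# Toy dataloader: average batch size grows linearly with the limit, plus noise
rng = np.random.default_rng(0)
limit = calibrate_batch_limit(lambda lim: lim / 1000.0 + rng.normal(0.0, 0.1),
                              target_b=6)
print(round(limit / 1000.0))   # 6
```

With `Kp = 100` the limit overshoots and settles within a few hundred steps; the switch to a slower filter near the target is what makes the 10-sample convergence window meaningful.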
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + if redo: + + ############################ + # Neighbors calib parameters + ############################ + + # From the config parameters, compute an upper bound on the number of neighbors in a neighborhood + hist_n = int(np.ceil(4 / 3 * np.pi * (self.dataset.config.deform_radius + 1) ** 3)) + + # Histogram of neighborhood sizes + neighb_hists = np.zeros((self.dataset.config.num_layers, hist_n), dtype=np.int32) + + ######################## + # Batch calib parameters + ######################## + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.config.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + # Loop parameters + last_display = time.time() + i = 0 + breaking = False + + ##################### + # Perform calibration + ##################### + + for epoch in range(10): + for batch_i, batch in enumerate(dataloader): + + # Update neighborhood histogram + counts = [np.sum(neighb_mat.numpy() < neighb_mat.shape[0], axis=1) for neighb_mat in batch.neighbors] + hists = [np.bincount(c, minlength=hist_n)[:hist_n] for c in counts] + neighb_hists += np.vstack(hists) + + # batch length + b = len(batch.cloud_inds) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergence check + smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # Update batch limit with P controller + self.dataset.batch_limit += Kp * error + + # finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + i += 1 + t = time.time() + + # Console display 
(only one per second) + if verbose and (t - last_display) > 1.0: + last_display = t + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}' + print(message.format(i, + estim_b, + int(self.dataset.batch_limit))) + + if breaking: + break + + # Use collected neighbor histogram to get neighbors limit + cumsum = np.cumsum(neighb_hists.T, axis=0) + percentiles = np.sum(cumsum < (untouched_ratio * cumsum[hist_n - 1, :]), axis=0) + self.dataset.neighborhood_limits = percentiles + + if verbose: + + # Crop histogram + while np.sum(neighb_hists[:, -1]) == 0: + neighb_hists = neighb_hists[:, :-1] + hist_n = neighb_hists.shape[1] + + print('\n**************************************************\n') + line0 = 'neighbors_num ' + for layer in range(neighb_hists.shape[0]): + line0 += '| layer {:2d} '.format(layer) + print(line0) + for neighb_size in range(hist_n): + line0 = ' {:4d} '.format(neighb_size) + for layer in range(neighb_hists.shape[0]): + if neighb_size > percentiles[layer]: + color = bcolors.FAIL + else: + color = bcolors.OKGREEN + line0 += '|{:}{:10d}{:} '.format(color, + neighb_hists[layer, neighb_size], + bcolors.ENDC) + + print(line0) + + print('\n**************************************************\n') + print('\nchosen neighbors limits: ', percentiles) + print() + + # Save batch_limit dictionary + if self.dataset.use_potentials: + sampler_method = 'potentials' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}'.format(sampler_method, + self.dataset.config.in_radius, + self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + batch_lim_dict[key] = float(self.dataset.batch_limit) + with open(batch_lim_file, 'wb') as file: + pickle.dump(batch_lim_dict, file) + + # Save neighb_limit dictionary + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2 ** layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * 
self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + neighb_lim_dict[key] = self.dataset.neighborhood_limits[layer_ind] + with open(neighb_lim_file, 'wb') as file: + pickle.dump(neighb_lim_dict, file) + + + print('Calibration done in {:.1f}s\n'.format(time.time() - t0)) + return + + +class S3DISCustomBatch: + """Custom batch definition with memory pinning for S3DIS""" + + def __init__(self, input_list): + + # Get rid of batch dimension + input_list = input_list[0] + + # Number of layers + L = (len(input_list) - 7) // 5 + + # Extract input tensors from the list of numpy array + ind = 0 + self.points = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.neighbors = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.pools = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.upsamples = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.lengths = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.features = torch.from_numpy(input_list[ind]) + ind += 1 + self.labels = torch.from_numpy(input_list[ind]) + ind += 1 + self.scales = torch.from_numpy(input_list[ind]) + ind += 1 + self.rots = torch.from_numpy(input_list[ind]) + ind += 1 + self.cloud_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.center_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.input_inds = torch.from_numpy(input_list[ind]) + + return + + def pin_memory(self): + """ + Manual pinning of the memory + """ + + self.points = [in_tensor.pin_memory() for in_tensor in self.points] + self.neighbors = [in_tensor.pin_memory() for in_tensor in self.neighbors] + self.pools = [in_tensor.pin_memory() for in_tensor in self.pools] + self.upsamples = [in_tensor.pin_memory() for in_tensor in self.upsamples] + self.lengths = [in_tensor.pin_memory() for in_tensor in self.lengths] + self.features = self.features.pin_memory() + 
self.labels = self.labels.pin_memory() + self.scales = self.scales.pin_memory() + self.rots = self.rots.pin_memory() + self.cloud_inds = self.cloud_inds.pin_memory() + self.center_inds = self.center_inds.pin_memory() + self.input_inds = self.input_inds.pin_memory() + + return self + + def to(self, device): + + self.points = [in_tensor.to(device) for in_tensor in self.points] + self.neighbors = [in_tensor.to(device) for in_tensor in self.neighbors] + self.pools = [in_tensor.to(device) for in_tensor in self.pools] + self.upsamples = [in_tensor.to(device) for in_tensor in self.upsamples] + self.lengths = [in_tensor.to(device) for in_tensor in self.lengths] + self.features = self.features.to(device) + self.labels = self.labels.to(device) + self.scales = self.scales.to(device) + self.rots = self.rots.to(device) + self.cloud_inds = self.cloud_inds.to(device) + self.center_inds = self.center_inds.to(device) + self.input_inds = self.input_inds.to(device) + + return self + + def unstack_points(self, layer=None): + """Unstack the points""" + return self.unstack_elements('points', layer) + + def unstack_neighbors(self, layer=None): + """Unstack the neighbors indices""" + return self.unstack_elements('neighbors', layer) + + def unstack_pools(self, layer=None): + """Unstack the pooling indices""" + return self.unstack_elements('pools', layer) + + def unstack_elements(self, element_name, layer=None, to_numpy=True): + """ + Return a list of the stacked elements in the batch at a certain layer. 
If no layer is given, then return all + layers + """ + + if element_name == 'points': + elements = self.points + elif element_name == 'neighbors': + elements = self.neighbors + elif element_name == 'pools': + elements = self.pools[:-1] + else: + raise ValueError('Unknown element name: {:s}'.format(element_name)) + + all_p_list = [] + for layer_i, layer_elems in enumerate(elements): + + if layer is None or layer == layer_i: + + i0 = 0 + p_list = [] + if element_name == 'pools': + lengths = self.lengths[layer_i+1] + else: + lengths = self.lengths[layer_i] + + for b_i, length in enumerate(lengths): + + elem = layer_elems[i0:i0 + length] + if element_name == 'neighbors': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= i0 + elif element_name == 'pools': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= torch.sum(self.lengths[layer_i][:b_i]) + i0 += length + + if to_numpy: + p_list.append(elem.numpy()) + else: + p_list.append(elem) + + if layer == layer_i: + return p_list + + all_p_list.append(p_list) + + return all_p_list + + +def S3DISCollate(batch_data): + return S3DISCustomBatch(batch_data) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Debug functions +# \*********************/ + + +def debug_upsampling(dataset, loader): + """Shows which labels are sampled according to strategy chosen""" + + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + + pc1 = batch.points[1].numpy() + pc2 = batch.points[2].numpy() + up1 = batch.upsamples[1].numpy() + + print(pc1.shape, '=>', pc2.shape) + print(up1.shape, np.max(up1)) + + pc2 = np.vstack((pc2, np.zeros_like(pc2[:1, :]))) + + # Get neighbors distance + p0 = pc1[10, :] + neighbs0 = up1[10, :] + neighbs0 = pc2[neighbs0, :] - p0 + d2 = np.sum(neighbs0 ** 2, axis=1) + + print(neighbs0.shape) + print(neighbs0[:5]) + print(d2[:5]) + + print('******************') + 
print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_timing(dataset, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + estim_b = dataset.config.batch_num + estim_N = 0 + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Update estim_b (low pass filter) + estim_b += (len(batch.cloud_inds) - estim_b) / 100 + estim_N += (batch.features.shape[0] - estim_N) / 10 + + # Pause simulating computations + time.sleep(0.05) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> (ms/batch) {:8.2f} {:8.2f} / batch = {:.2f} - {:.0f}' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1], + estim_b, + estim_N)) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_show_clouds(dataset, loader): + + + for epoch in range(10): + + clouds = [] + cloud_normals = [] + cloud_labels = [] + + L = dataset.config.num_layers + + for batch_i, batch in enumerate(loader): + + # Print characteristics of input tensors + print('\nPoints tensors') + for i in range(L): + print(batch.points[i].dtype, batch.points[i].shape) + print('\nNeighbors tensors') + for i in range(L): + print(batch.neighbors[i].dtype, batch.neighbors[i].shape) + print('\nPools tensors') + for i in range(L): + print(batch.pools[i].dtype, batch.pools[i].shape) + print('\nStack lengths') + for i in range(L): + print(batch.lengths[i].dtype, batch.lengths[i].shape) + print('\nFeatures') + 
print(batch.features.dtype, batch.features.shape) + print('\nLabels') + print(batch.labels.dtype, batch.labels.shape) + print('\nAugment Scales') + print(batch.scales.dtype, batch.scales.shape) + print('\nAugment Rotations') + print(batch.rots.dtype, batch.rots.shape) + print('\nCloud indices') + print(batch.cloud_inds.dtype, batch.cloud_inds.shape) + + print('\nAre input tensors pinned') + print(batch.neighbors[0].is_pinned()) + print(batch.neighbors[-1].is_pinned()) + print(batch.points[0].is_pinned()) + print(batch.points[-1].is_pinned()) + print(batch.labels.is_pinned()) + print(batch.scales.is_pinned()) + print(batch.rots.is_pinned()) + print(batch.cloud_inds.is_pinned()) + + show_input_batch(batch) + + print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_batch_and_neighbors_calib(dataset, loader): + """Timing of batch and neighbors calibration""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + + for epoch in range(10): + + for batch_i, input_list in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Pause simulating computations + time.sleep(0.01) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> Average timings (ms/batch) {:8.2f} {:8.2f} ' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) diff --git a/competing_methods/my_KPConv/datasets/SemanticKitti.py b/competing_methods/my_KPConv/datasets/SemanticKitti.py new file mode 100644 index 00000000..7a781e93 --- /dev/null +++ 
b/competing_methods/my_KPConv/datasets/SemanticKitti.py @@ -0,0 +1,1455 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling SemanticKitti dataset. +# Implements a Dataset, a Sampler, and a collate_fn +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import time +import numpy as np +import pickle +import torch +import yaml +from multiprocessing import Lock + + +# OS functions +from os import listdir +from os.path import exists, join, isdir + +# Dataset parent class +from datasets.common import * +from torch.utils.data import Sampler, get_worker_info +from utils.mayavi_visu import * +from utils.metrics import fast_confusion + +from datasets.common import grid_subsampling +from utils.config import bcolors + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Dataset class definition +# \******************************/ + + +class SemanticKittiDataset(PointCloudDataset): + """Class to handle SemanticKitti dataset.""" + + def __init__(self, config, set='training', balance_classes=True): + PointCloudDataset.__init__(self, 'SemanticKitti') + + ########################## + # Parameters for the files + ########################## + + # Dataset folder + self.path = '../../Data/SemanticKitti' + + # Type of task conducted on this dataset + self.dataset_task = 'slam_segmentation' + + # Training or test set + self.set = set + + # Get a list of sequences + if 
self.set == 'training': + self.sequences = ['{:02d}'.format(i) for i in range(11) if i != 8] + elif self.set == 'validation': + self.sequences = ['{:02d}'.format(i) for i in range(11) if i == 8] + elif self.set == 'test': + self.sequences = ['{:02d}'.format(i) for i in range(11, 22)] + else: + raise ValueError('Unknown set for SemanticKitti data: ', self.set) + + # List all files in each sequence + self.frames = [] + for seq in self.sequences: + velo_path = join(self.path, 'sequences', seq, 'velodyne') + frames = np.sort([vf[:-4] for vf in listdir(velo_path) if vf.endswith('.bin')]) + self.frames.append(frames) + + ########################### + # Object classes parameters + ########################### + + # Read labels + if config.n_frames == 1: + config_file = join(self.path, 'semantic-kitti.yaml') + elif config.n_frames > 1: + config_file = join(self.path, 'semantic-kitti-all.yaml') + else: + raise ValueError('number of frames has to be >= 1') + + with open(config_file, 'r') as stream: + doc = yaml.safe_load(stream) + all_labels = doc['labels'] + learning_map_inv = doc['learning_map_inv'] + learning_map = doc['learning_map'] + self.learning_map = np.zeros((np.max([k for k in learning_map.keys()]) + 1), dtype=np.int32) + for k, v in learning_map.items(): + self.learning_map[k] = v + + self.learning_map_inv = np.zeros((np.max([k for k in learning_map_inv.keys()]) + 1), dtype=np.int32) + for k, v in learning_map_inv.items(): + self.learning_map_inv[k] = v + + # Dict from labels to names + self.label_to_names = {k: all_labels[v] for k, v in learning_map_inv.items()} + + # Initiate a bunch of variables concerning class labels + self.init_labels() + + # List of classes ignored during training (can be empty) + self.ignored_labels = np.sort([0]) + + ################## + # Other parameters + ################## + + # Update number of class and data task in configuration + config.num_classes = self.num_classes + config.dataset_task = self.dataset_task + + # Parameters from 
config + self.config = config + + ################## + # Load calibration + ################## + + # Init variables + self.calibrations = [] + self.times = [] + self.poses = [] + self.all_inds = None + self.class_proportions = None + self.class_frames = [] + self.val_confs = [] + + # Load everything + self.load_calib_poses() + + ############################ + # Batch selection parameters + ############################ + + # Initialize value for batch limit (max number of points per batch). + self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + + # Initialize frame potentials + self.potentials = torch.from_numpy(np.random.rand(self.all_inds.shape[0]) * 0.1 + 0.1) + self.potentials.share_memory_() + + # If true, the same amount of frames is picked per class + self.balance_classes = balance_classes + + # Choose batch_num in_R and max_in_p depending on validation or training + if self.set == 'training': + self.batch_num = config.batch_num + self.max_in_p = config.max_in_points + self.in_R = config.in_radius + else: + self.batch_num = config.val_batch_num + self.max_in_p = config.max_val_points + self.in_R = config.val_radius + + # shared epoch indices and classes (in case we want class balanced sampler) + if set == 'training': + N = int(np.ceil(config.epoch_steps * self.batch_num * 1.1)) + else: + N = int(np.ceil(config.validation_size * self.batch_num * 1.1)) + self.epoch_i = torch.from_numpy(np.zeros((1,), dtype=np.int64)) + self.epoch_inds = torch.from_numpy(np.zeros((N,), dtype=np.int64)) + self.epoch_labels = torch.from_numpy(np.zeros((N,), dtype=np.int32)) + self.epoch_i.share_memory_() + self.epoch_inds.share_memory_() + self.epoch_labels.share_memory_() + + self.worker_waiting = torch.tensor([0 for _ in range(config.input_threads)], dtype=torch.int32) + self.worker_waiting.share_memory_() + self.worker_lock = Lock() + + return + + def __len__(self): + """ + Return the length of data here + """ + return len(self.frames) 
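The constructor above turns the sparse `{raw id -> training id}` mapping from `semantic-kitti.yaml` into a dense numpy lookup table (`self.learning_map`), so whole label arrays can be remapped with one fancy-indexing operation instead of a Python loop. A minimal sketch of the same trick (the ids below are toy values, not the real mapping):

```python
import numpy as np

# Toy subset of a learning map: raw sensor id -> contiguous training id
learning_map = {0: 0, 10: 1, 40: 2, 252: 1}   # e.g. a moving class folded onto its static one

# Dense LUT indexed by raw id, built the same way as self.learning_map above
lut = np.zeros(max(learning_map.keys()) + 1, dtype=np.int32)
for raw_id, train_id in learning_map.items():
    lut[raw_id] = train_id

raw_labels = np.array([10, 40, 0, 252, 10], dtype=np.int32)
train_labels = lut[raw_labels]   # one vectorized remap for a whole frame
print(train_labels)              # [1 2 0 1 1]
```

The LUT costs memory proportional to the largest raw id, but makes per-frame remapping O(n) with no dictionary lookups, which matters when every dataloader worker decodes full LiDAR frames.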
+ + def __getitem__(self, batch_i): + """ + The main thread gives a list of indices to load a batch. Each worker is going to work in parallel to load a + different list of indices. + """ + + t = [time.time()] + + # Initiate concatenation lists + p_list = [] + f_list = [] + l_list = [] + fi_list = [] + p0_list = [] + s_list = [] + R_list = [] + r_inds_list = [] + r_mask_list = [] + val_labels_list = [] + batch_n = 0 + + while True: + + t += [time.time()] + + with self.worker_lock: + + # Get potential minimum + ind = int(self.epoch_inds[self.epoch_i]) + wanted_label = int(self.epoch_labels[self.epoch_i]) + + # Update epoch index + self.epoch_i += 1 + + s_ind, f_ind = self.all_inds[ind] + + t += [time.time()] + + ######################### + # Merge n_frames together + ######################### + + # Initiate merged points + merged_points = np.zeros((0, 3), dtype=np.float32) + merged_labels = np.zeros((0,), dtype=np.int32) + merged_coords = np.zeros((0, 4), dtype=np.float32) + + # Get center of the first frame in world coordinates + p_origin = np.zeros((1, 4)) + p_origin[0, 3] = 1 + pose0 = self.poses[s_ind][f_ind] + p0 = p_origin.dot(pose0.T)[:, :3] + p0 = np.squeeze(p0) + o_pts = None + o_labels = None + + t += [time.time()] + + num_merged = 0 + f_inc = 0 + while num_merged < self.config.n_frames and f_ind - f_inc >= 0: + + # Current frame pose + pose = self.poses[s_ind][f_ind - f_inc] + + # Select frame only if center has moved far away (more than X meters). 
A negative value disables this check + X = -1.0 + if X > 0: + diff = p_origin.dot(pose.T)[:, :3] - p_origin.dot(pose0.T)[:, :3] + if num_merged > 0 and np.linalg.norm(diff) < num_merged * X: + f_inc += 1 + continue + + # Path of points and labels + seq_path = join(self.path, 'sequences', self.sequences[s_ind]) + velo_file = join(seq_path, 'velodyne', self.frames[s_ind][f_ind - f_inc] + '.bin') + if self.set == 'test': + label_file = None + else: + label_file = join(seq_path, 'labels', self.frames[s_ind][f_ind - f_inc] + '.label') + + # Read points + frame_points = np.fromfile(velo_file, dtype=np.float32) + points = frame_points.reshape((-1, 4)) + + if self.set == 'test': + # Fake labels (one per point, not per raw float value) + sem_labels = np.zeros((points.shape[0],), dtype=np.int32) + else: + # Read labels + frame_labels = np.fromfile(label_file, dtype=np.int32) + sem_labels = frame_labels & 0xFFFF # semantic label in lower half + sem_labels = self.learning_map[sem_labels] + + # Apply pose (without np.dot to avoid multi-threading) + hpoints = np.hstack((points[:, :3], np.ones_like(points[:, :1]))) + #new_points = hpoints.dot(pose.T) + new_points = np.sum(np.expand_dims(hpoints, 2) * pose.T, axis=1) + #new_points[:, 3:] = points[:, 3:] + + # In case of validation, keep the original points in memory + if self.set in ['validation', 'test'] and f_inc == 0: + o_pts = new_points[:, :3].astype(np.float32) + o_labels = sem_labels.astype(np.int32) + + # If the radius is smaller than 50m, choose the new center on a point of the wanted class (or on any random point) + if self.in_R < 50.0 and f_inc == 0: + if self.balance_classes: + wanted_ind = np.random.choice(np.where(sem_labels == wanted_label)[0]) + else: + wanted_ind = np.random.choice(new_points.shape[0]) + p0 = new_points[wanted_ind, :3] + + # Eliminate points further than config.in_radius + mask = np.sum(np.square(new_points[:, :3] - p0), axis=1) < self.in_R ** 2 + mask_inds = np.where(mask)[0].astype(np.int32) + + # Shuffle points + rand_order = np.random.permutation(mask_inds) + 
new_points = new_points[rand_order, :3] + sem_labels = sem_labels[rand_order] + + # Place points in original frame reference to get coordinates + if f_inc == 0: + new_coords = points[rand_order, :] + else: + # We have to project in the first frame coordinates + new_coords = new_points - pose0[:3, 3] + # new_coords = new_coords.dot(pose0[:3, :3]) + new_coords = np.sum(np.expand_dims(new_coords, 2) * pose0[:3, :3], axis=1) + new_coords = np.hstack((new_coords, points[rand_order, 3:])) + + # Increment merge count + merged_points = np.vstack((merged_points, new_points)) + merged_labels = np.hstack((merged_labels, sem_labels)) + merged_coords = np.vstack((merged_coords, new_coords)) + num_merged += 1 + f_inc += 1 + + t += [time.time()] + + ######################### + # Merge n_frames together + ######################### + + # Subsample merged frames + in_pts, in_fts, in_lbls = grid_subsampling(merged_points, + features=merged_coords, + labels=merged_labels, + sampleDl=self.config.first_subsampling_dl) + + t += [time.time()] + + # Number collected + n = in_pts.shape[0] + + # Safety check + if n < 2: + continue + + # Randomly drop some points (augmentation process and safety for GPU memory consumption) + if n > self.max_in_p: + input_inds = np.random.choice(n, size=self.max_in_p, replace=False) + in_pts = in_pts[input_inds, :] + in_fts = in_fts[input_inds, :] + in_lbls = in_lbls[input_inds] + n = input_inds.shape[0] + + t += [time.time()] + + # Before augmenting, compute reprojection inds (only for validation and test) + if self.set in ['validation', 'test']: + + # get val_points that are in range + radiuses = np.sum(np.square(o_pts - p0), axis=1) + reproj_mask = radiuses < (0.99 * self.in_R) ** 2 + + # Project predictions on the frame points + search_tree = KDTree(in_pts, leaf_size=50) + proj_inds = search_tree.query(o_pts[reproj_mask, :], return_distance=False) + proj_inds = np.squeeze(proj_inds).astype(np.int32) + else: + proj_inds = np.zeros((0,)) + reproj_mask = np.zeros((0,))
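The frame-merging loop above applies poses with `np.sum(np.expand_dims(hpoints, 2) * pose.T, axis=1)` rather than `np.dot`; as its comment notes, this avoids BLAS multi-threading inside forked dataloader workers. The two forms are mathematically identical, which a quick standalone check confirms (the pose values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous points (N, 4) and a rigid pose (4, 4), as in the frame merging above
hpoints = np.hstack((rng.normal(size=(5, 3)), np.ones((5, 1))))
pose = np.eye(4)
pose[:3, :3] = [[0.0, -1.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]]        # 90 degree rotation about z
pose[:3, 3] = [1.0, 2.0, 3.0]           # translation

ref = hpoints.dot(pose.T)                                   # standard matrix product
alt = np.sum(np.expand_dims(hpoints, 2) * pose.T, axis=1)   # dot-free form used in the loader

print(np.allclose(ref, alt))   # True
```

The broadcast form computes `out[n, j] = sum_i hpoints[n, i] * pose[j, i]`, i.e. exactly `hpoints @ pose.T`, just with a plain elementwise multiply and reduction instead of a BLAS call.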
+ + t += [time.time()] + + # Data augmentation + in_pts, scale, R = self.augmentation_transform(in_pts) + + t += [time.time()] + + # Color augmentation + if np.random.rand() > self.config.augment_color: + in_fts[:, 3:] *= 0 + + # Stack batch + p_list += [in_pts] + f_list += [in_fts] + l_list += [np.squeeze(in_lbls)] + fi_list += [[s_ind, f_ind]] + p0_list += [p0] + s_list += [scale] + R_list += [R] + r_inds_list += [proj_inds] + r_mask_list += [reproj_mask] + val_labels_list += [o_labels] + + t += [time.time()] + + # Update batch size + batch_n += n + + # In case batch is full, stop + if batch_n > int(self.batch_limit): + break + + ################### + # Concatenate batch + ################### + + stacked_points = np.concatenate(p_list, axis=0) + features = np.concatenate(f_list, axis=0) + labels = np.concatenate(l_list, axis=0) + frame_inds = np.array(fi_list, dtype=np.int32) + frame_centers = np.stack(p0_list, axis=0) + stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32) + scales = np.array(s_list, dtype=np.float32) + rots = np.stack(R_list, axis=0) + + # Input features (Use reflectance, input height or all coordinates) + stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32) + if self.config.in_features_dim == 1: + pass + elif self.config.in_features_dim == 2: + # Use original height coordinate + stacked_features = np.hstack((stacked_features, features[:, 2:3])) + elif self.config.in_features_dim == 3: + # Use height + reflectance + stacked_features = np.hstack((stacked_features, features[:, 2:])) + elif self.config.in_features_dim == 4: + # Use all coordinates + stacked_features = np.hstack((stacked_features, features[:, :3])) + elif self.config.in_features_dim == 5: + # Use all coordinates + reflectance + stacked_features = np.hstack((stacked_features, features)) + else: + raise ValueError('Only accepted input feature dimensions are 1, 2, 3, 4 and 5') + + t += [time.time()] + + ####################### + # Create 
network inputs + ####################### + # + # Points, neighbors, pooling indices for each layers + # + + # Get the whole input list + input_list = self.segmentation_inputs(stacked_points, + stacked_features, + labels.astype(np.int64), + stack_lengths) + + t += [time.time()] + + # Add scale and rotation for testing + input_list += [scales, rots, frame_inds, frame_centers, r_inds_list, r_mask_list, val_labels_list] + + t += [time.time()] + + # Display timings + debugT = False + if debugT: + print('\n************************\n') + print('Timings:') + ti = 0 + N = 9 + mess = 'Init ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Lock ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Init ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Load ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Subs ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Drop ...... 
{:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Reproj .... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Augment ... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Stack ..... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += N * (len(stack_lengths) - 1) + 1 + print('concat .... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('input ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('stack ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti]))) + ti += 1 + print('\n************************\n') + + return [self.config.num_layers] + input_list + + def load_calib_poses(self): + """ + load calib poses and times. 
+ """ + + ########### + # Load data + ########### + + self.calibrations = [] + self.times = [] + self.poses = [] + + for seq in self.sequences: + + seq_folder = join(self.path, 'sequences', seq) + + # Read Calib + self.calibrations.append(self.parse_calibration(join(seq_folder, "calib.txt"))) + + # Read times + self.times.append(np.loadtxt(join(seq_folder, 'times.txt'), dtype=np.float32)) + + # Read poses + poses_f64 = self.parse_poses(join(seq_folder, 'poses.txt'), self.calibrations[-1]) + self.poses.append([pose.astype(np.float32) for pose in poses_f64]) + + ################################### + # Prepare the indices of all frames + ################################### + + seq_inds = np.hstack([np.ones(len(_), dtype=np.int32) * i for i, _ in enumerate(self.frames)]) + frame_inds = np.hstack([np.arange(len(_), dtype=np.int32) for _ in self.frames]) + self.all_inds = np.vstack((seq_inds, frame_inds)).T + + ################################################ + # For each class list the frames containing them + ################################################ + + if self.set in ['training', 'validation']: + + class_frames_bool = np.zeros((0, self.num_classes), dtype=np.bool) + self.class_proportions = np.zeros((self.num_classes,), dtype=np.int32) + + for s_ind, (seq, seq_frames) in enumerate(zip(self.sequences, self.frames)): + + frame_mode = 'single' + if self.config.n_frames > 1: + frame_mode = 'multi' + seq_stat_file = join(self.path, 'sequences', seq, 'stats_{:s}.pkl'.format(frame_mode)) + + # Check if inputs have already been computed + if exists(seq_stat_file): + # Read pkl + with open(seq_stat_file, 'rb') as f: + seq_class_frames, seq_proportions = pickle.load(f) + + else: + + # Initiate dict + print('Preparing seq {:s} class frames. 
(Long but one time only)'.format(seq)) + + # Class frames as a boolean mask + seq_class_frames = np.zeros((len(seq_frames), self.num_classes), dtype=bool) + + # Proportion of each class + seq_proportions = np.zeros((self.num_classes,), dtype=np.int32) + + # Sequence path + seq_path = join(self.path, 'sequences', seq) + + # Read all frames + for f_ind, frame_name in enumerate(seq_frames): + + # Path of points and labels + label_file = join(seq_path, 'labels', frame_name + '.label') + + # Read labels + frame_labels = np.fromfile(label_file, dtype=np.int32) + sem_labels = frame_labels & 0xFFFF # semantic label in lower half + sem_labels = self.learning_map[sem_labels] + + # Get present labels and their frequency + unique, counts = np.unique(sem_labels, return_counts=True) + + # Add this frame to the frame lists of all classes present + frame_labels = np.array([self.label_to_idx[l] for l in unique], dtype=np.int32) + seq_class_frames[f_ind, frame_labels] = True + + # Add proportions + seq_proportions[frame_labels] += counts + + # Save pickle + with open(seq_stat_file, 'wb') as f: + pickle.dump([seq_class_frames, seq_proportions], f) + + class_frames_bool = np.vstack((class_frames_bool, seq_class_frames)) + self.class_proportions += seq_proportions + + # Transform boolean indexing to int indices.
+ self.class_frames = [] + for i, c in enumerate(self.label_values): + if c in self.ignored_labels: + self.class_frames.append(torch.zeros((0,), dtype=torch.int64)) + else: + integer_inds = np.where(class_frames_bool[:, i])[0] + self.class_frames.append(torch.from_numpy(integer_inds.astype(np.int64))) + + # Add variables for validation + if self.set == 'validation': + self.val_points = [] + self.val_labels = [] + self.val_confs = [] + + for s_ind, seq_frames in enumerate(self.frames): + self.val_confs.append(np.zeros((len(seq_frames), self.num_classes, self.num_classes))) + + return + + def parse_calibration(self, filename): + """ read calibration file with given filename + + Returns + ------- + dict + Calibration matrices as 4x4 numpy arrays. + """ + calib = {} + + calib_file = open(filename) + for line in calib_file: + key, content = line.strip().split(":") + values = [float(v) for v in content.strip().split()] + + pose = np.zeros((4, 4)) + pose[0, 0:4] = values[0:4] + pose[1, 0:4] = values[4:8] + pose[2, 0:4] = values[8:12] + pose[3, 3] = 1.0 + + calib[key] = pose + + calib_file.close() + + return calib + + def parse_poses(self, filename, calibration): + """ read poses file with per-scan poses from given filename + + Returns + ------- + list + list of poses as 4x4 numpy arrays. 
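Each line of a KITTI-style poses.txt holds 12 floats, the top three rows of a 4x4 homogeneous transform; the parsed pose is then conjugated by the calibration matrix Tr to express it in the sensor frame. A standalone sketch (the identity pose line and the placeholder Tr are invented for illustration):

```python
import numpy as np

# One poses.txt line: 12 floats = row-major 3x4 transform (identity here)
line = "1 0 0 0 0 1 0 0 0 0 1 0"
values = [float(v) for v in line.split()]

pose = np.zeros((4, 4))
pose[:3, :4] = np.array(values).reshape(3, 4)
pose[3, 3] = 1.0

# Conjugation by the calibration Tr maps the pose into the lidar frame
Tr = np.eye(4)  # placeholder; the real Tr comes from calib.txt
lidar_pose = np.linalg.inv(Tr) @ pose @ Tr
```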
+ """ + file = open(filename) + + poses = [] + + Tr = calibration["Tr"] + Tr_inv = np.linalg.inv(Tr) + + for line in file: + values = [float(v) for v in line.strip().split()] + + pose = np.zeros((4, 4)) + pose[0, 0:4] = values[0:4] + pose[1, 0:4] = values[4:8] + pose[2, 0:4] = values[8:12] + pose[3, 3] = 1.0 + + poses.append(np.matmul(Tr_inv, np.matmul(pose, Tr))) + + return poses + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility classes definition +# \********************************/ + + +class SemanticKittiSampler(Sampler): + """Sampler for SemanticKitti""" + + def __init__(self, dataset: SemanticKittiDataset): + Sampler.__init__(self, dataset) + + # Dataset used by the sampler (no copy is made in memory) + self.dataset = dataset + + # Number of step per epoch + if dataset.set == 'training': + self.N = dataset.config.epoch_steps + else: + self.N = dataset.config.validation_size + + return + + def __iter__(self): + """ + Yield next batch indices here. 
In this dataset, this is a dummy sampler that yields the index of each batch element + (input sphere) in the epoch instead of a list of point indices + """ + + if self.dataset.balance_classes: + + # Initiate current epoch ind + self.dataset.epoch_i *= 0 + self.dataset.epoch_inds *= 0 + self.dataset.epoch_labels *= 0 + + # Number of sphere centers taken per class in each cloud + num_centers = self.dataset.epoch_inds.shape[0] + + # Generate a list of indices balancing classes and respecting potentials + gen_indices = [] + gen_classes = [] + for i, c in enumerate(self.dataset.label_values): + if c not in self.dataset.ignored_labels: + + # Get the potentials of the frames containing this class + class_potentials = self.dataset.potentials[self.dataset.class_frames[i]] + + # Get the indices to generate thanks to potentials + used_classes = self.dataset.num_classes - len(self.dataset.ignored_labels) + class_n = num_centers // used_classes + 1 + if class_n < class_potentials.shape[0]: + _, class_indices = torch.topk(class_potentials, class_n, largest=False) + else: + class_indices = torch.zeros((0,), dtype=torch.int64) + while class_indices.shape[0] < class_n: + new_class_inds = torch.randperm(class_potentials.shape[0]) + class_indices = torch.cat((class_indices, new_class_inds), dim=0) + class_indices = class_indices[:class_n] + class_indices = self.dataset.class_frames[i][class_indices] + + # Add the indices to the generated ones + gen_indices.append(class_indices) + gen_classes.append(class_indices * 0 + c) + + # Update potentials + update_inds = torch.unique(class_indices) + self.dataset.potentials[update_inds] = torch.ceil(self.dataset.potentials[update_inds]) + self.dataset.potentials[update_inds] += torch.from_numpy(np.random.rand(update_inds.shape[0]) * 0.1 + 0.1) + + # Stack the chosen indices of all classes + gen_indices = torch.cat(gen_indices, dim=0) + gen_classes = torch.cat(gen_classes, dim=0) + + # Shuffle generated indices + rand_order =
torch.randperm(gen_indices.shape[0])[:num_centers] + gen_indices = gen_indices[rand_order] + gen_classes = gen_classes[rand_order] + + # Update potentials (Change the order for the next epoch) + #self.dataset.potentials[gen_indices] = torch.ceil(self.dataset.potentials[gen_indices]) + #self.dataset.potentials[gen_indices] += torch.from_numpy(np.random.rand(gen_indices.shape[0]) * 0.1 + 0.1) + + # Update epoch inds + self.dataset.epoch_inds += gen_indices + self.dataset.epoch_labels += gen_classes.type(torch.int32) + + else: + + # Initiate current epoch ind + self.dataset.epoch_i *= 0 + self.dataset.epoch_inds *= 0 + self.dataset.epoch_labels *= 0 + + # Number of sphere centers taken per class in each cloud + num_centers = self.dataset.epoch_inds.shape[0] + + # Get the list of indices to generate thanks to potentials + if num_centers < self.dataset.potentials.shape[0]: + _, gen_indices = torch.topk(self.dataset.potentials, num_centers, largest=False, sorted=True) + else: + gen_indices = torch.randperm(self.dataset.potentials.shape[0]) + + # Update potentials (Change the order for the next epoch) + self.dataset.potentials[gen_indices] = torch.ceil(self.dataset.potentials[gen_indices]) + self.dataset.potentials[gen_indices] += torch.from_numpy(np.random.rand(gen_indices.shape[0]) * 0.1 + 0.1) + + # Update epoch inds + self.dataset.epoch_inds += gen_indices + + # Generator loop + for i in range(self.N): + yield i + + def __len__(self): + """ + The number of yielded samples is variable + """ + return self.N + + def calib_max_in(self, config, dataloader, untouched_ratio=0.8, verbose=True, force_redo=False): + """ + Method performing batch and neighbors calibration. + Batch calibration: Set "batch_limit" (the maximum number of points allowed in every batch) so that the + average batch size (number of stacked pointclouds) is the one asked. 
+ Neighbors calibration: Set the "neighborhood_limits" (the maximum number of neighbors allowed in convolutions) + so that 90% of the neighborhoods remain untouched. There is a limit for each layer. + """ + + ############################## + # Previously saved calibration + ############################## + + print('\nStarting Calibration of max_in_points value (use verbose=True for more details)') + t0 = time.time() + + redo = force_redo + + # Batch limit + # *********** + + # Load max_in_limit dictionary + max_in_lim_file = join(self.dataset.path, 'max_in_limits.pkl') + if exists(max_in_lim_file): + with open(max_in_lim_file, 'rb') as file: + max_in_lim_dict = pickle.load(file) + else: + max_in_lim_dict = {} + + # Check if the max_in limit associated with current parameters exists + if self.dataset.balance_classes: + sampler_method = 'balanced' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}'.format(sampler_method, + self.dataset.in_R, + self.dataset.config.first_subsampling_dl) + if not redo and key in max_in_lim_dict: + self.dataset.max_in_p = max_in_lim_dict[key] + else: + redo = True + + if verbose: + print('\nPrevious calibration found:') + print('Check max_in limit dictionary') + if key in max_in_lim_dict: + color = bcolors.OKGREEN + v = str(int(max_in_lim_dict[key])) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + if redo: + + ######################## + # Batch calib parameters + ######################## + + # Loop parameters + last_display = time.time() + i = 0 + breaking = False + + all_lengths = [] + N = 1000 + + ##################### + # Perform calibration + ##################### + + for epoch in range(10): + for batch_i, batch in enumerate(dataloader): + + # Control max_in_points value + all_lengths += batch.lengths[0].tolist() + + # Convergence + if len(all_lengths) > N: + breaking = True + break + + i += 1 + t = time.time() + + # Console display (only one per second) + if t - last_display > 1.0: + last_display = t + message = 'Collecting {:d} in_points: {:5.1f}%' + print(message.format(N, + 100 * len(all_lengths) / N)) + + if breaking: + break + + self.dataset.max_in_p = int(np.percentile(all_lengths, 100*untouched_ratio)) + + if verbose: + + # Create histogram + a = 1 + + # Save max_in_limit dictionary + print('New max_in_p = ', self.dataset.max_in_p) + max_in_lim_dict[key] = self.dataset.max_in_p + with open(max_in_lim_file, 'wb') as file: + pickle.dump(max_in_lim_dict, file) + + # Update value in config + if self.dataset.set == 'training': + config.max_in_points = self.dataset.max_in_p + else: + config.max_val_points = self.dataset.max_in_p + + print('Calibration done in {:.1f}s\n'.format(time.time() - t0)) + return + + def calibration(self, dataloader, untouched_ratio=0.9, verbose=False, force_redo=False): + """ + Method performing batch and neighbors calibration. + Batch calibration: Set "batch_limit" (the maximum number of points allowed in every batch) so that the + average batch size (number of stacked pointclouds) is the one asked. + Neighbors calibration: Set the "neighborhood_limits" (the maximum number of neighbors allowed in convolutions) + so that 90% of the neighborhoods remain untouched. There is a limit for each layer. 
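The batch calibration described above is a simple proportional controller: the per-batch point budget is nudged by Kp times the batch-size error until the (low-pass filtered) average batch size reaches the target. A minimal sketch with made-up numbers:

```python
# Hypothetical P-controller loop mirroring the calibration logic below
Kp = 100.0             # same gain as in the calibration code
batch_limit = 10000.0  # current point budget per batch
target_b = 6           # desired number of stacked clouds per batch
estim_b, low_pass_T = 0.0, 10

for b in [4, 4, 5, 5, 6, 6, 6]:            # observed batch sizes
    estim_b += (b - estim_b) / low_pass_T  # low-pass filtered estimate
    batch_limit += Kp * (target_b - b)     # raise budget while batches are too small

print(batch_limit)  # 10600.0
```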
+ """ + + ############################## + # Previously saved calibration + ############################## + + print('\nStarting Calibration (use verbose=True for more details)') + t0 = time.time() + + redo = force_redo + + # Batch limit + # *********** + + # Load batch_limit dictionary + batch_lim_file = join(self.dataset.path, 'batch_limits.pkl') + if exists(batch_lim_file): + with open(batch_lim_file, 'rb') as file: + batch_lim_dict = pickle.load(file) + else: + batch_lim_dict = {} + + # Check if the batch limit associated with current parameters exists + if self.dataset.balance_classes: + sampler_method = 'balanced' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}_{:d}'.format(sampler_method, + self.dataset.in_R, + self.dataset.config.first_subsampling_dl, + self.dataset.batch_num, + self.dataset.max_in_p) + if not redo and key in batch_lim_dict: + self.dataset.batch_limit[0] = batch_lim_dict[key] + else: + redo = True + + if verbose: + print('\nPrevious calibration found:') + print('Check batch limit dictionary') + if key in batch_lim_dict: + color = bcolors.OKGREEN + v = str(int(batch_lim_dict[key])) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + # Neighbors limit + # *************** + + # Load neighb_limits dictionary + neighb_lim_file = join(self.dataset.path, 'neighbors_limits.pkl') + if exists(neighb_lim_file): + with open(neighb_lim_file, 'rb') as file: + neighb_lim_dict = pickle.load(file) + else: + neighb_lim_dict = {} + + # Check if the limit associated with current parameters exists (for each layer) + neighb_limits = [] + for layer_ind in range(self.dataset.config.num_layers): + + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + + key = '{:s}_{:d}_{:.3f}_{:.3f}'.format(sampler_method, self.dataset.max_in_p, dl, r) + if key in neighb_lim_dict: + neighb_limits += [neighb_lim_dict[key]] + + if not redo and len(neighb_limits) == self.dataset.config.num_layers: + self.dataset.neighborhood_limits = neighb_limits + else: + redo = True + + if verbose: + print('Check neighbors limit dictionary') + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:s}_{:d}_{:.3f}_{:.3f}'.format(sampler_method, self.dataset.max_in_p, dl, r) + + if key in neighb_lim_dict: + color = bcolors.OKGREEN + v = str(neighb_lim_dict[key]) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + if redo: + + ############################ + # Neighbors calib parameters + ############################ + + # From config parameter, compute higher bound of neighbors number in a neighborhood + hist_n = int(np.ceil(4 / 3 * np.pi * (self.dataset.config.deform_radius + 1) ** 3)) + + # Histogram of neighborhood sizes + neighb_hists = np.zeros((self.dataset.config.num_layers, hist_n), dtype=np.int32) + + ######################## + # Batch calib parameters + ######################## + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + # Save input pointcloud sizes to control max_in_points + cropped_n = 0 + all_n = 0 + + # Loop parameters + last_display = time.time() + i = 0 + breaking = False + + ##################### + # Perform calibration + ##################### + + #self.dataset.batch_limit[0] = self.dataset.max_in_p * (self.dataset.batch_num - 1) + + for epoch in range(10): + for batch_i, batch in enumerate(dataloader): + + # Control max_in_points value + are_cropped = batch.lengths[0] > self.dataset.max_in_p - 1 + cropped_n += torch.sum(are_cropped.type(torch.int32)).item() + all_n += int(batch.lengths[0].shape[0]) + + # Update neighborhood histogram + counts = [np.sum(neighb_mat.numpy() < neighb_mat.shape[0], axis=1) for neighb_mat in batch.neighbors] + hists = [np.bincount(c, minlength=hist_n)[:hist_n] for c in counts] + neighb_hists += np.vstack(hists) + + # batch length + b = len(batch.frame_inds) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergene check + smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # 
Update batch limit with P controller + self.dataset.batch_limit[0] += Kp * error + + # finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + i += 1 + t = time.time() + + # Console display (only one per second) + if verbose and (t - last_display) > 1.0: + last_display = t + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}' + print(message.format(i, + estim_b, + int(self.dataset.batch_limit[0]))) + + if breaking: + break + + # Use collected neighbor histogram to get neighbors limit + cumsum = np.cumsum(neighb_hists.T, axis=0) + percentiles = np.sum(cumsum < (untouched_ratio * cumsum[hist_n - 1, :]), axis=0) + self.dataset.neighborhood_limits = percentiles + + if verbose: + + # Crop histogram + while np.sum(neighb_hists[:, -1]) == 0: + neighb_hists = neighb_hists[:, :-1] + hist_n = neighb_hists.shape[1] + + print('\n**************************************************\n') + line0 = 'neighbors_num ' + for layer in range(neighb_hists.shape[0]): + line0 += '| layer {:2d} '.format(layer) + print(line0) + for neighb_size in range(hist_n): + line0 = ' {:4d} '.format(neighb_size) + for layer in range(neighb_hists.shape[0]): + if neighb_size > percentiles[layer]: + color = bcolors.FAIL + else: + color = bcolors.OKGREEN + line0 += '|{:}{:10d}{:} '.format(color, + neighb_hists[layer, neighb_size], + bcolors.ENDC) + + print(line0) + + print('\n**************************************************\n') + print('\nchosen neighbors limits: ', percentiles) + print() + + # Control max_in_points value + print('\n**************************************************\n') + if cropped_n > 0.3 * all_n: + color = bcolors.FAIL + else: + color = bcolors.OKGREEN + print('Current value of max_in_points {:d}'.format(self.dataset.max_in_p)) + print(' > {:}{:.1f}% inputs are cropped{:}'.format(color, 100 * cropped_n 
/ all_n, bcolors.ENDC)) + if cropped_n > 0.3 * all_n: + print('\nTry a higher max_in_points value\n'.format(100 * cropped_n / all_n)) + #raise ValueError('Value of max_in_points too low') + print('\n**************************************************\n') + + # Save batch_limit dictionary + key = '{:s}_{:.3f}_{:.3f}_{:d}_{:d}'.format(sampler_method, + self.dataset.in_R, + self.dataset.config.first_subsampling_dl, + self.dataset.batch_num, + self.dataset.max_in_p) + batch_lim_dict[key] = float(self.dataset.batch_limit[0]) + with open(batch_lim_file, 'wb') as file: + pickle.dump(batch_lim_dict, file) + + # Save neighb_limit dictionary + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2 ** layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:s}_{:d}_{:.3f}_{:.3f}'.format(sampler_method, self.dataset.max_in_p, dl, r) + neighb_lim_dict[key] = self.dataset.neighborhood_limits[layer_ind] + with open(neighb_lim_file, 'wb') as file: + pickle.dump(neighb_lim_dict, file) + + + print('Calibration done in {:.1f}s\n'.format(time.time() - t0)) + return + + +class SemanticKittiCustomBatch: + """Custom batch definition with memory pinning for SemanticKitti""" + + def __init__(self, input_list): + + # Get rid of batch dimension + input_list = input_list[0] + + # Number of layers + L = int(input_list[0]) + + # Extract input tensors from the list of numpy array + ind = 1 + self.points = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.neighbors = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.pools = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.upsamples = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.lengths = [torch.from_numpy(nparray) for nparray in 
input_list[ind:ind+L]] + ind += L + self.features = torch.from_numpy(input_list[ind]) + ind += 1 + self.labels = torch.from_numpy(input_list[ind]) + ind += 1 + self.scales = torch.from_numpy(input_list[ind]) + ind += 1 + self.rots = torch.from_numpy(input_list[ind]) + ind += 1 + self.frame_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.frame_centers = torch.from_numpy(input_list[ind]) + ind += 1 + self.reproj_inds = input_list[ind] + ind += 1 + self.reproj_masks = input_list[ind] + ind += 1 + self.val_labels = input_list[ind] + + return + + def pin_memory(self): + """ + Manual pinning of the memory + """ + + self.points = [in_tensor.pin_memory() for in_tensor in self.points] + self.neighbors = [in_tensor.pin_memory() for in_tensor in self.neighbors] + self.pools = [in_tensor.pin_memory() for in_tensor in self.pools] + self.upsamples = [in_tensor.pin_memory() for in_tensor in self.upsamples] + self.lengths = [in_tensor.pin_memory() for in_tensor in self.lengths] + self.features = self.features.pin_memory() + self.labels = self.labels.pin_memory() + self.scales = self.scales.pin_memory() + self.rots = self.rots.pin_memory() + self.frame_inds = self.frame_inds.pin_memory() + self.frame_centers = self.frame_centers.pin_memory() + + return self + + def to(self, device): + + self.points = [in_tensor.to(device) for in_tensor in self.points] + self.neighbors = [in_tensor.to(device) for in_tensor in self.neighbors] + self.pools = [in_tensor.to(device) for in_tensor in self.pools] + self.upsamples = [in_tensor.to(device) for in_tensor in self.upsamples] + self.lengths = [in_tensor.to(device) for in_tensor in self.lengths] + self.features = self.features.to(device) + self.labels = self.labels.to(device) + self.scales = self.scales.to(device) + self.rots = self.rots.to(device) + self.frame_inds = self.frame_inds.to(device) + self.frame_centers = self.frame_centers.to(device) + + return self + + def unstack_points(self, layer=None): + """Unstack the points""" + 
return self.unstack_elements('points', layer) + + def unstack_neighbors(self, layer=None): + """Unstack the neighbors indices""" + return self.unstack_elements('neighbors', layer) + + def unstack_pools(self, layer=None): + """Unstack the pooling indices""" + return self.unstack_elements('pools', layer) + + def unstack_elements(self, element_name, layer=None, to_numpy=True): + """ + Return a list of the stacked elements in the batch at a certain layer. If no layer is given, then return all + layers + """ + + if element_name == 'points': + elements = self.points + elif element_name == 'neighbors': + elements = self.neighbors + elif element_name == 'pools': + elements = self.pools[:-1] + else: + raise ValueError('Unknown element name: {:s}'.format(element_name)) + + all_p_list = [] + for layer_i, layer_elems in enumerate(elements): + + if layer is None or layer == layer_i: + + i0 = 0 + p_list = [] + if element_name == 'pools': + lengths = self.lengths[layer_i+1] + else: + lengths = self.lengths[layer_i] + + for b_i, length in enumerate(lengths): + + elem = layer_elems[i0:i0 + length] + if element_name == 'neighbors': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= i0 + elif element_name == 'pools': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= torch.sum(self.lengths[layer_i][:b_i]) + i0 += length + + if to_numpy: + p_list.append(elem.numpy()) + else: + p_list.append(elem) + + if layer == layer_i: + return p_list + + all_p_list.append(p_list) + + return all_p_list + + +def SemanticKittiCollate(batch_data): + return SemanticKittiCustomBatch(batch_data) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Debug functions +# \*********************/ + + +def debug_timing(dataset, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + estim_b = dataset.batch_num + estim_N = 0 + + 
for epoch in range(10): + + for batch_i, batch in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Update estim_b (low pass filter) + estim_b += (len(batch.frame_inds) - estim_b) / 100 + estim_N += (batch.features.shape[0] - estim_N) / 10 + + # Pause simulating computations + time.sleep(0.05) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > -1.0: + last_display = t[-1] + message = 'Step {:08d} -> (ms/batch) {:8.2f} {:8.2f} / batch = {:.2f} - {:.0f}' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1], + estim_b, + estim_N)) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_class_w(dataset, loader): + """Timing of generator function""" + + i = 0 + + counts = np.zeros((dataset.num_classes,), dtype=np.int64) + + s = '{:^6}|'.format('step') + for c in dataset.label_names: + s += '{:^6}'.format(c[:4]) + print(s) + print(6*'-' + '|' + 6*dataset.num_classes*'-') + + for epoch in range(10): + for batch_i, batch in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # count labels + new_counts = np.bincount(batch.labels) + + counts[:new_counts.shape[0]] += new_counts.astype(np.int64) + + # Update proportions + proportions = 1000 * counts / np.sum(counts) + + s = '{:^6d}|'.format(i) + for pp in proportions: + s += '{:^6.1f}'.format(pp) + print(s) + i += 1 + diff --git a/competing_methods/my_KPConv/datasets/UrbanMesh.py b/competing_methods/my_KPConv/datasets/UrbanMesh.py new file mode 100644 index 00000000..0fc5d1f3 --- /dev/null +++ b/competing_methods/my_KPConv/datasets/UrbanMesh.py @@ -0,0 +1,1585 @@ +# +# +# 0=================================0 
+# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling UrbanMesh dataset. +# Implements a Dataset, a Sampler, and a collate_fn +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import time +import numpy as np +import pickle +import torch +import math +from multiprocessing import Lock + + +# OS functions +from os import listdir +from os.path import exists, join, isdir + +# Dataset parent class +from os.path import join, exists, dirname, abspath +from datasets.common import PointCloudDataset +from torch.utils.data import Sampler, get_worker_info +from utils.mayavi_visu import * + +from datasets.common import grid_subsampling +from utils.config import bcolors +from plyfile import PlyData, PlyElement +import glob, os + +################################### UTILS Functions ####################################### +def read_ply_with_plyfilelib(filename): + """convert from a ply file. 
Includes the label and the object number when present.""" + # ---read the ply file-------- + plydata = PlyData.read(filename) + xyz = np.stack([plydata['vertex'][n] for n in ['x', 'y', 'z']], axis=1) + try: + rgb = np.stack([plydata['vertex'][n] + for n in ['red', 'green', 'blue']] + , axis=1).astype(np.uint8) + except ValueError: + rgb = np.stack([plydata['vertex'][n] + for n in ['r', 'g', 'b']] + , axis=1).astype(np.float32) + try: + object_indices = plydata['vertex']['object_index'] + labels = plydata['vertex']['label'] + return xyz, rgb, labels, object_indices + except ValueError: + try: + labels = plydata['vertex']['label'] + return xyz, rgb, labels + except ValueError: + return xyz, rgb + + +def write_ply_with_plyfilelib(filename, xyz, rgb, label): + """write into a ply file""" + prop = [('x', 'f8'), ('y', 'f8'), ('z', 'f8'), ('red', 'u1'), ('green', 'u1'), ('blue', 'u1'), ('label', 'u1')] #Classification', + #colors = COLOR_MAP[np.asarray(label)] + vertex_all = np.empty(len(xyz), dtype=prop) + for i_prop in range(0, 3): + vertex_all[prop[i_prop][0]] = xyz[:, i_prop] + for i_prop in range(0, 3): + vertex_all[prop[i_prop + 3][0]] = rgb[:, i_prop] + vertex_all[prop[6][0]] = label + ply = PlyData([PlyElement.describe(vertex_all, 'vertex')], text=False) # True ascii + ply.write(filename) + + +# ---------------------------------------------------------------------------------------------------------------------- + # + # Dataset class definition + # \******************************/ + BASE_DIR = dirname(abspath(__file__)) + ROOT_DIR = dirname(BASE_DIR) + sys.path.append(ROOT_DIR) + + class UrbanMeshDataset(PointCloudDataset): + """Class to handle UrbanMesh dataset.""" + + def __init__(self, config, set='training', use_potentials=True, load_data=True): + """ + This dataset is small enough to be stored in-memory, so load all point clouds here + """ + PointCloudDataset.__init__(self, 'UrbanMesh') + + ############ + # Parameters + ############ + + # Dict
from labels to names + self.label_to_names = {0: 'unlabelled', + 1: 'ground', + 2: 'vegetation', + 3: 'building', + 4: 'water', + 5: 'car', + 6: 'boat'} + + # Initialize a bunch of variables concerning class labels + self.init_labels() + + # List of classes ignored during training (can be empty) + self.ignored_labels = np.sort([0]) + + # Dataset folder + self.root = ROOT_DIR + '/' + self.path = ROOT_DIR + '/data' + + # Type of task conducted on this dataset + self.dataset_task = 'cloud_segmentation' + + # Update number of classes and dataset task in configuration + config.num_classes = self.num_classes + config.dataset_task = self.dataset_task + + # Parameters from config + self.config = config + + # Training or test set + self.set = set + + # Using potential or random epoch generation + self.use_potentials = use_potentials + + # Proportion of validation scenes + #self.cloud_names = ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_6'] + #self.all_splits = [0, 1, 2, 3, 4, 5] + #self.validation_split = 1 + + # Number of models used per epoch + if self.set == 'training': + self.epoch_n = config.epoch_steps * config.batch_num + elif self.set in ['validation', 'test']: + self.epoch_n = config.validation_size * config.batch_num + else: + raise ValueError('Unknown set for UrbanMesh data: ', self.set) + + # Stop if data is not needed + if not load_data: + return + + ################ + # Load ply files + ################ + + # List of training files + folders = ["train/", "test/", "validate/"] + self.files = [] + self.cloud_names = [] + for folder in folders: + data_folder = self.path + "/raw/" + folder + #data_folder = self.path + "/raw_small/" + folder + files = glob.glob(data_folder + "*.ply") + if self.set == 'training' and folder == "train/": + self.files += files + for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + self.cloud_names.append(file_name) + break + elif self.set == 'validation' and folder == "validate/": + self.files += files +
for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + self.cloud_names.append(file_name) + break + elif self.set == 'test' and folder == "test/": + self.files += files + for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + self.cloud_names.append(file_name) + break + + if len(self.files) == 0: + raise ValueError('Unknown set for UrbanMesh data: ', self.set) + + if 0 < self.config.first_subsampling_dl <= 0.01: + raise ValueError('subsampling_parameter too low (should be over 1 cm)') + + # Initiate containers + self.input_trees = [] + self.input_colors = [] + self.input_labels = [] + self.pot_trees = [] + self.num_clouds = 0 + self.test_proj = [] + self.validation_labels = [] + + # Start loading + self.load_subsampled_clouds() + + ############################ + # Batch selection parameters + ############################ + + # Initialize value for batch limit (max number of points per batch). + self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + + # Initialize potentials + if use_potentials: + self.potentials = [] + self.min_potentials = [] + self.argmin_potentials = [] + for i, tree in enumerate(self.pot_trees): + self.potentials += [torch.from_numpy(np.random.rand(tree.data.shape[0]) * 1e-3)] + min_ind = int(torch.argmin(self.potentials[-1])) + self.argmin_potentials += [min_ind] + self.min_potentials += [float(self.potentials[-1][min_ind])] + + # Share potential memory + self.argmin_potentials = torch.from_numpy(np.array(self.argmin_potentials, dtype=np.int64)) + self.min_potentials = torch.from_numpy(np.array(self.min_potentials, dtype=np.float64)) + self.argmin_potentials.share_memory_() + self.min_potentials.share_memory_() + for i, _ in enumerate(self.pot_trees): + self.potentials[i].share_memory_() + + self.worker_waiting = torch.tensor([0 for _ in range(config.input_threads)], dtype=torch.int32) + self.worker_waiting.share_memory_() + self.epoch_inds = None +
self.epoch_i = 0 + + else: + self.potentials = None + self.min_potentials = None + self.argmin_potentials = None + N = config.epoch_steps * config.batch_num + self.epoch_inds = torch.from_numpy(np.zeros((2, N), dtype=np.int64)) + self.epoch_i = torch.from_numpy(np.zeros((1,), dtype=np.int64)) + self.epoch_i.share_memory_() + self.epoch_inds.share_memory_() + + self.worker_lock = Lock() + + # For ERF visualization, we want only one cloud per batch and no randomness + if self.set == 'ERF': + self.batch_limit = torch.tensor([1], dtype=torch.float32) + self.batch_limit.share_memory_() + np.random.seed(42) + + return + + def __len__(self): + """ + Return the length of data here + """ + return len(self.cloud_names) + + def __getitem__(self, batch_i): + """ + The main thread gives a list of indices to load a batch. Each worker is going to work in parallel to load a + different list of indices. + """ + + if self.use_potentials: + return self.potential_item(batch_i) + else: + return self.random_item(batch_i) + + def potential_item(self, batch_i, debug_workers=False): + + t = [time.time()] + + # Initiate concatanation lists + p_list = [] + f_list = [] + l_list = [] + i_list = [] + pi_list = [] + ci_list = [] + s_list = [] + R_list = [] + batch_n = 0 + + info = get_worker_info() + if info is not None: + wid = info.id + else: + wid = None + + while True: + + t += [time.time()] + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}X{:} '.format(bcolors.FAIL, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 0 + + with self.worker_lock: + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}v{:} '.format(bcolors.OKGREEN, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif 
self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 1 + + # Get potential minimum + cloud_ind = int(torch.argmin(self.min_potentials)) + point_ind = int(self.argmin_potentials[cloud_ind]) + + # Get potential points from tree structure + pot_points = np.array(self.pot_trees[cloud_ind].data, copy=False) + + # Center point of input region + center_point = pot_points[point_ind, :].reshape(1, -1) + + # Add a small noise to center point + if self.set != 'ERF': + center_point += np.random.normal(scale=self.config.in_radius / 10, size=center_point.shape) + + # Indices of points in input region + pot_inds, dists = self.pot_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius, + return_distance=True) + + d2s = np.square(dists[0]) + pot_inds = pot_inds[0] + + # Update potentials (Tukey weights) + if self.set != 'ERF': + tukeys = np.square(1 - d2s / np.square(self.config.in_radius)) + tukeys[d2s > np.square(self.config.in_radius)] = 0 + self.potentials[cloud_ind][pot_inds] += tukeys + min_ind = torch.argmin(self.potentials[cloud_ind]) + self.min_potentials[[cloud_ind]] = self.potentials[cloud_ind][min_ind] + self.argmin_potentials[[cloud_ind]] = min_ind + + t += [time.time()] + + # Get points from tree structure + points = np.array(self.input_trees[cloud_ind].data, copy=False) + + + # Indices of points in input region + input_inds = self.input_trees[cloud_ind].query_radius(center_point, + r=self.config.in_radius)[0] + + t += [time.time()] + + # Number collected + n = input_inds.shape[0] + + # Collect labels and colors + input_points = (points[input_inds] - center_point).astype(np.float32) + input_colors = self.input_colors[cloud_ind][input_inds] + if self.set in ['test', 'ERF']: + input_labels = np.zeros(input_points.shape[0]) + else: + input_labels = self.input_labels[cloud_ind][input_inds] + input_labels = np.array([self.label_to_idx[l] for l in input_labels]) 
+
+            t += [time.time()]
+
+            # Data augmentation
+            input_points, scale, R = self.augmentation_transform(input_points)
+
+            # Color augmentation
+            if np.random.rand() > self.config.augment_color:
+                input_colors *= 0
+
+            # Get original height as additional feature
+            input_features = np.hstack((input_colors, input_points[:, 2:] + center_point[:, 2:])).astype(np.float32)
+
+            t += [time.time()]
+
+            # Stack batch
+            p_list += [input_points]
+            f_list += [input_features]
+            l_list += [input_labels]
+            pi_list += [input_inds]
+            i_list += [point_ind]
+            ci_list += [cloud_ind]
+            s_list += [scale]
+            R_list += [R]
+
+            # Update batch size
+            batch_n += n
+
+            # In case batch is full, stop
+            if batch_n > int(self.batch_limit):
+                break
+
+            # Randomly drop some points (acts as an augmentation process and a safety for GPU memory consumption)
+            # if n > int(self.batch_limit):
+            #     input_inds = np.random.choice(input_inds, size=int(self.batch_limit) - 1, replace=False)
+            #     n = input_inds.shape[0]
+
+        ###################
+        # Concatenate batch
+        ###################
+
+        stacked_points = np.concatenate(p_list, axis=0)
+        features = np.concatenate(f_list, axis=0)
+        labels = np.concatenate(l_list, axis=0)
+        point_inds = np.array(i_list, dtype=np.int32)
+        cloud_inds = np.array(ci_list, dtype=np.int32)
+        input_inds = np.concatenate(pi_list, axis=0)
+        stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32)
+        scales = np.array(s_list, dtype=np.float32)
+        rots = np.stack(R_list, axis=0)
+
+        # Input features
+        stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32)
+        if self.config.in_features_dim == 1:
+            pass
+        elif self.config.in_features_dim == 4:
+            stacked_features = np.hstack((stacked_features, features[:, :3]))
+        elif self.config.in_features_dim == 5:
+            stacked_features = np.hstack((stacked_features, features))
+        else:
+            raise ValueError('Only accepted input dimensions are 1, 4 and 5 (without and with XYZ)')
+
+        #######################
+        # Create network inputs
+
####################### + # + # Points, neighbors, pooling indices for each layers + # + + t += [time.time()] + + # Get the whole input list + input_list = self.segmentation_inputs(stacked_points, + stacked_features, + labels, + stack_lengths) + + t += [time.time()] + + # Add scale and rotation for testing + input_list += [scales, rots, cloud_inds, point_inds, input_inds] + + if debug_workers: + message = '' + for wi in range(info.num_workers): + if wi == wid: + message += ' {:}0{:} '.format(bcolors.OKBLUE, bcolors.ENDC) + elif self.worker_waiting[wi] == 0: + message += ' ' + elif self.worker_waiting[wi] == 1: + message += ' | ' + elif self.worker_waiting[wi] == 2: + message += ' o ' + print(message) + self.worker_waiting[wid] = 2 + + t += [time.time()] + + # Display timings + debugT = False + if debugT: + print('\n************************\n') + print('Timings:') + ti = 0 + N = 5 + mess = 'Init ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Pots ...... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Sphere .... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Collect ... {:5.1f}ms /' + loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))] + for dt in loop_times: + mess += ' {:5.1f}'.format(dt) + print(mess.format(np.sum(loop_times))) + ti += 1 + mess = 'Augment ... 
{:5.1f}ms /'
+            loop_times = [1000 * (t[ti + N * i + 1] - t[ti + N * i]) for i in range(len(stack_lengths))]
+            for dt in loop_times:
+                mess += ' {:5.1f}'.format(dt)
+            print(mess.format(np.sum(loop_times)))
+            ti += N * (len(stack_lengths) - 1) + 1
+            print('concat .... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti])))
+            ti += 1
+            print('input ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti])))
+            ti += 1
+            print('stack ..... {:5.1f}ms'.format(1000 * (t[ti+1] - t[ti])))
+            ti += 1
+            print('\n************************\n')
+        return input_list
+
+    def random_item(self, batch_i):
+
+        # Initiate concatenation lists
+        p_list = []
+        f_list = []
+        l_list = []
+        i_list = []
+        pi_list = []
+        ci_list = []
+        s_list = []
+        R_list = []
+        batch_n = 0
+
+        while True:
+
+            with self.worker_lock:
+
+                # Get potential minimum
+                cloud_ind = int(self.epoch_inds[0, self.epoch_i])
+                point_ind = int(self.epoch_inds[1, self.epoch_i])
+
+                # Update epoch index
+                self.epoch_i += 1
+
+            # Get points from tree structure
+            points = np.array(self.input_trees[cloud_ind].data, copy=False)
+
+            # Center point of input region
+            center_point = points[point_ind, :].reshape(1, -1)
+
+            # Add a small noise to center point
+            if self.set != 'ERF':
+                center_point += np.random.normal(scale=self.config.in_radius / 10, size=center_point.shape)
+
+            # Indices of points in input region
+            input_inds = self.input_trees[cloud_ind].query_radius(center_point,
+                                                                  r=self.config.in_radius)[0]
+
+            # Number collected
+            n = input_inds.shape[0]
+
+            # Collect labels and colors
+            input_points = (points[input_inds] - center_point).astype(np.float32)
+            input_colors = self.input_colors[cloud_ind][input_inds]
+            if self.set in ['test', 'ERF']:
+                input_labels = np.zeros(input_points.shape[0])
+            else:
+                input_labels = self.input_labels[cloud_ind][input_inds]
+                input_labels = np.array([self.label_to_idx[l] for l in input_labels])
+
+            # Data augmentation
+            input_points, scale, R = self.augmentation_transform(input_points)
+
+            # Color augmentation
+            if np.random.rand() > self.config.augment_color:
+                input_colors *= 0
+
+            # Get original height as additional feature
+            input_features = np.hstack((input_colors, input_points[:, 2:] + center_point[:, 2:])).astype(np.float32)
+
+            # Stack batch
+            p_list += [input_points]
+            f_list += [input_features]
+            l_list += [input_labels]
+            pi_list += [input_inds]
+            i_list += [point_ind]
+            ci_list += [cloud_ind]
+            s_list += [scale]
+            R_list += [R]
+
+            # Update batch size
+            batch_n += n
+
+            # In case batch is full, stop
+            if batch_n > int(self.batch_limit):
+                break
+
+            # Randomly drop some points (acts as an augmentation process and a safety for GPU memory consumption)
+            # if n > int(self.batch_limit):
+            #     input_inds = np.random.choice(input_inds, size=int(self.batch_limit) - 1, replace=False)
+            #     n = input_inds.shape[0]
+
+        ###################
+        # Concatenate batch
+        ###################
+
+        stacked_points = np.concatenate(p_list, axis=0)
+        features = np.concatenate(f_list, axis=0)
+        labels = np.concatenate(l_list, axis=0)
+        point_inds = np.array(i_list, dtype=np.int32)
+        cloud_inds = np.array(ci_list, dtype=np.int32)
+        input_inds = np.concatenate(pi_list, axis=0)
+        stack_lengths = np.array([pp.shape[0] for pp in p_list], dtype=np.int32)
+        scales = np.array(s_list, dtype=np.float32)
+        rots = np.stack(R_list, axis=0)
+
+        # Input features
+        stacked_features = np.ones_like(stacked_points[:, :1], dtype=np.float32)
+        if self.config.in_features_dim == 1:
+            pass
+        elif self.config.in_features_dim == 4:
+            stacked_features = np.hstack((stacked_features, features[:, :3]))
+        elif self.config.in_features_dim == 5:
+            stacked_features = np.hstack((stacked_features, features))
+        else:
+            raise ValueError('Only accepted input dimensions are 1, 4 and 5 (without and with XYZ)')
+
+        #######################
+        # Create network inputs
+        #######################
+        #
+        # Points, neighbors, pooling indices for each layer
+        #
+
+        # Get the whole input list
+        input_list = self.segmentation_inputs(stacked_points,
+ stacked_features, + labels, + stack_lengths) + + # Add scale and rotation for testing + input_list += [scales, rots, cloud_inds, point_inds, input_inds] + + return input_list + + def load_subsampled_clouds(self): + + # Parameter + dl = self.config.first_subsampling_dl + + # Create path for files + tree_path = join(self.path, 'input_{:.3f}'.format(dl)) + if not exists(tree_path): + makedirs(tree_path) + + ############## + # Load KDTrees + ############## + + for i, file_path in enumerate(self.files): + + # Restart timer + t0 = time.time() + + # Get cloud name + cloud_name = self.cloud_names[i] + + # Name of the input files + KDTree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name)) + sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name)) + + # Check if inputs have already been computed + if exists(KDTree_file): + print('\nFound KDTree for cloud {:s}, subsampled at {:.3f}'.format(cloud_name, dl)) + + # read ply with data + # data = read_ply(sub_ply_file) + # sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T + # sub_labels = data['class'] + data, sub_colors, sub_labels = read_ply_with_plyfilelib(sub_ply_file) + + # Read pkl with search tree + with open(KDTree_file, 'rb') as f: + search_tree = pickle.load(f) + + else: + print('\nPreparing KDTree for cloud {:s}, subsampled at {:.3f}'.format(cloud_name, dl)) + + # Read ply file + xyz, rgb, rawlabels = read_ply_with_plyfilelib(file_path) + points = xyz + colors = 255 * rgb # Now scale by 255 + labels = np.array([x if x > 0 else x + 1 for x in rawlabels]) + + xyz = xyz.astype(np.float32) + colors = colors.astype(np.float32) + labels = labels.astype(np.int32) + + # Subsample cloud + sub_points, sub_colors, sub_labels = grid_subsampling(points, + features=colors, + labels=labels, + sampleDl=dl) + + # Rescale float color and squeeze label + sub_colors = sub_colors / 255 + sub_labels = np.squeeze(sub_labels) + + # Get chosen neighborhoods + search_tree = KDTree(sub_points, leaf_size=50) #10 
+ #search_tree = nnfln.KDTree(n_neighbors=1, metric='L2', leaf_size=10) + #search_tree.fit(sub_points) + + # Save KDTree + with open(KDTree_file, 'wb') as f: + pickle.dump(search_tree, f) + + # Save ply + # write_ply(sub_ply_file, + # [sub_points, sub_colors, sub_labels], + # ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + write_ply_with_plyfilelib(sub_ply_file, sub_points, sub_colors, sub_labels) + + # Fill data containers + self.input_trees += [search_tree] + self.input_colors += [sub_colors] + self.input_labels += [sub_labels] + + size = sub_colors.shape[0] * 4 * 7 + print('{:.1f} MB loaded in {:.1f}s'.format(size * 1e-6, time.time() - t0)) + + ############################ + # Coarse potential locations + ############################ + + # Only necessary for validation and test sets + if self.use_potentials: + print('\nPreparing potentials') + + # Restart timer + t0 = time.time() + + pot_dl = self.config.in_radius / 10 + cloud_ind = 0 + + for i, file_path in enumerate(self.files): + + # Get cloud name + cloud_name = self.cloud_names[i] + + # Name of the input files + coarse_KDTree_file = join(tree_path, '{:s}_coarse_KDTree.pkl'.format(cloud_name)) + + # Check if inputs have already been computed + if exists(coarse_KDTree_file): + # Read pkl with search tree + with open(coarse_KDTree_file, 'rb') as f: + search_tree = pickle.load(f) + + else: + # Subsample cloud + sub_points = np.array(self.input_trees[cloud_ind].data, copy=False) + coarse_points = grid_subsampling(sub_points.astype(np.float32), sampleDl=pot_dl) + + # Get chosen neighborhoods + search_tree = KDTree(coarse_points, leaf_size=10) + + # Save KDTree + with open(coarse_KDTree_file, 'wb') as f: + pickle.dump(search_tree, f) + + # Fill data containers + self.pot_trees += [search_tree] + cloud_ind += 1 + + print('Done in {:.1f}s'.format(time.time() - t0)) + + ###################### + # Reprojection indices + ###################### + + # Get number of clouds + self.num_clouds = len(self.input_trees) + + 
# Only necessary for validation and test sets + if self.set in ['validation', 'test']: + + print('\nPreparing reprojection indices for testing') + + # Get validation/test reprojection indices + for i, file_path in enumerate(self.files): + + # Restart timer + t0 = time.time() + + # Get info on this cloud + cloud_name = self.cloud_names[i] + + # File name for saving + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + + # Try to load previous indices + if exists(proj_file): + with open(proj_file, 'rb') as f: + proj_inds, labels = pickle.load(f) + else: + # Read ply file + xyz, rgb, rawlabels = read_ply_with_plyfilelib(file_path) + points = xyz + colors = 255 * rgb # Now scale by 255 + labels = np.array([x if x > 0 else x + 1 for x in rawlabels]) + + # Compute projection inds + idxs = self.input_trees[i].query(points, return_distance=False) + #dists, idxs = self.input_trees[i_cloud].kneighbors(points) + proj_inds = np.squeeze(idxs).astype(np.int32) + + # Save + with open(proj_file, 'wb') as f: + pickle.dump([proj_inds, labels], f) + + self.test_proj += [proj_inds] + self.validation_labels += [labels] + print('{:s} done in {:.1f}s'.format(cloud_name, time.time() - t0)) + + print() + return + + def load_evaluation_points(self, file_path): + """ + Load points (from test or validation split) on which the metrics should be evaluated + """ + + # Get original points + # data = read_ply(file_path) + # return np.vstack((data['x'], data['y'], data['z'])).T + + xyz, rgb, rawlabels = read_ply_with_plyfilelib(file_path) + return xyz + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility classes definition +# \********************************/ + + +class UrbanMeshSampler(Sampler): + """Sampler for UrbanMesh""" + + def __init__(self, dataset: UrbanMeshDataset): + Sampler.__init__(self, dataset) + + # Dataset used by the sampler (no copy is made in memory) + self.dataset = dataset + + 
# Number of step per epoch + if dataset.set == 'training': + self.N = dataset.config.epoch_steps + else: + self.N = dataset.config.validation_size + + return + + def __iter__(self): + """ + Yield next batch indices here. In this dataset, this is a dummy sampler that yield the index of batch element + (input sphere) in epoch instead of the list of point indices + """ + + if not self.dataset.use_potentials: + + # Initiate current epoch ind + self.dataset.epoch_i *= 0 + self.dataset.epoch_inds *= 0 + + # Initiate container for indices + all_epoch_inds = np.zeros((2, 0), dtype=np.int32) + + # Number of sphere centers taken per class in each cloud + num_centers = self.N * self.dataset.config.batch_num + random_pick_n = int(np.ceil(num_centers / (self.dataset.num_clouds * self.dataset.config.num_classes))) + + # Choose random points of each class for each cloud + for cloud_ind, cloud_labels in enumerate(self.dataset.input_labels): + epoch_indices = np.empty((0,), dtype=np.int32) + for label_ind, label in enumerate(self.dataset.label_values): + if label not in self.dataset.ignored_labels: + label_indices = np.where(np.equal(cloud_labels, label))[0] + if len(label_indices) <= random_pick_n: + epoch_indices = np.hstack((epoch_indices, label_indices)) + elif len(label_indices) < 50 * random_pick_n: + new_randoms = np.random.choice(label_indices, size=random_pick_n, replace=False) + epoch_indices = np.hstack((epoch_indices, new_randoms.astype(np.int32))) + else: + rand_inds = [] + while len(rand_inds) < random_pick_n: + rand_inds = np.unique(np.random.choice(label_indices, size=5 * random_pick_n, replace=True)) + epoch_indices = np.hstack((epoch_indices, rand_inds[:random_pick_n].astype(np.int32))) + + # Stack those indices with the cloud index + epoch_indices = np.vstack((np.full(epoch_indices.shape, cloud_ind, dtype=np.int32), epoch_indices)) + + # Update the global indice container + all_epoch_inds = np.hstack((all_epoch_inds, epoch_indices)) + + # Random permutation of 
the indices
+            random_order = np.random.permutation(all_epoch_inds.shape[1])
+            all_epoch_inds = all_epoch_inds[:, random_order].astype(np.int64)
+
+            # Update epoch inds
+            self.dataset.epoch_inds += torch.from_numpy(all_epoch_inds[:, :num_centers])
+
+        # Generator loop
+        for i in range(self.N):
+            yield i
+
+    def __len__(self):
+        """
+        The number of yielded samples is variable
+        """
+        return self.N
+
+    def fast_calib(self):
+        """
+        This method calibrates the batch sizes while ensuring the potentials are well initialized. Indeed, on a dataset
+        like Semantic3D, before the potentials have been updated over the dataset, there are chances that all the dense
+        areas are picked in the beginning, leaving very large batches of small point clouds at the end.
+        :return:
+        """
+
+        # Estimated average batch size and target value
+        estim_b = 0
+        target_b = self.dataset.config.batch_num
+
+        # Calibration parameters
+        low_pass_T = 10
+        Kp = 100.0
+        finer = False
+        breaking = False
+
+        # Convergence parameters
+        smooth_errors = []
+        converge_threshold = 0.1
+
+        t = [time.time()]
+        last_display = time.time()
+        mean_dt = np.zeros(2)
+
+        for epoch in range(10):
+            for i, test in enumerate(self):
+
+                # New time
+                t = t[-1:]
+                t += [time.time()]
+
+                # batch length
+                b = len(test)
+
+                # Update estim_b (low pass filter)
+                estim_b += (b - estim_b) / low_pass_T
+
+                # Estimate error (noisy)
+                error = target_b - b
+
+                # Save smooth errors for convergence check
+                smooth_errors.append(target_b - estim_b)
+                if len(smooth_errors) > 10:
+                    smooth_errors = smooth_errors[1:]
+
+                # Update batch limit with P controller
+                self.dataset.batch_limit += Kp * error
+
+                # finer low pass filter when closing in
+                if not finer and np.abs(estim_b - target_b) < 1:
+                    low_pass_T = 100
+                    finer = True
+
+                # Convergence
+                if finer and np.max(np.abs(smooth_errors)) < converge_threshold:
+                    breaking = True
+                    break
+
+                # Average timing
+                t += [time.time()]
+                mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1]))
+
+ # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}, // {:.1f}ms {:.1f}ms' + print(message.format(i, + estim_b, + int(self.dataset.batch_limit), + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + if breaking: + break + + def calibration(self, dataloader, untouched_ratio=0.9, verbose=False, force_redo=False): + """ + Method performing batch and neighbors calibration. + Batch calibration: Set "batch_limit" (the maximum number of points allowed in every batch) so that the + average batch size (number of stacked pointclouds) is the one asked. + Neighbors calibration: Set the "neighborhood_limits" (the maximum number of neighbors allowed in convolutions) + so that 90% of the neighborhoods remain untouched. There is a limit for each layer. + """ + + ############################## + # Previously saved calibration + ############################## + + print('\nStarting Calibration (use verbose=True for more details)') + t0 = time.time() + + redo = force_redo + + # Batch limit + # *********** + + # Load batch_limit dictionary + batch_lim_file = join(self.dataset.path, 'batch_limits.pkl') + if exists(batch_lim_file): + with open(batch_lim_file, 'rb') as file: + batch_lim_dict = pickle.load(file) + else: + batch_lim_dict = {} + + # Check if the batch limit associated with current parameters exists + if self.dataset.use_potentials: + sampler_method = 'potentials' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}'.format(sampler_method, + self.dataset.config.in_radius, + self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + if not redo and key in batch_lim_dict: + self.dataset.batch_limit[0] = batch_lim_dict[key] + else: + redo = True + + if verbose: + print('\nPrevious calibration found:') + print('Check batch limit dictionary') + if key in batch_lim_dict: + color = bcolors.OKGREEN + v = str(int(batch_lim_dict[key])) + else: + color = 
bcolors.FAIL + v = '?' + print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + # Neighbors limit + # *************** + + # Load neighb_limits dictionary + neighb_lim_file = join(self.dataset.path, 'neighbors_limits.pkl') + if exists(neighb_lim_file): + with open(neighb_lim_file, 'rb') as file: + neighb_lim_dict = pickle.load(file) + else: + neighb_lim_dict = {} + + # Check if the limit associated with current parameters exists (for each layer) + neighb_limits = [] + for layer_ind in range(self.dataset.config.num_layers): + + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + + key = '{:.3f}_{:.3f}'.format(dl, r) + if key in neighb_lim_dict: + neighb_limits += [neighb_lim_dict[key]] + + if not redo and len(neighb_limits) == self.dataset.config.num_layers: + self.dataset.neighborhood_limits = neighb_limits + else: + redo = True + + if verbose: + print('Check neighbors limit dictionary') + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2**layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + + if key in neighb_lim_dict: + color = bcolors.OKGREEN + v = str(neighb_lim_dict[key]) + else: + color = bcolors.FAIL + v = '?' 
+ print('{:}\"{:s}\": {:s}{:}'.format(color, key, v, bcolors.ENDC)) + + if redo: + + ############################ + # Neighbors calib parameters + ############################ + + # From config parameter, compute higher bound of neighbors number in a neighborhood + hist_n = int(np.ceil(4 / 3 * np.pi * (self.dataset.config.deform_radius + 1) ** 3)) + + # Histogram of neighborhood sizes + neighb_hists = np.zeros((self.dataset.config.num_layers, hist_n), dtype=np.int32) + + ######################## + # Batch calib parameters + ######################## + + # Estimated average batch size and target value + estim_b = 0 + target_b = self.dataset.config.batch_num + + # Calibration parameters + low_pass_T = 10 + Kp = 100.0 + finer = False + + # Convergence parameters + smooth_errors = [] + converge_threshold = 0.1 + + # Loop parameters + last_display = time.time() + i = 0 + breaking = False + + ##################### + # Perform calibration + ##################### + + for epoch in range(10): + for batch_i, batch in enumerate(dataloader): + + # Update neighborhood histogram + counts = [np.sum(neighb_mat.numpy() < neighb_mat.shape[0], axis=1) for neighb_mat in batch.neighbors] + hists = [np.bincount(c, minlength=hist_n)[:hist_n] for c in counts] + neighb_hists += np.vstack(hists) + + # batch length + b = len(batch.cloud_inds) + + # Update estim_b (low pass filter) + estim_b += (b - estim_b) / low_pass_T + + # Estimate error (noisy) + error = target_b - b + + # Save smooth errors for convergene check + smooth_errors.append(target_b - estim_b) + if len(smooth_errors) > 10: + smooth_errors = smooth_errors[1:] + + # Update batch limit with P controller + self.dataset.batch_limit += Kp * error + + # finer low pass filter when closing in + if not finer and np.abs(estim_b - target_b) < 1: + low_pass_T = 100 + finer = True + + # Convergence + if finer and np.max(np.abs(smooth_errors)) < converge_threshold: + breaking = True + break + + i += 1 + t = time.time() + + # Console display 
(only one per second) + if verbose and (t - last_display) > 1.0: + last_display = t + message = 'Step {:5d} estim_b ={:5.2f} batch_limit ={:7d}' + print(message.format(i, + estim_b, + int(self.dataset.batch_limit))) + + if breaking: + break + + # Use collected neighbor histogram to get neighbors limit + cumsum = np.cumsum(neighb_hists.T, axis=0) + percentiles = np.sum(cumsum < (untouched_ratio * cumsum[hist_n - 1, :]), axis=0) + self.dataset.neighborhood_limits = percentiles + + if verbose: + + # Crop histogram + while np.sum(neighb_hists[:, -1]) == 0: + neighb_hists = neighb_hists[:, :-1] + hist_n = neighb_hists.shape[1] + + print('\n**************************************************\n') + line0 = 'neighbors_num ' + for layer in range(neighb_hists.shape[0]): + line0 += '| layer {:2d} '.format(layer) + print(line0) + for neighb_size in range(hist_n): + line0 = ' {:4d} '.format(neighb_size) + for layer in range(neighb_hists.shape[0]): + if neighb_size > percentiles[layer]: + color = bcolors.FAIL + else: + color = bcolors.OKGREEN + line0 += '|{:}{:10d}{:} '.format(color, + neighb_hists[layer, neighb_size], + bcolors.ENDC) + + print(line0) + + print('\n**************************************************\n') + print('\nchosen neighbors limits: ', percentiles) + print() + + # Save batch_limit dictionary + if self.dataset.use_potentials: + sampler_method = 'potentials' + else: + sampler_method = 'random' + key = '{:s}_{:.3f}_{:.3f}_{:d}'.format(sampler_method, + self.dataset.config.in_radius, + self.dataset.config.first_subsampling_dl, + self.dataset.config.batch_num) + batch_lim_dict[key] = float(self.dataset.batch_limit) + with open(batch_lim_file, 'wb') as file: + pickle.dump(batch_lim_dict, file) + + # Save neighb_limit dictionary + for layer_ind in range(self.dataset.config.num_layers): + dl = self.dataset.config.first_subsampling_dl * (2 ** layer_ind) + if self.dataset.config.deform_layers[layer_ind]: + r = dl * self.dataset.config.deform_radius + else: + r = dl * 
self.dataset.config.conv_radius + key = '{:.3f}_{:.3f}'.format(dl, r) + neighb_lim_dict[key] = self.dataset.neighborhood_limits[layer_ind] + with open(neighb_lim_file, 'wb') as file: + pickle.dump(neighb_lim_dict, file) + + + print('Calibration done in {:.1f}s\n'.format(time.time() - t0)) + return + + +class UrbanMeshCustomBatch: + """Custom batch definition with memory pinning for UrbanMesh""" + + def __init__(self, input_list): + + # Get rid of batch dimension + input_list = input_list[0] + + # Number of layers + L = (len(input_list) - 7) // 5 + + # Extract input tensors from the list of numpy array + ind = 0 + self.points = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.neighbors = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.pools = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.upsamples = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.lengths = [torch.from_numpy(nparray) for nparray in input_list[ind:ind+L]] + ind += L + self.features = torch.from_numpy(input_list[ind]) + ind += 1 + self.labels = torch.from_numpy(input_list[ind]).long() # torch.from_numpy(input_list[ind]) + ind += 1 + self.scales = torch.from_numpy(input_list[ind]) + ind += 1 + self.rots = torch.from_numpy(input_list[ind]) + ind += 1 + self.cloud_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.center_inds = torch.from_numpy(input_list[ind]) + ind += 1 + self.input_inds = torch.from_numpy(input_list[ind]) + + return + + def pin_memory(self): + """ + Manual pinning of the memory + """ + + self.points = [in_tensor.pin_memory() for in_tensor in self.points] + self.neighbors = [in_tensor.pin_memory() for in_tensor in self.neighbors] + self.pools = [in_tensor.pin_memory() for in_tensor in self.pools] + self.upsamples = [in_tensor.pin_memory() for in_tensor in self.upsamples] + self.lengths = [in_tensor.pin_memory() for in_tensor in 
self.lengths] + self.features = self.features.pin_memory() + self.labels = self.labels.pin_memory() + self.scales = self.scales.pin_memory() + self.rots = self.rots.pin_memory() + self.cloud_inds = self.cloud_inds.pin_memory() + self.center_inds = self.center_inds.pin_memory() + self.input_inds = self.input_inds.pin_memory() + + return self + + def to(self, device): + + self.points = [in_tensor.to(device) for in_tensor in self.points] + self.neighbors = [in_tensor.to(device) for in_tensor in self.neighbors] + self.pools = [in_tensor.to(device) for in_tensor in self.pools] + self.upsamples = [in_tensor.to(device) for in_tensor in self.upsamples] + self.lengths = [in_tensor.to(device) for in_tensor in self.lengths] + self.features = self.features.to(device) + self.labels = self.labels.to(device) + self.scales = self.scales.to(device) + self.rots = self.rots.to(device) + self.cloud_inds = self.cloud_inds.to(device) + self.center_inds = self.center_inds.to(device) + self.input_inds = self.input_inds.to(device) + + return self + + def unstack_points(self, layer=None): + """Unstack the points""" + return self.unstack_elements('points', layer) + + def unstack_neighbors(self, layer=None): + """Unstack the neighbors indices""" + return self.unstack_elements('neighbors', layer) + + def unstack_pools(self, layer=None): + """Unstack the pooling indices""" + return self.unstack_elements('pools', layer) + + def unstack_elements(self, element_name, layer=None, to_numpy=True): + """ + Return a list of the stacked elements in the batch at a certain layer. 
If no layer is given, then return all + layers + """ + + if element_name == 'points': + elements = self.points + elif element_name == 'neighbors': + elements = self.neighbors + elif element_name == 'pools': + elements = self.pools[:-1] + else: + raise ValueError('Unknown element name: {:s}'.format(element_name)) + + all_p_list = [] + for layer_i, layer_elems in enumerate(elements): + + if layer is None or layer == layer_i: + + i0 = 0 + p_list = [] + if element_name == 'pools': + lengths = self.lengths[layer_i+1] + else: + lengths = self.lengths[layer_i] + + for b_i, length in enumerate(lengths): + + elem = layer_elems[i0:i0 + length] + if element_name == 'neighbors': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= i0 + elif element_name == 'pools': + elem[elem >= self.points[layer_i].shape[0]] = -1 + elem[elem >= 0] -= torch.sum(self.lengths[layer_i][:b_i]) + i0 += length + + if to_numpy: + p_list.append(elem.numpy()) + else: + p_list.append(elem) + + if layer == layer_i: + return p_list + + all_p_list.append(p_list) + + return all_p_list + + +def UrbanMeshCollate(batch_data): + return UrbanMeshCustomBatch(batch_data) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Debug functions +# \*********************/ + + +def debug_upsampling(dataset, loader): + """Shows which labels are sampled according to strategy chosen""" + + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + + pc1 = batch.points[1].numpy() + pc2 = batch.points[2].numpy() + up1 = batch.upsamples[1].numpy() + + print(pc1.shape, '=>', pc2.shape) + print(up1.shape, np.max(up1)) + + pc2 = np.vstack((pc2, np.zeros_like(pc2[:1, :]))) + + # Get neighbors distance + p0 = pc1[10, :] + neighbs0 = up1[10, :] + neighbs0 = pc2[neighbs0, :] - p0 + d2 = np.sum(neighbs0 ** 2, axis=1) + + print(neighbs0.shape) + print(neighbs0[:5]) + print(d2[:5]) + + print('******************') + 
print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_timing(dataset, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + estim_b = dataset.config.batch_num + estim_N = 0 + + for epoch in range(10): + + for batch_i, batch in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Update estim_b (low pass filter) + estim_b += (len(batch.cloud_inds) - estim_b) / 100 + estim_N += (batch.features.shape[0] - estim_N) / 10 + + # Pause simulating computations + time.sleep(0.05) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> (ms/batch) {:8.2f} {:8.2f} / batch = {:.2f} - {:.0f}' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1], + estim_b, + estim_N)) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_show_clouds(dataset, loader): + + + for epoch in range(10): + + clouds = [] + cloud_normals = [] + cloud_labels = [] + + L = dataset.config.num_layers + + for batch_i, batch in enumerate(loader): + + # Print characteristics of input tensors + print('\nPoints tensors') + for i in range(L): + print(batch.points[i].dtype, batch.points[i].shape) + print('\nNeighbors tensors') + for i in range(L): + print(batch.neighbors[i].dtype, batch.neighbors[i].shape) + print('\nPools tensors') + for i in range(L): + print(batch.pools[i].dtype, batch.pools[i].shape) + print('\nStack lengths') + for i in range(L): + print(batch.lengths[i].dtype, batch.lengths[i].shape) + print('\nFeatures') + 
print(batch.features.dtype, batch.features.shape) + print('\nLabels') + print(batch.labels.dtype, batch.labels.shape) + print('\nAugment Scales') + print(batch.scales.dtype, batch.scales.shape) + print('\nAugment Rotations') + print(batch.rots.dtype, batch.rots.shape) + print('\nModel indices') + print(batch.model_inds.dtype, batch.model_inds.shape) + + print('\nAre input tensors pinned') + print(batch.neighbors[0].is_pinned()) + print(batch.neighbors[-1].is_pinned()) + print(batch.points[0].is_pinned()) + print(batch.points[-1].is_pinned()) + print(batch.labels.is_pinned()) + print(batch.scales.is_pinned()) + print(batch.rots.is_pinned()) + print(batch.model_inds.is_pinned()) + + show_input_batch(batch) + + print('*******************************************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) + + +def debug_batch_and_neighbors_calib(dataset, loader): + """Timing of generator function""" + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(2) + + for epoch in range(10): + + for batch_i, input_list in enumerate(loader): + # print(batch_i, tuple(points.shape), tuple(normals.shape), labels, indices, in_sizes) + + # New time + t = t[-1:] + t += [time.time()] + + # Pause simulating computations + time.sleep(0.01) + t += [time.time()] + + # Average timing + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Step {:08d} -> Average timings (ms/batch) {:8.2f} {:8.2f} ' + print(message.format(batch_i, + 1000 * mean_dt[0], + 1000 * mean_dt[1])) + + print('************* Epoch ended *************') + + _, counts = np.unique(dataset.input_labels, return_counts=True) + print(counts) diff --git a/competing_methods/my_KPConv/datasets/common.py b/competing_methods/my_KPConv/datasets/common.py new file mode 100644 index 00000000..4e86813e --- /dev/null +++ 
b/competing_methods/my_KPConv/datasets/common.py @@ -0,0 +1,586 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling datasets +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import time +import os +import numpy as np +import sys +import torch +from torch.utils.data import DataLoader, Dataset +from utils.config import Config +from utils.mayavi_visu import * +from kernels.kernel_points import create_3D_rotations + +# Subsampling extension +import cpp_wrappers.cpp_subsampling.grid_subsampling as cpp_subsampling +import cpp_wrappers.cpp_neighbors.radius_neighbors as cpp_neighbors + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility functions +# \***********************/ +# + +def grid_subsampling(points, features=None, labels=None, sampleDl=0.1, verbose=0): + """ + CPP wrapper for a grid subsampling (method = barycenter for points and features) + :param points: (N, 3) matrix of input points + :param features: optional (N, d) matrix of features (floating number) + :param labels: optional (N,) matrix of integer labels + :param sampleDl: parameter defining the size of grid voxels + :param verbose: 1 to display + :return: subsampled points, with features and/or labels depending of the input + """ + + if (features is None) and (labels is None): + return cpp_subsampling.subsample(points, + sampleDl=sampleDl, + verbose=verbose) + elif (labels is None): + 
return cpp_subsampling.subsample(points, + features=features, + sampleDl=sampleDl, + verbose=verbose) + elif (features is None): + return cpp_subsampling.subsample(points, + classes=labels, + sampleDl=sampleDl, + verbose=verbose) + else: + return cpp_subsampling.subsample(points, + features=features, + classes=labels, + sampleDl=sampleDl, + verbose=verbose) + + +def batch_grid_subsampling(points, batches_len, features=None, labels=None, + sampleDl=0.1, max_p=0, verbose=0, random_grid_orient=True): + """ + CPP wrapper for a grid subsampling (method = barycenter for points and features) + :param points: (N, 3) matrix of input points + :param features: optional (N, d) matrix of features (floating number) + :param labels: optional (N,) matrix of integer labels + :param sampleDl: parameter defining the size of grid voxels + :param verbose: 1 to display + :return: subsampled points, with features and/or labels depending of the input + """ + + R = None + B = len(batches_len) + if random_grid_orient: + + ######################################################## + # Create a random rotation matrix for each batch element + ######################################################## + + # Choose two random angles for the first vector in polar coordinates + theta = np.random.rand(B) * 2 * np.pi + phi = (np.random.rand(B) - 0.5) * np.pi + + # Create the first vector in carthesian coordinates + u = np.vstack([np.cos(theta) * np.cos(phi), np.sin(theta) * np.cos(phi), np.sin(phi)]) + + # Choose a random rotation angle + alpha = np.random.rand(B) * 2 * np.pi + + # Create the rotation matrix with this vector and angle + R = create_3D_rotations(u.T, alpha).astype(np.float32) + + ################# + # Apply rotations + ################# + + i0 = 0 + points = points.copy() + for bi, length in enumerate(batches_len): + # Apply the rotation + points[i0:i0 + length, :] = np.sum(np.expand_dims(points[i0:i0 + length, :], 2) * R[bi], axis=1) + i0 += length + + ####################### + # 
Subsample and realign + ####################### + + if (features is None) and (labels is None): + s_points, s_len = cpp_subsampling.subsample_batch(points, + batches_len, + sampleDl=sampleDl, + max_p=max_p, + verbose=verbose) + if random_grid_orient: + i0 = 0 + for bi, length in enumerate(s_len): + s_points[i0:i0 + length, :] = np.sum(np.expand_dims(s_points[i0:i0 + length, :], 2) * R[bi].T, axis=1) + i0 += length + return s_points, s_len + + elif (labels is None): + s_points, s_len, s_features = cpp_subsampling.subsample_batch(points, + batches_len, + features=features, + sampleDl=sampleDl, + max_p=max_p, + verbose=verbose) + if random_grid_orient: + i0 = 0 + for bi, length in enumerate(s_len): + # Apply the rotation + s_points[i0:i0 + length, :] = np.sum(np.expand_dims(s_points[i0:i0 + length, :], 2) * R[bi].T, axis=1) + i0 += length + return s_points, s_len, s_features + + elif (features is None): + s_points, s_len, s_labels = cpp_subsampling.subsample_batch(points, + batches_len, + classes=labels, + sampleDl=sampleDl, + max_p=max_p, + verbose=verbose) + if random_grid_orient: + i0 = 0 + for bi, length in enumerate(s_len): + # Apply the rotation + s_points[i0:i0 + length, :] = np.sum(np.expand_dims(s_points[i0:i0 + length, :], 2) * R[bi].T, axis=1) + i0 += length + return s_points, s_len, s_labels + + else: + s_points, s_len, s_features, s_labels = cpp_subsampling.subsample_batch(points, + batches_len, + features=features, + classes=labels, + sampleDl=sampleDl, + max_p=max_p, + verbose=verbose) + if random_grid_orient: + i0 = 0 + for bi, length in enumerate(s_len): + # Apply the rotation + s_points[i0:i0 + length, :] = np.sum(np.expand_dims(s_points[i0:i0 + length, :], 2) * R[bi].T, axis=1) + i0 += length + return s_points, s_len, s_features, s_labels + + +def batch_neighbors(queries, supports, q_batches, s_batches, radius): + """ + Computes neighbors for a batch of queries and supports + :param queries: (N1, 3) the query points + :param supports: (N2, 3) the 
support points + :param q_batches: (B) the list of lengths of batch elements in queries + :param s_batches: (B)the list of lengths of batch elements in supports + :param radius: float32 + :return: neighbors indices + """ + + return cpp_neighbors.batch_query(queries, supports, q_batches, s_batches, radius=radius) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class definition +# \**********************/ + + +class PointCloudDataset(Dataset): + """Parent class for Point Cloud Datasets.""" + + def __init__(self, name): + """ + Initialize parameters of the dataset here. + """ + + self.name = name + self.path = '' + self.label_to_names = {} + self.num_classes = 0 + self.label_values = np.zeros((0,), dtype=np.int32) + self.label_names = [] + self.label_to_idx = {} + self.name_to_label = {} + self.config = Config() + self.neighborhood_limits = [] + + return + + def __len__(self): + """ + Return the length of data here + """ + return 0 + + def __getitem__(self, idx): + """ + Return the item at the given index + """ + + return 0 + + def init_labels(self): + + # Initialize all label parameters given the label_to_names dict + self.num_classes = len(self.label_to_names) + self.label_values = np.sort([k for k, v in self.label_to_names.items()]) + self.label_names = [self.label_to_names[k] for k in self.label_values] + self.label_to_idx = {l: i for i, l in enumerate(self.label_values)} + self.name_to_label = {v: k for k, v in self.label_to_names.items()} + + def augmentation_transform(self, points, normals=None, verbose=False): + """Implementation of an augmentation transform for point clouds.""" + + ########## + # Rotation + ########## + + # Initialize rotation matrix + R = np.eye(points.shape[1]) + + if points.shape[1] == 3: + if self.config.augment_rotation == 'vertical': + + # Create random rotations + theta = np.random.rand() * 2 * np.pi + c, s = np.cos(theta), np.sin(theta) + R = 
np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=np.float32) + + elif self.config.augment_rotation == 'all': + + # Choose two random angles for the first vector in polar coordinates + theta = np.random.rand() * 2 * np.pi + phi = (np.random.rand() - 0.5) * np.pi + + # Create the first vector in Cartesian coordinates + u = np.array([np.cos(theta) * np.cos(phi), np.sin(theta) * np.cos(phi), np.sin(phi)]) + + # Choose a random rotation angle + alpha = np.random.rand() * 2 * np.pi + + # Create the rotation matrix with this vector and angle + R = create_3D_rotations(np.reshape(u, (1, -1)), np.reshape(alpha, (1, -1)))[0] + + R = R.astype(np.float32) + + ####### + # Scale + ####### + + # Choose random scales for each example (uniform in [min_s, max_s]) + min_s = self.config.augment_scale_min + max_s = self.config.augment_scale_max + if self.config.augment_scale_anisotropic: + scale = np.random.rand(points.shape[1]) * (max_s - min_s) + min_s + else: + scale = np.random.rand() * (max_s - min_s) + min_s + + # Add random symmetries to the scale factor + symmetries = np.array(self.config.augment_symmetries).astype(np.int32) + symmetries *= np.random.randint(2, size=points.shape[1]) + scale = (scale * (1 - symmetries * 2)).astype(np.float32) + + ####### + # Noise + ####### + + noise = (np.random.randn(points.shape[0], points.shape[1]) * self.config.augment_noise).astype(np.float32) + + ################## + # Apply transforms + ################## + + # Do not use np.dot because it is multi-threaded + #augmented_points = np.dot(points, R) * scale + noise + augmented_points = np.sum(np.expand_dims(points, 2) * R, axis=1) * scale + noise + + + if normals is None: + return augmented_points, scale, R + else: + # Anisotropic scale of the normals thanks to cross product formula + normal_scale = scale[[1, 2, 0]] * scale[[2, 0, 1]] + augmented_normals = np.dot(normals, R) * normal_scale + # Renormalise + augmented_normals *= 1 / (np.linalg.norm(augmented_normals, axis=1, keepdims=True) + 1e-6) + + if verbose: + 
test_p = [np.vstack([points, augmented_points])] + test_n = [np.vstack([normals, augmented_normals])] + test_l = [np.hstack([points[:, 2]*0, augmented_points[:, 2]*0+1])] + show_ModelNet_examples(test_p, test_n, test_l) + + return augmented_points, augmented_normals, scale, R + + def big_neighborhood_filter(self, neighbors, layer): + """ + Filter neighborhoods with max number of neighbors. Limit is set to keep XX% of the neighborhoods untouched. + Limit is computed at initialization + """ + + # crop neighbors matrix + if len(self.neighborhood_limits) > 0: + return neighbors[:, :self.neighborhood_limits[layer]] + else: + return neighbors + + def classification_inputs(self, + stacked_points, + stacked_features, + labels, + stack_lengths): + + # Starting radius of convolutions + r_normal = self.config.first_subsampling_dl * self.config.conv_radius + + # Starting layer + layer_blocks = [] + + # Lists of inputs + input_points = [] + input_neighbors = [] + input_pools = [] + input_stack_lengths = [] + deform_layers = [] + + ###################### + # Loop over the blocks + ###################### + + arch = self.config.architecture + + for block_i, block in enumerate(arch): + + # Get all blocks of the layer + if not ('pool' in block or 'strided' in block or 'global' in block or 'upsample' in block): + layer_blocks += [block] + continue + + # Convolution neighbors indices + # ***************************** + + deform_layer = False + if layer_blocks: + # Convolutions are done in this layer, compute the neighbors with the good radius + if np.any(['deformable' in blck for blck in layer_blocks]): + r = r_normal * self.config.deform_radius / self.config.conv_radius + deform_layer = True + else: + r = r_normal + conv_i = batch_neighbors(stacked_points, stacked_points, stack_lengths, stack_lengths, r) + + else: + # This layer only perform pooling, no neighbors required + conv_i = np.zeros((0, 1), dtype=np.int32) + + # Pooling neighbors indices + # ************************* + + # 
If end of layer is a pooling operation + if 'pool' in block or 'strided' in block: + + # New subsampling length + dl = 2 * r_normal / self.config.conv_radius + + # Subsampled points + pool_p, pool_b = batch_grid_subsampling(stacked_points, stack_lengths, sampleDl=dl) + + # Radius of pooled neighbors + if 'deformable' in block: + r = r_normal * self.config.deform_radius / self.config.conv_radius + deform_layer = True + else: + r = r_normal + + # Subsample indices + pool_i = batch_neighbors(pool_p, stacked_points, pool_b, stack_lengths, r) + + else: + # No pooling in the end of this layer, no pooling indices required + pool_i = np.zeros((0, 1), dtype=np.int32) + pool_p = np.zeros((0, 1), dtype=np.float32) + pool_b = np.zeros((0,), dtype=np.int32) + + # Reduce size of neighbors matrices by eliminating furthest point + conv_i = self.big_neighborhood_filter(conv_i, len(input_points)) + pool_i = self.big_neighborhood_filter(pool_i, len(input_points)) + + # Updating input lists + input_points += [stacked_points] + input_neighbors += [conv_i.astype(np.int64)] + input_pools += [pool_i.astype(np.int64)] + input_stack_lengths += [stack_lengths] + deform_layers += [deform_layer] + + # New points for next layer + stacked_points = pool_p + stack_lengths = pool_b + + # Update radius and reset blocks + r_normal *= 2 + layer_blocks = [] + + # Stop when meeting a global pooling or upsampling + if 'global' in block or 'upsample' in block: + break + + ############### + # Return inputs + ############### + + # Save deform layers + + # list of network inputs + li = input_points + input_neighbors + input_pools + input_stack_lengths + li += [stacked_features, labels] + + return li + + + def segmentation_inputs(self, + stacked_points, + stacked_features, + labels, + stack_lengths): + + # Starting radius of convolutions + r_normal = self.config.first_subsampling_dl * self.config.conv_radius + + # Starting layer + layer_blocks = [] + + # Lists of inputs + input_points = [] + input_neighbors = 
[] + input_pools = [] + input_upsamples = [] + input_stack_lengths = [] + deform_layers = [] + + ###################### + # Loop over the blocks + ###################### + + arch = self.config.architecture + + for block_i, block in enumerate(arch): + + # Get all blocks of the layer + if not ('pool' in block or 'strided' in block or 'global' in block or 'upsample' in block): + layer_blocks += [block] + continue + + # Convolution neighbors indices + # ***************************** + + deform_layer = False + if layer_blocks: + # Convolutions are done in this layer, compute the neighbors with the good radius + if np.any(['deformable' in blck for blck in layer_blocks]): + r = r_normal * self.config.deform_radius / self.config.conv_radius + deform_layer = True + else: + r = r_normal + conv_i = batch_neighbors(stacked_points, stacked_points, stack_lengths, stack_lengths, r) + + else: + # This layer only perform pooling, no neighbors required + conv_i = np.zeros((0, 1), dtype=np.int32) + + # Pooling neighbors indices + # ************************* + + # If end of layer is a pooling operation + if 'pool' in block or 'strided' in block: + + # New subsampling length + dl = 2 * r_normal / self.config.conv_radius + + # Subsampled points + pool_p, pool_b = batch_grid_subsampling(stacked_points, stack_lengths, sampleDl=dl) + + # Radius of pooled neighbors + if 'deformable' in block: + r = r_normal * self.config.deform_radius / self.config.conv_radius + deform_layer = True + else: + r = r_normal + + # Subsample indices + pool_i = batch_neighbors(pool_p, stacked_points, pool_b, stack_lengths, r) + + # Upsample indices (with the radius of the next layer to keep wanted density) + up_i = batch_neighbors(stacked_points, pool_p, stack_lengths, pool_b, 2 * r) + + else: + # No pooling in the end of this layer, no pooling indices required + pool_i = np.zeros((0, 1), dtype=np.int32) + pool_p = np.zeros((0, 3), dtype=np.float32) + pool_b = np.zeros((0,), dtype=np.int32) + up_i = np.zeros((0, 
1), dtype=np.int32) + + # Reduce size of neighbors matrices by eliminating the furthest points + conv_i = self.big_neighborhood_filter(conv_i, len(input_points)) + pool_i = self.big_neighborhood_filter(pool_i, len(input_points)) + if up_i.shape[0] > 0: + up_i = self.big_neighborhood_filter(up_i, len(input_points)+1) + + # Updating input lists + input_points += [stacked_points] + input_neighbors += [conv_i.astype(np.int64)] + input_pools += [pool_i.astype(np.int64)] + input_upsamples += [up_i.astype(np.int64)] + input_stack_lengths += [stack_lengths] + deform_layers += [deform_layer] + + # New points for next layer + stacked_points = pool_p + stack_lengths = pool_b + + # Update radius and reset blocks + r_normal *= 2 + layer_blocks = [] + + # Stop when meeting a global pooling or upsampling + if 'global' in block or 'upsample' in block: + break + + ############### + # Return inputs + ############### + + # list of network inputs + li = input_points + input_neighbors + input_pools + input_upsamples + input_stack_lengths + li += [stacked_features, labels] + + return li + + + + + + + + + + + + + diff --git a/competing_methods/my_KPConv/doc/Github_intro.png b/competing_methods/my_KPConv/doc/Github_intro.png new file mode 100644 index 00000000..b0ac13a8 Binary files /dev/null and b/competing_methods/my_KPConv/doc/Github_intro.png differ diff --git a/competing_methods/my_KPConv/doc/object_classification_guide.md b/competing_methods/my_KPConv/doc/object_classification_guide.md new file mode 100644 index 00000000..ff600a32 --- /dev/null +++ b/competing_methods/my_KPConv/doc/object_classification_guide.md @@ -0,0 +1,38 @@ + +## Object classification on ModelNet40 + +### Data + +We assume our experiment folder is located at `XXXX/Experiments/KPConv-PyTorch`, and that we use a common Data folder +located at `XXXX/Data`. The relative path to the Data folder is therefore `../../Data`. + +Regularly sampled clouds from the ModelNet40 dataset can be downloaded +here (1.6 GB). 
+Uncompress the data and move it inside the folder `../../Data/ModelNet40`. + +N.B. If you want to place your data anywhere else, you just have to change the variable +`self.path` of the `ModelNet40Dataset` class ([here](https://github.com/HuguesTHOMAS/KPConv-PyTorch/blob/e9d328135c0a3818ee0cf1bb5bb63434ce15c22e/datasets/ModelNet40.py#L113)). + + +### Training a model + +Simply run the following script to start the training: + + python3 training_ModelNet40.py + +This file contains a configuration subclass `ModelNet40Config`, inherited from the general configuration class `Config` defined in `utils/config.py`. The value of every parameter can be modified in the subclass. The first run of this script will precompute structures for the dataset, which might take some time. + +### Plot a logged training + +When you start a new training, it is saved in a `results` folder. A dated log folder will be created, containing information such as loss values, validation metrics, and model checkpoints. + +In `plot_convergence.py`, you will find detailed comments explaining how to choose which training log you want to plot. Follow them and then run the script: + + python3 plot_convergence.py + + +### Test the trained model + +The test script is the same for all models (segmentation or classification). In `test_any_model.py`, you will find detailed comments explaining how to choose which logged trained model you want to test. 
Follow them and then run the script: + + python3 test_any_model.py diff --git a/competing_methods/my_KPConv/doc/pretrained_models_guide.md b/competing_methods/my_KPConv/doc/pretrained_models_guide.md new file mode 100644 index 00000000..86a47ce7 --- /dev/null +++ b/competing_methods/my_KPConv/doc/pretrained_models_guide.md @@ -0,0 +1,5 @@ + + +## Test a pretrained network + +TODO \ No newline at end of file diff --git a/competing_methods/my_KPConv/doc/scene_segmentation_guide.md b/competing_methods/my_KPConv/doc/scene_segmentation_guide.md new file mode 100644 index 00000000..acc35fc9 --- /dev/null +++ b/competing_methods/my_KPConv/doc/scene_segmentation_guide.md @@ -0,0 +1,37 @@ + +## Scene Segmentation on S3DIS + +### Data + +We assume our experiment folder is located at `XXXX/Experiments/KPConv-PyTorch`, and that we use a common Data folder +located at `XXXX/Data`. The relative path to the Data folder is therefore `../../Data`. + +The S3DIS dataset can be downloaded here (4.8 GB). +Download the file named `Stanford3dDataset_v1.2.zip`, uncompress the data and move it to `../../Data/S3DIS`. + +N.B. If you want to place your data anywhere else, you just have to change the variable +`self.path` of the `S3DISDataset` class ([here](https://github.com/HuguesTHOMAS/KPConv-PyTorch/blob/afa18c92f00c6ed771b61cb08b285d2f93446ea4/datasets/S3DIS.py#L88)). + +### Training + +Simply run the following script to start the training: + + python3 training_S3DIS.py + +Similarly to ModelNet40 training, the parameters can be modified in a configuration subclass called `S3DISConfig`, and the first run of this script might take some time to precompute dataset structures. + + +### Plot a logged training + +When you start a new training, it is saved in a `results` folder. A dated log folder will be created, containing information such as loss values, validation metrics, and model checkpoints. 
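Because the dated log folders are named automatically at training time, their exact names are not fixed by this guide. As an illustrative sketch only (the `latest_log` helper is an assumption, not part of the repository), the most recent log folder under `results` could be located like this before configuring `plot_convergence.py`:

```python
import os

def latest_log(results_dir='results'):
    # Return the most recently modified log folder under results_dir
    # (the dated folders created by the training scripts),
    # or None if no training has been logged yet.
    logs = [os.path.join(results_dir, f) for f in os.listdir(results_dir)]
    logs = [f for f in logs if os.path.isdir(f)]
    return max(logs, key=os.path.getmtime) if logs else None
```

This is only a convenience sketch; `plot_convergence.py` itself documents how to select the log you want to plot.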
+ +In `plot_convergence.py`, you will find detailed comments explaining how to choose which training log you want to plot. Follow them and then run the script: + + python3 plot_convergence.py + + +### Test the trained model + +The test script is the same for all models (segmentation or classification). In `test_any_model.py`, you will find detailed comments explaining how to choose which logged trained model you want to test. Follow them and then run the script: + + python3 test_any_model.py diff --git a/competing_methods/my_KPConv/doc/slam_segmentation_guide.md b/competing_methods/my_KPConv/doc/slam_segmentation_guide.md new file mode 100644 index 00000000..be47d75f --- /dev/null +++ b/competing_methods/my_KPConv/doc/slam_segmentation_guide.md @@ -0,0 +1,48 @@ + +## Scene Segmentation on SemanticKitti + +### Data + +We assume our experiment folder is located at `XXXX/Experiments/KPConv-PyTorch`, and that we use a common Data folder +located at `XXXX/Data`. The relative path to the Data folder is therefore `../../Data`. + +The SemanticKitti dataset can be downloaded here (80 GB). +Download the three files named: + * [`data_odometry_velodyne.zip` (80 GB)](http://www.cvlibs.net/download.php?file=data_odometry_velodyne.zip) + * [`data_odometry_calib.zip` (1 MB)](http://www.cvlibs.net/download.php?file=data_odometry_calib.zip) + * [`data_odometry_labels.zip` (179 MB)](http://semantic-kitti.org/assets/data_odometry_labels.zip) + +Then uncompress the data and move it to `../../Data/SemanticKitti`. + +You also need to download the files +[`semantic-kitti-all.yaml`](https://github.com/PRBonn/semantic-kitti-api/blob/master/config/semantic-kitti-all.yaml) +and +[`semantic-kitti.yaml`](https://github.com/PRBonn/semantic-kitti-api/blob/master/config/semantic-kitti.yaml). +Place them in your `../../Data/SemanticKitti` folder. + +N.B. 
If you want to place your data anywhere else, you just have to change the variable +`self.path` of the `SemanticKittiDataset` class ([here](https://github.com/HuguesTHOMAS/KPConv-PyTorch/blob/c32e6ce94ed34a3dd9584f98d8dc0be02535dfb4/datasets/SemanticKitti.py#L65)). + +### Training + +Simply run the following script to start the training: + + python3 training_SemanticKitti.py + +Similarly to ModelNet40 training, the parameters can be modified in a configuration subclass called `SemanticKittiConfig`, and the first run of this script might take some time to precompute dataset structures. + + +### Plot a logged training + +When you start a new training, it is saved in a `results` folder. A dated log folder will be created, containing information such as loss values, validation metrics, and model checkpoints. + +In `plot_convergence.py`, you will find detailed comments explaining how to choose which training log you want to plot. Follow them and then run the script: + + python3 plot_convergence.py + + +### Test the trained model + +The test script is the same for all models (segmentation or classification). In `test_any_model.py`, you will find detailed comments explaining how to choose which logged trained model you want to test. Follow them and then run the script: + + python3 test_any_model.py diff --git a/competing_methods/my_KPConv/doc/visualization_guide.md b/competing_methods/my_KPConv/doc/visualization_guide.md new file mode 100644 index 00000000..b722a34a --- /dev/null +++ b/competing_methods/my_KPConv/doc/visualization_guide.md @@ -0,0 +1,25 @@ + + +## Visualize kernel deformations + +### Instructions + +To visualize kernel deformations, you need a dataset and a pretrained model that uses deformable KPConv. 
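Whether an architecture definition satisfies this requirement can be checked with the block-name convention used in `datasets/common.py`, where deformable layers carry the `'deformable'` keyword in their block names. The helper and the example architectures below are illustrative sketches, not part of the repository:

```python
def uses_deformable_kpconv(architecture):
    # Deformable layers are identified by the 'deformable' keyword in
    # their block names, the same test used when preparing dataset inputs.
    return any('deformable' in block for block in architecture)

# Hypothetical architecture lists (block names follow the KPConv convention)
rigid = ['simple', 'resnetb', 'resnetb_strided', 'global_average']
deform = ['simple', 'resnetb', 'resnetb_deformable_strided', 'global_average']
```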
+ +To start this visualization, run the script: + + python3 visualize_deformations.py + +### Details + +The visualization script runs the model on a batch of test examples (forward pass), and then shows these +examples in an interactive window. Here is a list of all keyboard shortcuts: + +- 'b' / 'n': smaller or larger point size. +- 'g' / 'h': previous or next example in the current batch. +- 'k': switch between the rigid kernel (original kernel point positions) and the deformed kernel (positions of the +kernel points after the shifts are applied). +- 'z': switch between the points displayed (input points, current layer points or both). +- '0': saves the example and deformed kernel as ply files. +- mouse left click: select a point and show the kernel at its location. +- exit window: compute next batch. diff --git a/competing_methods/my_KPConv/kernels/dispositions/k_015_center_3D.ply b/competing_methods/my_KPConv/kernels/dispositions/k_015_center_3D.ply new file mode 100644 index 00000000..4c1d948e Binary files /dev/null and b/competing_methods/my_KPConv/kernels/dispositions/k_015_center_3D.ply differ diff --git a/competing_methods/my_KPConv/kernels/kernel_points.py b/competing_methods/my_KPConv/kernels/kernel_points.py new file mode 100644 index 00000000..f109244b --- /dev/null +++ b/competing_methods/my_KPConv/kernels/kernel_points.py @@ -0,0 +1,490 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Functions handling the disposition of kernel points. 
+# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ------------------------------------------------------------------------------------------ +# +# Imports and global variables +# \**********************************/ +# + + +# Import numpy package and name it "np" +import time +import numpy as np +import matplotlib.pyplot as plt +from matplotlib import cm +from os import makedirs +from os.path import join, exists + +from utils.ply import read_ply, write_ply +from utils.config import bcolors + + +# ------------------------------------------------------------------------------------------ +# +# Functions +# \***************/ +# +# + +def create_3D_rotations(axis, angle): + """ + Create rotation matrices from a list of axes and angles. Code from wikipedia on quaternions + :param axis: float32[N, 3] + :param angle: float32[N,] + :return: float32[N, 3, 3] + """ + + t1 = np.cos(angle) + t2 = 1 - t1 + t3 = axis[:, 0] * axis[:, 0] + t6 = t2 * axis[:, 0] + t7 = t6 * axis[:, 1] + t8 = np.sin(angle) + t9 = t8 * axis[:, 2] + t11 = t6 * axis[:, 2] + t12 = t8 * axis[:, 1] + t15 = axis[:, 1] * axis[:, 1] + t19 = t2 * axis[:, 1] * axis[:, 2] + t20 = t8 * axis[:, 0] + t24 = axis[:, 2] * axis[:, 2] + R = np.stack([t1 + t2 * t3, + t7 - t9, + t11 + t12, + t7 + t9, + t1 + t2 * t15, + t19 - t20, + t11 - t12, + t19 + t20, + t1 + t2 * t24], axis=1) + + return np.reshape(R, (-1, 3, 3)) + + +def spherical_Lloyd(radius, num_cells, dimension=3, fixed='center', approximation='monte-carlo', + approx_n=5000, max_iter=500, momentum=0.9, verbose=0): + """ + Creation of kernel point via Lloyd algorithm. We use an approximation of the algorithm, and compute the Voronoi + cell centers with discretization of space. The exact formula is not trivial with part of the sphere as sides. 
:param radius: Radius of the kernels + :param num_cells: Number of cells (kernel points) in the Voronoi diagram. + :param dimension: dimension of the space + :param fixed: fix position of certain kernel points ('none', 'center' or 'verticals') + :param approximation: Approximation method for Lloyd's algorithm ('discretization', 'monte-carlo') + :param approx_n: Number of points used for the approximation. + :param max_iter: Maximum number of iterations for the algorithm. + :param momentum: Momentum of the low pass filter smoothing kernel point positions + :param verbose: display option + :return: points [num_kernels, num_points, dimension] + """ + + ####################### + # Parameters definition + ####################### + + # Radius used for optimization (points are rescaled afterwards) + radius0 = 1.0 + + ####################### + # Kernel initialization + ####################### + + # Random kernel points (Uniform distribution in a sphere) + kernel_points = np.zeros((0, dimension)) + while kernel_points.shape[0] < num_cells: + new_points = np.random.rand(num_cells, dimension) * 2 * radius0 - radius0 + kernel_points = np.vstack((kernel_points, new_points)) + d2 = np.sum(np.power(kernel_points, 2), axis=1) + kernel_points = kernel_points[np.logical_and(d2 < radius0 ** 2, (0.9 * radius0) ** 2 < d2), :] + kernel_points = kernel_points[:num_cells, :].reshape((num_cells, -1)) + + # Optional fixing + if fixed == 'center': + kernel_points[0, :] *= 0 + if fixed == 'verticals': + kernel_points[:3, :] *= 0 + kernel_points[1, -1] += 2 * radius0 / 3 + kernel_points[2, -1] -= 2 * radius0 / 3 + + ############################## + # Approximation initialization + ############################## + + # Initialize figure + if verbose > 1: + fig = plt.figure() + + # Initialize discretization if this method is chosen + if approximation == 'discretization': + side_n = int(np.floor(approx_n ** (1.
/ dimension))) + dl = 2 * radius0 / side_n + coords = np.arange(-radius0 + dl/2, radius0, dl) + if dimension == 2: + x, y = np.meshgrid(coords, coords) + X = np.vstack((np.ravel(x), np.ravel(y))).T + elif dimension == 3: + x, y, z = np.meshgrid(coords, coords, coords) + X = np.vstack((np.ravel(x), np.ravel(y), np.ravel(z))).T + elif dimension == 4: + x, y, z, t = np.meshgrid(coords, coords, coords, coords) + X = np.vstack((np.ravel(x), np.ravel(y), np.ravel(z), np.ravel(t))).T + else: + raise ValueError('Unsupported dimension (max is 4)') + elif approximation == 'monte-carlo': + X = np.zeros((0, dimension)) + else: + raise ValueError('Wrong approximation method chosen: "{:s}"'.format(approximation)) + + # Only points inside the sphere are used + d2 = np.sum(np.power(X, 2), axis=1) + X = X[d2 < radius0 * radius0, :] + + ##################### + # Kernel optimization + ##################### + + # Warning if at least one kernel point has no cell + warning = False + + # moving vectors of kernel points saved to detect convergence + max_moves = np.zeros((0,)) + + for iter in range(max_iter): + + # In the case of monte-carlo, renew the sampled points + if approximation == 'monte-carlo': + X = np.random.rand(approx_n, dimension) * 2 * radius0 - radius0 + d2 = np.sum(np.power(X, 2), axis=1) + X = X[d2 < radius0 * radius0, :] + + # Get the distances matrix [n_approx, K, dim] + differences = np.expand_dims(X, 1) - kernel_points + sq_distances = np.sum(np.square(differences), axis=2) + + # Compute cell centers + cell_inds = np.argmin(sq_distances, axis=1) + centers = [] + for c in range(num_cells): + bool_c = (cell_inds == c) + num_c = np.sum(bool_c.astype(np.int32)) + if num_c > 0: + centers.append(np.sum(X[bool_c, :], axis=0) / num_c) + else: + warning = True + centers.append(kernel_points[c]) + + # Update kernel points with low pass filter to smooth mote carlo + centers = np.vstack(centers) + moves = (1 - momentum) * (centers - kernel_points) + kernel_points += moves + + # 
Check moves for convergence + max_moves = np.append(max_moves, np.max(np.linalg.norm(moves, axis=1))) + + # Optional fixing + if fixed == 'center': + kernel_points[0, :] *= 0 + if fixed == 'verticals': + kernel_points[0, :] *= 0 + kernel_points[:3, :-1] *= 0 + + if verbose: + print('iter {:5d} / max move = {:f}'.format(iter, np.max(np.linalg.norm(moves, axis=1)))) + if warning: + print('{:}WARNING: at least one point has no cell{:}'.format(bcolors.WARNING, bcolors.ENDC)) + if verbose > 1: + plt.clf() + plt.scatter(X[:, 0], X[:, 1], c=cell_inds, s=20.0, + marker='.', cmap=plt.get_cmap('tab20')) + #plt.scatter(kernel_points[:, 0], kernel_points[:, 1], c=np.arange(num_cells), s=100.0, + # marker='+', cmap=plt.get_cmap('tab20')) + plt.plot(kernel_points[:, 0], kernel_points[:, 1], 'k+') + circle = plt.Circle((0, 0), radius0, color='r', fill=False) + fig.axes[0].add_artist(circle) + fig.axes[0].set_xlim((-radius0 * 1.1, radius0 * 1.1)) + fig.axes[0].set_ylim((-radius0 * 1.1, radius0 * 1.1)) + fig.axes[0].set_aspect('equal') + plt.draw() + plt.pause(0.001) + plt.show(block=False) + + ################### + # User verification + ################### + + # Show the convergence to ask user if this kernel is correct + if verbose: + if dimension == 2: + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=[10.4, 4.8]) + ax1.plot(max_moves) + ax2.scatter(X[:, 0], X[:, 1], c=cell_inds, s=20.0, + marker='.', cmap=plt.get_cmap('tab20')) + # plt.scatter(kernel_points[:, 0], kernel_points[:, 1], c=np.arange(num_cells), s=100.0, + # marker='+', cmap=plt.get_cmap('tab20')) + ax2.plot(kernel_points[:, 0], kernel_points[:, 1], 'k+') + circle = plt.Circle((0, 0), radius0, color='r', fill=False) + ax2.add_artist(circle) + ax2.set_xlim((-radius0 * 1.1, radius0 * 1.1)) + ax2.set_ylim((-radius0 * 1.1, radius0 * 1.1)) + ax2.set_aspect('equal') + plt.title('Check if kernel is correct.') + plt.draw() + plt.show() + + if dimension > 2: + plt.figure() + plt.plot(max_moves) + plt.title('Check if kernel is 
correct.') + plt.show() + + # Rescale kernels with real radius + return kernel_points * radius + + +def kernel_point_optimization_debug(radius, num_points, num_kernels=1, dimension=3, + fixed='center', ratio=0.66, verbose=0): + """ + Creation of kernel point via optimization of potentials. + :param radius: Radius of the kernels + :param num_points: points composing kernels + :param num_kernels: number of wanted kernels + :param dimension: dimension of the space + :param fixed: fix position of certain kernel points ('none', 'center' or 'verticals') + :param ratio: ratio of the radius where you want the kernels points to be placed + :param verbose: display option + :return: points [num_kernels, num_points, dimension] + """ + + ####################### + # Parameters definition + ####################### + + # Radius used for optimization (points are rescaled afterwards) + radius0 = 1 + diameter0 = 2 + + # Factor multiplicating gradients for moving points (~learning rate) + moving_factor = 1e-2 + continuous_moving_decay = 0.9995 + + # Gradient threshold to stop optimization + thresh = 1e-5 + + # Gradient clipping value + clip = 0.05 * radius0 + + ####################### + # Kernel initialization + ####################### + + # Random kernel points + kernel_points = np.random.rand(num_kernels * num_points - 1, dimension) * diameter0 - radius0 + while (kernel_points.shape[0] < num_kernels * num_points): + new_points = np.random.rand(num_kernels * num_points - 1, dimension) * diameter0 - radius0 + kernel_points = np.vstack((kernel_points, new_points)) + d2 = np.sum(np.power(kernel_points, 2), axis=1) + kernel_points = kernel_points[d2 < 0.5 * radius0 * radius0, :] + kernel_points = kernel_points[:num_kernels * num_points, :].reshape((num_kernels, num_points, -1)) + + # Optionnal fixing + if fixed == 'center': + kernel_points[:, 0, :] *= 0 + if fixed == 'verticals': + kernel_points[:, :3, :] *= 0 + kernel_points[:, 1, -1] += 2 * radius0 / 3 + kernel_points[:, 2, -1] -= 2 * 
radius0 / 3 + + ##################### + # Kernel optimization + ##################### + + # Initialize figure + if verbose>1: + fig = plt.figure() + + saved_gradient_norms = np.zeros((10000, num_kernels)) + old_gradient_norms = np.zeros((num_kernels, num_points)) + step = -1 + while step < 10000: + + # Increment + step += 1 + + # Compute gradients + # ***************** + + # Derivative of the sum of potentials of all points + A = np.expand_dims(kernel_points, axis=2) + B = np.expand_dims(kernel_points, axis=1) + interd2 = np.sum(np.power(A - B, 2), axis=-1) + inter_grads = (A - B) / (np.power(np.expand_dims(interd2, -1), 3/2) + 1e-6) + inter_grads = np.sum(inter_grads, axis=1) + + # Derivative of the radius potential + circle_grads = 10*kernel_points + + # All gradients + gradients = inter_grads + circle_grads + + if fixed == 'verticals': + gradients[:, 1:3, :-1] = 0 + + # Stop condition + # ************** + + # Compute norm of gradients + gradients_norms = np.sqrt(np.sum(np.power(gradients, 2), axis=-1)) + saved_gradient_norms[step, :] = np.max(gradients_norms, axis=1) + + # Stop if all moving points are gradients fixed (low gradients diff) + + if fixed == 'center' and np.max(np.abs(old_gradient_norms[:, 1:] - gradients_norms[:, 1:])) < thresh: + break + elif fixed == 'verticals' and np.max(np.abs(old_gradient_norms[:, 3:] - gradients_norms[:, 3:])) < thresh: + break + elif np.max(np.abs(old_gradient_norms - gradients_norms)) < thresh: + break + old_gradient_norms = gradients_norms + + # Move points + # *********** + + # Clip gradient to get moving dists + moving_dists = np.minimum(moving_factor * gradients_norms, clip) + + # Fix central point + if fixed == 'center': + moving_dists[:, 0] = 0 + if fixed == 'verticals': + moving_dists[:, 0] = 0 + + # Move points + kernel_points -= np.expand_dims(moving_dists, -1) * gradients / np.expand_dims(gradients_norms + 1e-6, -1) + + if verbose: + print('step {:5d} / max grad = {:f}'.format(step, np.max(gradients_norms[:, 
3:]))) + if verbose > 1: + plt.clf() + plt.plot(kernel_points[0, :, 0], kernel_points[0, :, 1], '.') + circle = plt.Circle((0, 0), radius, color='r', fill=False) + fig.axes[0].add_artist(circle) + fig.axes[0].set_xlim((-radius*1.1, radius*1.1)) + fig.axes[0].set_ylim((-radius*1.1, radius*1.1)) + fig.axes[0].set_aspect('equal') + plt.draw() + plt.pause(0.001) + plt.show(block=False) + print(moving_factor) + + # moving factor decay + moving_factor *= continuous_moving_decay + + # Remove unused lines in the saved gradients + if step < 10000: + saved_gradient_norms = saved_gradient_norms[:step+1, :] + + # Rescale radius to fit the wanted ratio of radius + r = np.sqrt(np.sum(np.power(kernel_points, 2), axis=-1)) + kernel_points *= ratio / np.mean(r[:, 1:]) + + # Rescale kernels with real radius + return kernel_points * radius, saved_gradient_norms + + +def load_kernels(radius, num_kpoints, dimension, fixed, lloyd=False): + + # Kernel directory + kernel_dir = 'kernels/dispositions' + if not exists(kernel_dir): + makedirs(kernel_dir) + + # To many points switch to Lloyds + if num_kpoints > 30: + lloyd = True + + # Kernel_file + kernel_file = join(kernel_dir, 'k_{:03d}_{:s}_{:d}D.ply'.format(num_kpoints, fixed, dimension)) + + # Check if already done + if not exists(kernel_file): + if lloyd: + # Create kernels + kernel_points = spherical_Lloyd(1.0, + num_kpoints, + dimension=dimension, + fixed=fixed, + verbose=0) + + else: + # Create kernels + kernel_points, grad_norms = kernel_point_optimization_debug(1.0, + num_kpoints, + num_kernels=100, + dimension=dimension, + fixed=fixed, + verbose=0) + + # Find best candidate + best_k = np.argmin(grad_norms[-1, :]) + + # Save points + kernel_points = kernel_points[best_k, :, :] + + write_ply(kernel_file, kernel_points, ['x', 'y', 'z']) + + else: + data = read_ply(kernel_file) + kernel_points = np.vstack((data['x'], data['y'], data['z'])).T + + # Random roations for the kernel + # N.B. 
4D random rotations not supported yet + R = np.eye(dimension) + theta = np.random.rand() * 2 * np.pi + if dimension == 2: + if fixed != 'vertical': + c, s = np.cos(theta), np.sin(theta) + R = np.array([[c, -s], [s, c]], dtype=np.float32) + + elif dimension == 3: + if fixed != 'vertical': + c, s = np.cos(theta), np.sin(theta) + R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=np.float32) + + else: + phi = (np.random.rand() - 0.5) * np.pi + + # Create the first vector in carthesian coordinates + u = np.array([np.cos(theta) * np.cos(phi), np.sin(theta) * np.cos(phi), np.sin(phi)]) + + # Choose a random rotation angle + alpha = np.random.rand() * 2 * np.pi + + # Create the rotation matrix with this vector and angle + R = create_3D_rotations(np.reshape(u, (1, -1)), np.reshape(alpha, (1, -1)))[0] + + R = R.astype(np.float32) + + # Add a small noise + kernel_points = kernel_points + np.random.normal(scale=0.01, size=kernel_points.shape) + + # Scale kernels + kernel_points = radius * kernel_points + + # Rotate kernels + kernel_points = np.matmul(kernel_points, R) + + return kernel_points.astype(np.float32) \ No newline at end of file diff --git a/competing_methods/my_KPConv/models/architectures.py b/competing_methods/my_KPConv/models/architectures.py new file mode 100644 index 00000000..e8cb2546 --- /dev/null +++ b/competing_methods/my_KPConv/models/architectures.py @@ -0,0 +1,415 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Define network architectures +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + +from models.blocks import * +import numpy as np + + +def p2p_fitting_regularizer(net): + + fitting_loss = 0 + repulsive_loss = 0 + + for m in 
net.modules(): + + if isinstance(m, KPConv) and m.deformable: + + ############## + # Fitting loss + ############## + + # Get the distance to closest input point and normalize to be independant from layers + KP_min_d2 = m.min_d2 / (m.KP_extent ** 2) + + # Loss will be the square distance to closest input point. We use L1 because dist is already squared + fitting_loss += net.l1(KP_min_d2, torch.zeros_like(KP_min_d2)) + + ################ + # Repulsive loss + ################ + + # Normalized KP locations + KP_locs = m.deformed_KP / m.KP_extent + + # Point should not be close to each other + for i in range(net.K): + other_KP = torch.cat([KP_locs[:, :i, :], KP_locs[:, i + 1:, :]], dim=1).detach() + distances = torch.sqrt(torch.sum((other_KP - KP_locs[:, i:i + 1, :]) ** 2, dim=2)) + rep_loss = torch.sum(torch.clamp_max(distances - net.repulse_extent, max=0.0) ** 2, dim=1) + repulsive_loss += net.l1(rep_loss, torch.zeros_like(rep_loss)) / net.K + + return net.deform_fitting_power * (2 * fitting_loss + repulsive_loss) + + +class KPCNN(nn.Module): + """ + Class defining KPCNN + """ + + def __init__(self, config): + super(KPCNN, self).__init__() + + ##################### + # Network opperations + ##################### + + # Current radius of convolution and feature dimension + layer = 0 + r = config.first_subsampling_dl * config.conv_radius + in_dim = config.in_features_dim + out_dim = config.first_features_dim + self.K = config.num_kernel_points + + # Save all block operations in a list of modules + self.block_ops = nn.ModuleList() + + # Loop over consecutive blocks + block_in_layer = 0 + for block_i, block in enumerate(config.architecture): + + # Check equivariance + if ('equivariant' in block) and (not out_dim % 3 == 0): + raise ValueError('Equivariant block but features dimension is not a factor of 3') + + # Detect upsampling block to stop + if 'upsample' in block: + break + + # Apply the good block function defining tf ops + self.block_ops.append(block_decider(block, + 
r, + in_dim, + out_dim, + layer, + config)) + + + # Index of block in this layer + block_in_layer += 1 + + # Update dimension of input from output + if 'simple' in block: + in_dim = out_dim // 2 + else: + in_dim = out_dim + + + # Detect change to a subsampled layer + if 'pool' in block or 'strided' in block: + # Update radius and feature dimension for next layer + layer += 1 + r *= 2 + out_dim *= 2 + block_in_layer = 0 + + self.head_mlp = UnaryBlock(out_dim, 1024, False, 0) + self.head_softmax = UnaryBlock(1024, config.num_classes, False, 0) + + ################ + # Network Losses + ################ + + self.criterion = torch.nn.CrossEntropyLoss() + self.deform_fitting_mode = config.deform_fitting_mode + self.deform_fitting_power = config.deform_fitting_power + self.deform_lr_factor = config.deform_lr_factor + self.repulse_extent = config.repulse_extent + self.output_loss = 0 + self.reg_loss = 0 + self.l1 = nn.L1Loss() + + return + + def forward(self, batch, config): + + # Save all block operations in a list of modules + x = batch.features.clone().detach() + + # Loop over consecutive blocks + for block_op in self.block_ops: + x = block_op(x, batch) + + # Head of network + x = self.head_mlp(x, batch) + x = self.head_softmax(x, batch) + + return x + + def loss(self, outputs, labels): + """ + Runs the loss on outputs of the model + :param outputs: logits + :param labels: labels + :return: loss + """ + + # Cross entropy loss + self.output_loss = self.criterion(outputs, labels) + + # Regularization of deformable offsets + if self.deform_fitting_mode == 'point2point': + self.reg_loss = p2p_fitting_regularizer(self) + elif self.deform_fitting_mode == 'point2plane': + raise ValueError('point2plane fitting mode not implemented yet.') + else: + raise ValueError('Unknown fitting mode: ' + self.deform_fitting_mode) + + # Combined loss + return self.output_loss + self.reg_loss + + @staticmethod + def accuracy(outputs, labels): + """ + Computes accuracy of the current batch + 
:param outputs: logits predicted by the network + :param labels: labels + :return: accuracy value + """ + + predicted = torch.argmax(outputs.data, dim=1) + total = labels.size(0) + correct = (predicted == labels).sum().item() + + return correct / total + + +class KPFCNN(nn.Module): + """ + Class defining KPFCNN + """ + + def __init__(self, config, lbl_values, ign_lbls): + super(KPFCNN, self).__init__() + + ############ + # Parameters + ############ + + # Current radius of convolution and feature dimension + layer = 0 + r = config.first_subsampling_dl * config.conv_radius + in_dim = config.in_features_dim + out_dim = config.first_features_dim + self.K = config.num_kernel_points + self.C = len(lbl_values) - len(ign_lbls) + + ##################### + # List Encoder blocks + ##################### + + # Save all block operations in a list of modules + self.encoder_blocks = nn.ModuleList() + self.encoder_skip_dims = [] + self.encoder_skips = [] + + # Loop over consecutive blocks + for block_i, block in enumerate(config.architecture): + + # Check equivariance + if ('equivariant' in block) and (not out_dim % 3 == 0): + raise ValueError('Equivariant block but features dimension is not a factor of 3') + + # Detect change to next layer for skip connection + if np.any([tmp in block for tmp in ['pool', 'strided', 'upsample', 'global']]): + self.encoder_skips.append(block_i) + self.encoder_skip_dims.append(in_dim) + + # Detect upsampling block to stop + if 'upsample' in block: + break + + # Apply the good block function defining tf ops + self.encoder_blocks.append(block_decider(block, + r, + in_dim, + out_dim, + layer, + config)) + + # Update dimension of input from output + if 'simple' in block: + in_dim = out_dim // 2 + else: + in_dim = out_dim + + # Detect change to a subsampled layer + if 'pool' in block or 'strided' in block: + # Update radius and feature dimension for next layer + layer += 1 + r *= 2 + out_dim *= 2 + + ##################### + # List Decoder blocks + 
##################### + + # Save all block operations in a list of modules + self.decoder_blocks = nn.ModuleList() + self.decoder_concats = [] + + # Find first upsampling block + start_i = 0 + for block_i, block in enumerate(config.architecture): + if 'upsample' in block: + start_i = block_i + break + + # Loop over consecutive blocks + for block_i, block in enumerate(config.architecture[start_i:]): + + # Add dimension of skip connection concat + if block_i > 0 and 'upsample' in config.architecture[start_i + block_i - 1]: + in_dim += self.encoder_skip_dims[layer] + self.decoder_concats.append(block_i) + + # Apply the good block function defining tf ops + self.decoder_blocks.append(block_decider(block, + r, + in_dim, + out_dim, + layer, + config)) + + # Update dimension of input from output + in_dim = out_dim + + # Detect change to a subsampled layer + if 'upsample' in block: + # Update radius and feature dimension for next layer + layer -= 1 + r *= 0.5 + out_dim = out_dim // 2 + + self.head_mlp = UnaryBlock(out_dim, config.first_features_dim, False, 0) + self.head_softmax = UnaryBlock(config.first_features_dim, self.C, False, 0) + + ################ + # Network Losses + ################ + + # List of valid labels (those not ignored in loss) + self.valid_labels = np.sort([c for c in lbl_values if c not in ign_lbls]) + + # Choose segmentation loss + if len(config.class_w) > 0: + class_w = torch.from_numpy(np.array(config.class_w, dtype=np.float32)) + self.criterion = torch.nn.CrossEntropyLoss(weight=class_w, ignore_index=-1) + else: + self.criterion = torch.nn.CrossEntropyLoss(ignore_index=-1) + self.deform_fitting_mode = config.deform_fitting_mode + self.deform_fitting_power = config.deform_fitting_power + self.deform_lr_factor = config.deform_lr_factor + self.repulse_extent = config.repulse_extent + self.output_loss = 0 + self.reg_loss = 0 + self.l1 = nn.L1Loss() + + return + + def forward(self, batch, config): + + # Get input features + x = 
batch.features.clone().detach() + + # Loop over consecutive blocks + skip_x = [] + for block_i, block_op in enumerate(self.encoder_blocks): + if block_i in self.encoder_skips: + skip_x.append(x) + x = block_op(x, batch) + + for block_i, block_op in enumerate(self.decoder_blocks): + if block_i in self.decoder_concats: + x = torch.cat([x, skip_x.pop()], dim=1) + x = block_op(x, batch) + + # Head of network + x = self.head_mlp(x, batch) + x = self.head_softmax(x, batch) + + return x + + def loss(self, outputs, labels): + """ + Runs the loss on outputs of the model + :param outputs: logits + :param labels: labels + :return: loss + """ + + # Set all ignored labels to -1 and correct the other label to be in [0, C-1] range + target = - torch.ones_like(labels) + for i, c in enumerate(self.valid_labels): + target[labels == c] = i + + # Reshape to have a minibatch size of 1 + outputs = torch.transpose(outputs, 0, 1) + outputs = outputs.unsqueeze(0) + target = target.unsqueeze(0) + + # Cross entropy loss + self.output_loss = self.criterion(outputs, target) + + # Regularization of deformable offsets + if self.deform_fitting_mode == 'point2point': + self.reg_loss = p2p_fitting_regularizer(self) + elif self.deform_fitting_mode == 'point2plane': + raise ValueError('point2plane fitting mode not implemented yet.') + else: + raise ValueError('Unknown fitting mode: ' + self.deform_fitting_mode) + + # Combined loss + return self.output_loss + self.reg_loss + + def accuracy(self, outputs, labels): + """ + Computes accuracy of the current batch + :param outputs: logits predicted by the network + :param labels: labels + :return: accuracy value + """ + + # Set all ignored labels to -1 and correct the other label to be in [0, C-1] range + target = - torch.ones_like(labels) + for i, c in enumerate(self.valid_labels): + target[labels == c] = i + + predicted = torch.argmax(outputs.data, dim=1) + total = target.size(0) + correct = (predicted == target).sum().item() + + return correct / total + 
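The ignored-label remapping shared by `loss` and `accuracy` above can be illustrated with a standalone NumPy sketch of the same logic (illustration only, not the torch implementation; the label values are made up):

```python
import numpy as np

# Valid labels are the dataset label values that are not ignored,
# sorted as in KPFCNN.__init__ (here: values 1, 2, 3 kept, 0 ignored).
lbl_values = [0, 1, 2, 3]
ign_lbls = [0]
valid_labels = np.sort([c for c in lbl_values if c not in ign_lbls])

# Raw ground-truth labels for a few points
labels = np.array([0, 1, 2, 3, 1])

# Ignored labels become -1, the others are mapped to the [0, C-1] range
target = -np.ones_like(labels)
for i, c in enumerate(valid_labels):
    target[labels == c] = i

print(target)  # [-1  0  1  2  0]

# Accuracy against predicted classes (also in [0, C-1]); note that,
# like the method above, ignored points still count in the total
predicted = np.array([0, 0, 1, 2, 0])
print(np.mean(predicted == target))  # 0.8
```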
+ + + + + + + + + + + + + + + + + + + + diff --git a/competing_methods/my_KPConv/models/blocks.py b/competing_methods/my_KPConv/models/blocks.py new file mode 100644 index 00000000..86b04a34 --- /dev/null +++ b/competing_methods/my_KPConv/models/blocks.py @@ -0,0 +1,694 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Define network blocks +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + + +import time +import math +import torch +import torch.nn as nn +from torch.nn.parameter import Parameter +from torch.nn.init import kaiming_uniform_ +from kernels.kernel_points import load_kernels + +from utils.ply import write_ply + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Simple functions +# \**********************/ +# + + +def gather(x, idx, method=2): + """ + implementation of a custom gather operation for faster backwards. + :param x: input with shape [N, D_1, ... D_d] + :param idx: indexing with shape [n_1, ..., n_m] + :param method: Choice of the method + :return: x[idx] with shape [n_1, ..., n_m, D_1, ... 
D_d] + """ + + if method == 0: + return x[idx] + elif method == 1: + x = x.unsqueeze(1) + x = x.expand((-1, idx.shape[-1], -1)) + idx = idx.unsqueeze(2) + idx = idx.expand((-1, -1, x.shape[-1])) + return x.gather(0, idx) + elif method == 2: + for i, ni in enumerate(idx.size()[1:]): + x = x.unsqueeze(i+1) + new_s = list(x.size()) + new_s[i+1] = ni + x = x.expand(new_s) + n = len(idx.size()) + for i, di in enumerate(x.size()[n:]): + idx = idx.unsqueeze(i+n) + new_s = list(idx.size()) + new_s[i+n] = di + idx = idx.expand(new_s) + return x.gather(0, idx) + else: + raise ValueError('Unkown method') + + +def radius_gaussian(sq_r, sig, eps=1e-9): + """ + Compute a radius gaussian (gaussian of distance) + :param sq_r: input radiuses [dn, ..., d1, d0] + :param sig: extents of gaussians [d1, d0] or [d0] or float + :return: gaussian of sq_r [dn, ..., d1, d0] + """ + return torch.exp(-sq_r / (2 * sig**2 + eps)) + + +def closest_pool(x, inds): + """ + Pools features from the closest neighbors. WARNING: this function assumes the neighbors are ordered. + :param x: [n1, d] features matrix + :param inds: [n2, max_num] Only the first column is used for pooling + :return: [n2, d] pooled features matrix + """ + + # Add a last row with minimum features for shadow pools + x = torch.cat((x, torch.zeros_like(x[:1, :])), 0) + + # Get features for each pooling location [n2, d] + return gather(x, inds[:, 0]) + + +def max_pool(x, inds): + """ + Pools features with the maximum values. 
+ :param x: [n1, d] features matrix + :param inds: [n2, max_num] pooling indices + :return: [n2, d] pooled features matrix + """ + + # Add a last row with minimum features for shadow pools + x = torch.cat((x, torch.zeros_like(x[:1, :])), 0) + + # Get all features for each pooling location [n2, max_num, d] + pool_features = gather(x, inds) + + # Pool the maximum [n2, d] + max_features, _ = torch.max(pool_features, 1) + return max_features + + +def global_average(x, batch_lengths): + """ + Block performing a global average over batch pooling + :param x: [N, D] input features + :param batch_lengths: [B] list of batch lengths + :return: [B, D] averaged features + """ + + # Loop over the clouds of the batch + averaged_features = [] + i0 = 0 + for b_i, length in enumerate(batch_lengths): + + # Average features for each batch cloud + averaged_features.append(torch.mean(x[i0:i0 + length], dim=0)) + + # Increment for next cloud + i0 += length + + # Average features in each batch + return torch.stack(averaged_features) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# KPConv class +# \******************/ +# + + +class KPConv(nn.Module): + + def __init__(self, kernel_size, p_dim, in_channels, out_channels, KP_extent, radius, + fixed_kernel_points='center', KP_influence='linear', aggregation_mode='sum', + deformable=False, modulated=False): + """ + Initialize parameters for KPConvDeformable. + :param kernel_size: Number of kernel points. + :param p_dim: dimension of the point space. + :param in_channels: dimension of input features. + :param out_channels: dimension of output features. + :param KP_extent: influence radius of each kernel point. + :param radius: radius used for kernel point init. Even for deformable, use the config.conv_radius + :param fixed_kernel_points: fix position of certain kernel points ('none', 'center' or 'verticals'). 
+ :param KP_influence: influence function of the kernel points ('constant', 'linear', 'gaussian'). + :param aggregation_mode: choose to sum influences, or only keep the closest ('closest', 'sum'). + :param deformable: choose deformable or not + :param modulated: choose if kernel weights are modulated in addition to deformed + """ + super(KPConv, self).__init__() + + # Save parameters + self.K = kernel_size + self.p_dim = p_dim + self.in_channels = in_channels + self.out_channels = out_channels + self.radius = radius + self.KP_extent = KP_extent + self.fixed_kernel_points = fixed_kernel_points + self.KP_influence = KP_influence + self.aggregation_mode = aggregation_mode + self.deformable = deformable + self.modulated = modulated + + # Running variable containing deformed KP distance to input points. (used in regularization loss) + self.min_d2 = None + self.deformed_KP = None + self.offset_features = None + + # Initialize weights + self.weights = Parameter(torch.zeros((self.K, in_channels, out_channels), dtype=torch.float32), + requires_grad=True) + + # Initiate weights for offsets + if deformable: + if modulated: + self.offset_dim = (self.p_dim + 1) * self.K + else: + self.offset_dim = self.p_dim * self.K + self.offset_conv = KPConv(self.K, + self.p_dim, + self.in_channels, + self.offset_dim, + KP_extent, + radius, + fixed_kernel_points=fixed_kernel_points, + KP_influence=KP_influence, + aggregation_mode=aggregation_mode) + self.offset_bias = Parameter(torch.zeros(self.offset_dim, dtype=torch.float32), requires_grad=True) + + else: + self.offset_dim = None + self.offset_conv = None + self.offset_bias = None + + # Reset parameters + self.reset_parameters() + + # Initialize kernel points + self.kernel_points = self.init_KP() + + return + + def reset_parameters(self): + kaiming_uniform_(self.weights, a=math.sqrt(5)) + if self.deformable: + nn.init.zeros_(self.offset_bias) + return + + def init_KP(self): + """ + Initialize the kernel point positions in a sphere + 
:return: the tensor of kernel points + """ + + # Create one kernel disposition (as numpy array). Choose the KP distance to center thanks to the KP extent + K_points_numpy = load_kernels(self.radius, + self.K, + dimension=self.p_dim, + fixed=self.fixed_kernel_points) + + return Parameter(torch.tensor(K_points_numpy, dtype=torch.float32), + requires_grad=False) + + def forward(self, q_pts, s_pts, neighb_inds, x): + + ################### + # Offset generation + ################### + + if self.deformable: + + # Get offsets with a KPConv that only takes part of the features + self.offset_features = self.offset_conv(q_pts, s_pts, neighb_inds, x) + self.offset_bias + + if self.modulated: + + # Get offset (in normalized scale) from features + unscaled_offsets = self.offset_features[:, :self.p_dim * self.K] + unscaled_offsets = unscaled_offsets.view(-1, self.K, self.p_dim) + + # Get modulations + modulations = 2 * torch.sigmoid(self.offset_features[:, self.p_dim * self.K:]) + + else: + + # Get offset (in normalized scale) from features + unscaled_offsets = self.offset_features.view(-1, self.K, self.p_dim) + + # No modulations + modulations = None + + # Rescale offset for this layer + offsets = unscaled_offsets * self.KP_extent + + else: + offsets = None + modulations = None + + ###################### + # Deformed convolution + ###################### + + # Add a fake point in the last row for shadow neighbors + s_pts = torch.cat((s_pts, torch.zeros_like(s_pts[:1, :]) + 1e6), 0) + + # Get neighbor points [n_points, n_neighbors, dim] + neighbors = s_pts[neighb_inds, :] + + # Center every neighborhood + neighbors = neighbors - q_pts.unsqueeze(1) + + # Apply offsets to kernel points [n_points, n_kpoints, dim] + if self.deformable: + self.deformed_KP = offsets + self.kernel_points + deformed_K_points = self.deformed_KP.unsqueeze(1) + else: + deformed_K_points = self.kernel_points + + # Get all difference matrices [n_points, n_neighbors, n_kpoints, dim] + neighbors.unsqueeze_(2) + 
differences = neighbors - deformed_K_points
+
+        # Get the square distances [n_points, n_neighbors, n_kpoints]
+        sq_distances = torch.sum(differences ** 2, dim=3)
+
+        # Optimization by ignoring points outside a deformed KP range
+        if self.deformable:
+
+            # Save distances for loss
+            self.min_d2, _ = torch.min(sq_distances, dim=1)
+
+            # Boolean of the neighbors in range of a kernel point [n_points, n_neighbors]
+            in_range = torch.any(sq_distances < self.KP_extent ** 2, dim=2).type(torch.int32)
+
+            # New value of max neighbors
+            new_max_neighb = torch.max(torch.sum(in_range, dim=1))
+
+            # For each row of neighbors, indices of the ones that are in range [n_points, new_max_neighb]
+            neighb_row_bool, neighb_row_inds = torch.topk(in_range, new_max_neighb.item(), dim=1)
+
+            # Gather new neighbor indices [n_points, new_max_neighb]
+            new_neighb_inds = neighb_inds.gather(1, neighb_row_inds, sparse_grad=False)
+
+            # Gather new distances to KP [n_points, new_max_neighb, n_kpoints]
+            neighb_row_inds.unsqueeze_(2)
+            neighb_row_inds = neighb_row_inds.expand(-1, -1, self.K)
+            sq_distances = sq_distances.gather(1, neighb_row_inds, sparse_grad=False)
+
+            # New shadow neighbors have to point to the last shadow point
+            new_neighb_inds *= neighb_row_bool
+            new_neighb_inds -= (neighb_row_bool.type(torch.int64) - 1) * int(s_pts.shape[0] - 1)
+        else:
+            new_neighb_inds = neighb_inds
+
+        # Get Kernel point influences [n_points, n_kpoints, n_neighbors]
+        if self.KP_influence == 'constant':
+            # Every point gets an influence of 1.
+            all_weights = torch.ones_like(sq_distances)
+            all_weights = torch.transpose(all_weights, 1, 2)
+
+        elif self.KP_influence == 'linear':
+            # Influence decreases linearly with distance, reaching zero at d = KP_extent.
+            all_weights = torch.clamp(1 - torch.sqrt(sq_distances) / self.KP_extent, min=0.0)
+            all_weights = torch.transpose(all_weights, 1, 2)
+
+        elif self.KP_influence == 'gaussian':
+            # Gaussian influence of the distance.
+ sigma = self.KP_extent * 0.3 + all_weights = radius_gaussian(sq_distances, sigma) + all_weights = torch.transpose(all_weights, 1, 2) + else: + raise ValueError('Unknown influence function type (config.KP_influence)') + + # In case of closest mode, only the closest KP can influence each point + if self.aggregation_mode == 'closest': + neighbors_1nn = torch.argmin(sq_distances, dim=2) + all_weights *= torch.transpose(nn.functional.one_hot(neighbors_1nn, self.K), 1, 2) + + elif self.aggregation_mode != 'sum': + raise ValueError("Unknown convolution mode. Should be 'closest' or 'sum'") + + # Add a zero feature for shadow neighbors + x = torch.cat((x, torch.zeros_like(x[:1, :])), 0) + + # Get the features of each neighborhood [n_points, n_neighbors, in_fdim] + neighb_x = gather(x, new_neighb_inds) + + # Apply distance weights [n_points, n_kpoints, in_fdim] + weighted_features = torch.matmul(all_weights, neighb_x) + + # Apply modulations + if self.deformable and self.modulated: + weighted_features *= modulations.unsqueeze(2) + + # Apply network weights [n_kpoints, n_points, out_fdim] + weighted_features = weighted_features.permute((1, 0, 2)) + kernel_outputs = torch.matmul(weighted_features, self.weights) + + # Convolution sum [n_points, out_fdim] + return torch.sum(kernel_outputs, dim=0) + + def __repr__(self): + return 'KPConv(radius: {:.2f}, in_feat: {:d}, out_feat: {:d})'.format(self.radius, + self.in_channels, + self.out_channels) + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Complex blocks +# \********************/ +# + +def block_decider(block_name, + radius, + in_dim, + out_dim, + layer_ind, + config): + + if block_name == 'unary': + return UnaryBlock(in_dim, out_dim, config.use_batch_norm, config.batch_norm_momentum) + + elif block_name in ['simple', + 'simple_deformable', + 'simple_invariant', + 'simple_equivariant', + 'simple_strided', + 'simple_deformable_strided', + 
'simple_invariant_strided', + 'simple_equivariant_strided']: + return SimpleBlock(block_name, in_dim, out_dim, radius, layer_ind, config) + + elif block_name in ['resnetb', + 'resnetb_invariant', + 'resnetb_equivariant', + 'resnetb_deformable', + 'resnetb_strided', + 'resnetb_deformable_strided', + 'resnetb_equivariant_strided', + 'resnetb_invariant_strided']: + return ResnetBottleneckBlock(block_name, in_dim, out_dim, radius, layer_ind, config) + + elif block_name == 'max_pool' or block_name == 'max_pool_wide': + return MaxPoolBlock(layer_ind) + + elif block_name == 'global_average': + return GlobalAverageBlock() + + elif block_name == 'nearest_upsample': + return NearestUpsampleBlock(layer_ind) + + else: + raise ValueError('Unknown block name in the architecture definition : ' + block_name) + + +class BatchNormBlock(nn.Module): + + def __init__(self, in_dim, use_bn, bn_momentum): + """ + Initialize a batch normalization block. If network does not use batch normalization, replace with biases. 
+        :param in_dim: dimension of input features
+        :param use_bn: boolean indicating if we use Batch Norm
+        :param bn_momentum: Batch norm momentum
+        """
+        super(BatchNormBlock, self).__init__()
+        self.bn_momentum = bn_momentum
+        self.use_bn = use_bn
+        self.in_dim = in_dim
+        if self.use_bn:
+            self.batch_norm = nn.BatchNorm1d(in_dim, momentum=bn_momentum)
+            #self.batch_norm = nn.InstanceNorm1d(in_dim, momentum=bn_momentum)
+        else:
+            self.bias = Parameter(torch.zeros(in_dim, dtype=torch.float32), requires_grad=True)
+        return
+
+    def reset_parameters(self):
+        nn.init.zeros_(self.bias)
+
+    def forward(self, x):
+        if self.use_bn:
+
+            x = x.unsqueeze(2)
+            x = x.transpose(0, 2)
+            x = self.batch_norm(x)
+            x = x.transpose(0, 2)
+            return x.squeeze()
+        else:
+            return x + self.bias
+
+    def __repr__(self):
+        return 'BatchNormBlock(in_feat: {:d}, momentum: {:.3f}, only_bias: {:s})'.format(self.in_dim,
+                                                                                         self.bn_momentum,
+                                                                                         str(not self.use_bn))
+
+
+class UnaryBlock(nn.Module):
+
+    def __init__(self, in_dim, out_dim, use_bn, bn_momentum, no_relu=False):
+        """
+        Initialize a standard unary block with its ReLU and BatchNorm.
+        :param in_dim: dimension of input features
+        :param out_dim: dimension of output features
+        :param use_bn: boolean indicating if we use Batch Norm
+        :param bn_momentum: Batch norm momentum
+        """
+
+        super(UnaryBlock, self).__init__()
+        self.bn_momentum = bn_momentum
+        self.use_bn = use_bn
+        self.no_relu = no_relu
+        self.in_dim = in_dim
+        self.out_dim = out_dim
+        self.mlp = nn.Linear(in_dim, out_dim, bias=False)
+        self.batch_norm = BatchNormBlock(out_dim, self.use_bn, self.bn_momentum)
+        if not no_relu:
+            self.leaky_relu = nn.LeakyReLU(0.1)
+        return
+
+    def forward(self, x, batch=None):
+        x = self.mlp(x)
+        x = self.batch_norm(x)
+        if not self.no_relu:
+            x = self.leaky_relu(x)
+        return x
+
+    def __repr__(self):
+        return 'UnaryBlock(in_feat: {:d}, out_feat: {:d}, BN: {:s}, ReLU: {:s})'.format(self.in_dim,
+                                                                                        self.out_dim,
+                                                                                        str(self.use_bn),
+                                                                                        str(not self.no_relu))
+
+
+class SimpleBlock(nn.Module):
+
+    def __init__(self, block_name, in_dim, out_dim, radius, layer_ind, config):
+        """
+        Initialize a simple convolution block with its ReLU and BatchNorm.
+        :param in_dim: dimension of input features
+        :param out_dim: dimension of output features
+        :param radius: current radius of convolution
+        :param config: parameters
+        """
+        super(SimpleBlock, self).__init__()
+
+        # get KP_extent from current radius
+        current_extent = radius * config.KP_extent / config.conv_radius
+
+        # Get other parameters
+        self.bn_momentum = config.batch_norm_momentum
+        self.use_bn = config.use_batch_norm
+        self.layer_ind = layer_ind
+        self.block_name = block_name
+        self.in_dim = in_dim
+        self.out_dim = out_dim
+
+        # Define the KPConv class
+        self.KPConv = KPConv(config.num_kernel_points,
+                             config.in_points_dim,
+                             in_dim,
+                             out_dim // 2,
+                             current_extent,
+                             radius,
+                             fixed_kernel_points=config.fixed_kernel_points,
+                             KP_influence=config.KP_influence,
+                             aggregation_mode=config.aggregation_mode,
+                             deformable='deform' in block_name,
+                             modulated=config.modulated)
+
+        # Other operations
+        self.batch_norm = BatchNormBlock(out_dim // 2, self.use_bn, self.bn_momentum)
+        self.leaky_relu = nn.LeakyReLU(0.1)
+
+        return
+
+    def forward(self, x, batch):
+
+        if 'strided' in self.block_name:
+            q_pts = batch.points[self.layer_ind + 1]
+            s_pts = batch.points[self.layer_ind]
+            neighb_inds = batch.pools[self.layer_ind]
+        else:
+            q_pts = batch.points[self.layer_ind]
+            s_pts = batch.points[self.layer_ind]
+            neighb_inds = batch.neighbors[self.layer_ind]
+
+        x = self.KPConv(q_pts, s_pts, neighb_inds, x)
+        return self.leaky_relu(self.batch_norm(x))
+
+
+class ResnetBottleneckBlock(nn.Module):
+
+    def __init__(self, block_name, in_dim, out_dim, radius, layer_ind, config):
+        """
+        Initialize a resnet bottleneck block.
+        :param in_dim: dimension of input features
+        :param out_dim: dimension of output features
+        :param radius: current radius of convolution
+        :param config: parameters
+        """
+        super(ResnetBottleneckBlock, self).__init__()
+
+        # get KP_extent from current radius
+        current_extent = radius * config.KP_extent / config.conv_radius
+
+        # Get other parameters
+        self.bn_momentum = config.batch_norm_momentum
+        self.use_bn = config.use_batch_norm
+        self.block_name = block_name
+        self.layer_ind = layer_ind
+        self.in_dim = in_dim
+        self.out_dim = out_dim
+
+        # First downscaling mlp
+        if in_dim != out_dim // 4:
+            self.unary1 = UnaryBlock(in_dim, out_dim // 4, self.use_bn, self.bn_momentum)
+        else:
+            self.unary1 = nn.Identity()
+
+        # KPConv block
+        self.KPConv = KPConv(config.num_kernel_points,
+                             config.in_points_dim,
+                             out_dim // 4,
+                             out_dim // 4,
+                             current_extent,
+                             radius,
+                             fixed_kernel_points=config.fixed_kernel_points,
+                             KP_influence=config.KP_influence,
+                             aggregation_mode=config.aggregation_mode,
+                             deformable='deform' in block_name,
+                             modulated=config.modulated)
+        self.batch_norm_conv = BatchNormBlock(out_dim // 4, self.use_bn, self.bn_momentum)
+
+        # Second upscaling mlp
+        self.unary2 = UnaryBlock(out_dim // 4, out_dim, self.use_bn, self.bn_momentum, no_relu=True)
+
+        # Shortcut optional mlp
+        if in_dim != out_dim:
+            self.unary_shortcut = UnaryBlock(in_dim, out_dim, self.use_bn, self.bn_momentum, no_relu=True)
+        else:
+            self.unary_shortcut = nn.Identity()
+
+        # Other operations
+        self.leaky_relu = nn.LeakyReLU(0.1)
+
+        return
+
+    def forward(self, features, batch):
+
+        if 'strided' in self.block_name:
+            q_pts = batch.points[self.layer_ind + 1]
+            s_pts = batch.points[self.layer_ind]
+            neighb_inds = batch.pools[self.layer_ind]
+        else:
+            q_pts = batch.points[self.layer_ind]
+            s_pts = batch.points[self.layer_ind]
+            neighb_inds = batch.neighbors[self.layer_ind]
+
+        # First downscaling mlp
+        x = self.unary1(features)
+
+        # Convolution
+        x = self.KPConv(q_pts, s_pts, neighb_inds, x)
+
x = self.leaky_relu(self.batch_norm_conv(x))
+
+        # Second upscaling mlp
+        x = self.unary2(x)
+
+        # Shortcut
+        if 'strided' in self.block_name:
+            shortcut = max_pool(features, neighb_inds)
+        else:
+            shortcut = features
+        shortcut = self.unary_shortcut(shortcut)
+
+        return self.leaky_relu(x + shortcut)
+
+
+class GlobalAverageBlock(nn.Module):
+
+    def __init__(self):
+        """
+        Initialize a global average block.
+        """
+        super(GlobalAverageBlock, self).__init__()
+        return
+
+    def forward(self, x, batch):
+        return global_average(x, batch.lengths[-1])
+
+
+class NearestUpsampleBlock(nn.Module):
+
+    def __init__(self, layer_ind):
+        """
+        Initialize a nearest upsampling block.
+        """
+        super(NearestUpsampleBlock, self).__init__()
+        self.layer_ind = layer_ind
+        return
+
+    def forward(self, x, batch):
+        return closest_pool(x, batch.upsamples[self.layer_ind - 1])
+
+    def __repr__(self):
+        return 'NearestUpsampleBlock(layer: {:d} -> {:d})'.format(self.layer_ind,
+                                                                  self.layer_ind - 1)
+
+
+class MaxPoolBlock(nn.Module):
+
+    def __init__(self, layer_ind):
+        """
+        Initialize a max pooling block.
+ """ + super(MaxPoolBlock, self).__init__() + self.layer_ind = layer_ind + return + + def forward(self, x, batch): + return max_pool(x, batch.pools[self.layer_ind + 1]) + diff --git a/competing_methods/my_KPConv/plot_convergence.py b/competing_methods/my_KPConv/plot_convergence.py new file mode 100644 index 00000000..fa031055 --- /dev/null +++ b/competing_methods/my_KPConv/plot_convergence.py @@ -0,0 +1,810 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Callable script to test any model on any dataset +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import os +import torch +import numpy as np +import matplotlib.pyplot as plt +from os.path import isfile, join, exists +from os import listdir, remove, getcwd +from sklearn.metrics import confusion_matrix +import time + +# My libs +from utils.config import Config +from utils.metrics import IoU_from_confusions, smooth_metrics, fast_confusion +from utils.ply import read_ply + +# Datasets +from datasets.ModelNet40 import ModelNet40Dataset +from datasets.S3DIS import S3DISDataset +from datasets.SemanticKitti import SemanticKittiDataset + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utility functions +# \***********************/ +# + + +def running_mean(signal, n, axis=0, stride=1): + signal = np.array(signal) + torch_conv = torch.nn.Conv1d(1, 1, kernel_size=2*n+1, stride=stride, bias=False) + 
torch_conv.weight.requires_grad_(False) + torch_conv.weight *= 0 + torch_conv.weight += 1 / (2*n+1) + if signal.ndim == 1: + torch_signal = torch.from_numpy(signal.reshape([1, 1, -1]).astype(np.float32)) + return torch_conv(torch_signal).squeeze().numpy() + + elif signal.ndim == 2: + print('TODO implement with torch and stride here') + smoothed = np.empty(signal.shape) + if axis == 0: + for i, sig in enumerate(signal): + sig_sum = np.convolve(sig, np.ones((2*n+1,)), mode='same') + sig_num = np.convolve(sig*0+1, np.ones((2*n+1,)), mode='same') + smoothed[i, :] = sig_sum / sig_num + elif axis == 1: + for i, sig in enumerate(signal.T): + sig_sum = np.convolve(sig, np.ones((2*n+1,)), mode='same') + sig_num = np.convolve(sig*0+1, np.ones((2*n+1,)), mode='same') + smoothed[:, i] = sig_sum / sig_num + else: + print('wrong axis') + return smoothed + + else: + print('wrong dimensions') + return None + + +def IoU_class_metrics(all_IoUs, smooth_n): + + # Get mean IoU per class for consecutive epochs to directly get a mean without further smoothing + smoothed_IoUs = [] + for epoch in range(len(all_IoUs)): + i0 = max(epoch - smooth_n, 0) + i1 = min(epoch + smooth_n + 1, len(all_IoUs)) + smoothed_IoUs += [np.mean(np.vstack(all_IoUs[i0:i1]), axis=0)] + smoothed_IoUs = np.vstack(smoothed_IoUs) + smoothed_mIoUs = np.mean(smoothed_IoUs, axis=1) + + return smoothed_IoUs, smoothed_mIoUs + + +def load_confusions(filename, n_class): + + with open(filename, 'r') as f: + lines = f.readlines() + + confs = np.zeros((len(lines), n_class, n_class)) + for i, line in enumerate(lines): + C = np.array([int(value) for value in line.split()]) + confs[i, :, :] = C.reshape((n_class, n_class)) + + return confs + + +def load_training_results(path): + + filename = join(path, 'training.txt') + with open(filename, 'r') as f: + lines = f.readlines() + + epochs = [] + steps = [] + L_out = [] + L_p = [] + acc = [] + t = [] + for line in lines[1:]: + line_info = line.split() + if (len(line) > 0): + epochs += 
[int(line_info[0])] + steps += [int(line_info[1])] + L_out += [float(line_info[2])] + L_p += [float(line_info[3])] + acc += [float(line_info[4])] + t += [float(line_info[5])] + else: + break + + return epochs, steps, L_out, L_p, acc, t + + +def load_single_IoU(filename, n_parts): + + with open(filename, 'r') as f: + lines = f.readlines() + + # Load all IoUs + all_IoUs = [] + for i, line in enumerate(lines): + all_IoUs += [np.reshape([float(IoU) for IoU in line.split()], [-1, n_parts])] + return all_IoUs + + +def load_snap_clouds(path, dataset, only_last=False): + + cloud_folders = np.array([join(path, f) for f in listdir(path) if f.startswith('val_preds')]) + cloud_epochs = np.array([int(f.split('_')[-1]) for f in cloud_folders]) + epoch_order = np.argsort(cloud_epochs) + cloud_epochs = cloud_epochs[epoch_order] + cloud_folders = cloud_folders[epoch_order] + + Confs = np.zeros((len(cloud_epochs), dataset.num_classes, dataset.num_classes), dtype=np.int32) + for c_i, cloud_folder in enumerate(cloud_folders): + if only_last and c_i < len(cloud_epochs) - 1: + continue + + # Load confusion if previously saved + conf_file = join(cloud_folder, 'conf.txt') + if isfile(conf_file): + Confs[c_i] += np.loadtxt(conf_file, dtype=np.int32) + + else: + for f in listdir(cloud_folder): + if f.endswith('.ply') and not f.endswith('sub.ply'): + data = read_ply(join(cloud_folder, f)) + labels = data['class'] + preds = data['preds'] + Confs[c_i] += fast_confusion(labels, preds, dataset.label_values).astype(np.int32) + + np.savetxt(conf_file, Confs[c_i], '%12d') + + # Erase ply to save disk memory + if c_i < len(cloud_folders) - 1: + for f in listdir(cloud_folder): + if f.endswith('.ply'): + remove(join(cloud_folder, f)) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(dataset.label_values))): + if label_value in dataset.ignored_labels: + Confs = np.delete(Confs, l_ind, axis=1) + Confs = np.delete(Confs, l_ind, axis=2) + + return 
cloud_epochs, IoU_from_confusions(Confs) + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Plot functions +# \********************/ +# + + +def compare_trainings(list_of_paths, list_of_labels=None): + + # Parameters + # ********** + + plot_lr = False + smooth_epochs = 0.5 + stride = 2 + + if list_of_labels is None: + list_of_labels = [str(i) for i in range(len(list_of_paths))] + + # Read Training Logs + # ****************** + + all_epochs = [] + all_loss = [] + all_lr = [] + all_times = [] + all_RAMs = [] + + for path in list_of_paths: + + print(path) + + if ('val_IoUs.txt' in [f for f in listdir(path)]) or ('val_confs.txt' in [f for f in listdir(path)]): + config = Config() + config.load(path) + else: + continue + + # Load results + epochs, steps, L_out, L_p, acc, t = load_training_results(path) + epochs = np.array(epochs, dtype=np.int32) + epochs_d = np.array(epochs, dtype=np.float32) + steps = np.array(steps, dtype=np.float32) + + # Compute number of steps per epoch + max_e = np.max(epochs) + first_e = np.min(epochs) + epoch_n = [] + for i in range(first_e, max_e): + bool0 = epochs == i + e_n = np.sum(bool0) + epoch_n.append(e_n) + epochs_d[bool0] += steps[bool0] / e_n + smooth_n = int(np.mean(epoch_n) * smooth_epochs) + smooth_loss = running_mean(L_out, smooth_n, stride=stride) + all_loss += [smooth_loss] + all_epochs += [epochs_d[smooth_n:-smooth_n:stride]] + all_times += [t[smooth_n:-smooth_n:stride]] + + # Learning rate + if plot_lr: + lr_decay_v = np.array([lr_d for ep, lr_d in config.lr_decays.items()]) + lr_decay_e = np.array([ep for ep, lr_d in config.lr_decays.items()]) + max_e = max(np.max(all_epochs[-1]) + 1, np.max(lr_decay_e) + 1) + lr_decays = np.ones(int(np.ceil(max_e)), dtype=np.float32) + lr_decays[0] = float(config.learning_rate) + lr_decays[lr_decay_e] = lr_decay_v + lr = np.cumprod(lr_decays) + all_lr += [lr[np.floor(all_epochs[-1]).astype(np.int32)]] + + # 
Plots learning rate + # ******************* + + + if plot_lr: + # Figure + fig = plt.figure('lr') + for i, label in enumerate(list_of_labels): + plt.plot(all_epochs[i], all_lr[i], linewidth=1, label=label) + + # Set names for axes + plt.xlabel('epochs') + plt.ylabel('lr') + plt.yscale('log') + + # Display legends and title + plt.legend(loc=1) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + # ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + # Plots loss + # ********** + + # Figure + fig = plt.figure('loss') + for i, label in enumerate(list_of_labels): + plt.plot(all_epochs[i], all_loss[i], linewidth=1, label=label) + + # Set names for axes + plt.xlabel('epochs') + plt.ylabel('loss') + plt.yscale('log') + + # Display legends and title + plt.legend(loc=1) + plt.title('Losses compare') + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + # ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + # Plot Times + # ********** + + # Figure + fig = plt.figure('time') + for i, label in enumerate(list_of_labels): + plt.plot(all_epochs[i], np.array(all_times[i]) / 3600, linewidth=1, label=label) + + # Set names for axes + plt.xlabel('epochs') + plt.ylabel('time') + # plt.yscale('log') + + # Display legends and title + plt.legend(loc=0) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + # ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + # Show all + plt.show() + + +def compare_convergences_segment(dataset, list_of_paths, list_of_names=None): + + # Parameters + # ********** + + smooth_n = 10 + + if list_of_names is None: + list_of_names = [str(i) for i in range(len(list_of_paths))] + + # Read Logs + # ********* + + all_pred_epochs = [] + all_mIoUs = [] + all_class_IoUs = [] + all_snap_epochs = [] + all_snap_IoUs = [] + + # Load parameters + config = Config() + config.load(list_of_paths[0]) + + class_list = [dataset.label_to_names[label] for label in dataset.label_values + if label not in 
dataset.ignored_labels] + + s = '{:^10}|'.format('mean') + for c in class_list: + s += '{:^10}'.format(c) + print(s) + print(10*'-' + '|' + 10*config.num_classes*'-') + for path in list_of_paths: + + # Get validation IoUs + file = join(path, 'val_IoUs.txt') + val_IoUs = load_single_IoU(file, config.num_classes) + + # Get mean IoU + class_IoUs, mIoUs = IoU_class_metrics(val_IoUs, smooth_n) + + # Aggregate results + all_pred_epochs += [np.array([i for i in range(len(val_IoUs))])] + all_mIoUs += [mIoUs] + all_class_IoUs += [class_IoUs] + + s = '{:^10.1f}|'.format(100*mIoUs[-1]) + for IoU in class_IoUs[-1]: + s += '{:^10.1f}'.format(100*IoU) + print(s) + + # Get optional full validation on clouds + snap_epochs, snap_IoUs = load_snap_clouds(path, dataset) + all_snap_epochs += [snap_epochs] + all_snap_IoUs += [snap_IoUs] + + print(10*'-' + '|' + 10*config.num_classes*'-') + for snap_IoUs in all_snap_IoUs: + if len(snap_IoUs) > 0: + s = '{:^10.1f}|'.format(100*np.mean(snap_IoUs[-1])) + for IoU in snap_IoUs[-1]: + s += '{:^10.1f}'.format(100*IoU) + else: + s = '{:^10s}'.format('-') + for _ in range(config.num_classes): + s += '{:^10s}'.format('-') + print(s) + + # Plots + # ***** + + # Figure + fig = plt.figure('mIoUs') + for i, name in enumerate(list_of_names): + p = plt.plot(all_pred_epochs[i], all_mIoUs[i], '--', linewidth=1, label=name) + plt.plot(all_snap_epochs[i], np.mean(all_snap_IoUs[i], axis=1), linewidth=1, color=p[-1].get_color()) + plt.xlabel('epochs') + plt.ylabel('IoU') + + # Set limits for y axis + #plt.ylim(0.55, 0.95) + + # Display legends and title + plt.legend(loc=4) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + #ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + displayed_classes = [0, 1, 2, 3, 4, 5, 6, 7] + displayed_classes = [] + for c_i, c_name in enumerate(class_list): + if c_i in displayed_classes: + + # Figure + fig = plt.figure(c_name + ' IoU') + for i, name in enumerate(list_of_names): + 
plt.plot(all_pred_epochs[i], all_class_IoUs[i][:, c_i], linewidth=1, label=name) + plt.xlabel('epochs') + plt.ylabel('IoU') + + # Set limits for y axis + #plt.ylim(0.8, 1) + + # Display legends and title + plt.legend(loc=4) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + #ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + # Show all + plt.show() + + +def compare_convergences_classif(list_of_paths, list_of_labels=None): + + # Parameters + # ********** + + steps_per_epoch = 0 + smooth_n = 12 + + if list_of_labels is None: + list_of_labels = [str(i) for i in range(len(list_of_paths))] + + # Read Logs + # ********* + + all_pred_epochs = [] + all_val_OA = [] + all_train_OA = [] + all_vote_OA = [] + all_vote_confs = [] + + + for path in list_of_paths: + + # Load parameters + config = Config() + config.load(list_of_paths[0]) + + # Get the number of classes + n_class = config.num_classes + + # Load epochs + epochs, _, _, _, _, _ = load_training_results(path) + first_e = np.min(epochs) + + # Get validation confusions + file = join(path, 'val_confs.txt') + val_C1 = load_confusions(file, n_class) + val_PRE, val_REC, val_F1, val_IoU, val_ACC = smooth_metrics(val_C1, smooth_n=smooth_n) + + # Get vote confusions + file = join(path, 'vote_confs.txt') + if exists(file): + vote_C2 = load_confusions(file, n_class) + vote_PRE, vote_REC, vote_F1, vote_IoU, vote_ACC = smooth_metrics(vote_C2, smooth_n=2) + else: + vote_C2 = val_C1 + vote_PRE, vote_REC, vote_F1, vote_IoU, vote_ACC = (val_PRE, val_REC, val_F1, val_IoU, val_ACC) + + # Aggregate results + all_pred_epochs += [np.array([i+first_e for i in range(len(val_ACC))])] + all_val_OA += [val_ACC] + all_vote_OA += [vote_ACC] + all_vote_confs += [vote_C2] + + print() + + # Best scores + # *********** + + for i, label in enumerate(list_of_labels): + + print('\n' + label + '\n' + '*' * len(label) + '\n') + print(list_of_paths[i]) + + best_epoch = np.argmax(all_vote_OA[i]) + print('Best Accuracy : {:.1f} % 
(epoch {:d})'.format(100 * all_vote_OA[i][best_epoch], best_epoch)) + + confs = all_vote_confs[i] + + """ + s = '' + for cc in confs[best_epoch]: + for c in cc: + s += '{:.0f} '.format(c) + s += '\n' + print(s) + """ + + TP_plus_FN = np.sum(confs, axis=-1, keepdims=True) + class_avg_confs = confs.astype(np.float32) / TP_plus_FN.astype(np.float32) + diags = np.diagonal(class_avg_confs, axis1=-2, axis2=-1) + class_avg_ACC = np.sum(diags, axis=-1) / np.sum(class_avg_confs, axis=(-1, -2)) + + print('Corresponding mAcc : {:.1f} %'.format(100 * class_avg_ACC[best_epoch])) + + # Plots + # ***** + + for fig_name, OA in zip(['Validation', 'Vote'], [all_val_OA, all_vote_OA]): + + # Figure + fig = plt.figure(fig_name) + for i, label in enumerate(list_of_labels): + plt.plot(all_pred_epochs[i], OA[i], linewidth=1, label=label) + plt.xlabel('epochs') + plt.ylabel(fig_name + ' Accuracy') + + # Set limits for y axis + #plt.ylim(0.55, 0.95) + + # Display legends and title + plt.legend(loc=4) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + #ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + #for i, label in enumerate(list_of_labels): + # print(label, np.max(all_train_OA[i]), np.max(all_val_OA[i])) + + # Show all + plt.show() + + +def compare_convergences_SLAM(dataset, list_of_paths, list_of_names=None): + + # Parameters + # ********** + + smooth_n = 10 + + if list_of_names is None: + list_of_names = [str(i) for i in range(len(list_of_paths))] + + # Read Logs + # ********* + + all_pred_epochs = [] + all_val_mIoUs = [] + all_val_class_IoUs = [] + all_subpart_mIoUs = [] + all_subpart_class_IoUs = [] + + # Load parameters + config = Config() + config.load(list_of_paths[0]) + + class_list = [dataset.label_to_names[label] for label in dataset.label_values + if label not in dataset.ignored_labels] + + s = '{:^6}|'.format('mean') + for c in class_list: + s += '{:^6}'.format(c[:4]) + print(s) + print(6*'-' + '|' + 6*config.num_classes*'-') + for path in 
list_of_paths: + + # Get validation IoUs + nc_model = dataset.num_classes - len(dataset.ignored_labels) + file = join(path, 'val_IoUs.txt') + val_IoUs = load_single_IoU(file, nc_model) + + # Get Subpart IoUs + file = join(path, 'subpart_IoUs.txt') + subpart_IoUs = load_single_IoU(file, nc_model) + + # Get mean IoU + val_class_IoUs, val_mIoUs = IoU_class_metrics(val_IoUs, smooth_n) + subpart_class_IoUs, subpart_mIoUs = IoU_class_metrics(subpart_IoUs, smooth_n) + + # Aggregate results + all_pred_epochs += [np.array([i for i in range(len(val_IoUs))])] + all_val_mIoUs += [val_mIoUs] + all_val_class_IoUs += [val_class_IoUs] + all_subpart_mIoUs += [subpart_mIoUs] + all_subpart_class_IoUs += [subpart_class_IoUs] + + s = '{:^6.1f}|'.format(100*subpart_mIoUs[-1]) + for IoU in subpart_class_IoUs[-1]: + s += '{:^6.1f}'.format(100*IoU) + print(s) + + print(6*'-' + '|' + 6*config.num_classes*'-') + for snap_IoUs in all_val_class_IoUs: + if len(snap_IoUs) > 0: + s = '{:^6.1f}|'.format(100*np.mean(snap_IoUs[-1])) + for IoU in snap_IoUs[-1]: + s += '{:^6.1f}'.format(100*IoU) + else: + s = '{:^6s}'.format('-') + for _ in range(config.num_classes): + s += '{:^6s}'.format('-') + print(s) + + # Plots + # ***** + + # Figure + fig = plt.figure('mIoUs') + for i, name in enumerate(list_of_names): + p = plt.plot(all_pred_epochs[i], all_subpart_mIoUs[i], '--', linewidth=1, label=name) + plt.plot(all_pred_epochs[i], all_val_mIoUs[i], linewidth=1, color=p[-1].get_color()) + plt.xlabel('epochs') + plt.ylabel('IoU') + + # Set limits for y axis + #plt.ylim(0.55, 0.95) + + # Display legends and title + plt.legend(loc=4) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + #ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + displayed_classes = [0, 1, 2, 3, 4, 5, 6, 7] + #displayed_classes = [] + for c_i, c_name in enumerate(class_list): + if c_i in displayed_classes: + + # Figure + fig = plt.figure(c_name + ' IoU') + for i, name in enumerate(list_of_names): + 
plt.plot(all_pred_epochs[i], all_val_class_IoUs[i][:, c_i], linewidth=1, label=name) + plt.xlabel('epochs') + plt.ylabel('IoU') + + # Set limits for y axis + #plt.ylim(0.8, 1) + + # Display legends and title + plt.legend(loc=4) + + # Customize the graph + ax = fig.gca() + ax.grid(linestyle='-.', which='both') + #ax.set_yticks(np.arange(0.8, 1.02, 0.02)) + + + + # Show all + plt.show() + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Experiments +# \*****************/ +# + + +def experiment_name_1(): + """ + In this function you choose the results you want to plot together, to compare them as an experiment. + Just return the list of log paths (like 'results/Log_2020-04-04_10-04-42' for example), and the associated names + of these logs. + Below an example of how to automatically gather all logs between two dates, and name them. + """ + + # Using the dates of the logs, you can easily gather consecutive ones. All logs should be of the same dataset. + start = 'Log_2020-04-22_11-52-58' + end = 'Log_2020-05-22_11-52-58' + + # Name of the result path + res_path = 'results' + + # Gather logs and sort by date + logs = np.sort([join(res_path, l) for l in listdir(res_path) if start <= l <= end]) + + # Give names to the logs (for plot legends) + logs_names = ['name_log_1', + 'name_log_2', + 'name_log_3'] + + # safe check log names + logs_names = np.array(logs_names[:len(logs)]) + + return logs, logs_names + + +def experiment_name_2(): + """ + In this function you choose the results you want to plot together, to compare them as an experiment. + Just return the list of log paths (like 'results/Log_2020-04-04_10-04-42' for example), and the associated names + of these logs. + Below an example of how to automatically gather all logs between two dates, and name them. + """ + + # Using the dates of the logs, you can easily gather consecutive ones. All logs should be of the same dataset. 
+ start = 'Log_2020-04-22_11-52-58' + end = 'Log_2020-05-22_11-52-58' + + # Name of the result path + res_path = 'results' + + # Gather logs and sort by date + logs = np.sort([join(res_path, l) for l in listdir(res_path) if start <= l <= end]) + + # Optionally add a specific log at a specific place in the log list + logs = logs.astype(' 'last_XXX': Automatically retrieve the last trained model on dataset XXX + # > '(old_)results/Log_YYYY-MM-DD_HH-MM-SS': Directly provide the path of a trained model + + chosen_log = 'results/Log_2021-05-12_09-38-12' # => ModelNet40 + + # Choose the index of the checkpoint to load OR None if you want to load the current checkpoint + chkp_idx = None + + # Choose to test on validation or test split + on_val = False #True + + # Deal with 'last_XXXXXX' choices + chosen_log = model_choice(chosen_log) + + ############################ + # Initialize the environment + ############################ + + # Set which gpu is going to be used + GPU_ID = '0' + + # Set GPU visible device + os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID + + ############### + # Previous chkp + ############### + + # Find all checkpoints in the chosen training folder + chkp_path = os.path.join(chosen_log, 'checkpoints') + chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp'] + + # Find which snapshot to restore + if chkp_idx is None: + chosen_chkp = 'current_chkp.tar' + else: + chosen_chkp = np.sort(chkps)[chkp_idx] + chosen_chkp = os.path.join(chosen_log, 'checkpoints', chosen_chkp) + + # Initialize configuration class + config = Config() + config.load(chosen_log) + + ################################## + # Change model parameters for test + ################################## + + # Change parameters for the test here. For example, you can stop augmenting the input data. 
+ + #config.augment_noise = 0.0001 + #config.augment_symmetries = False + config.batch_num = 6 + #config.in_radius = 4 + config.validation_size = 200 + config.input_threads = 10 + ############## + # Prepare Data + ############## + + print() + print('Data Preparation') + print('****************') + + if on_val: + set = 'validation' + else: + set = 'test' + + # Initiate dataset + if config.dataset == 'ModelNet40': + test_dataset = ModelNet40Dataset(config, train=False) + test_sampler = ModelNet40Sampler(test_dataset) + collate_fn = ModelNet40Collate + elif config.dataset == 'S3DIS': + test_dataset = S3DISDataset(config, set='validation', use_potentials=True) + test_sampler = S3DISSampler(test_dataset) + collate_fn = S3DISCollate + elif config.dataset == 'SemanticKitti': + test_dataset = SemanticKittiDataset(config, set=set, balance_classes=False) + test_sampler = SemanticKittiSampler(test_dataset) + collate_fn = SemanticKittiCollate + elif config.dataset == 'UrbanMesh': + test_dataset = UrbanMeshDataset(config, set=set, use_potentials=True) + test_sampler = UrbanMeshSampler(test_dataset) + collate_fn = UrbanMeshCollate + elif config.dataset == 'Hessigsim3D': + test_dataset = Hessigsim3DDataset(config, set=set, use_potentials=True) + test_sampler = Hessigsim3DSampler(test_dataset) + collate_fn = Hessigsim3DCollate + else: + raise ValueError('Unsupported dataset : ' + config.dataset) + + # Data loader + test_loader = DataLoader(test_dataset, + batch_size=6, + sampler=test_sampler, + collate_fn=collate_fn, + num_workers=config.input_threads, + pin_memory=True, + drop_last=True) + + # Calibrate samplers + test_sampler.calibration(test_loader, verbose=True) + + print('\nModel Preparation') + print('*****************') + + # Define network model + t1 = time.time() + if config.dataset_task == 'classification': + net = KPCNN(config) + elif config.dataset_task in ['cloud_segmentation', 'slam_segmentation']: + net = KPFCNN(config, test_dataset.label_values, 
test_dataset.ignored_labels) + else: + raise ValueError('Unsupported dataset_task for testing: ' + config.dataset_task) + + # Define a visualizer class + tester = ModelTester(net, chkp_path=chosen_chkp) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart test') + print('**********\n') + + # Training + if config.dataset_task == 'classification': + tester.classification_test(net, test_loader, config) + elif config.dataset_task == 'cloud_segmentation': + tester.cloud_segmentation_test(net, test_loader, config) + elif config.dataset_task == 'slam_segmentation': + tester.slam_segmentation_test(net, test_loader, config) + else: + raise ValueError('Unsupported dataset_task for testing: ' + config.dataset_task) \ No newline at end of file diff --git a/competing_methods/my_KPConv/train_Hessigsim3D.py b/competing_methods/my_KPConv/train_Hessigsim3D.py new file mode 100644 index 00000000..c75ed391 --- /dev/null +++ b/competing_methods/my_KPConv/train_Hessigsim3D.py @@ -0,0 +1,306 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Callable script to start a training on Hessigsim3D dataset +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import signal +import os + +# Dataset +from datasets.Hessigsim3D import * +from torch.utils.data import DataLoader + +from utils.config import Config +from utils.trainer import ModelTrainer +from models.architectures import KPFCNN + + +# 
---------------------------------------------------------------------------------------------------------------------- +# +# Config Class +# \******************/ +# + +class Hessigsim3DConfig(Config): + """ + Override the parameters you want to modify for this dataset + """ + + #################### + # Dataset parameters + #################### + + # Dataset name + dataset = 'Hessigsim3D' + + # Number of classes in the dataset (This value is overwritten by dataset class when initializing the dataset). + num_classes = None + + # Type of task performed on this dataset (also overwritten) + dataset_task = '' + + # Number of CPU threads for the input pipeline + input_threads = 8 + + ######################### + # Architecture definition + ######################### + + # Define layers + architecture = ['simple', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary'] + + ################### + # KPConv parameters + ################### + + # Radius of the input sphere + in_radius = 5.0 # in_radius = 50 * first_subsampling_dl + + # Number of kernel points + num_kernel_points = 15 + + # Size of the first subsampling grid in meters + first_subsampling_dl = 0.1 #SUM:1.0, H3D:0.04 + + # Radius of convolution in "number grid cell". (2.5 is the standard value) + conv_radius = 2.5 + + # Radius of deformable convolution in "number grid cell". Larger so that deformed kernel can spread out + deform_radius = 6.0 + + # Radius of the area of influence of each kernel point in "number grid cell".
(1.0 is the standard value) + KP_extent = 1.2 + + # Behavior of convolutions in ('constant', 'linear', 'gaussian') + KP_influence = 'linear' + + # Aggregation function of KPConv in ('closest', 'sum') + aggregation_mode = 'sum' + + # Choice of input features + first_features_dim = 128 + in_features_dim = 8 + + # Can the network learn modulations + modulated = False + + # Batch normalization parameters + use_batch_norm = True + batch_norm_momentum = 0.02 + + # Deformable offset loss + # 'point2point' fitting geometry by penalizing distance from deform point to input points + # 'point2plane' fitting geometry by penalizing distance from deform point to input point triplet (not implemented) + deform_fitting_mode = 'point2point' + deform_fitting_power = 1.0 # Multiplier for the fitting/repulsive loss + deform_lr_factor = 0.1 # Multiplier for learning rate applied to the deformations + repulse_extent = 1.2 # Distance of repulsion for deformed kernel points + + ##################### + # Training parameters + ##################### + + # Maximal number of epochs + max_epoch = 500 # 500 + max_test_epoch = 100 # 100 + + # Learning rate management + learning_rate = 1e-2 + momentum = 0.98 + lr_decays = {i: 0.1 ** (1 / 150) for i in range(1, max_epoch)} + grad_clip_norm = 100.0 + + # Number of batch + batch_num = 2 # 6 + + # Number of steps per epochs + epoch_steps = 500 # 500 + + # Number of validation examples per epoch + validation_size = 50 # 50 + + # Number of epoch between each checkpoint + checkpoint_gap = 50 # 50 + + # Augmentations + augment_scale_anisotropic = True + augment_symmetries = [True, False, False] + augment_rotation = 'vertical' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_color = 0.8 + + # The way we balance segmentation loss + # > 'none': Each point in the whole batch has the same contribution. 
+ # > 'class': Each class has the same contribution (points are weighted according to class balance) + # > 'batch': Each cloud in the batch has the same contribution (points are weighted according to cloud sizes) + segloss_balance = 'class' #'none' + + # Do we need to save convergence + saving = True + saving_path = None + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Main Call +# \***************/ +# + +if __name__ == '__main__': + + ############################ + # Initialize the environment + ############################ + + # Set which gpu is going to be used + GPU_ID = '0' + + # Set GPU visible device + os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID + + ############### + # Previous chkp + ############### + + # Choose here if you want to start training from a previous snapshot (None for new training) + # previous_training_path = 'Log_2020-03-19_19-53-27' + previous_training_path = '' + + # Choose index of checkpoint to start from.
If None, uses the latest chkp + chkp_idx = None + if previous_training_path: + + # Find all snapshot in the chosen training folder + chkp_path = os.path.join('results', previous_training_path, 'checkpoints') + chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp'] + + # Find which snapshot to restore + if chkp_idx is None: + chosen_chkp = 'current_chkp.tar' + else: + chosen_chkp = np.sort(chkps)[chkp_idx] + chosen_chkp = os.path.join('results', previous_training_path, 'checkpoints', chosen_chkp) + + else: + chosen_chkp = None + + ############## + # Prepare Data + ############## + + print() + print('Data Preparation') + print('****************') + + # Initialize configuration class + config = Hessigsim3DConfig() + if previous_training_path: + config.load(os.path.join('results', previous_training_path)) + config.saving_path = None + + # Get path from argument if given + if len(sys.argv) > 1: + config.saving_path = sys.argv[1] + + # Initialize datasets + training_dataset = Hessigsim3DDataset(config, set='training', use_potentials=True) + test_dataset = Hessigsim3DDataset(config, set='validation', use_potentials=True) + + # Initialize samplers + training_sampler = Hessigsim3DSampler(training_dataset) + test_sampler = Hessigsim3DSampler(test_dataset) + + # Initialize the dataloader + training_loader = DataLoader(training_dataset, + batch_size=1, + sampler=training_sampler, + collate_fn=Hessigsim3DCollate, + num_workers=config.input_threads, + pin_memory=True, + drop_last=True) + test_loader = DataLoader(test_dataset, + batch_size=1, + sampler=test_sampler, + collate_fn=Hessigsim3DCollate, + num_workers=config.input_threads, + pin_memory=True, + drop_last=True) + + # Calibrate samplers + training_sampler.calibration(training_loader, verbose=True) + test_sampler.calibration(test_loader, verbose=True) + + # Optional debug functions + # debug_timing(training_dataset, training_loader) + # debug_timing(test_dataset, test_loader) + # debug_upsampling(training_dataset, 
training_loader) + + print('\nModel Preparation') + print('*****************') + + # Define network model + t1 = time.time() + net = KPFCNN(config, training_dataset.label_values, training_dataset.ignored_labels) + + debug = False + if debug: + print('\n*************************************\n') + print(net) + print('\n*************************************\n') + for param in net.parameters(): + if param.requires_grad: + print(param.shape) + print('\n*************************************\n') + print("Model size %i" % sum(param.numel() for param in net.parameters() if param.requires_grad)) + print('\n*************************************\n') + + # Define a trainer class + trainer = ModelTrainer(net, config, chkp_path=chosen_chkp) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart training') + print('**************') + + # Training + trainer.train(net, training_loader, test_loader, config) + + print('Forcing exit now') + os.kill(os.getpid(), signal.SIGINT) diff --git a/competing_methods/my_KPConv/train_ModelNet40.py b/competing_methods/my_KPConv/train_ModelNet40.py new file mode 100644 index 00000000..feaa0488 --- /dev/null +++ b/competing_methods/my_KPConv/train_ModelNet40.py @@ -0,0 +1,292 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Callable script to start a training on ModelNet40 dataset +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import signal +import os +import numpy as np +import sys +import torch + +# 
Dataset +from datasets.ModelNet40 import * +from torch.utils.data import DataLoader + +from utils.config import Config +from utils.trainer import ModelTrainer +from models.architectures import KPCNN + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Config Class +# \******************/ +# + +class Modelnet40Config(Config): + """ + Override the parameters you want to modify for this dataset + """ + + #################### + # Dataset parameters + #################### + + # Dataset name + dataset = 'ModelNet40' + + # Number of classes in the dataset (This value is overwritten by dataset class when initializing the dataset). + num_classes = None + + # Type of task performed on this dataset (also overwritten) + dataset_task = '' + + # Number of CPU threads for the input pipeline + input_threads = 10 + + ######################### + # Architecture definition + ######################### + + # Define layers + architecture = ['simple', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'global_average'] + + ################### + # KPConv parameters + ################### + + # Number of kernel points + num_kernel_points = 15 + + # Size of the first subsampling grid in meters + first_subsampling_dl = 0.02 + + # Radius of convolution in "number grid cell". (2.5 is the standard value) + conv_radius = 2.5 + + # Radius of deformable convolution in "number grid cell". Larger so that deformed kernel can spread out + deform_radius = 6.0 + + # Radius of the area of influence of each kernel point in "number grid cell".
(1.0 is the standard value) + KP_extent = 1.2 + + # Behavior of convolutions in ('constant', 'linear', 'gaussian') + KP_influence = 'linear' + + # Aggregation function of KPConv in ('closest', 'sum') + aggregation_mode = 'sum' + + # Choice of input features + in_features_dim = 1 + + # Can the network learn modulations + modulated = True + + # Batch normalization parameters + use_batch_norm = True + batch_norm_momentum = 0.05 + + # Deformable offset loss + # 'point2point' fitting geometry by penalizing distance from deform point to input points + # 'point2plane' fitting geometry by penalizing distance from deform point to input point triplet (not implemented) + deform_fitting_mode = 'point2point' + deform_fitting_power = 1.0 # Multiplier for the fitting/repulsive loss + deform_lr_factor = 0.1 # Multiplier for learning rate applied to the deformations + repulse_extent = 1.2 # Distance of repulsion for deformed kernel points + + ##################### + # Training parameters + ##################### + + # Maximal number of epochs + max_epoch = 500 + + # Learning rate management + learning_rate = 1e-2 + momentum = 0.98 + lr_decays = {i: 0.1**(1/100) for i in range(1, max_epoch)} + grad_clip_norm = 100.0 + + # Number of batch + batch_num = 10 + + # Number of steps per epochs + epoch_steps = 300 + + # Number of validation examples per epoch + validation_size = 30 + + # Number of epoch between each checkpoint + checkpoint_gap = 50 + + # Augmentations + augment_scale_anisotropic = True + augment_symmetries = [True, True, True] + augment_rotation = 'none' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_color = 1.0 + + # The way we balance segmentation loss + # > 'none': Each point in the whole batch has the same contribution. 
+ # > 'class': Each class has the same contribution (points are weighted according to class balance) + # > 'batch': Each cloud in the batch has the same contribution (points are weighted according to cloud sizes) + segloss_balance = 'none' + + # Do we need to save convergence + saving = True + saving_path = None + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Main Call +# \***************/ +# + +if __name__ == '__main__': + + ############################ + # Initialize the environment + ############################ + + # Set which gpu is going to be used + GPU_ID = '0' + + # Set GPU visible device + os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID + + ############### + # Previous chkp + ############### + + # Choose here if you want to start training from a previous snapshot (None for new training) + #previous_training_path = 'Log_2020-03-19_19-53-27' + previous_training_path = '' + + # Choose index of checkpoint to start from.
If None, uses the latest chkp + chkp_idx = None + if previous_training_path: + + # Find all snapshot in the chosen training folder + chkp_path = os.path.join('results', previous_training_path, 'checkpoints') + chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp'] + + # Find which snapshot to restore + if chkp_idx is None: + chosen_chkp = 'current_chkp.tar' + else: + chosen_chkp = np.sort(chkps)[chkp_idx] + chosen_chkp = os.path.join('results', previous_training_path, 'checkpoints', chosen_chkp) + + else: + chosen_chkp = None + + ############## + # Prepare Data + ############## + + print() + print('Data Preparation') + print('****************') + + # Initialize configuration class + config = Modelnet40Config() + if previous_training_path: + config.load(os.path.join('results', previous_training_path)) + config.saving_path = None + + # Get path from argument if given + if len(sys.argv) > 1: + config.saving_path = sys.argv[1] + + # Initialize datasets + training_dataset = ModelNet40Dataset(config, train=True) + test_dataset = ModelNet40Dataset(config, train=False) + + # Initialize samplers + training_sampler = ModelNet40Sampler(training_dataset, balance_labels=True) + test_sampler = ModelNet40Sampler(test_dataset, balance_labels=True) + + # Initialize the dataloader + training_loader = DataLoader(training_dataset, + batch_size=1, + sampler=training_sampler, + collate_fn=ModelNet40Collate, + num_workers=config.input_threads, + pin_memory=True) + test_loader = DataLoader(test_dataset, + batch_size=1, + sampler=test_sampler, + collate_fn=ModelNet40Collate, + num_workers=config.input_threads, + pin_memory=True) + + # Calibrate samplers + training_sampler.calibration(training_loader) + test_sampler.calibration(test_loader) + + #debug_timing(test_dataset, test_sampler, test_loader) + #debug_show_clouds(training_dataset, training_sampler, training_loader) + + print('\nModel Preparation') + print('*****************') + + # Define network model + t1 = time.time() + net 
= KPCNN(config) + + # Define a trainer class + trainer = ModelTrainer(net, config, chkp_path=chosen_chkp) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart training') + print('**************') + + # Training + try: + trainer.train(net, training_loader, test_loader, config) + except: + print('Caught an error') + os.kill(os.getpid(), signal.SIGINT) + + print('Forcing exit now') + os.kill(os.getpid(), signal.SIGINT) + + + diff --git a/competing_methods/my_KPConv/train_S3DIS.py b/competing_methods/my_KPConv/train_S3DIS.py new file mode 100644 index 00000000..674a92ac --- /dev/null +++ b/competing_methods/my_KPConv/train_S3DIS.py @@ -0,0 +1,307 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Callable script to start a training on S3DIS dataset +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import signal +import os + +# Dataset +from datasets.S3DIS import * +from torch.utils.data import DataLoader + +from utils.config import Config +from utils.trainer import ModelTrainer +from models.architectures import KPFCNN + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Config Class +# \******************/ +# + +class S3DISConfig(Config): + """ + Override the parameters you want to modify for this dataset + """ + + #################### + # Dataset parameters + #################### + + # Dataset name + dataset = 'S3DIS' + + # Number of classes in 
the dataset (This value is overwritten by dataset class when initializing the dataset). + num_classes = None + + # Type of task performed on this dataset (also overwritten) + dataset_task = '' + + # Number of CPU threads for the input pipeline + input_threads = 10 + + ######################### + # Architecture definition + ######################### + + # Define layers + architecture = ['simple', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb_deformable', + 'resnetb_deformable', + 'resnetb_deformable_strided', + 'resnetb_deformable', + 'resnetb_deformable', + 'resnetb_deformable_strided', + 'resnetb_deformable', + 'resnetb_deformable', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary'] + + ################### + # KPConv parameters + ################### + + # Radius of the input sphere + in_radius = 1.5 + + # Number of kernel points + num_kernel_points = 15 + + # Size of the first subsampling grid in meters + first_subsampling_dl = 0.03 + + # Radius of convolution in "number grid cell". (2.5 is the standard value) + conv_radius = 2.5 + + # Radius of deformable convolution in "number grid cell". Larger so that deformed kernel can spread out + deform_radius = 6.0 + + # Radius of the area of influence of each kernel point in "number grid cell".
(1.0 is the standard value) + KP_extent = 1.2 + + # Behavior of convolutions in ('constant', 'linear', 'gaussian') + KP_influence = 'linear' + + # Aggregation function of KPConv in ('closest', 'sum') + aggregation_mode = 'sum' + + # Choice of input features + first_features_dim = 128 + in_features_dim = 5 + + # Can the network learn modulations + modulated = False + + # Batch normalization parameters + use_batch_norm = True + batch_norm_momentum = 0.02 + + # Deformable offset loss + # 'point2point' fitting geometry by penalizing distance from deform point to input points + # 'point2plane' fitting geometry by penalizing distance from deform point to input point triplet (not implemented) + deform_fitting_mode = 'point2point' + deform_fitting_power = 1.0 # Multiplier for the fitting/repulsive loss + deform_lr_factor = 0.1 # Multiplier for learning rate applied to the deformations + repulse_extent = 1.2 # Distance of repulsion for deformed kernel points + + ##################### + # Training parameters + ##################### + + # Maximal number of epochs + max_epoch = 500 + + # Learning rate management + learning_rate = 1e-2 + momentum = 0.98 + lr_decays = {i: 0.1 ** (1 / 150) for i in range(1, max_epoch)} + grad_clip_norm = 100.0 + + # Number of batch + batch_num = 6 + + # Number of steps per epochs + epoch_steps = 500 + + # Number of validation examples per epoch + validation_size = 50 + + # Number of epoch between each checkpoint + checkpoint_gap = 50 + + # Augmentations + augment_scale_anisotropic = True + augment_symmetries = [True, False, False] + augment_rotation = 'vertical' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_color = 0.8 + + # The way we balance segmentation loss + # > 'none': Each point in the whole batch has the same contribution. 
+ # > 'class': Each class has the same contribution (points are weighted according to class balance) + # > 'batch': Each cloud in the batch has the same contribution (points are weighted according to cloud sizes) + segloss_balance = 'none' + + # Do we need to save convergence + saving = True + saving_path = None + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Main Call +# \***************/ +# + +if __name__ == '__main__': + + ############################ + # Initialize the environment + ############################ + + # Set which gpu is going to be used + GPU_ID = '0' + + # Set GPU visible device + os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID + + ############### + # Previous chkp + ############### + + # Choose here if you want to start training from a previous snapshot (None for new training) + # previous_training_path = 'Log_2020-03-19_19-53-27' + previous_training_path = '' + + # Choose index of checkpoint to start from.
If None, uses the latest chkp + chkp_idx = None + if previous_training_path: + + # Find all snapshot in the chosen training folder + chkp_path = os.path.join('results', previous_training_path, 'checkpoints') + chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp'] + + # Find which snapshot to restore + if chkp_idx is None: + chosen_chkp = 'current_chkp.tar' + else: + chosen_chkp = np.sort(chkps)[chkp_idx] + chosen_chkp = os.path.join('results', previous_training_path, 'checkpoints', chosen_chkp) + + else: + chosen_chkp = None + + ############## + # Prepare Data + ############## + + print() + print('Data Preparation') + print('****************') + + # Initialize configuration class + config = S3DISConfig() + if previous_training_path: + config.load(os.path.join('results', previous_training_path)) + config.saving_path = None + + # Get path from argument if given + if len(sys.argv) > 1: + config.saving_path = sys.argv[1] + + # Initialize datasets + training_dataset = S3DISDataset(config, set='training', use_potentials=True) + test_dataset = S3DISDataset(config, set='validation', use_potentials=True) + + # Initialize samplers + training_sampler = S3DISSampler(training_dataset) + test_sampler = S3DISSampler(test_dataset) + + # Initialize the dataloader + training_loader = DataLoader(training_dataset, + batch_size=1, + sampler=training_sampler, + collate_fn=S3DISCollate, + num_workers=config.input_threads, + pin_memory=True) + test_loader = DataLoader(test_dataset, + batch_size=1, + sampler=test_sampler, + collate_fn=S3DISCollate, + num_workers=config.input_threads, + pin_memory=True) + + # Calibrate samplers + training_sampler.calibration(training_loader, verbose=True) + test_sampler.calibration(test_loader, verbose=True) + + # Optional debug functions + # debug_timing(training_dataset, training_loader) + # debug_timing(test_dataset, test_loader) + # debug_upsampling(training_dataset, training_loader) + + print('\nModel Preparation') + print('*****************') 
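The loaders above are built with `batch_size=1` plus a custom sampler and collate function: each sampled "element" is already a whole batch of variable-size point clouds, and the collate flattens them into one stacked array together with per-cloud lengths. The real `S3DISCollate` also builds neighbor and pooling indices; the stacking idea alone can be sketched with plain lists (a toy illustration under that assumption, not the repo's code):

```python
# Toy collate (illustration only, not S3DISCollate): stack variable-size
# point clouds into one flat batch plus per-cloud lengths.
def toy_collate(batch):
    clouds = batch[0]                                  # batch_size=1 -> one list of clouds
    stacked = [p for cloud in clouds for p in cloud]   # flat list of all points
    lengths = [len(cloud) for cloud in clouds]         # needed to split it back
    return stacked, lengths

# Two "clouds" with different point counts, as produced by a radius-based sampler
clouds = [[(0.0, 0.0, 0.0)] * 100, [(1.0, 1.0, 1.0)] * 250]
points, lengths = toy_collate([clouds])
print(len(points), lengths)  # 350 [100, 250]
```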
+ + # Define network model + t1 = time.time() + net = KPFCNN(config, training_dataset.label_values, training_dataset.ignored_labels) + + debug = False + if debug: + print('\n*************************************\n') + print(net) + print('\n*************************************\n') + for param in net.parameters(): + if param.requires_grad: + print(param.shape) + print('\n*************************************\n') + print("Model size %i" % sum(param.numel() for param in net.parameters() if param.requires_grad)) + print('\n*************************************\n') + + # Define a trainer class + trainer = ModelTrainer(net, config, chkp_path=chosen_chkp) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart training') + print('**************') + + # Training + trainer.train(net, training_loader, test_loader, config) + + print('Forcing exit now') + os.kill(os.getpid(), signal.SIGINT) diff --git a/competing_methods/my_KPConv/train_SemanticKitti.py b/competing_methods/my_KPConv/train_SemanticKitti.py new file mode 100644 index 00000000..cff2f461 --- /dev/null +++ b/competing_methods/my_KPConv/train_SemanticKitti.py @@ -0,0 +1,327 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Callable script to start a training on SemanticKitti dataset +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + +# Common libs +import signal +import os +import numpy as np +import sys +import torch + +# Dataset +from datasets.SemanticKitti import * +from 
torch.utils.data import DataLoader + +from utils.config import Config +from utils.trainer import ModelTrainer +from models.architectures import KPFCNN + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Config Class +# \******************/ +# + +class SemanticKittiConfig(Config): + """ + Override the parameters you want to modify for this dataset + """ + + #################### + # Dataset parameters + #################### + + # Dataset name + dataset = 'SemanticKitti' + + # Number of classes in the dataset (This value is overwritten by dataset class when initializing the dataset). + num_classes = None + + # Type of task performed on this dataset (also overwritten) + dataset_task = '' + + # Number of CPU threads for the input pipeline + input_threads = 10 + + ######################### + # Architecture definition + ######################### + + # Define layers + architecture = ['simple', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'resnetb', + 'resnetb_strided', + 'resnetb', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary', + 'nearest_upsample', + 'unary'] + + ################### + # KPConv parameters + ################### + + # Radius of the input sphere + in_radius = 4.0 + val_radius = 4.0 + n_frames = 1 + max_in_points = 100000 + max_val_points = 100000 + + # Number of batch + batch_num = 8 + val_batch_num = 8 + + # Number of kernel points + num_kernel_points = 15 + + # Size of the first subsampling grid in meters + first_subsampling_dl = 0.06 + + # Radius of convolution in "number grid cell". (2.5 is the standard value) + conv_radius = 2.5 + + # Radius of deformable convolution in "number grid cell".
Larger so that deformed kernel can spread out + deform_radius = 6.0 + + # Radius of the area of influence of each kernel point in "number grid cell". (1.0 is the standard value) + KP_extent = 1.2 + + # Behavior of convolutions in ('constant', 'linear', 'gaussian') + KP_influence = 'linear' + + # Aggregation function of KPConv in ('closest', 'sum') + aggregation_mode = 'sum' + + # Choice of input features + first_features_dim = 128 + in_features_dim = 2 + + # Can the network learn modulations + modulated = False + + # Batch normalization parameters + use_batch_norm = True + batch_norm_momentum = 0.02 + + # Deformable offset loss + # 'point2point' fitting geometry by penalizing distance from deform point to input points + # 'point2plane' fitting geometry by penalizing distance from deform point to input point triplet (not implemented) + deform_fitting_mode = 'point2point' + deform_fitting_power = 1.0 # Multiplier for the fitting/repulsive loss + deform_lr_factor = 0.1 # Multiplier for learning rate applied to the deformations + repulse_extent = 1.2 # Distance of repulsion for deformed kernel points + + ##################### + # Training parameters + ##################### + + # Maximal number of epochs + max_epoch = 800 + + # Learning rate management + learning_rate = 1e-2 + momentum = 0.98 + lr_decays = {i: 0.1 ** (1 / 150) for i in range(1, max_epoch)} + grad_clip_norm = 100.0 + + # Number of steps per epochs + epoch_steps = 500 + + # Number of validation examples per epoch + validation_size = 200 + + # Number of epoch between each checkpoint + checkpoint_gap = 50 + + # Augmentations + augment_scale_anisotropic = True + augment_symmetries = [True, False, False] + augment_rotation = 'vertical' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_color = 0.8 + + # Choose weights for class (used in segmentation loss). 
Empty list for no weights
+    #       class proportion for R=10.0 and dl=0.08 (first is unlabeled)
+    # 19.1 48.9 0.5 1.1 5.6 3.6 0.7 0.6 0.9 193.2 17.7 127.4 6.7 132.3 68.4 283.8 7.0 78.5 3.3 0.8
+    #
+    #
+
+    # sqrt(Inverse of proportion * 100)
+    # class_w = [1.430, 14.142, 9.535, 4.226, 5.270, 11.952, 12.910, 10.541, 0.719,
+    #            2.377, 0.886, 3.863, 0.869, 1.209, 0.594, 3.780, 1.129, 5.505, 11.180]
+
+    # sqrt(Inverse of proportion * 100) capped (0.5 < X < 5)
+    # class_w = [1.430, 5.000, 5.000, 4.226, 5.000, 5.000, 5.000, 5.000, 0.719, 2.377,
+    #            0.886, 3.863, 0.869, 1.209, 0.594, 3.780, 1.129, 5.000, 5.000]
+
+    # Do we need to save convergence
+    saving = True
+    saving_path = None
+
+
+# ----------------------------------------------------------------------------------------------------------------------
+#
+#           Main Call
+#       \***************/
+#
+
+if __name__ == '__main__':
+
+    ############################
+    # Initialize the environment
+    ############################
+
+    # Set which gpu is going to be used
+    GPU_ID = '0'
+
+    # Set GPU visible device
+    os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID
+
+    ###############
+    # Previous chkp
+    ###############
+
+    # Choose here if you want to start training from a previous snapshot (None for new training)
+    # previous_training_path = 'Log_2020-03-19_19-53-27'
+    previous_training_path = ''
+
+    # Choose index of checkpoint to start from. If None, uses the latest chkp
+    chkp_idx = None
+    if previous_training_path:
+
+        # Find all snapshots in the chosen training folder
+        chkp_path = os.path.join('results', previous_training_path, 'checkpoints')
+        chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp']
+
+        # Find which snapshot to restore
+        if chkp_idx is None:
+            chosen_chkp = 'current_chkp.tar'
+        else:
+            chosen_chkp = np.sort(chkps)[chkp_idx]
+        chosen_chkp = os.path.join('results', previous_training_path, 'checkpoints', chosen_chkp)
+
+    else:
+        chosen_chkp = None
+
+    ##############
+    # Prepare Data
+    ##############
+
+    print()
+    print('Data Preparation')
+    print('****************')
+
+    # Initialize configuration class
+    config = SemanticKittiConfig()
+    if previous_training_path:
+        config.load(os.path.join('results', previous_training_path))
+        config.saving_path = None
+
+    # Get path from argument if given
+    if len(sys.argv) > 1:
+        config.saving_path = sys.argv[1]
+
+    # Initialize datasets
+    training_dataset = SemanticKittiDataset(config, set='training',
+                                            balance_classes=True)
+    test_dataset = SemanticKittiDataset(config, set='validation',
+                                        balance_classes=False)
+
+    # Initialize samplers
+    training_sampler = SemanticKittiSampler(training_dataset)
+    test_sampler = SemanticKittiSampler(test_dataset)
+
+    # Initialize the dataloader
+    training_loader = DataLoader(training_dataset,
+                                 batch_size=1,
+                                 sampler=training_sampler,
+                                 collate_fn=SemanticKittiCollate,
+                                 num_workers=config.input_threads,
+                                 pin_memory=True)
+    test_loader = DataLoader(test_dataset,
+                             batch_size=1,
+                             sampler=test_sampler,
+                             collate_fn=SemanticKittiCollate,
+                             num_workers=config.input_threads,
+                             pin_memory=True)
+
+    # Calibrate max_in_point value
+    training_sampler.calib_max_in(config, training_loader, verbose=True)
+    test_sampler.calib_max_in(config, test_loader, verbose=True)
+
+    # Calibrate samplers
+    training_sampler.calibration(training_loader, verbose=True)
+    test_sampler.calibration(test_loader, verbose=True)
+
+    #
debug_timing(training_dataset, training_loader) + # debug_timing(test_dataset, test_loader) + # debug_class_w(training_dataset, training_loader) + + print('\nModel Preparation') + print('*****************') + + # Define network model + t1 = time.time() + net = KPFCNN(config, training_dataset.label_values, training_dataset.ignored_labels) + + debug = False + if debug: + print('\n*************************************\n') + print(net) + print('\n*************************************\n') + for param in net.parameters(): + if param.requires_grad: + print(param.shape) + print('\n*************************************\n') + print("Model size %i" % sum(param.numel() for param in net.parameters() if param.requires_grad)) + print('\n*************************************\n') + + # Define a trainer class + trainer = ModelTrainer(net, config, chkp_path=chosen_chkp) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart training') + print('**************') + + # Training + trainer.train(net, training_loader, test_loader, config) + + print('Forcing exit now') + os.kill(os.getpid(), signal.SIGINT) diff --git a/competing_methods/my_KPConv/train_UrbanMesh.py b/competing_methods/my_KPConv/train_UrbanMesh.py new file mode 100644 index 00000000..e911fa6d --- /dev/null +++ b/competing_methods/my_KPConv/train_UrbanMesh.py @@ -0,0 +1,306 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Callable script to start a training on UrbanMesh dataset +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 06/03/2020 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# 
\**********************************/
+#
+
+# Common libs
+import signal
+import os
+
+# Dataset
+from datasets.UrbanMesh import *
+from torch.utils.data import DataLoader
+
+from utils.config import Config
+from utils.trainer import ModelTrainer
+from models.architectures import KPFCNN
+
+
+# ----------------------------------------------------------------------------------------------------------------------
+#
+#           Config Class
+#       \******************/
+#
+
+class UrbanMeshConfig(Config):
+    """
+    Override the parameters you want to modify for this dataset
+    """
+
+    ####################
+    # Dataset parameters
+    ####################
+
+    # Dataset name
+    dataset = 'UrbanMesh'
+
+    # Number of classes in the dataset (this value is overwritten by the dataset class when initializing the dataset)
+    num_classes = None
+
+    # Type of task performed on this dataset (also overwritten)
+    dataset_task = ''
+
+    # Number of CPU threads for the input pipeline
+    input_threads = 8
+
+    #########################
+    # Architecture definition
+    #########################
+
+    # Define layers
+    architecture = ['simple',
+                    'resnetb',
+                    'resnetb_strided',
+                    'resnetb',
+                    'resnetb_strided',
+                    'resnetb',
+                    'resnetb_strided',
+                    'resnetb',
+                    'resnetb_strided',
+                    'resnetb',
+                    'nearest_upsample',
+                    'unary',
+                    'nearest_upsample',
+                    'unary',
+                    'nearest_upsample',
+                    'unary',
+                    'nearest_upsample',
+                    'unary']
+
+    ###################
+    # KPConv parameters
+    ###################
+
+    # Radius of the input sphere
+    in_radius = 5.0  # in_radius = 50 * first_subsampling_dl
+
+    # Number of kernel points
+    num_kernel_points = 15
+
+    # Size of the first subsampling grid in meters
+    first_subsampling_dl = 0.1
+
+    # Radius of convolution in "number grid cell". (2.5 is the standard value)
+    conv_radius = 2.5
+
+    # Radius of deformable convolution in "number grid cell". Larger so that deformed kernel can spread out
+    deform_radius = 6.0
+
+    # Radius of the area of influence of each kernel point in "number grid cell".
(1.0 is the standard value) + KP_extent = 1.2 + + # Behavior of convolutions in ('constant', 'linear', 'gaussian') + KP_influence = 'linear' + + # Aggregation function of KPConv in ('closest', 'sum') + aggregation_mode = 'sum' + + # Choice of input features + first_features_dim = 128 + in_features_dim = 5 + + # Can the network learn modulations + modulated = False + + # Batch normalization parameters + use_batch_norm = True + batch_norm_momentum = 0.02 + + # Deformable offset loss + # 'point2point' fitting geometry by penalizing distance from deform point to input points + # 'point2plane' fitting geometry by penalizing distance from deform point to input point triplet (not implemented) + deform_fitting_mode = 'point2point' + deform_fitting_power = 1.0 # Multiplier for the fitting/repulsive loss + deform_lr_factor = 0.1 # Multiplier for learning rate applied to the deformations + repulse_extent = 1.2 # Distance of repulsion for deformed kernel points + + ##################### + # Training parameters + ##################### + + # Maximal number of epochs + max_epoch = 500 # 500 + max_test_epoch = 100 + + # Learning rate management + learning_rate = 1e-2 + momentum = 0.98 + lr_decays = {i: 0.1 ** (1 / 150) for i in range(1, max_epoch)} + grad_clip_norm = 100.0 + + # Number of batch + batch_num = 6 # 6 + + # Number of steps per epochs + epoch_steps = 500 # 500 + + # Number of validation examples per epoch + validation_size = 50 # 50 + + # Number of epoch between each checkpoint + checkpoint_gap = 50 # 50 + + # Augmentations + augment_scale_anisotropic = True + augment_symmetries = [True, False, False] + augment_rotation = 'vertical' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_color = 0.8 + + # The way we balance segmentation loss + # > 'none': Each point in the whole batch has the same contribution. 
+    # >      'class': Each class has the same contribution (points are weighted according to class balance)
+    # >      'batch': Each cloud in the batch has the same contribution (points are weighted according to cloud sizes)
+    segloss_balance = 'none'
+
+    # Do we need to save convergence
+    saving = True
+    saving_path = None
+
+
+# ----------------------------------------------------------------------------------------------------------------------
+#
+#           Main Call
+#       \***************/
+#
+
+if __name__ == '__main__':
+
+    ############################
+    # Initialize the environment
+    ############################
+
+    # Set which gpu is going to be used
+    GPU_ID = '0'
+
+    # Set GPU visible device
+    os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID
+
+    ###############
+    # Previous chkp
+    ###############
+
+    # Choose here if you want to start training from a previous snapshot (None for new training)
+    # previous_training_path = 'Log_2020-03-19_19-53-27'
+    previous_training_path = ''
+
+    # Choose index of checkpoint to start from. If None, uses the latest chkp
+    chkp_idx = None
+    if previous_training_path:
+
+        # Find all snapshots in the chosen training folder
+        chkp_path = os.path.join('results', previous_training_path, 'checkpoints')
+        chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp']
+
+        # Find which snapshot to restore
+        if chkp_idx is None:
+            chosen_chkp = 'current_chkp.tar'
+        else:
+            chosen_chkp = np.sort(chkps)[chkp_idx]
+        chosen_chkp = os.path.join('results', previous_training_path, 'checkpoints', chosen_chkp)
+
+    else:
+        chosen_chkp = None
+
+    ##############
+    # Prepare Data
+    ##############
+
+    print()
+    print('Data Preparation')
+    print('****************')
+
+    # Initialize configuration class
+    config = UrbanMeshConfig()
+    if previous_training_path:
+        config.load(os.path.join('results', previous_training_path))
+        config.saving_path = None
+
+    # Get path from argument if given
+    if len(sys.argv) > 1:
+        config.saving_path = sys.argv[1]
+
+    # Initialize datasets
+    training_dataset = UrbanMeshDataset(config, set='training', use_potentials=True)
+    test_dataset = UrbanMeshDataset(config, set='validation', use_potentials=True)
+
+    # Initialize samplers
+    training_sampler = UrbanMeshSampler(training_dataset)
+    test_sampler = UrbanMeshSampler(test_dataset)
+
+    # Initialize the dataloader
+    training_loader = DataLoader(training_dataset,
+                                 batch_size=6,
+                                 sampler=training_sampler,
+                                 collate_fn=UrbanMeshCollate,
+                                 num_workers=config.input_threads,
+                                 pin_memory=True,
+                                 drop_last=True)
+    test_loader = DataLoader(test_dataset,
+                             batch_size=1,
+                             sampler=test_sampler,
+                             collate_fn=UrbanMeshCollate,
+                             num_workers=config.input_threads,
+                             pin_memory=True,
+                             drop_last=True)
+
+    # Calibrate samplers
+    training_sampler.calibration(training_loader, verbose=True)
+    test_sampler.calibration(test_loader, verbose=True)
+
+    # Optional debug functions
+    # debug_timing(training_dataset, training_loader)
+    # debug_timing(test_dataset, test_loader)
+    # debug_upsampling(training_dataset,
training_loader) + + print('\nModel Preparation') + print('*****************') + + # Define network model + t1 = time.time() + net = KPFCNN(config, training_dataset.label_values, training_dataset.ignored_labels) + + debug = False + if debug: + print('\n*************************************\n') + print(net) + print('\n*************************************\n') + for param in net.parameters(): + if param.requires_grad: + print(param.shape) + print('\n*************************************\n') + print("Model size %i" % sum(param.numel() for param in net.parameters() if param.requires_grad)) + print('\n*************************************\n') + + # Define a trainer class + trainer = ModelTrainer(net, config, chkp_path=chosen_chkp) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart training') + print('**************') + + # Training + trainer.train(net, training_loader, test_loader, config) + + print('Forcing exit now') + os.kill(os.getpid(), signal.SIGINT) diff --git a/competing_methods/my_KPConv/utils/config.py b/competing_methods/my_KPConv/utils/config.py new file mode 100644 index 00000000..78e97cfd --- /dev/null +++ b/competing_methods/my_KPConv/utils/config.py @@ -0,0 +1,386 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Configuration class +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +from os.path import join +import numpy as np + + +# Colors for printing +class bcolors: + HEADER = '\033[95m' + OKBLUE = '\033[94m' + OKGREEN = '\033[92m' + WARNING = '\033[93m' + FAIL = '\033[91m' + ENDC = '\033[0m' + BOLD = '\033[1m' + UNDERLINE = '\033[4m' + + +class Config: + """ + Class containing the parameters you want to modify for 
this dataset + """ + + ################## + # Input parameters + ################## + + # Dataset name + dataset = '' + + # Type of network model + dataset_task = '' + + # Number of classes in the dataset + num_classes = 0 + + # Dimension of input points + in_points_dim = 3 + + # Dimension of input features + in_features_dim = 1 + + # Radius of the input sphere (ignored for models, only used for point clouds) + in_radius = 1.0 + + # Number of CPU threads for the input pipeline + input_threads = 8 + + ################## + # Model parameters + ################## + + # Architecture definition. List of blocks + architecture = [] + + # Decide the mode of equivariance and invariance + equivar_mode = '' + invar_mode = '' + + # Dimension of the first feature maps + first_features_dim = 64 + + # Batch normalization parameters + use_batch_norm = True + batch_norm_momentum = 0.99 + + # For segmentation models : ratio between the segmented area and the input area + segmentation_ratio = 1.0 + + ################### + # KPConv parameters + ################### + + # Number of kernel points + num_kernel_points = 15 + + # Size of the first subsampling grid in meter + first_subsampling_dl = 0.02 + + # Radius of convolution in "number grid cell". (2.5 is the standard value) + conv_radius = 2.5 + + # Radius of deformable convolution in "number grid cell". Larger so that deformed kernel can spread out + deform_radius = 5.0 + + # Kernel point influence radius + KP_extent = 1.0 + + # Influence function when d < KP_extent. 
('constant', 'linear', 'gaussian') When d > KP_extent, always zero
+    KP_influence = 'linear'
+
+    # Aggregation function of KPConv in ('closest', 'sum')
+    # Decide if you sum all kernel point influences, or if you only take the influence of the closest KP
+    aggregation_mode = 'sum'
+
+    # Fixed points in the kernel : 'none', 'center' or 'verticals'
+    fixed_kernel_points = 'center'
+
+    # Use modulation in deformable convolutions
+    modulated = False
+
+    # For SLAM datasets like SemanticKitti, number of frames used (minimum one)
+    n_frames = 1
+
+    # For SLAM datasets like SemanticKitti, max number of points in input cloud + validation
+    max_in_points = 0
+    val_radius = 51.0
+    max_val_points = 50000
+
+    #####################
+    # Training parameters
+    #####################
+
+    # Network optimizer parameters (learning rate and momentum)
+    learning_rate = 1e-3
+    momentum = 0.9
+
+    # Learning rate decays. Dictionary of all decay values with their epoch {epoch: decay}.
+    lr_decays = {200: 0.2, 300: 0.2}
+
+    # Gradient clipping value (negative means no clipping)
+    grad_clip_norm = 100.0
+
+    # Augmentation parameters
+    augment_scale_anisotropic = True
+    augment_scale_min = 0.9
+    augment_scale_max = 1.1
+    augment_symmetries = [False, False, False]
+    augment_rotation = 'vertical'
+    augment_noise = 0.005
+    augment_color = 0.7
+
+    # Augment with occlusions (not implemented yet)
+    augment_occlusion = 'none'
+    augment_occlusion_ratio = 0.2
+    augment_occlusion_num = 1
+
+    # Regularization loss importance
+    weight_decay = 1e-3
+
+    # The way we balance segmentation loss DEPRECATED
+    segloss_balance = 'none'
+
+    # Choose weights for class (used in segmentation loss). Empty list for no weights
+    class_w = []
+
+    # Deformable offset loss
+    # 'point2point' fitting geometry by penalizing distance from deform point to input points
+    # 'point2plane' fitting geometry by penalizing distance from deform point to input point triplet (not implemented)
+    deform_fitting_mode = 'point2point'
+    deform_fitting_power = 1.0              # Multiplier for the fitting/repulsive loss
+    deform_lr_factor = 0.1                  # Multiplier for learning rate applied to the deformations
+    repulse_extent = 1.0                    # Distance of repulsion for deformed kernel points
+
+    # Number of batches
+    batch_num = 10
+    val_batch_num = 10
+
+    # Maximal number of epochs
+    max_epoch = 1000
+
+    # Maximal number of test epochs
+    max_test_epoch = 100
+
+    # Number of steps per epoch
+    epoch_steps = 1000
+
+    # Number of validation examples per epoch
+    validation_size = 100
+
+    # Number of epochs between each checkpoint
+    checkpoint_gap = 50
+
+    # Do we need to save convergence
+    saving = True
+    saving_path = None
+
+    def __init__(self):
+        """
+        Class initializer
+        """
+
+        # Number of layers
+        self.num_layers = len([block for block in self.architecture if 'pool' in block or 'strided' in block]) + 1
+
+        ###################
+        # Deform layer list
+        ###################
+        #
+        # List of booleans indicating which layers have a deformable convolution
+        #
+
+        layer_blocks = []
+        self.deform_layers = []
+        arch = self.architecture
+        for block_i, block in enumerate(arch):
+
+            # Get all blocks of the layer
+            if not ('pool' in block or 'strided' in block or 'global' in block or 'upsample' in block):
+                layer_blocks += [block]
+                continue
+
+            # Convolution neighbors indices
+            # *****************************
+
+            deform_layer = False
+            if layer_blocks:
+                if np.any(['deformable' in blck for blck in layer_blocks]):
+                    deform_layer = True
+
+            if 'pool' in block or 'strided' in block:
+                if 'deformable' in block:
+                    deform_layer = True
+
+            self.deform_layers += [deform_layer]
+            layer_blocks = []
+
+            # Stop when meeting a global pooling or
upsampling + if 'global' in block or 'upsample' in block: + break + + def load(self, path): + + filename = join(path, 'parameters.txt') + with open(filename, 'r') as f: + lines = f.readlines() + + # Class variable dictionary + for line in lines: + line_info = line.split() + if len(line_info) > 2 and line_info[0] != '#': + + if line_info[2] == 'None': + setattr(self, line_info[0], None) + + elif line_info[0] == 'lr_decay_epochs': + self.lr_decays = {int(b.split(':')[0]): float(b.split(':')[1]) for b in line_info[2:]} + + elif line_info[0] == 'architecture': + self.architecture = [b for b in line_info[2:]] + + elif line_info[0] == 'augment_symmetries': + self.augment_symmetries = [bool(int(b)) for b in line_info[2:]] + + elif line_info[0] == 'num_classes': + if len(line_info) > 3: + self.num_classes = [int(c) for c in line_info[2:]] + else: + self.num_classes = int(line_info[2]) + + elif line_info[0] == 'class_w': + self.class_w = [float(w) for w in line_info[2:]] + + elif hasattr(self, line_info[0]): + attr_type = type(getattr(self, line_info[0])) + if attr_type == bool: + setattr(self, line_info[0], attr_type(int(line_info[2]))) + else: + setattr(self, line_info[0], attr_type(line_info[2])) + + self.saving = True + self.saving_path = path + self.__init__() + + def save(self): + + with open(join(self.saving_path, 'parameters.txt'), "w") as text_file: + + text_file.write('# -----------------------------------#\n') + text_file.write('# Parameters of the training session #\n') + text_file.write('# -----------------------------------#\n\n') + + # Input parameters + text_file.write('# Input parameters\n') + text_file.write('# ****************\n\n') + text_file.write('dataset = {:s}\n'.format(self.dataset)) + text_file.write('dataset_task = {:s}\n'.format(self.dataset_task)) + if type(self.num_classes) is list: + text_file.write('num_classes =') + for n in self.num_classes: + text_file.write(' {:d}'.format(n)) + text_file.write('\n') + else: + text_file.write('num_classes 
= {:d}\n'.format(self.num_classes)) + text_file.write('in_points_dim = {:d}\n'.format(self.in_points_dim)) + text_file.write('in_features_dim = {:d}\n'.format(self.in_features_dim)) + text_file.write('in_radius = {:.6f}\n'.format(self.in_radius)) + text_file.write('input_threads = {:d}\n\n'.format(self.input_threads)) + + # Model parameters + text_file.write('# Model parameters\n') + text_file.write('# ****************\n\n') + + text_file.write('architecture =') + for a in self.architecture: + text_file.write(' {:s}'.format(a)) + text_file.write('\n') + text_file.write('equivar_mode = {:s}\n'.format(self.equivar_mode)) + text_file.write('invar_mode = {:s}\n'.format(self.invar_mode)) + text_file.write('num_layers = {:d}\n'.format(self.num_layers)) + text_file.write('first_features_dim = {:d}\n'.format(self.first_features_dim)) + text_file.write('use_batch_norm = {:d}\n'.format(int(self.use_batch_norm))) + text_file.write('batch_norm_momentum = {:.6f}\n\n'.format(self.batch_norm_momentum)) + text_file.write('segmentation_ratio = {:.6f}\n\n'.format(self.segmentation_ratio)) + + # KPConv parameters + text_file.write('# KPConv parameters\n') + text_file.write('# *****************\n\n') + + text_file.write('first_subsampling_dl = {:.6f}\n'.format(self.first_subsampling_dl)) + text_file.write('num_kernel_points = {:d}\n'.format(self.num_kernel_points)) + text_file.write('conv_radius = {:.6f}\n'.format(self.conv_radius)) + text_file.write('deform_radius = {:.6f}\n'.format(self.deform_radius)) + text_file.write('fixed_kernel_points = {:s}\n'.format(self.fixed_kernel_points)) + text_file.write('KP_extent = {:.6f}\n'.format(self.KP_extent)) + text_file.write('KP_influence = {:s}\n'.format(self.KP_influence)) + text_file.write('aggregation_mode = {:s}\n'.format(self.aggregation_mode)) + text_file.write('modulated = {:d}\n'.format(int(self.modulated))) + text_file.write('n_frames = {:d}\n'.format(self.n_frames)) + text_file.write('max_in_points = 
{:d}\n\n'.format(self.max_in_points)) + text_file.write('max_val_points = {:d}\n\n'.format(self.max_val_points)) + text_file.write('val_radius = {:.6f}\n\n'.format(self.val_radius)) + + # Training parameters + text_file.write('# Training parameters\n') + text_file.write('# *******************\n\n') + + text_file.write('learning_rate = {:f}\n'.format(self.learning_rate)) + text_file.write('momentum = {:f}\n'.format(self.momentum)) + text_file.write('lr_decay_epochs =') + for e, d in self.lr_decays.items(): + text_file.write(' {:d}:{:f}'.format(e, d)) + text_file.write('\n') + text_file.write('grad_clip_norm = {:f}\n\n'.format(self.grad_clip_norm)) + + + text_file.write('augment_symmetries =') + for a in self.augment_symmetries: + text_file.write(' {:d}'.format(int(a))) + text_file.write('\n') + text_file.write('augment_rotation = {:s}\n'.format(self.augment_rotation)) + text_file.write('augment_noise = {:f}\n'.format(self.augment_noise)) + text_file.write('augment_occlusion = {:s}\n'.format(self.augment_occlusion)) + text_file.write('augment_occlusion_ratio = {:.6f}\n'.format(self.augment_occlusion_ratio)) + text_file.write('augment_occlusion_num = {:d}\n'.format(self.augment_occlusion_num)) + text_file.write('augment_scale_anisotropic = {:d}\n'.format(int(self.augment_scale_anisotropic))) + text_file.write('augment_scale_min = {:.6f}\n'.format(self.augment_scale_min)) + text_file.write('augment_scale_max = {:.6f}\n'.format(self.augment_scale_max)) + text_file.write('augment_color = {:.6f}\n\n'.format(self.augment_color)) + + text_file.write('weight_decay = {:f}\n'.format(self.weight_decay)) + text_file.write('segloss_balance = {:s}\n'.format(self.segloss_balance)) + text_file.write('class_w =') + for a in self.class_w: + text_file.write(' {:.6f}'.format(a)) + text_file.write('\n') + text_file.write('deform_fitting_mode = {:s}\n'.format(self.deform_fitting_mode)) + text_file.write('deform_fitting_power = {:.6f}\n'.format(self.deform_fitting_power)) + 
text_file.write('deform_lr_factor = {:.6f}\n'.format(self.deform_lr_factor)) + text_file.write('repulse_extent = {:.6f}\n'.format(self.repulse_extent)) + text_file.write('batch_num = {:d}\n'.format(self.batch_num)) + text_file.write('val_batch_num = {:d}\n'.format(self.val_batch_num)) + text_file.write('max_epoch = {:d}\n'.format(self.max_epoch)) + text_file.write('max_test_epoch = {:d}\n'.format(self.max_test_epoch)) + if self.epoch_steps is None: + text_file.write('epoch_steps = None\n') + else: + text_file.write('epoch_steps = {:d}\n'.format(self.epoch_steps)) + text_file.write('validation_size = {:d}\n'.format(self.validation_size)) + text_file.write('checkpoint_gap = {:d}\n'.format(self.checkpoint_gap)) + diff --git a/competing_methods/my_KPConv/utils/mayavi_visu.py b/competing_methods/my_KPConv/utils/mayavi_visu.py new file mode 100644 index 00000000..b1c3821d --- /dev/null +++ b/competing_methods/my_KPConv/utils/mayavi_visu.py @@ -0,0 +1,436 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Script for various visualization with mayavi +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import torch +import numpy as np +from sklearn.neighbors import KDTree +from os import makedirs, remove, rename, listdir +from os.path import exists, join +import time + +import sys + +# PLY reader +from utils.ply import write_ply, read_ply + +# Configuration class +from utils.config import Config + + +def show_ModelNet_models(all_points): + from 
mayavi import mlab + + ########################### + # Interactive visualization + ########################### + + # Create figure for features + fig1 = mlab.figure('Models', bgcolor=(1, 1, 1), size=(1000, 800)) + fig1.scene.parallel_projection = False + + # Indices + global file_i + file_i = 0 + + def update_scene(): + + # clear figure + mlab.clf(fig1) + + # Plot new data feature + points = all_points[file_i] + + # Rescale points for visu + points = (points * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + + # Show point clouds colorized with activations + activations = mlab.points3d(points[:, 0], + points[:, 1], + points[:, 2], + points[:, 2], + scale_factor=3.0, + scale_mode='none', + figure=fig1) + + # New title + mlab.title(str(file_i), color=(0, 0, 0), size=0.3, height=0.01) + text = '<--- (press g for previous)' + 50 * ' ' + '(press h for next) --->' + mlab.text(0.01, 0.01, text, color=(0, 0, 0), width=0.98) + mlab.orientation_axes() + + return + + def keyboard_callback(vtk_obj, event): + global file_i + + if vtk_obj.GetKeyCode() in ['g', 'G']: + + file_i = (file_i - 1) % len(all_points) + update_scene() + + elif vtk_obj.GetKeyCode() in ['h', 'H']: + + file_i = (file_i + 1) % len(all_points) + update_scene() + + return + + # Draw a first plot + update_scene() + fig1.scene.interactor.add_observer('KeyPressEvent', keyboard_callback) + mlab.show() + + +def show_ModelNet_examples(clouds, cloud_normals=None, cloud_labels=None): + from mayavi import mlab + + ########################### + # Interactive visualization + ########################### + + # Create figure for features + fig1 = mlab.figure('Models', bgcolor=(1, 1, 1), size=(1000, 800)) + fig1.scene.parallel_projection = False + + if cloud_labels is None: + cloud_labels = [points[:, 2] for points in clouds] + + # Indices + global file_i, show_normals + file_i = 0 + show_normals = True + + def update_scene(): + + # clear figure + mlab.clf(fig1) + + # Plot new data feature + points = clouds[file_i] + labels = 
cloud_labels[file_i] + if cloud_normals is not None: + normals = cloud_normals[file_i] + else: + normals = None + + # Rescale points for visu + points = (points * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + + # Show point clouds colorized with activations + activations = mlab.points3d(points[:, 0], + points[:, 1], + points[:, 2], + labels, + scale_factor=3.0, + scale_mode='none', + figure=fig1) + if normals is not None and show_normals: + activations = mlab.quiver3d(points[:, 0], + points[:, 1], + points[:, 2], + normals[:, 0], + normals[:, 1], + normals[:, 2], + scale_factor=10.0, + scale_mode='none', + figure=fig1) + + # New title + mlab.title(str(file_i), color=(0, 0, 0), size=0.3, height=0.01) + text = '<--- (press g for previous)' + 50 * ' ' + '(press h for next) --->' + mlab.text(0.01, 0.01, text, color=(0, 0, 0), width=0.98) + mlab.orientation_axes() + + return + + def keyboard_callback(vtk_obj, event): + global file_i, show_normals + + if vtk_obj.GetKeyCode() in ['g', 'G']: + file_i = (file_i - 1) % len(clouds) + update_scene() + + elif vtk_obj.GetKeyCode() in ['h', 'H']: + file_i = (file_i + 1) % len(clouds) + update_scene() + + elif vtk_obj.GetKeyCode() in ['n', 'N']: + show_normals = not show_normals + update_scene() + + return + + # Draw a first plot + update_scene() + fig1.scene.interactor.add_observer('KeyPressEvent', keyboard_callback) + mlab.show() + + +def show_neighbors(query, supports, neighbors): + from mayavi import mlab + + ########################### + # Interactive visualization + ########################### + + # Create figure for features + fig1 = mlab.figure('Models', bgcolor=(1, 1, 1), size=(1000, 800)) + fig1.scene.parallel_projection = False + + # Indices + global file_i + file_i = 0 + + def update_scene(): + + # clear figure + mlab.clf(fig1) + + # Rescale points for visu + p1 = (query * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + p2 = (supports * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + + l1 = p1[:, 2]*0 + l1[file_i] = 1 + + l2 = p2[:, 
2]*0 + 2 + l2[neighbors[file_i]] = 3 + + # Show point clouds colorized with activations + activations = mlab.points3d(p1[:, 0], + p1[:, 1], + p1[:, 2], + l1, + scale_factor=2.0, + scale_mode='none', + vmin=0.0, + vmax=3.0, + figure=fig1) + + activations = mlab.points3d(p2[:, 0], + p2[:, 1], + p2[:, 2], + l2, + scale_factor=3.0, + scale_mode='none', + vmin=0.0, + vmax=3.0, + figure=fig1) + + # New title + mlab.title(str(file_i), color=(0, 0, 0), size=0.3, height=0.01) + text = '<--- (press g for previous)' + 50 * ' ' + '(press h for next) --->' + mlab.text(0.01, 0.01, text, color=(0, 0, 0), width=0.98) + mlab.orientation_axes() + + return + + def keyboard_callback(vtk_obj, event): + global file_i + + if vtk_obj.GetKeyCode() in ['g', 'G']: + + file_i = (file_i - 1) % len(query) + update_scene() + + elif vtk_obj.GetKeyCode() in ['h', 'H']: + + file_i = (file_i + 1) % len(query) + update_scene() + + return + + # Draw a first plot + update_scene() + fig1.scene.interactor.add_observer('KeyPressEvent', keyboard_callback) + mlab.show() + + +def show_input_batch(batch): + from mayavi import mlab + + ########################### + # Interactive visualization + ########################### + + # Create figure for features + fig1 = mlab.figure('Input', bgcolor=(1, 1, 1), size=(1000, 800)) + fig1.scene.parallel_projection = False + + # Unstack batch + all_points = batch.unstack_points() + all_neighbors = batch.unstack_neighbors() + all_pools = batch.unstack_pools() + + # Indices + global b_i, l_i, neighb_i, show_pools + b_i = 0 + l_i = 0 + neighb_i = 0 + show_pools = False + + def update_scene(): + + # clear figure + mlab.clf(fig1) + + # Rescale points for visu + p = (all_points[l_i][b_i] * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + labels = p[:, 2]*0 + + if show_pools: + p2 = (all_points[l_i+1][b_i][neighb_i:neighb_i+1] * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + p = np.vstack((p, p2)) + labels = np.hstack((labels, np.ones((1,), dtype=np.int32)*3)) + pool_inds = 
all_pools[l_i][b_i][neighb_i] + pool_inds = pool_inds[pool_inds >= 0] + labels[pool_inds] = 2 + else: + neighb_inds = all_neighbors[l_i][b_i][neighb_i] + neighb_inds = neighb_inds[neighb_inds >= 0] + labels[neighb_inds] = 2 + labels[neighb_i] = 3 + + # Show point clouds colorized with activations + mlab.points3d(p[:, 0], + p[:, 1], + p[:, 2], + labels, + scale_factor=2.0, + scale_mode='none', + vmin=0.0, + vmax=3.0, + figure=fig1) + + + """ + mlab.points3d(p[-2:, 0], + p[-2:, 1], + p[-2:, 2], + labels[-2:]*0 + 3, + scale_factor=0.16 * 1.5 * 50, + scale_mode='none', + mode='cube', + vmin=0.0, + vmax=3.0, + figure=fig1) + mlab.points3d(p[-1:, 0], + p[-1:, 1], + p[-1:, 2], + labels[-1:]*0 + 2, + scale_factor=0.16 * 2 * 2.5 * 1.5 * 50, + scale_mode='none', + mode='sphere', + vmin=0.0, + vmax=3.0, + figure=fig1) + + """ + + # New title + title_str = '<([) b_i={:d} (])> <(,) l_i={:d} (.)> <(N) n_i={:d} (M)>'.format(b_i, l_i, neighb_i) + mlab.title(title_str, color=(0, 0, 0), size=0.3, height=0.90) + if show_pools: + text = 'pools (switch with G)' + else: + text = 'neighbors (switch with G)' + mlab.text(0.01, 0.01, text, color=(0, 0, 0), width=0.3) + mlab.orientation_axes() + + return + + def keyboard_callback(vtk_obj, event): + global b_i, l_i, neighb_i, show_pools + + if vtk_obj.GetKeyCode() in ['[', '{']: + b_i = (b_i - 1) % len(all_points[l_i]) + neighb_i = 0 + update_scene() + + elif vtk_obj.GetKeyCode() in [']', '}']: + b_i = (b_i + 1) % len(all_points[l_i]) + neighb_i = 0 + update_scene() + + elif vtk_obj.GetKeyCode() in [',', '<']: + if show_pools: + l_i = (l_i - 1) % (len(all_points) - 1) + else: + l_i = (l_i - 1) % len(all_points) + neighb_i = 0 + update_scene() + + elif vtk_obj.GetKeyCode() in ['.', '>']: + if show_pools: + l_i = (l_i + 1) % (len(all_points) - 1) + else: + l_i = (l_i + 1) % len(all_points) + neighb_i = 0 + update_scene() + + elif vtk_obj.GetKeyCode() in ['n', 'N']: + neighb_i = (neighb_i - 1) % all_points[l_i][b_i].shape[0] + update_scene() + + 
elif vtk_obj.GetKeyCode() in ['m', 'M']: + neighb_i = (neighb_i + 1) % all_points[l_i][b_i].shape[0] + update_scene() + + elif vtk_obj.GetKeyCode() in ['g', 'G']: + if l_i < len(all_points) - 1: + show_pools = not show_pools + neighb_i = 0 + update_scene() + + return + + # Draw a first plot + update_scene() + fig1.scene.interactor.add_observer('KeyPressEvent', keyboard_callback) + mlab.show() + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/competing_methods/my_KPConv/utils/metrics.py b/competing_methods/my_KPConv/utils/metrics.py new file mode 100644 index 00000000..7c166b68 --- /dev/null +++ b/competing_methods/my_KPConv/utils/metrics.py @@ -0,0 +1,232 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Metric utility functions +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import numpy as np + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Utilities +# \***************/ +# + +def fast_confusion(true, pred, label_values=None): + """ + Fast confusion matrix (100x faster than Scikit learn), but only works with integer labels. + :param true: ground truth labels (1D integer array) + :param pred: predicted labels (1D integer array) + :param label_values: optional 1D array of all possible label values + :return: confusion matrix as a [num_classes, num_classes] np.int32 array + """ + + # Ensure data is in the right format + if true.shape[0] != 1: + true = np.squeeze(true) + if pred.shape[0] != 1: + pred = np.squeeze(pred) + if len(true.shape) != 1: + raise ValueError('Truth values are stored in a {:d}D array instead of 1D array'. 
format(len(true.shape))) + if len(pred.shape) != 1: + raise ValueError('Prediction values are stored in a {:d}D array instead of 1D array'.format(len(pred.shape))) + if true.dtype not in [np.int32, np.int64]: + raise ValueError('Truth values are {} instead of int32 or int64'.format(true.dtype)) + if pred.dtype not in [np.int32, np.int64]: + raise ValueError('Prediction values are {} instead of int32 or int64'.format(pred.dtype)) + true = true.astype(np.int32) + pred = pred.astype(np.int32) + + # Get the label values + if label_values is None: + # From data if they are not given + label_values = np.unique(np.hstack((true, pred))) + else: + # Ensure they are good if given + if label_values.dtype not in [np.int32, np.int64]: + raise ValueError('label values are {} instead of int32 or int64'.format(label_values.dtype)) + if len(np.unique(label_values)) < len(label_values): + raise ValueError('Given labels are not unique') + + # Sort labels + label_values = np.sort(label_values) + + # Get the number of classes + num_classes = len(label_values) + + # Start confusion computations + if label_values[0] == 0 and label_values[-1] == num_classes - 1: + + # Vectorized confusion + vec_conf = np.bincount(true * num_classes + pred) + + # Add possible missing values due to classes not being in pred or true + if vec_conf.shape[0] < num_classes ** 2: + vec_conf = np.pad(vec_conf, (0, num_classes ** 2 - vec_conf.shape[0]), 'constant') + + # Reshape confusion in a matrix + return vec_conf.reshape((num_classes, num_classes)) + + + else: + + # Ensure no negative classes + if label_values[0] < 0: + raise ValueError('Unsupported negative classes') + + # Get the data in [0, num_classes[ + label_map = np.zeros((label_values[-1] + 1,), dtype=np.int32) + for k, v in enumerate(label_values): + 
label_map[v] = k + + pred = label_map[pred] + true = label_map[true] + + # Vectorized confusion + vec_conf = np.bincount(true * num_classes + pred) + + # Add possible missing values due to classes not being in pred or true + if vec_conf.shape[0] < num_classes ** 2: + vec_conf = np.pad(vec_conf, (0, num_classes ** 2 - vec_conf.shape[0]), 'constant') + + # Reshape confusion in a matrix + return vec_conf.reshape((num_classes, num_classes)) + +def metrics(confusions, ignore_unclassified=False): + """ + Computes different metrics from confusion matrices. + :param confusions: ([..., n_c, n_c] np.int32). Can be any dimension, the confusion matrices should be described by + the last axes. n_c = number of classes + :param ignore_unclassified: (bool). True if the first class should be ignored in the results + :return: ([..., n_c] np.float32) precision, recall, F1 score, IoU score, accuracy + """ + + # If the first class (often "unclassified") should be ignored, erase it from the confusion. + if ignore_unclassified: + confusions[..., 0, :] = 0 + confusions[..., :, 0] = 0 + + # Compute TP, FP, FN. This assumes that the second to last axis counts the truths (like the first axis of a + # confusion matrix), and that the last axis counts the predictions (like the second axis of a confusion matrix) + TP = np.diagonal(confusions, axis1=-2, axis2=-1) + TP_plus_FP = np.sum(confusions, axis=-2) + TP_plus_FN = np.sum(confusions, axis=-1) + + # Compute precision and recall (precision divides by TP + FP, recall by TP + FN) + PRE = TP / (TP_plus_FP + 1e-6) + REC = TP / (TP_plus_FN + 1e-6) + + # Compute Accuracy + ACC = np.sum(TP, axis=-1) / (np.sum(confusions, axis=(-2, -1)) + 1e-6) + + # Compute F1 score + F1 = 2 * TP / (TP_plus_FP + TP_plus_FN + 1e-6) + + # Compute IoU + IoU = F1 / (2 - F1) + + return PRE, REC, F1, IoU, ACC + + +def smooth_metrics(confusions, smooth_n=0, ignore_unclassified=False): + """ + Computes different metrics from confusion matrices. Smoothed over a number of epochs. + :param confusions: ([..., n_c, n_c] np.int32). Can be any dimension, the confusion matrices should be described by + the last axes. n_c = number of classes + :param smooth_n: (int). smooth extent + :param ignore_unclassified: (bool). True if the first class should be ignored in the results + :return: ([..., n_c] np.float32) precision, recall, F1 score, IoU score, accuracy + """ + + # If the first class (often "unclassified") should be ignored, erase it from the confusion. + if ignore_unclassified: + confusions[..., 0, :] = 0 + confusions[..., :, 0] = 0 + + # Sum successive confusions for smoothing + smoothed_confusions = confusions.copy() + if confusions.ndim > 2 and smooth_n > 0: + for epoch in range(confusions.shape[-3]): + i0 = max(epoch - smooth_n, 0) + i1 = min(epoch + smooth_n + 1, confusions.shape[-3]) + smoothed_confusions[..., epoch, :, :] = np.sum(confusions[..., i0:i1, :, :], axis=-3) + + # Compute TP, FP, FN. 
This assumes that the second to last axis counts the truths (like the first axis of a + # confusion matrix), and that the last axis counts the predictions (like the second axis of a confusion matrix) + TP = np.diagonal(smoothed_confusions, axis1=-2, axis2=-1) + TP_plus_FP = np.sum(smoothed_confusions, axis=-2) + TP_plus_FN = np.sum(smoothed_confusions, axis=-1) + + # Compute precision and recall (precision divides by TP + FP, recall by TP + FN) + PRE = TP / (TP_plus_FP + 1e-6) + REC = TP / (TP_plus_FN + 1e-6) + + # Compute Accuracy + ACC = np.sum(TP, axis=-1) / (np.sum(smoothed_confusions, axis=(-2, -1)) + 1e-6) + + # Compute F1 score + F1 = 2 * TP / (TP_plus_FP + TP_plus_FN + 1e-6) + + # Compute IoU + IoU = F1 / (2 - F1) + + return PRE, REC, F1, IoU, ACC + + +def IoU_from_confusions(confusions): + """ + Computes IoU from confusion matrices. + :param confusions: ([..., n_c, n_c] np.int32). Can be any dimension, the confusion matrices should be described by + the last axes. n_c = number of classes + :return: ([..., n_c] np.float32) IoU score + """ + + # Compute TP, FP, FN. 
This assume that the second to last axis counts the truths (like the first axis of a + # confusion matrix), and that the last axis counts the predictions (like the second axis of a confusion matrix) + TP = np.diagonal(confusions, axis1=-2, axis2=-1) + TP_plus_FN = np.sum(confusions, axis=-1) + TP_plus_FP = np.sum(confusions, axis=-2) + + # Compute IoU + IoU = TP / (TP_plus_FP + TP_plus_FN - TP + 1e-6) + + # Compute mIoU with only the actual classes + mask = TP_plus_FN < 1e-3 + counts = np.sum(1 - mask, axis=-1, keepdims=True) + mIoU = np.sum(IoU, axis=-1, keepdims=True) / (counts + 1e-6) + + # If class is absent, place mIoU in place of 0 IoU to get the actual mean later + IoU += mask * mIoU + + return IoU diff --git a/competing_methods/my_KPConv/utils/ply.py b/competing_methods/my_KPConv/utils/ply.py new file mode 100644 index 00000000..eb48ca42 --- /dev/null +++ b/competing_methods/my_KPConv/utils/ply.py @@ -0,0 +1,358 @@ +# +# +# 0===============================0 +# | PLY files reader/writer | +# 0===============================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# function to read/write .ply files +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 10/02/2017 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import numpy as np +import sys + + +# Define PLY types +ply_dtypes = dict([ + (b'int8', 'i1'), + (b'char', 'i1'), + (b'uint8', 'u1'), + (b'uchar', 'u1'), + (b'int16', 'i2'), + (b'short', 'i2'), + (b'uint16', 'u2'), + (b'ushort', 'u2'), + (b'int32', 'i4'), + (b'int', 'i4'), + (b'uint32', 'u4'), + (b'uint', 'u4'), + # (b'float32', 'f4'), + # (b'float', 'f4'), + (b'float32', 'f8'), # keep precision 
+ (b'float', 'f8'), + + (b'float64', 'f8'), + (b'double', 'f8') +]) + + # Numpy reader format + valid_formats = {'ascii': '', 'binary_big_endian': '>', + 'binary_little_endian': '<'} + + + # ---------------------------------------------------------------------------------------------------------------------- + # + # Functions + # \***************/ + # + + + def parse_header(plyfile, ext): + # Variables + line = [] + properties = [] + num_points = None + + while b'end_header' not in line and line != b'': + line = plyfile.readline() + + if b'element' in line: + line = line.split() + num_points = int(line[2]) + + elif b'property' in line: + line = line.split() + properties.append((line[2].decode(), ext + ply_dtypes[line[1]])) + + return num_points, properties + + + def parse_mesh_header(plyfile, ext): + # Variables + line = [] + vertex_properties = [] + num_points = None + num_faces = None + current_element = None + + + while b'end_header' not in line and line != b'': + line = plyfile.readline() + + # Find point element + if b'element vertex' in line: + current_element = 'vertex' + line = line.split() + num_points = int(line[2]) + + elif b'element face' in line: + current_element = 'face' + line = line.split() + num_faces = int(line[2]) + + elif b'property' in line: + if current_element == 'vertex': + line = line.split() + vertex_properties.append((line[2].decode(), ext + ply_dtypes[line[1]])) + elif current_element == 'face': + if not line.startswith(b'property list uchar int'): + raise ValueError('Unsupported faces property : ' + line.decode()) + + return num_points, num_faces, vertex_properties + + + def read_ply(filename, triangular_mesh=False): + """ + Read ".ply" files + + Parameters + ---------- + filename : string + the name of the file to read. 
+ + Returns + ------- + result : array + data stored in the file + + Examples + -------- + Store data in file + + >>> points = np.random.rand(5, 3) + >>> values = np.random.randint(2, size=5) + >>> write_ply('example.ply', [points, values], ['x', 'y', 'z', 'values']) + + Read the file + + >>> data = read_ply('example.ply') + >>> values = data['values'] + array([0, 0, 1, 1, 0]) + + >>> points = np.vstack((data['x'], data['y'], data['z'])).T + array([[ 0.466 0.595 0.324] + [ 0.538 0.407 0.654] + [ 0.850 0.018 0.988] + [ 0.395 0.394 0.363] + [ 0.873 0.996 0.092]]) + + """ + + with open(filename, 'rb') as plyfile: + + + # Check if the file starts with ply + if b'ply' not in plyfile.readline(): + raise ValueError('The file does not start with the word ply') + + # get binary_little/big or ascii + fmt = plyfile.readline().split()[1].decode() + if fmt == "ascii": + raise ValueError('The file is not binary') + + # get extension for building the numpy dtypes + ext = valid_formats[fmt] + + # PointCloud reader vs mesh reader + if triangular_mesh: + + # Parse header + num_points, num_faces, properties = parse_mesh_header(plyfile, ext) + + # Get point data + vertex_data = np.fromfile(plyfile, dtype=properties, count=num_points) + + # Get face data + face_properties = [('k', ext + 'u1'), + ('v1', ext + 'i4'), + ('v2', ext + 'i4'), + ('v3', ext + 'i4')] + faces_data = np.fromfile(plyfile, dtype=face_properties, count=num_faces) + + # Return vertex data and concatenated faces + faces = np.vstack((faces_data['v1'], faces_data['v2'], faces_data['v3'])).T + data = [vertex_data, faces] + + else: + + # Parse header + num_points, properties = parse_header(plyfile, ext) + + # Get data + data = np.fromfile(plyfile, dtype=properties, count=num_points) + + return data + + +def header_properties(field_list, field_names): + + # List of lines to write + lines = [] + + # First line describing element vertex + lines.append('element vertex %d' % field_list[0].shape[0]) + + # Properties lines + i 
= 0 + for fields in field_list: + for field in fields.T: + lines.append('property %s %s' % (field.dtype.name, field_names[i])) + i += 1 + + return lines + + +def write_ply(filename, field_list, field_names, triangular_faces=None): + """ + Write ".ply" files + + Parameters + ---------- + filename : string + the name of the file to which the data is saved. A '.ply' extension will be appended to the + file name if it does not already have one. + + field_list : list, tuple, numpy array + the fields to be saved in the ply file. Either a numpy array, a list of numpy arrays or a + tuple of numpy arrays. Each 1D numpy array and each column of 2D numpy arrays are considered + as one field. + + field_names : list + the name of each field as a list of strings. Has to be the same length as the number of + fields. + + Examples + -------- + >>> points = np.random.rand(10, 3) + >>> write_ply('example1.ply', points, ['x', 'y', 'z']) + + >>> values = np.random.randint(2, size=10) + >>> write_ply('example2.ply', [points, values], ['x', 'y', 'z', 'values']) + + >>> colors = np.random.randint(255, size=(10,3), dtype=np.uint8) + >>> field_names = ['x', 'y', 'z', 'red', 'green', 'blue', 'values'] + >>> write_ply('example3.ply', [points, colors, values], field_names) + + """ + + # Format list input to the right form + field_list = list(field_list) if (type(field_list) == list or type(field_list) == tuple) else list((field_list,)) + for i, field in enumerate(field_list): + if field.ndim < 2: + field_list[i] = field.reshape(-1, 1) + if field.ndim > 2: + print('fields have more than 2 dimensions') + return False + + # check all fields have the same number of data + n_points = [field.shape[0] for field in field_list] + if not np.all(np.equal(n_points, n_points[0])): + print('wrong field dimensions') + return False + + # Check if field_names and field_list have same nb of column + n_fields = np.sum([field.shape[1] for field in field_list]) + if n_fields != len(field_names): + print('wrong 
number of field names') + return False + + # Add extension if not there + if not filename.endswith('.ply'): + filename += '.ply' + + # open in text mode to write the header + with open(filename, 'w') as plyfile: + + # First magical word + header = ['ply'] + + # Encoding format + header.append('format binary_' + sys.byteorder + '_endian 1.0') + + # Points properties description + header.extend(header_properties(field_list, field_names)) + + # Add faces if needed + if triangular_faces is not None: + header.append('element face {:d}'.format(triangular_faces.shape[0])) + header.append('property list uchar int vertex_indices') + + # End of header + header.append('end_header') + + # Write all lines + for line in header: + plyfile.write("%s\n" % line) + + # open in binary/append to use tofile + with open(filename, 'ab') as plyfile: + + # Create a structured array + i = 0 + type_list = [] + for fields in field_list: + for field in fields.T: + type_list += [(field_names[i], field.dtype.str)] + i += 1 + data = np.empty(field_list[0].shape[0], dtype=type_list) + i = 0 + for fields in field_list: + for field in fields.T: + data[field_names[i]] = field + i += 1 + + data.tofile(plyfile) + + if triangular_faces is not None: + triangular_faces = triangular_faces.astype(np.int32) + type_list = [('k', 'uint8')] + [(str(ind), 'int32') for ind in range(3)] + data = np.empty(triangular_faces.shape[0], dtype=type_list) + data['k'] = np.full((triangular_faces.shape[0],), 3, dtype=np.uint8) + data['0'] = triangular_faces[:, 0] + data['1'] = triangular_faces[:, 1] + data['2'] = triangular_faces[:, 2] + data.tofile(plyfile) + + return True + + +def describe_element(name, df): + """ Takes the columns of the dataframe and builds a ply-like description + + Parameters + ---------- + name: str + df: pandas DataFrame + + Returns + ------- + element: list[str] + """ + property_formats = {'f': 'float', 'u': 'uchar', 'i': 'int'} + element = ['element ' + name + ' ' + str(len(df))] + + if name == 
'face': + element.append("property list uchar int points_indices") + + else: + for i in range(len(df.columns)): + # get first letter of dtype to infer format + f = property_formats[str(df.dtypes[i])[0]] + element.append('property ' + f + ' ' + df.columns.values[i]) + + return element \ No newline at end of file diff --git a/competing_methods/my_KPConv/utils/tester.py b/competing_methods/my_KPConv/utils/tester.py new file mode 100644 index 00000000..f29d12a3 --- /dev/null +++ b/competing_methods/my_KPConv/utils/tester.py @@ -0,0 +1,840 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling the test of any model +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import torch +import torch.nn as nn +import numpy as np +import os +from os import makedirs, listdir +from os.path import exists, join +import time +import json +from sklearn.neighbors import KDTree + +# PLY reader +from utils.ply import read_ply, write_ply + +# Metrics +from utils.metrics import IoU_from_confusions, fast_confusion +from sklearn.metrics import confusion_matrix + +#from utils.visualizer import show_ModelNet_models + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Tester Class +# \******************/ +# + + +from plyfile import PlyData, PlyElement +COLOR_MAP = np.asarray( + [ + [178, 203, 47], #C00 Low Vegetation + [183, 178, 170], #C01 Impervious Surface + [32, 151, 163], #C02 
Vehicle + [168, 33, 107], #C03 Urban Furniture + [255, 122, 89], #C04 Roof + [255, 215, 136], #C05 Facade + [89, 125, 53], #C06 Shrub + [0, 128, 65], #C07 Tree + [170, 85, 0], #C08 Soil/Gravel + [252, 225, 5], #C09 Vertical Surface + [128, 0, 0], #C10 Chimney + ] +) + +def write_our_ply(filename, xyz, label): + """write into a ply file""" + prop = [('x', 'f8'), ('y', 'f8'), ('z', 'f8'), ('red', 'u1'), ('green', 'u1'), ('blue', 'u1'), ('Classification', 'u1')] + colors = COLOR_MAP[np.asarray(label)] + vertex_all = np.empty(len(xyz), dtype=prop) + for i_prop in range(0, 3): + vertex_all[prop[i_prop][0]] = xyz[:, i_prop] + for i_prop in range(0, 3): + vertex_all[prop[i_prop + 3][0]] = colors[:, i_prop] + vertex_all[prop[6][0]] = label + ply = PlyData([PlyElement.describe(vertex_all, 'vertex')], text=False) # True ascii + ply.write(filename) + +class ModelTester: + + # Initialization methods + # ------------------------------------------------------------------------------------------------------------------ + + def __init__(self, net, chkp_path=None, on_gpu=True): + + ############ + # Parameters + ############ + + # Choose to train on CPU or GPU + if on_gpu and torch.cuda.is_available(): + self.device = torch.device("cuda:0") + else: + self.device = torch.device("cpu") + net.to(self.device) + + ########################## + # Load previous checkpoint + ########################## + + checkpoint = torch.load(chkp_path) + net.load_state_dict(checkpoint['model_state_dict']) + self.epoch = checkpoint['epoch'] + net.eval() + print("Model and training state restored.") + + return + + # Test main methods + # ------------------------------------------------------------------------------------------------------------------ + + def classification_test(self, net, test_loader, config, num_votes=100, debug=False): + + ############ + # Initialize + ############ + + # Choose test smoothing parameter (0 for no smothing, 0.99 for big smoothing) + softmax = torch.nn.Softmax(1) + + # 
Number of classes including ignored labels + nc_tot = test_loader.dataset.num_classes + + # Number of classes predicted by the model + nc_model = config.num_classes + + # Initiate global prediction over test clouds + self.test_probs = np.zeros((test_loader.dataset.num_models, nc_model)) + self.test_counts = np.zeros((test_loader.dataset.num_models, nc_model)) + + t = [time.time()] + mean_dt = np.zeros(1) + last_display = time.time() + while np.min(self.test_counts) < num_votes: + + # Run model on all test examples + # ****************************** + + # Initiate result containers + probs = [] + targets = [] + obj_inds = [] + + # Start validation loop + for batch in test_loader: + + # New time + t = t[-1:] + t += [time.time()] + + if 'cuda' in self.device.type: + batch.to(self.device) + + # Forward pass + outputs = net(batch, config) + + # Get probs and labels + probs += [softmax(outputs).cpu().detach().numpy()] + targets += [batch.labels.cpu().numpy()] + obj_inds += [batch.model_inds.cpu().numpy()] + + if 'cuda' in self.device.type: + torch.cuda.synchronize(self.device) + + # Average timing + t += [time.time()] + mean_dt = 0.95 * mean_dt + 0.05 * (np.array(t[1:]) - np.array(t[:-1])) + + # Display + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Test vote {:.0f} : {:.1f}% (timings : {:4.2f} {:4.2f})' + print(message.format(np.min(self.test_counts), + 100 * len(obj_inds) / config.validation_size, + 1000 * (mean_dt[0]), + 1000 * (mean_dt[1]))) + # Stack all validation predictions + probs = np.vstack(probs) + targets = np.hstack(targets) + obj_inds = np.hstack(obj_inds) + + if np.any(test_loader.dataset.input_labels[obj_inds] != targets): + raise ValueError('wrong object indices') + + # Compute incremental average (predictions are always ordered) + self.test_counts[obj_inds] += 1 + self.test_probs[obj_inds] += (probs - self.test_probs[obj_inds]) / (self.test_counts[obj_inds]) + + # Save/Display temporary results + # ****************************** 
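The scoring step that follows relies on `fast_confusion` from `utils/metrics.py`. As a standalone sketch of that vectorized trick (toy labels and the helper name `toy_confusion` are illustrative, not part of the diff), the confusion matrix and the diagonal-based accuracy look like this:

```python
import numpy as np

def toy_confusion(true, pred, num_classes):
    # Same vectorized trick as fast_confusion: encode each (true, pred) pair
    # as a single integer, histogram with bincount, then reshape to a matrix.
    vec = np.bincount(true * num_classes + pred, minlength=num_classes ** 2)
    return vec.reshape((num_classes, num_classes))

true = np.array([0, 0, 1, 1, 2, 2], dtype=np.int32)
pred = np.array([0, 1, 1, 1, 2, 0], dtype=np.int32)
C = toy_confusion(true, pred, 3)

# Overall accuracy is the diagonal mass over the total, as computed in the test code
acc = 100 * np.sum(np.diag(C)) / (np.sum(C) + 1e-6)
print('Test Accuracy = {:.1f}%'.format(acc))
```

Rows count the ground truth and columns the predictions, which is why the diagonal holds the true positives.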
+ + test_labels = np.array(test_loader.dataset.label_values) + + # Compute classification results + C1 = fast_confusion(test_loader.dataset.input_labels, + np.argmax(self.test_probs, axis=1), + test_labels) + + ACC = 100 * np.sum(np.diag(C1)) / (np.sum(C1) + 1e-6) + print('Test Accuracy = {:.1f}%'.format(ACC)) + + return + + def cloud_segmentation_test(self, net, test_loader, config, num_votes=100, debug=False): + """ + Test method for cloud segmentation models + """ + + ############ + # Initialize + ############ + + # Choose test smoothing parameter (0 for no smothing, 0.99 for big smoothing) + test_smooth = 0.95 + test_radius_ratio = 0.7 + softmax = torch.nn.Softmax(1) + + # Number of classes including ignored labels + nc_tot = test_loader.dataset.num_classes + + # Number of classes predicted by the model + nc_model = config.num_classes - len(test_loader.dataset.ignored_labels) + + # Initiate global prediction over test clouds + self.test_probs = [np.zeros((l.shape[0], nc_model)) for l in test_loader.dataset.input_labels] + + # Test saving path + if config.saving: + test_path = join('test', config.saving_path.split('/')[-1]) + if not exists(test_path): + makedirs(test_path) + if not exists(join(test_path, 'predictions')): + makedirs(join(test_path, 'predictions')) + if not exists(join(test_path, 'probs')): + makedirs(join(test_path, 'probs')) + if not exists(join(test_path, 'potentials')): + makedirs(join(test_path, 'potentials')) + else: + test_path = None + + # If on validation directly compute score + if test_loader.dataset.set == 'validation': + val_proportions = np.zeros(nc_model, dtype=np.float32) + i = 0 + for label_value in test_loader.dataset.label_values: + if label_value not in test_loader.dataset.ignored_labels: + val_proportions[i] = np.sum([np.sum(labels == label_value) + for labels in test_loader.dataset.validation_labels]) + i += 1 + else: + val_proportions = None + + ##################### + # Network predictions + ##################### + + 
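The prediction loop below accumulates per-point probabilities with an exponential moving average controlled by `test_smooth = 0.95`, so repeated votes gradually converge towards the network's outputs. A minimal sketch of that update rule (array shapes here are illustrative):

```python
import numpy as np

# test_smooth-style running average: each vote nudges the stored per-point
# probabilities towards the newest softmax output.
test_smooth = 0.95
stored_probs = np.zeros((4, 3))       # accumulated probs: 4 points, 3 classes
new_probs = np.full((4, 3), 1.0 / 3)  # one batch of softmax predictions

for _ in range(100):
    stored_probs = test_smooth * stored_probs + (1 - test_smooth) * new_probs

# After many votes the stored probs approach the incoming predictions
print(stored_probs[0])
```

With a constant input, the stored value after n votes is `(1 - 0.95 ** n)` times the input, so early votes are damped and later ones refine the estimate.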
test_epoch = 0 + last_min = -0.5 + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(1) + + # Start test loop + while True: + print('Initialize workers') + for i, batch in enumerate(test_loader): + + # New time + t = t[-1:] + t += [time.time()] + + if i == 0: + print('Done in {:.1f}s'.format(t[1] - t[0])) + + if 'cuda' in self.device.type: + batch.to(self.device) + + # Forward pass + outputs = net(batch, config) + + t += [time.time()] + + # Get probs and labels + stacked_probs = softmax(outputs).cpu().detach().numpy() + s_points = batch.points[0].cpu().numpy() + lengths = batch.lengths[0].cpu().numpy() + in_inds = batch.input_inds.cpu().numpy() + cloud_inds = batch.cloud_inds.cpu().numpy() + torch.cuda.synchronize(self.device) + + # Get predictions and labels per instance + # *************************************** + + i0 = 0 + for b_i, length in enumerate(lengths): + + # Get prediction + points = s_points[i0:i0 + length] + probs = stacked_probs[i0:i0 + length] + inds = in_inds[i0:i0 + length] + c_i = cloud_inds[b_i] + + if 0 < test_radius_ratio < 1: + mask = np.sum(points ** 2, axis=1) < (test_radius_ratio * config.in_radius) ** 2 + inds = inds[mask] + probs = probs[mask] + + # Update current probs in whole cloud + self.test_probs[c_i][inds] = test_smooth * self.test_probs[c_i][inds] + (1 - test_smooth) * probs + i0 += length + + # Average timing + t += [time.time()] + if i < 2: + mean_dt = np.array(t[1:]) - np.array(t[:-1]) + else: + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Display + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'e{:03d}-i{:04d} => {:.1f}% (timings : {:4.2f} {:4.2f} {:4.2f})' + print(message.format(test_epoch, i, + 100 * i / config.validation_size, + 1000 * (mean_dt[0]), + 1000 * (mean_dt[1]), + 1000 * (mean_dt[2]))) + + # Update minimum od potentials + new_min = torch.min(test_loader.dataset.min_potentials) + print('Test epoch {:d}, end. 
Min potential = {:.1f}'.format(test_epoch, new_min)) + #print([np.mean(pots) for pots in test_loader.dataset.potentials]) + + # Save predicted cloud + if last_min + 1 < new_min or test_epoch >= config.max_test_epoch: + + # Update last_min + last_min += 1 + + # Show vote results (On subcloud so it is not the good values here) + if test_loader.dataset.set == 'validation': + print('\nConfusion on sub clouds') + Confs = [] + for i, file_path in enumerate(test_loader.dataset.files): + + # Insert false columns for ignored labels + probs = np.array(self.test_probs[i], copy=True) + for l_ind, label_value in enumerate(test_loader.dataset.label_values): + if label_value in test_loader.dataset.ignored_labels: + probs = np.insert(probs, l_ind, 0, axis=1) + + # Predicted labels + preds = test_loader.dataset.label_values[np.argmax(probs, axis=1)].astype(np.int32) + + # Targets + targets = test_loader.dataset.input_labels[i] + + # Confs + Confs += [fast_confusion(targets, preds, test_loader.dataset.label_values)] + + # Regroup confusions + C = np.sum(np.stack(Confs), axis=0).astype(np.float32) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(test_loader.dataset.label_values))): + if label_value in test_loader.dataset.ignored_labels: + C = np.delete(C, l_ind, axis=0) + C = np.delete(C, l_ind, axis=1) + + # Rescale with the right number of point per class + C *= np.expand_dims(val_proportions / (np.sum(C, axis=1) + 1e-6), 1) + + # Compute IoUs + IoUs = IoU_from_confusions(C) + mIoU = np.mean(IoUs) + s = '{:5.2f} | '.format(100 * mIoU) + for IoU in IoUs: + s += '{:5.2f} '.format(100 * IoU) + print(s + '\n') + + # Save real IoU once in a while + if int(np.ceil(new_min)) % 10 == 0 or test_epoch >= config.max_test_epoch: + + # Project predictions + print('\nReproject Vote #{:d}'.format(int(np.floor(new_min)))) + t1 = time.time() + proj_probs = [] + for i, file_path in enumerate(test_loader.dataset.files): + + print(i, file_path, 
test_loader.dataset.test_proj[i].shape, self.test_probs[i].shape) + + print(test_loader.dataset.test_proj[i].dtype, np.max(test_loader.dataset.test_proj[i])) + print(test_loader.dataset.test_proj[i][:5]) + + # Reproject probs on the evaluations points + probs = self.test_probs[i][test_loader.dataset.test_proj[i], :] + proj_probs += [probs] + + t2 = time.time() + print('Done in {:.1f} s\n'.format(t2 - t1)) + + # Show vote results + if test_loader.dataset.set == 'validation': + print('Confusion on full clouds') + t1 = time.time() + Confs = [] + for i, file_path in enumerate(test_loader.dataset.files): + + # Insert false columns for ignored labels + for l_ind, label_value in enumerate(test_loader.dataset.label_values): + if label_value in test_loader.dataset.ignored_labels: + proj_probs[i] = np.insert(proj_probs[i], l_ind, 0, axis=1) + + # Get the predicted labels + preds = test_loader.dataset.label_values[np.argmax(proj_probs[i], axis=1)].astype(np.int32) + + # Confusion + targets = test_loader.dataset.validation_labels[i] + Confs += [fast_confusion(targets, preds, test_loader.dataset.label_values)] + + t2 = time.time() + print('Done in {:.1f} s\n'.format(t2 - t1)) + + # Regroup confusions + C = np.sum(np.stack(Confs), axis=0) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(test_loader.dataset.label_values))): + if label_value in test_loader.dataset.ignored_labels: + C = np.delete(C, l_ind, axis=0) + C = np.delete(C, l_ind, axis=1) + + IoUs = IoU_from_confusions(C) + mIoU = np.mean(IoUs) + s = '{:5.2f} | '.format(100 * mIoU) + for IoU in IoUs: + s += '{:5.2f} '.format(100 * IoU) + print('-' * len(s)) + print(s) + print('-' * len(s) + '\n') + + # Save predictions + print('Saving clouds') + t1 = time.time() + for i, file_path in enumerate(test_loader.dataset.files): + + # Get file + points = test_loader.dataset.load_evaluation_points(file_path) + + # Get the predicted labels + preds = 
test_loader.dataset.label_values[np.argmax(proj_probs[i], axis=1)].astype(np.int32) + + # Save plys + cloud_name = os.path.splitext(os.path.basename(file_path))[0] # + ".ply" # file_path.split('/')[-1] + test_name = join(test_path, 'predictions', cloud_name) + + # for Hessigsim3D + # points[:, 0] += 513852.00 + # points[:, 1] += 5426490.00 + # points[:, 2] += 224.51 + # write_our_ply(test_name, points, preds) + + # for our SUM benchmark + write_ply(test_name, + [points, preds], + ['x', 'y', 'z', 'preds']) + # --------------------------------- + # test_name2 = join(test_path, 'probs', cloud_name) + # prob_names = ['_'.join(test_loader.dataset.label_to_names[label].split()) + # for label in test_loader.dataset.label_values] + # write_ply(test_name2, + # [points, proj_probs[i]], + # ['x', 'y', 'z'] + prob_names) + # + # # Save potentials + # pot_points = np.array(test_loader.dataset.pot_trees[i].data, copy=False) + # pot_name = join(test_path, 'potentials', cloud_name) + # pots = test_loader.dataset.potentials[i].numpy().astype(np.float32) + # write_ply(pot_name, + # [pot_points.astype(np.float32), pots], + # ['x', 'y', 'z', 'pots']) + + # Save ascii preds + if test_loader.dataset.set == 'test': + if test_loader.dataset.name.startswith('Semantic3D'): + ascii_name = join(test_path, 'predictions', test_loader.dataset.ascii_files[cloud_name]) + else: + ascii_name = join(test_path, 'predictions', cloud_name[:-4] + '.txt') + np.savetxt(ascii_name, preds, fmt='%d') + + t2 = time.time() + break + print('Done in {:.1f} s\n'.format(t2 - t1)) + + test_epoch += 1 + + # Break when reaching number of desired votes + print("last_min: " + str(last_min) + "; num_votes: " + str(num_votes)) + if last_min > num_votes: + break + + return + + def slam_segmentation_test(self, net, test_loader, config, num_votes=100, debug=True): + """ + Test method for slam segmentation models + """ + + ############ + # Initialize + ############ + + # Choose validation smoothing parameter (0 for no 
smothing, 0.99 for big smoothing) + test_smooth = 0.5 + last_min = -0.5 + softmax = torch.nn.Softmax(1) + + # Number of classes including ignored labels + nc_tot = test_loader.dataset.num_classes + nc_model = net.C + + # Test saving path + test_path = None + report_path = None + if config.saving: + test_path = join('test', config.saving_path.split('/')[-1]) + if not exists(test_path): + makedirs(test_path) + report_path = join(test_path, 'reports') + if not exists(report_path): + makedirs(report_path) + + if test_loader.dataset.set == 'validation': + for folder in ['val_predictions', 'val_probs']: + if not exists(join(test_path, folder)): + makedirs(join(test_path, folder)) + else: + for folder in ['predictions', 'probs']: + if not exists(join(test_path, folder)): + makedirs(join(test_path, folder)) + + # Init validation container + all_f_preds = [] + all_f_labels = [] + if test_loader.dataset.set == 'validation': + for i, seq_frames in enumerate(test_loader.dataset.frames): + all_f_preds.append([np.zeros((0,), dtype=np.int32) for _ in seq_frames]) + all_f_labels.append([np.zeros((0,), dtype=np.int32) for _ in seq_frames]) + + ##################### + # Network predictions + ##################### + + predictions = [] + targets = [] + test_epoch = 0 + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(1) + + # Start test loop + while True: + print('Initialize workers') + for i, batch in enumerate(test_loader): + + # New time + t = t[-1:] + t += [time.time()] + + if i == 0: + print('Done in {:.1f}s'.format(t[1] - t[0])) + + if 'cuda' in self.device.type: + batch.to(self.device) + + # Forward pass + outputs = net(batch, config) + + # Get probs and labels + stk_probs = softmax(outputs).cpu().detach().numpy() + lengths = batch.lengths[0].cpu().numpy() + f_inds = batch.frame_inds.cpu().numpy() + r_inds_list = batch.reproj_inds + r_mask_list = batch.reproj_masks + labels_list = batch.val_labels + torch.cuda.synchronize(self.device) + + t += [time.time()] 
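The per-frame probabilities in this SLAM tester are kept on disk as `uint8` arrays to save space: the loop below dequantises the stored values, blends in the newly projected probabilities, and requantises. A sketch of that round trip, with file I/O replaced by an in-memory array (names are illustrative):

```python
import numpy as np

def update_frame_probs(frame_probs_uint8, proj_mask, proj_probs, smooth=0.5):
    """Dequantise the stored uint8 probs, blend in new projected probs,
    and requantise, mirroring the tester's per-frame update."""
    frame_probs = frame_probs_uint8[proj_mask, :].astype(np.float32) / 255
    frame_probs = smooth * frame_probs + (1 - smooth) * proj_probs
    frame_probs_uint8[proj_mask, :] = (frame_probs * 255).astype(np.uint8)
    return frame_probs_uint8

# Toy frame with 3 points and 2 classes; only the first two points were
# seen by the network in this pass (proj_mask).
stored = np.zeros((3, 2), dtype=np.uint8)
mask = np.array([True, True, False])
new = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)
stored = update_frame_probs(stored, mask, new)
```

Note the `astype(np.uint8)` truncates rather than rounds, so a blended value of 0.5 is stored as 127/255; at 8-bit resolution this loss is negligible for voting purposes.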
+ + # Get predictions and labels per instance + # *************************************** + + i0 = 0 + for b_i, length in enumerate(lengths): + + # Get prediction + probs = stk_probs[i0:i0 + length] + proj_inds = r_inds_list[b_i] + proj_mask = r_mask_list[b_i] + frame_labels = labels_list[b_i] + s_ind = f_inds[b_i, 0] + f_ind = f_inds[b_i, 1] + + # Project predictions on the frame points + proj_probs = probs[proj_inds] + + # Safe check if only one point: + if proj_probs.ndim < 2: + proj_probs = np.expand_dims(proj_probs, 0) + + # Save probs in a binary file (uint8 format for lighter weight) + seq_name = test_loader.dataset.sequences[s_ind] + if test_loader.dataset.set == 'validation': + folder = 'val_probs' + pred_folder = 'val_predictions' + else: + folder = 'probs' + pred_folder = 'predictions' + filename = '{:s}_{:07d}.npy'.format(seq_name, f_ind) + filepath = join(test_path, folder, filename) + if exists(filepath): + frame_probs_uint8 = np.load(filepath) + else: + frame_probs_uint8 = np.zeros((proj_mask.shape[0], nc_model), dtype=np.uint8) + frame_probs = frame_probs_uint8[proj_mask, :].astype(np.float32) / 255 + frame_probs = test_smooth * frame_probs + (1 - test_smooth) * proj_probs + frame_probs_uint8[proj_mask, :] = (frame_probs * 255).astype(np.uint8) + np.save(filepath, frame_probs_uint8) + + # Save some prediction in ply format for visual + if test_loader.dataset.set == 'validation': + + # Insert false columns for ignored labels + frame_probs_uint8_bis = frame_probs_uint8.copy() + for l_ind, label_value in enumerate(test_loader.dataset.label_values): + if label_value in test_loader.dataset.ignored_labels: + frame_probs_uint8_bis = np.insert(frame_probs_uint8_bis, l_ind, 0, axis=1) + + # Predicted labels + frame_preds = test_loader.dataset.label_values[np.argmax(frame_probs_uint8_bis, + axis=1)].astype(np.int32) + + # Save some of the frame pots + if f_ind % 20 == 0: + seq_path = join(test_loader.dataset.path, 'sequences', 
test_loader.dataset.sequences[s_ind]) + velo_file = join(seq_path, 'velodyne', test_loader.dataset.frames[s_ind][f_ind] + '.bin') + frame_points = np.fromfile(velo_file, dtype=np.float32) + frame_points = frame_points.reshape((-1, 4)) + predpath = join(test_path, pred_folder, filename[:-4] + '.ply') + #pots = test_loader.dataset.f_potentials[s_ind][f_ind] + pots = np.zeros((0,)) + if pots.shape[0] > 0: + write_ply(predpath, + [frame_points[:, :3], frame_labels, frame_preds, pots], + ['x', 'y', 'z', 'gt', 'pre', 'pots']) + else: + write_ply(predpath, + [frame_points[:, :3], frame_labels, frame_preds], + ['x', 'y', 'z', 'gt', 'pre']) + + # Also Save lbl probabilities + probpath = join(test_path, folder, filename[:-4] + '_probs.ply') + lbl_names = [test_loader.dataset.label_to_names[l] + for l in test_loader.dataset.label_values + if l not in test_loader.dataset.ignored_labels] + write_ply(probpath, + [frame_points[:, :3], frame_probs_uint8], + ['x', 'y', 'z'] + lbl_names) + + # keep frame preds in memory + all_f_preds[s_ind][f_ind] = frame_preds + all_f_labels[s_ind][f_ind] = frame_labels + + else: + + # Save some of the frame preds + if f_inds[b_i, 1] % 100 == 0: + + # Insert false columns for ignored labels + for l_ind, label_value in enumerate(test_loader.dataset.label_values): + if label_value in test_loader.dataset.ignored_labels: + frame_probs_uint8 = np.insert(frame_probs_uint8, l_ind, 0, axis=1) + + # Predicted labels + frame_preds = test_loader.dataset.label_values[np.argmax(frame_probs_uint8, + axis=1)].astype(np.int32) + + # Load points + seq_path = join(test_loader.dataset.path, 'sequences', test_loader.dataset.sequences[s_ind]) + velo_file = join(seq_path, 'velodyne', test_loader.dataset.frames[s_ind][f_ind] + '.bin') + frame_points = np.fromfile(velo_file, dtype=np.float32) + frame_points = frame_points.reshape((-1, 4)) + predpath = join(test_path, pred_folder, filename[:-4] + '.ply') + #pots = test_loader.dataset.f_potentials[s_ind][f_ind] + pots = 
np.zeros((0,)) + if pots.shape[0] > 0: + write_ply(predpath, + [frame_points[:, :3], frame_preds, pots], + ['x', 'y', 'z', 'pre', 'pots']) + else: + write_ply(predpath, + [frame_points[:, :3], frame_preds], + ['x', 'y', 'z', 'pre']) + + # Stack all prediction for this epoch + i0 += length + + # Average timing + t += [time.time()] + mean_dt = 0.95 * mean_dt + 0.05 * (np.array(t[1:]) - np.array(t[:-1])) + + # Display + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'e{:03d}-i{:04d} => {:.1f}% (timings : {:4.2f} {:4.2f} {:4.2f}) / pots {:d} => {:.1f}%' + min_pot = int(torch.floor(torch.min(test_loader.dataset.potentials))) + pot_num = torch.sum(test_loader.dataset.potentials > min_pot + 0.5).type(torch.int32).item() + current_num = pot_num + (i + 1 - config.validation_size) * config.val_batch_num + print(message.format(test_epoch, i, + 100 * i / config.validation_size, + 1000 * (mean_dt[0]), + 1000 * (mean_dt[1]), + 1000 * (mean_dt[2]), + min_pot, + 100.0 * current_num / len(test_loader.dataset.potentials))) + + + # Update minimum od potentials + new_min = torch.min(test_loader.dataset.potentials) + print('Test epoch {:d}, end. 
Min potential = {:.1f}'.format(test_epoch, new_min)) + + if last_min + 1 < new_min: + + # Update last_min + last_min += 1 + + if test_loader.dataset.set == 'validation' and last_min % 1 == 0: + + ##################################### + # Results on the whole validation set + ##################################### + + # Confusions for our subparts of validation set + Confs = np.zeros((len(predictions), nc_tot, nc_tot), dtype=np.int32) + for i, (preds, truth) in enumerate(zip(predictions, targets)): + + # Confusions + Confs[i, :, :] = fast_confusion(truth, preds, test_loader.dataset.label_values).astype(np.int32) + + + # Show vote results + print('\nCompute confusion') + + val_preds = [] + val_labels = [] + t1 = time.time() + for i, seq_frames in enumerate(test_loader.dataset.frames): + val_preds += [np.hstack(all_f_preds[i])] + val_labels += [np.hstack(all_f_labels[i])] + val_preds = np.hstack(val_preds) + val_labels = np.hstack(val_labels) + t2 = time.time() + C_tot = fast_confusion(val_labels, val_preds, test_loader.dataset.label_values) + t3 = time.time() + print(' Stacking time : {:.1f}s'.format(t2 - t1)) + print('Confusion time : {:.1f}s'.format(t3 - t2)) + + s1 = '\n' + for cc in C_tot: + for c in cc: + s1 += '{:7.0f} '.format(c) + s1 += '\n' + if debug: + print(s1) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(test_loader.dataset.label_values))): + if label_value in test_loader.dataset.ignored_labels: + C_tot = np.delete(C_tot, l_ind, axis=0) + C_tot = np.delete(C_tot, l_ind, axis=1) + + # Objects IoU + val_IoUs = IoU_from_confusions(C_tot) + + # Compute IoUs + mIoU = np.mean(val_IoUs) + s2 = '{:5.2f} | '.format(100 * mIoU) + for IoU in val_IoUs: + s2 += '{:5.2f} '.format(100 * IoU) + print(s2 + '\n') + + # Save a report + report_file = join(report_path, 'report_{:04d}.txt'.format(int(np.floor(last_min)))) + str = 'Report of the confusion and metrics\n' + str += '***********************************\n\n\n' + str 
+= 'Confusion matrix:\n\n' + str += s1 + str += '\nIoU values:\n\n' + str += s2 + str += '\n\n' + with open(report_file, 'w') as f: + f.write(str) + + test_epoch += 1 + + # Break when reaching number of desired votes + if last_min > num_votes: + break + + return + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/competing_methods/my_KPConv/utils/trainer.py b/competing_methods/my_KPConv/utils/trainer.py new file mode 100644 index 00000000..61e60200 --- /dev/null +++ b/competing_methods/my_KPConv/utils/trainer.py @@ -0,0 +1,934 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling the training of any model +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import torch +import torch.nn as nn +import numpy as np +import pickle +import os +from os import makedirs, remove +from os.path import exists, join +import time +import sys + +# PLY reader +from utils.ply import read_ply, write_ply + +# Metrics +from utils.metrics import IoU_from_confusions, fast_confusion +from utils.config import Config +from sklearn.neighbors import KDTree + +from models.blocks import KPConv + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Trainer Class +# \*******************/ +# + + +class ModelTrainer: + + # Initialization methods + # ------------------------------------------------------------------------------------------------------------------ + + def 
__init__(self, net, config, chkp_path=None, finetune=False, on_gpu=True): + """ + Initialize training parameters and reload previous model for restore/finetune + :param net: network object + :param config: configuration object + :param chkp_path: path to the checkpoint that needs to be loaded (None for new training) + :param finetune: finetune from checkpoint (True) or restore training from checkpoint (False) + :param on_gpu: Train on GPU or CPU + """ + + ############ + # Parameters + ############ + + # Epoch index + self.epoch = 0 + self.step = 0 + + # Optimizer with specific learning rate for deformable KPConv + deform_params = [v for k, v in net.named_parameters() if 'offset' in k] + other_params = [v for k, v in net.named_parameters() if 'offset' not in k] + deform_lr = config.learning_rate * config.deform_lr_factor + self.optimizer = torch.optim.SGD([{'params': other_params}, + {'params': deform_params, 'lr': deform_lr}], + lr=config.learning_rate, + momentum=config.momentum, + weight_decay=config.weight_decay) + + # Choose to train on CPU or GPU + if on_gpu and torch.cuda.is_available(): + self.device = torch.device("cuda:0") + else: + self.device = torch.device("cpu") + net.to(self.device) + + ########################## + # Load previous checkpoint + ########################## + + if (chkp_path is not None): + if finetune: + checkpoint = torch.load(chkp_path) + net.load_state_dict(checkpoint['model_state_dict']) + net.train() + print("Model restored and ready for finetuning.") + else: + checkpoint = torch.load(chkp_path) + net.load_state_dict(checkpoint['model_state_dict']) + self.optimizer.load_state_dict(checkpoint['optimizer_state_dict']) + self.epoch = checkpoint['epoch'] + net.train() + print("Model and training state restored.") + + # Path of the result folder + if config.saving: + if config.saving_path is None: + config.saving_path = time.strftime('results/Log_%Y-%m-%d_%H-%M-%S', time.gmtime()) + if not exists(config.saving_path): + 
makedirs(config.saving_path) + config.save() + + return + + # Training main method + # ------------------------------------------------------------------------------------------------------------------ + + def train(self, net, training_loader, val_loader, config): + """ + Train the model on a particular dataset. + """ + + ################ + # Initialization + ################ + + if config.saving: + # Training log file + with open(join(config.saving_path, 'training.txt'), "w") as file: + file.write('epochs steps out_loss offset_loss train_accuracy time\n') + + # Killing file (simply delete this file when you want to stop the training) + PID_file = join(config.saving_path, 'running_PID.txt') + if not exists(PID_file): + with open(PID_file, "w") as file: + file.write('Launched with PyCharm') + + # Checkpoints directory + checkpoint_directory = join(config.saving_path, 'checkpoints') + if not exists(checkpoint_directory): + makedirs(checkpoint_directory) + else: + checkpoint_directory = None + PID_file = None + + # Loop variables + t0 = time.time() + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(1) + + # Start training loop + for epoch in range(config.max_epoch): + + # Remove File for kill signal + if epoch == config.max_epoch - 1 and exists(PID_file): + remove(PID_file) + + self.step = 0 + for batch in training_loader: + + # Check kill signal (running_PID.txt deleted) + if config.saving and not exists(PID_file): + continue + + ################## + # Processing batch + ################## + + # New time + t = t[-1:] + t += [time.time()] + + if 'cuda' in self.device.type: + batch.to(self.device) + + # zero the parameter gradients + self.optimizer.zero_grad() + + # Forward pass + outputs = net(batch, config) + loss = net.loss(outputs, batch.labels) + acc = net.accuracy(outputs, batch.labels) + + t += [time.time()] + + # Backward + optimize + loss.backward() + + if config.grad_clip_norm > 0: + #torch.nn.utils.clip_grad_norm_(net.parameters(), 
config.grad_clip_norm) + torch.nn.utils.clip_grad_value_(net.parameters(), config.grad_clip_norm) + self.optimizer.step() + torch.cuda.synchronize(self.device) + + t += [time.time()] + + # Average timing + if self.step < 2: + mean_dt = np.array(t[1:]) - np.array(t[:-1]) + else: + mean_dt = 0.9 * mean_dt + 0.1 * (np.array(t[1:]) - np.array(t[:-1])) + + # Console display (only one per second) + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'e{:03d}-i{:04d} => L={:.3f} acc={:3.0f}% / t(ms): {:5.1f} {:5.1f} {:5.1f})' + print(message.format(self.epoch, self.step, + loss.item(), + 100*acc, + 1000 * mean_dt[0], + 1000 * mean_dt[1], + 1000 * mean_dt[2])) + + # Log file + if config.saving: + with open(join(config.saving_path, 'training.txt'), "a") as file: + message = '{:d} {:d} {:.3f} {:.3f} {:.3f} {:.3f}\n' + file.write(message.format(self.epoch, + self.step, + net.output_loss, + net.reg_loss, + acc, + t[-1] - t0)) + + + self.step += 1 + + ############## + # End of epoch + ############## + + # Check kill signal (running_PID.txt deleted) + if config.saving and not exists(PID_file): + break + + # Update learning rate + if self.epoch in config.lr_decays: + for param_group in self.optimizer.param_groups: + param_group['lr'] *= config.lr_decays[self.epoch] + + # Update epoch + self.epoch += 1 + + # Saving + if config.saving: + # Get current state dict + save_dict = {'epoch': self.epoch, + 'model_state_dict': net.state_dict(), + 'optimizer_state_dict': self.optimizer.state_dict(), + 'saving_path': config.saving_path} + + # Save current state of the network (for restoring purposes) + checkpoint_path = join(checkpoint_directory, 'current_chkp.tar') + torch.save(save_dict, checkpoint_path) + + # Save checkpoints occasionally + if (self.epoch + 1) % config.checkpoint_gap == 0: + checkpoint_path = join(checkpoint_directory, 'chkp_{:04d}.tar'.format(self.epoch + 1)) + torch.save(save_dict, checkpoint_path) + + # Validation + net.eval() + self.validation(net, 
val_loader, config)
+            net.train()
+
+        print('Finished Training')
+        return
+
+    # Validation methods
+    # ------------------------------------------------------------------------------------------------------------------
+
+    def validation(self, net, val_loader, config: Config):
+
+        if config.dataset_task == 'classification':
+            self.object_classification_validation(net, val_loader, config)
+        elif config.dataset_task == 'segmentation':
+            self.object_segmentation_validation(net, val_loader, config)
+        elif config.dataset_task == 'cloud_segmentation':
+            self.cloud_segmentation_validation(net, val_loader, config)
+        elif config.dataset_task == 'slam_segmentation':
+            self.slam_segmentation_validation(net, val_loader, config)
+        else:
+            raise ValueError('No validation method implemented for this network type')
+
+    def object_classification_validation(self, net, val_loader, config):
+        """
+        Perform a round of validation and show/save results
+        :param net: network object
+        :param val_loader: data loader for validation set
+        :param config: configuration object
+        """
+
+        ############
+        # Initialize
+        ############
+
+        # Choose validation smoothing parameter (0 for no smoothing, 0.99 for big smoothing)
+        val_smooth = 0.95
+
+        # Number of classes predicted by the model
+        nc_model = config.num_classes
+        softmax = torch.nn.Softmax(1)
+
+        # Initialize global prediction over all models
+        if not hasattr(self, 'val_probs'):
+            self.val_probs = np.zeros((val_loader.dataset.num_models, nc_model))
+
+        #####################
+        # Network predictions
+        #####################
+
+        probs = []
+        targets = []
+        obj_inds = []
+
+        t = [time.time()]
+        last_display = time.time()
+        mean_dt = np.zeros(1)
+
+        # Start validation loop
+        for batch in val_loader:
+
+            # New time
+            t = t[-1:]
+            t += [time.time()]
+
+            if 'cuda' in self.device.type:
+                batch.to(self.device)
+
+            # Forward pass
+            outputs = net(batch, config)
+
+            # Get probs and labels
+            probs += [softmax(outputs).cpu().detach().numpy()]
+            targets += 
[batch.labels.cpu().numpy()] + obj_inds += [batch.model_inds.cpu().numpy()] + torch.cuda.synchronize(self.device) + + # Average timing + t += [time.time()] + mean_dt = 0.95 * mean_dt + 0.05 * (np.array(t[1:]) - np.array(t[:-1])) + + # Display + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Validation : {:.1f}% (timings : {:4.2f} {:4.2f})' + print(message.format(100 * len(obj_inds) / config.validation_size, + 1000 * (mean_dt[0]), + 1000 * (mean_dt[1]))) + + # Stack all validation predictions + probs = np.vstack(probs) + targets = np.hstack(targets) + obj_inds = np.hstack(obj_inds) + + ################### + # Voting validation + ################### + + self.val_probs[obj_inds] = val_smooth * self.val_probs[obj_inds] + (1-val_smooth) * probs + + ############ + # Confusions + ############ + + validation_labels = np.array(val_loader.dataset.label_values) + + # Compute classification results + C1 = fast_confusion(targets, + np.argmax(probs, axis=1), + validation_labels) + + # Compute votes confusion + C2 = fast_confusion(val_loader.dataset.input_labels, + np.argmax(self.val_probs, axis=1), + validation_labels) + + + # Saving (optionnal) + if config.saving: + print("Save confusions") + conf_list = [C1, C2] + file_list = ['val_confs.txt', 'vote_confs.txt'] + for conf, conf_file in zip(conf_list, file_list): + test_file = join(config.saving_path, conf_file) + if exists(test_file): + with open(test_file, "a") as text_file: + for line in conf: + for value in line: + text_file.write('%d ' % value) + text_file.write('\n') + else: + with open(test_file, "w") as text_file: + for line in conf: + for value in line: + text_file.write('%d ' % value) + text_file.write('\n') + + val_ACC = 100 * np.sum(np.diag(C1)) / (np.sum(C1) + 1e-6) + vote_ACC = 100 * np.sum(np.diag(C2)) / (np.sum(C2) + 1e-6) + print('Accuracies : val = {:.1f}% / vote = {:.1f}%'.format(val_ACC, vote_ACC)) + + return C1 + + def cloud_segmentation_validation(self, net, val_loader, config, 
debug=False):
+        """
+        Validation method for cloud segmentation models
+        """
+
+        ############
+        # Initialize
+        ############
+
+        t0 = time.time()
+
+        # Choose validation smoothing parameter (0 for no smoothing, 0.99 for big smoothing)
+        val_smooth = 0.95
+        softmax = torch.nn.Softmax(1)
+
+        # Do not validate if dataset has no validation cloud
+        # if val_loader.dataset.filvalidation_split not in val_loader.dataset.all_splits:
+        #     return
+        if len(val_loader.dataset.files) == 0:
+            return
+
+        # Number of classes including ignored labels
+        nc_tot = val_loader.dataset.num_classes
+
+        # Number of classes predicted by the model, not including ignored labels
+        nc_model = config.num_classes - len(val_loader.dataset.ignored_labels)
+
+        # Initialize global prediction over validation clouds
+        if not hasattr(self, 'validation_probs'):
+            self.validation_probs = [np.zeros((l.shape[0], nc_model))
+                                     for l in val_loader.dataset.input_labels]
+            self.val_proportions = np.zeros(nc_model, dtype=np.float32)
+            i = 0
+            for label_value in val_loader.dataset.label_values:
+                if label_value not in val_loader.dataset.ignored_labels:
+                    self.val_proportions[i] = np.sum([np.sum(labels == label_value)
+                                                      for labels in val_loader.dataset.validation_labels])
+                    i += 1
+
+        #####################
+        # Network predictions
+        #####################
+
+        predictions = []
+        targets = []
+
+        t = [time.time()]
+        last_display = time.time()
+        mean_dt = np.zeros(1)
+
+        t1 = time.time()
+
+        # Start validation loop
+        for i, batch in enumerate(val_loader):
+
+            # New time
+            t = t[-1:]
+            t += [time.time()]
+
+            if 'cuda' in self.device.type:
+                batch.to(self.device)
+
+            # Forward pass
+            outputs = net(batch, config)
+
+            # Get probs and labels
+            stacked_probs = softmax(outputs).cpu().detach().numpy()
+            labels = batch.labels.cpu().numpy()
+            lengths = batch.lengths[0].cpu().numpy()
+            in_inds = batch.input_inds.cpu().numpy()
+            cloud_inds = batch.cloud_inds.cpu().numpy()
+            
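`val_proportions` records how many points each class really has in the full validation clouds; the confusion matrix accumulated on sub-sampled spheres is then rescaled row by row (`C *= np.expand_dims(self.val_proportions / (np.sum(C, axis=1) + 1e-6), 1)`) so that each true class carries its real weight. A small sketch of that rebalancing with made-up numbers:

```python
import numpy as np

def rebalance_confusion(C, true_proportions):
    """Rescale each row of confusion matrix C (rows = ground truth) so its
    sum matches the true number of points of that class in the full clouds."""
    return C * np.expand_dims(true_proportions / (np.sum(C, axis=1) + 1e-6), 1)

# Class 0 was oversampled 2x relative to its real share of the clouds.
C = np.array([[8.0, 2.0],
              [1.0, 4.0]])
true_proportions = np.array([5.0, 5.0])
C_bal = rebalance_confusion(C, true_proportions)
# Row sums now match the true proportions: [5, 5].
```

Without this step, classes that happen to be sampled more often by the sphere picker would dominate the IoU computation.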
torch.cuda.synchronize(self.device) + + # Get predictions and labels per instance + # *************************************** + + i0 = 0 + for b_i, length in enumerate(lengths): + + # Get prediction + target = labels[i0:i0 + length] + probs = stacked_probs[i0:i0 + length] + inds = in_inds[i0:i0 + length] + c_i = cloud_inds[b_i] + + # Update current probs in whole cloud + self.validation_probs[c_i][inds] = val_smooth * self.validation_probs[c_i][inds] \ + + (1 - val_smooth) * probs + + # Stack all prediction for this epoch + predictions.append(probs) + targets.append(target) + i0 += length + + # Average timing + t += [time.time()] + mean_dt = 0.95 * mean_dt + 0.05 * (np.array(t[1:]) - np.array(t[:-1])) + + # Display + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Validation : {:.1f}% (timings : {:4.2f} {:4.2f})' + print(message.format(100 * i / config.validation_size, + 1000 * (mean_dt[0]), + 1000 * (mean_dt[1]))) + + t2 = time.time() + + # Confusions for our subparts of validation set + Confs = np.zeros((len(predictions), nc_tot, nc_tot), dtype=np.int32) + for i, (probs, truth) in enumerate(zip(predictions, targets)): + + # Insert false columns for ignored labels + for l_ind, label_value in enumerate(val_loader.dataset.label_values): + if label_value in val_loader.dataset.ignored_labels: + probs = np.insert(probs, l_ind, 0, axis=1) + + # Predicted labels + preds = val_loader.dataset.label_values[np.argmax(probs, axis=1)] + + # Confusions + Confs[i, :, :] = fast_confusion(truth, preds, val_loader.dataset.label_values).astype(np.int32) + + + t3 = time.time() + + # Sum all confusions + C = np.sum(Confs, axis=0).astype(np.float32) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(val_loader.dataset.label_values))): + if label_value in val_loader.dataset.ignored_labels: + C = np.delete(C, l_ind, axis=0) + C = np.delete(C, l_ind, axis=1) + + # Balance with real validation proportions + C *= 
np.expand_dims(self.val_proportions / (np.sum(C, axis=1) + 1e-6), 1) + + + t4 = time.time() + + # Objects IoU + IoUs = IoU_from_confusions(C) + + t5 = time.time() + + # Saving (optionnal) + if config.saving: + + # Name of saving file + test_file = join(config.saving_path, 'val_IoUs.txt') + + # Line to write: + line = '' + for IoU in IoUs: + line += '{:.3f} '.format(IoU) + line = line + '\n' + + # Write in file + if exists(test_file): + with open(test_file, "a") as text_file: + text_file.write(line) + else: + with open(test_file, "w") as text_file: + text_file.write(line) + + # Save potentials + pot_path = join(config.saving_path, 'potentials') + if not exists(pot_path): + makedirs(pot_path) + files = val_loader.dataset.files + for i, file_path in enumerate(files): + pot_points = np.array(val_loader.dataset.pot_trees[i].data, copy=False) + cloud_name = os.path.splitext(os.path.basename(file_path))[0] #file_path.split('/')[-1] + pot_name = join(pot_path, cloud_name) + pots = val_loader.dataset.potentials[i].numpy().astype(np.float32) + # write_ply(pot_name, + # [pot_points.astype(np.float32), pots], + # ['x', 'y', 'z', 'pots']) + + t6 = time.time() + + # Print instance mean + mIoU = 100 * np.mean(IoUs) + print('{:s} mean IoU = {:.1f}%'.format(config.dataset, mIoU)) + + # Save predicted cloud occasionally + if config.saving and (self.epoch + 1) % config.checkpoint_gap == 0: + val_path = join(config.saving_path, 'val_preds_{:d}'.format(self.epoch + 1)) + if not exists(val_path): + makedirs(val_path) + files = val_loader.dataset.files + for i, file_path in enumerate(files): + + # Get points + points = val_loader.dataset.load_evaluation_points(file_path) + + # Get probs on our own ply points + sub_probs = self.validation_probs[i] + + # Insert false columns for ignored labels + for l_ind, label_value in enumerate(val_loader.dataset.label_values): + if label_value in val_loader.dataset.ignored_labels: + sub_probs = np.insert(sub_probs, l_ind, 0, axis=1) + + # Get the 
predicted labels + sub_preds = val_loader.dataset.label_values[np.argmax(sub_probs, axis=1).astype(np.int32)] + + # Reproject preds on the evaluation points + preds = (sub_preds[val_loader.dataset.test_proj[i]]).astype(np.int32) + + # Path of saved validation file + cloud_name = os.path.splitext(os.path.basename(file_path))[0] # file_path.split('/')[-1] + val_name = join(val_path, cloud_name) + + # Save file + labels = val_loader.dataset.validation_labels[i].astype(np.int32) + # write_ply(val_name, + # [points, preds, labels], + # ['x', 'y', 'z', 'preds', 'class']) + + # Display timings + t7 = time.time() + if debug: + print('\n************************\n') + print('Validation timings:') + print('Init ...... {:.1f}s'.format(t1 - t0)) + print('Loop ...... {:.1f}s'.format(t2 - t1)) + print('Confs ..... {:.1f}s'.format(t3 - t2)) + print('Confs bis . {:.1f}s'.format(t4 - t3)) + print('IoU ....... {:.1f}s'.format(t5 - t4)) + print('Save1 ..... {:.1f}s'.format(t6 - t5)) + print('Save2 ..... {:.1f}s'.format(t7 - t6)) + print('\n************************\n') + + return + + def slam_segmentation_validation(self, net, val_loader, config, debug=True): + """ + Validation method for slam segmentation models + """ + + ############ + # Initialize + ############ + + t0 = time.time() + + # Do not validate if dataset has no validation cloud + if val_loader is None: + return + + # Choose validation smoothing parameter (0 for no smoothing, 0.99 for big smoothing) + val_smooth = 0.95 + softmax = torch.nn.Softmax(1) + + # Create folder for validation predictions + if not exists(join(config.saving_path, 'val_preds')): + makedirs(join(config.saving_path, 'val_preds')) + + # initiate the dataset validation containers + val_loader.dataset.val_points = [] + val_loader.dataset.val_labels = [] + + # Number of classes including ignored labels + nc_tot = val_loader.dataset.num_classes + + ##################### + # Network predictions + ##################### + + predictions = [] + targets = [] + 
inds = [] + val_i = 0 + + t = [time.time()] + last_display = time.time() + mean_dt = np.zeros(1) + + + t1 = time.time() + + # Start validation loop + for i, batch in enumerate(val_loader): + + # New time + t = t[-1:] + t += [time.time()] + + if 'cuda' in self.device.type: + batch.to(self.device) + + # Forward pass + outputs = net(batch, config) + + # Get probs and labels + stk_probs = softmax(outputs).cpu().detach().numpy() + lengths = batch.lengths[0].cpu().numpy() + f_inds = batch.frame_inds.cpu().numpy() + r_inds_list = batch.reproj_inds + r_mask_list = batch.reproj_masks + labels_list = batch.val_labels + torch.cuda.synchronize(self.device) + + # Get predictions and labels per instance + # *************************************** + + i0 = 0 + for b_i, length in enumerate(lengths): + + # Get prediction + probs = stk_probs[i0:i0 + length] + proj_inds = r_inds_list[b_i] + proj_mask = r_mask_list[b_i] + frame_labels = labels_list[b_i] + s_ind = f_inds[b_i, 0] + f_ind = f_inds[b_i, 1] + + # Project predictions on the frame points + proj_probs = probs[proj_inds] + + # Safe check if only one point: + if proj_probs.ndim < 2: + proj_probs = np.expand_dims(proj_probs, 0) + + # Insert false columns for ignored labels + for l_ind, label_value in enumerate(val_loader.dataset.label_values): + if label_value in val_loader.dataset.ignored_labels: + proj_probs = np.insert(proj_probs, l_ind, 0, axis=1) + + # Predicted labels + preds = val_loader.dataset.label_values[np.argmax(proj_probs, axis=1)] + + # Save predictions in a binary file + filename = '{:s}_{:07d}.npy'.format(val_loader.dataset.sequences[s_ind], f_ind) + filepath = join(config.saving_path, 'val_preds', filename) + if exists(filepath): + frame_preds = np.load(filepath) + else: + frame_preds = np.zeros(frame_labels.shape, dtype=np.uint8) + frame_preds[proj_mask] = preds.astype(np.uint8) + np.save(filepath, frame_preds) + + # Save some of the frame pots + if f_ind % 20 == 0: + seq_path = join(val_loader.dataset.path, 
'sequences', val_loader.dataset.sequences[s_ind]) + velo_file = join(seq_path, 'velodyne', val_loader.dataset.frames[s_ind][f_ind] + '.bin') + frame_points = np.fromfile(velo_file, dtype=np.float32) + frame_points = frame_points.reshape((-1, 4)) + write_ply(filepath[:-4] + '_pots.ply', + [frame_points[:, :3], frame_labels, frame_preds], + ['x', 'y', 'z', 'gt', 'pre']) + + # Update validation confusions + frame_C = fast_confusion(frame_labels, + frame_preds.astype(np.int32), + val_loader.dataset.label_values) + val_loader.dataset.val_confs[s_ind][f_ind, :, :] = frame_C + + # Stack all predictions for this epoch + predictions += [preds] + targets += [frame_labels[proj_mask]] + inds += [f_inds[b_i, :]] + val_i += 1 + i0 += length + + # Average timing + t += [time.time()] + mean_dt = 0.95 * mean_dt + 0.05 * (np.array(t[1:]) - np.array(t[:-1])) + + # Display + if (t[-1] - last_display) > 1.0: + last_display = t[-1] + message = 'Validation : {:.1f}% (timings : {:4.2f} {:4.2f})' + print(message.format(100 * i / config.validation_size, + 1000 * (mean_dt[0]), + 1000 * (mean_dt[1]))) + + t2 = time.time() + + # Confusions for our subparts of validation set + Confs = np.zeros((len(predictions), nc_tot, nc_tot), dtype=np.int32) + for i, (preds, truth) in enumerate(zip(predictions, targets)): + + # Confusions + Confs[i, :, :] = fast_confusion(truth, preds, val_loader.dataset.label_values).astype(np.int32) + + t3 = time.time() + + ####################################### + # Results on this subpart of validation + ####################################### + + # Sum all confusions + C = np.sum(Confs, axis=0).astype(np.float32) + + # Balance with real validation proportions + C *= np.expand_dims(val_loader.dataset.class_proportions / (np.sum(C, axis=1) + 1e-6), 1) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(val_loader.dataset.label_values))): + if label_value in val_loader.dataset.ignored_labels: + C = np.delete(C, l_ind, axis=0) + C 
= np.delete(C, l_ind, axis=1) + + # Objects IoU + IoUs = IoU_from_confusions(C) + + ##################################### + # Results on the whole validation set + ##################################### + + t4 = time.time() + + # Sum all validation confusions + C_tot = [np.sum(seq_C, axis=0) for seq_C in val_loader.dataset.val_confs if len(seq_C) > 0] + C_tot = np.sum(np.stack(C_tot, axis=0), axis=0) + + if debug: + s = '\n' + for cc in C_tot: + for c in cc: + s += '{:8.1f} '.format(c) + s += '\n' + print(s) + + # Remove ignored labels from confusions + for l_ind, label_value in reversed(list(enumerate(val_loader.dataset.label_values))): + if label_value in val_loader.dataset.ignored_labels: + C_tot = np.delete(C_tot, l_ind, axis=0) + C_tot = np.delete(C_tot, l_ind, axis=1) + + # Objects IoU + val_IoUs = IoU_from_confusions(C_tot) + + t5 = time.time() + + # Saving (optional) + if config.saving: + + IoU_list = [IoUs, val_IoUs] + file_list = ['subpart_IoUs.txt', 'val_IoUs.txt'] + for IoUs_to_save, IoU_file in zip(IoU_list, file_list): + + # Name of saving file + test_file = join(config.saving_path, IoU_file) + + # Line to write: + line = '' + for IoU in IoUs_to_save: + line += '{:.3f} '.format(IoU) + line = line + '\n' + + # Write in file + if exists(test_file): + with open(test_file, "a") as text_file: + text_file.write(line) + else: + with open(test_file, "w") as text_file: + text_file.write(line) + + # Print instance mean + mIoU = 100 * np.mean(IoUs) + print('{:s} : subpart mIoU = {:.1f} %'.format(config.dataset, mIoU)) + mIoU = 100 * np.mean(val_IoUs) + print('{:s} : val mIoU = {:.1f} %'.format(config.dataset, mIoU)) + + t6 = time.time() + + # Display timings + if debug: + print('\n************************\n') + print('Validation timings:') + print('Init ...... {:.1f}s'.format(t1 - t0)) + print('Loop ...... {:.1f}s'.format(t2 - t1)) + print('Confs ..... {:.1f}s'.format(t3 - t2)) + print('IoU1 ...... {:.1f}s'.format(t4 - t3)) + print('IoU2 ...... 
{:.1f}s'.format(t5 - t4)) + print('Save ...... {:.1f}s'.format(t6 - t5)) + print('\n************************\n') + + return + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/competing_methods/my_KPConv/utils/visualizer.py b/competing_methods/my_KPConv/utils/visualizer.py new file mode 100644 index 00000000..cda24b61 --- /dev/null +++ b/competing_methods/my_KPConv/utils/visualizer.py @@ -0,0 +1,531 @@ +# +# +# 0=================================0 +# | Kernel Point Convolutions | +# 0=================================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Class handling the visualization +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 11/06/2018 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import torch +import numpy as np +from sklearn.neighbors import KDTree +from os import makedirs, remove, rename, listdir +from os.path import exists, join +import time +from mayavi import mlab +import sys + +from models.blocks import KPConv + +# PLY reader +from utils.ply import write_ply, read_ply + +# Configuration class +from utils.config import Config, bcolors + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Trainer Class +# \*******************/ +# + + +class ModelVisualizer: + + # Initialization methods + # ------------------------------------------------------------------------------------------------------------------ + + def __init__(self, net, config, chkp_path, on_gpu=True): + """ + Initialize training parameters and reload previous model for restore/finetune + :param net: network object + 
:param config: configuration object + :param chkp_path: path to the checkpoint that needs to be loaded (None for new training) + :param on_gpu: Run on GPU or CPU + """ + + ############ + # Parameters + ############ + + # Choose to run on CPU or GPU + if on_gpu and torch.cuda.is_available(): + self.device = torch.device("cuda:0") + else: + self.device = torch.device("cpu") + net.to(self.device) + + ########################## + # Load previous checkpoint + ########################## + + checkpoint = torch.load(chkp_path) + + new_dict = {} + for k, v in checkpoint['model_state_dict'].items(): + if 'blocs' in k: + k = k.replace('blocs', 'blocks') + new_dict[k] = v + + net.load_state_dict(new_dict) + self.epoch = checkpoint['epoch'] + net.eval() + print("\nModel state restored from {:s}.".format(chkp_path)) + + return + + # Main visualization methods + # ------------------------------------------------------------------------------------------------------------------ + + def show_deformable_kernels(self, net, loader, config, deform_idx=0): + """ + Show some inference with deformable kernels + """ + + ########################################## + # First choose the visualized deformations + ########################################## + + print('\nList of the deformable convolutions available (chosen one highlighted in green)') + fmt_str = ' {:}{:2d} > KPConv(r={:.3f}, Din={:d}, Dout={:d}){:}' + deform_convs = [] + for m in net.modules(): + if isinstance(m, KPConv) and m.deformable: + if len(deform_convs) == deform_idx: + color = bcolors.OKGREEN + else: + color = bcolors.FAIL + print(fmt_str.format(color, len(deform_convs), m.radius, m.in_channels, m.out_channels, bcolors.ENDC)) + deform_convs.append(m) + + ################ + # Initialization + ################ + + print('\n****************************************************\n') + + # Loop variables + t0 = time.time() + t = 
[time.time()] + last_display = time.time() + mean_dt = np.zeros(1) + count = 0 + + # Start visualization loop + for epoch in range(config.max_epoch): + + for batch in loader: + + ################## + # Processing batch + ################## + + # New time + t = t[-1:] + t += [time.time()] + + if 'cuda' in self.device.type: + batch.to(self.device) + + # Forward pass + outputs = net(batch, config) + original_KP = deform_convs[deform_idx].kernel_points.cpu().detach().numpy() + stacked_deformed_KP = deform_convs[deform_idx].deformed_KP.cpu().detach().numpy() + count += batch.lengths[0].shape[0] + + if 'cuda' in self.device.type: + torch.cuda.synchronize(self.device) + + # Find layer + l = None + for i, p in enumerate(batch.points): + if p.shape[0] == stacked_deformed_KP.shape[0]: + l = i + + t += [time.time()] + + # Get data + in_points = [] + in_colors = [] + deformed_KP = [] + points = [] + lookuptrees = [] + i0 = 0 + for b_i, length in enumerate(batch.lengths[0]): + in_points.append(batch.points[0][i0:i0 + length].cpu().detach().numpy()) + if batch.features.shape[1] == 4: + in_colors.append(batch.features[i0:i0 + length, 1:].cpu().detach().numpy()) + else: + in_colors.append(None) + i0 += length + + i0 = 0 + for b_i, length in enumerate(batch.lengths[l]): + points.append(batch.points[l][i0:i0 + length].cpu().detach().numpy()) + deformed_KP.append(stacked_deformed_KP[i0:i0 + length]) + lookuptrees.append(KDTree(points[-1])) + i0 += length + + ########################### + # Interactive visualization + ########################### + + # Create figure for features + fig1 = mlab.figure('Deformations', bgcolor=(1.0, 1.0, 1.0), size=(1280, 920)) + fig1.scene.parallel_projection = False + + # Indices + global obj_i, point_i, plots, offsets, p_scale, show_in_p, aim_point + p_scale = 0.03 + obj_i = 0 + point_i = 0 + plots = {} + offsets = False + show_in_p = 2 + aim_point = np.zeros((1, 3)) + + def picker_callback(picker): + """ Picker callback: this gets called on pick events. 
+ """ + global plots, aim_point + + if 'in_points' in plots: + if plots['in_points'].actor.actor._vtk_obj in [o._vtk_obj for o in picker.actors]: + point_rez = plots['in_points'].glyph.glyph_source.glyph_source.output.points.to_array().shape[0] + new_point_i = int(np.floor(picker.point_id / point_rez)) + if new_point_i < len(plots['in_points'].mlab_source.points): + # Get closest point in the layer we are interested in + aim_point = plots['in_points'].mlab_source.points[new_point_i:new_point_i + 1] + update_scene() + + if 'points' in plots: + if plots['points'].actor.actor._vtk_obj in [o._vtk_obj for o in picker.actors]: + point_rez = plots['points'].glyph.glyph_source.glyph_source.output.points.to_array().shape[0] + new_point_i = int(np.floor(picker.point_id / point_rez)) + if new_point_i < len(plots['points'].mlab_source.points): + # Get closest point in the layer we are interested in + aim_point = plots['points'].mlab_source.points[new_point_i:new_point_i + 1] + update_scene() + + def update_scene(): + global plots, offsets, p_scale, show_in_p, aim_point, point_i + + # Get the current view + v = mlab.view() + roll = mlab.roll() + + # clear figure + for key in plots.keys(): + plots[key].remove() + + plots = {} + + # Plot new data feature + p = points[obj_i] + + # Rescale points for visu + p = (p * 1.5 / config.in_radius) + + + # Show point cloud + if show_in_p <= 1: + plots['points'] = mlab.points3d(p[:, 0], + p[:, 1], + p[:, 2], + resolution=8, + scale_factor=p_scale, + scale_mode='none', + color=(0, 1, 1), + figure=fig1) + + if show_in_p >= 1: + + # Get points and colors + in_p = in_points[obj_i] + in_p = (in_p * 1.5 / config.in_radius) + + # Color point cloud if possible + in_c = in_colors[obj_i] + if in_c is not None: + + # Primitives + scalars = np.arange(len(in_p)) # Key point: set an integer for each point + + # Define color table (including alpha), which must be uint8 and [0,255] + colors = np.hstack((in_c, np.ones_like(in_c[:, :1]))) + colors = (colors * 
255).astype(np.uint8) + + plots['in_points'] = mlab.points3d(in_p[:, 0], + in_p[:, 1], + in_p[:, 2], + scalars, + resolution=8, + scale_factor=p_scale*0.8, + scale_mode='none', + figure=fig1) + plots['in_points'].module_manager.scalar_lut_manager.lut.table = colors + + else: + + plots['in_points'] = mlab.points3d(in_p[:, 0], + in_p[:, 1], + in_p[:, 2], + resolution=8, + scale_factor=p_scale*0.8, + scale_mode='none', + figure=fig1) + + + # Get KP locations + rescaled_aim_point = aim_point * config.in_radius / 1.5 + point_i = lookuptrees[obj_i].query(rescaled_aim_point, return_distance=False)[0][0] + if offsets: + KP = points[obj_i][point_i] + deformed_KP[obj_i][point_i] + scals = np.ones_like(KP[:, 0]) + else: + KP = points[obj_i][point_i] + original_KP + scals = np.zeros_like(KP[:, 0]) + + KP = (KP * 1.5 / config.in_radius) + + plots['KP'] = mlab.points3d(KP[:, 0], + KP[:, 1], + KP[:, 2], + scals, + colormap='autumn', + resolution=8, + scale_factor=1.2*p_scale, + scale_mode='none', + vmin=0, + vmax=1, + figure=fig1) + + + if True: + plots['center'] = mlab.points3d(p[point_i, 0], + p[point_i, 1], + p[point_i, 2], + scale_factor=1.1*p_scale, + scale_mode='none', + color=(0, 1, 0), + figure=fig1) + + # New title + plots['title'] = mlab.title(str(obj_i), color=(0, 0, 0), size=0.3, height=0.01) + text = '<--- (press g for previous)' + 50 * ' ' + '(press h for next) --->' + plots['text'] = mlab.text(0.01, 0.01, text, color=(0, 0, 0), width=0.98) + plots['orient'] = mlab.orientation_axes() + + # Set the saved view + mlab.view(*v) + mlab.roll(roll) + + return + + def animate_kernel(): + global plots, offsets, p_scale, show_in_p + + # Get KP locations + + KP_def = points[obj_i][point_i] + deformed_KP[obj_i][point_i] + KP_def = (KP_def * 1.5 / config.in_radius) + KP_def_color = (1, 0, 0) + + KP_rigid = points[obj_i][point_i] + original_KP + KP_rigid = (KP_rigid * 1.5 / config.in_radius) + KP_rigid_color = (1, 0.7, 0) + + if offsets: + t_list = np.linspace(0, 1, 150, 
dtype=np.float32) + else: + t_list = np.linspace(1, 0, 150, dtype=np.float32) + + @mlab.animate(delay=10) + def anim(): + for t in t_list: + plots['KP'].mlab_source.set(x=t * KP_def[:, 0] + (1 - t) * KP_rigid[:, 0], + y=t * KP_def[:, 1] + (1 - t) * KP_rigid[:, 1], + z=t * KP_def[:, 2] + (1 - t) * KP_rigid[:, 2], + scalars=t * np.ones_like(KP_def[:, 0])) + + yield + + anim() + + return + + def keyboard_callback(vtk_obj, event): + global obj_i, point_i, offsets, p_scale, show_in_p + + if vtk_obj.GetKeyCode() in ['b', 'B']: + p_scale /= 1.5 + update_scene() + + elif vtk_obj.GetKeyCode() in ['n', 'N']: + p_scale *= 1.5 + update_scene() + + if vtk_obj.GetKeyCode() in ['g', 'G']: + obj_i = (obj_i - 1) % len(deformed_KP) + point_i = 0 + update_scene() + + elif vtk_obj.GetKeyCode() in ['h', 'H']: + obj_i = (obj_i + 1) % len(deformed_KP) + point_i = 0 + update_scene() + + elif vtk_obj.GetKeyCode() in ['k', 'K']: + offsets = not offsets + animate_kernel() + + elif vtk_obj.GetKeyCode() in ['z', 'Z']: + show_in_p = (show_in_p + 1) % 3 + update_scene() + + elif vtk_obj.GetKeyCode() in ['0']: + + print('Saving') + + # Find a new name + file_i = 0 + file_name = 'KP_{:03d}.ply'.format(file_i) + files = [f for f in listdir('KP_clouds') if f.endswith('.ply')] + while file_name in files: + file_i += 1 + file_name = 'KP_{:03d}.ply'.format(file_i) + + KP_deform = points[obj_i][point_i] + deformed_KP[obj_i][point_i] + KP_normal = points[obj_i][point_i] + original_KP + + # Save + write_ply(join('KP_clouds', file_name), + [in_points[obj_i], in_colors[obj_i]], + ['x', 'y', 'z', 'red', 'green', 'blue']) + write_ply(join('KP_clouds', 'KP_{:03d}_deform.ply'.format(file_i)), + [KP_deform], + ['x', 'y', 'z']) + write_ply(join('KP_clouds', 'KP_{:03d}_normal.ply'.format(file_i)), + [KP_normal], + ['x', 'y', 'z']) + print('OK') + + return + + # Draw a first plot + pick_func = fig1.on_mouse_pick(picker_callback) + pick_func.tolerance = 0.01 + update_scene() + 
fig1.scene.interactor.add_observer('KeyPressEvent', keyboard_callback) + mlab.show() + + return + + # Utilities + # ------------------------------------------------------------------------------------------------------------------ + + +def show_ModelNet_models(all_points): + + ########################### + # Interactive visualization + ########################### + + # Create figure for features + fig1 = mlab.figure('Models', bgcolor=(1, 1, 1), size=(1000, 800)) + fig1.scene.parallel_projection = False + + # Indices + global file_i + file_i = 0 + + def update_scene(): + + # clear figure + mlab.clf(fig1) + + # Plot new data feature + points = all_points[file_i] + + # Rescale points for visu + points = (points * 1.5 + np.array([1.0, 1.0, 1.0])) * 50.0 + + # Show point clouds colorized with activations + activations = mlab.points3d(points[:, 0], + points[:, 1], + points[:, 2], + points[:, 2], + scale_factor=3.0, + scale_mode='none', + figure=fig1) + + # New title + mlab.title(str(file_i), color=(0, 0, 0), size=0.3, height=0.01) + text = '<--- (press g for previous)' + 50 * ' ' + '(press h for next) --->' + mlab.text(0.01, 0.01, text, color=(0, 0, 0), width=0.98) + mlab.orientation_axes() + + return + + def keyboard_callback(vtk_obj, event): + global file_i + + if vtk_obj.GetKeyCode() in ['g', 'G']: + + file_i = (file_i - 1) % len(all_points) + update_scene() + + elif vtk_obj.GetKeyCode() in ['h', 'H']: + + file_i = (file_i + 1) % len(all_points) + update_scene() + + return + + # Draw a first plot + update_scene() + fig1.scene.interactor.add_observer('KeyPressEvent', keyboard_callback) + mlab.show() + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/competing_methods/my_KPConv/visualize_deformations.py b/competing_methods/my_KPConv/visualize_deformations.py new file mode 100644 index 00000000..a1c73b98 --- /dev/null +++ b/competing_methods/my_KPConv/visualize_deformations.py @@ -0,0 +1,202 @@ +# +# +# 0=================================0 +# | Kernel Point 
Convolutions | + # 0=================================0 + # + # + # ---------------------------------------------------------------------------------------------------------------------- + # + # Callable script to visualize the deformable kernels of a trained model + # + # ---------------------------------------------------------------------------------------------------------------------- + # + # Hugues THOMAS - 06/03/2020 + # + + + # ---------------------------------------------------------------------------------------------------------------------- + # + # Imports and global variables + # \**********************************/ + # + + # Common libs + import signal + import os + import numpy as np + import sys + import torch + + # Dataset + from datasets.ModelNet40 import * + from datasets.S3DIS import * + from torch.utils.data import DataLoader + + from utils.config import Config + from utils.visualizer import ModelVisualizer + from models.architectures import KPCNN, KPFCNN + + + # ---------------------------------------------------------------------------------------------------------------------- + # + # Main Call + # \***************/ + # + + def model_choice(chosen_log): + + ########################### + # Call the test initializer + ########################### + + # Automatically retrieve the last trained model + if chosen_log in ['last_ModelNet40', 'last_ShapeNetPart', 'last_S3DIS']: + + # Dataset name + test_dataset = '_'.join(chosen_log.split('_')[1:]) + + # List all training logs + logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')]) + + # Find the last log of the asked dataset + for log in logs[::-1]: + log_config = Config() + log_config.load(log) + if log_config.dataset.startswith(test_dataset): + chosen_log = log + break + + if chosen_log in ['last_ModelNet40', 'last_ShapeNetPart', 'last_S3DIS']: + raise ValueError('No log of the dataset "' + test_dataset + '" found') + + # Check if log exists + if not os.path.exists(chosen_log): + raise ValueError('The 
given log does not exist: ' + chosen_log) + + return chosen_log + + + # ---------------------------------------------------------------------------------------------------------------------- + # + # Main Call + # \***************/ + # + + if __name__ == '__main__': + + ############################### + # Choose the model to visualize + ############################### + + # Here you can choose which model you want to visualize with the variable chosen_log. Here are the possible values: + # + # > 'last_XXX': Automatically retrieve the last trained model on dataset XXX + # > 'results/Log_YYYY-MM-DD_HH-MM-SS': Directly provide the path of a trained model + + chosen_log = 'results/Log_2020-04-23_19-42-18' + + # Choose the index of the checkpoint to load OR None if you want to load the current checkpoint + chkp_idx = None + + # Optionally, you can choose which feature is visualized (index of the deform convolution in the network) + deform_idx = 0 + + # Deal with 'last_XXX' choices + chosen_log = model_choice(chosen_log) + + ############################ + # Initialize the environment + ############################ + + # Set which gpu is going to be used + GPU_ID = '0' + + # Set GPU visible device + os.environ['CUDA_VISIBLE_DEVICES'] = GPU_ID + + ############### + # Previous chkp + ############### + + # Find all checkpoints in the chosen training folder + chkp_path = os.path.join(chosen_log, 'checkpoints') + chkps = [f for f in os.listdir(chkp_path) if f[:4] == 'chkp'] + + # Find which snapshot to restore + if chkp_idx is None: + chosen_chkp = 'current_chkp.tar' + else: + chosen_chkp = np.sort(chkps)[chkp_idx] + chosen_chkp = os.path.join(chosen_log, 'checkpoints', chosen_chkp) + + # Initialize configuration class + config = Config() + config.load(chosen_log) + + ################################## + # Change model parameters for test + ################################## + + # Change parameters for the test here. For example, you can stop augmenting the input data. 
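As background for the `IoU_from_confusions` calls in the validation methods earlier in this diff: per-class IoU is derived from a summed confusion matrix. The following is a minimal standalone NumPy sketch, not the repository's exact implementation (which also handles stacked confusion matrices and empty classes); the function name `iou_from_confusion` is hypothetical.

```python
import numpy as np

def iou_from_confusion(C):
    """Per-class IoU from a confusion matrix C where C[i, j] counts
    points of true class i predicted as class j."""
    TP = np.diag(C).astype(np.float64)
    # union = ground-truth count + prediction count - intersection
    union = C.sum(axis=1) + C.sum(axis=0) - TP
    return TP / (union + 1e-6)

# Example with 2 classes
C = np.array([[3, 1],
              [2, 4]], dtype=np.float64)
print(iou_from_confusion(C))  # per-class IoU, approximately [0.5, 0.571]
```

The `1e-6` term mirrors the epsilon used in the proportion-balancing lines above to avoid division by zero for classes absent from this validation subset.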
 + + config.augment_noise = 0.0001 + config.batch_num = 1 + config.in_radius = 2.0 + config.input_threads = 0 + + ############## + # Prepare Data + ############## + + print() + print('Data Preparation') + print('****************') + + # Initiate dataset + if config.dataset.startswith('ModelNet40'): + test_dataset = ModelNet40Dataset(config, train=False) + test_sampler = ModelNet40Sampler(test_dataset) + collate_fn = ModelNet40Collate + elif config.dataset == 'S3DIS': + test_dataset = S3DISDataset(config, set='validation', use_potentials=True) + test_sampler = S3DISSampler(test_dataset) + collate_fn = S3DISCollate + else: + raise ValueError('Unsupported dataset : ' + config.dataset) + + # Data loader + test_loader = DataLoader(test_dataset, + batch_size=1, + sampler=test_sampler, + collate_fn=collate_fn, + num_workers=config.input_threads, + pin_memory=True) + + # Calibrate samplers + test_sampler.calibration(test_loader, verbose=True) + + print('\nModel Preparation') + print('*****************') + + # Define network model + t1 = time.time() + if config.dataset_task == 'classification': + net = KPCNN(config) + elif config.dataset_task in ['cloud_segmentation', 'slam_segmentation']: + net = KPFCNN(config, test_dataset.label_values, test_dataset.ignored_labels) + else: + raise ValueError('Unsupported dataset_task for deformation visu: ' + config.dataset_task) + + # Define a visualizer class + visualizer = ModelVisualizer(net, config, chkp_path=chosen_chkp, on_gpu=False) + print('Done in {:.1f}s\n'.format(time.time() - t1)) + + print('\nStart visualization') + print('*******************') + + # Visualization + visualizer.show_deformable_kernels(net, test_loader, config, deform_idx) + + + diff --git a/competing_methods/my_RandLANet/.gitignore b/competing_methods/my_RandLANet/.gitignore new file mode 100644 index 00000000..e7fb556f --- /dev/null +++ b/competing_methods/my_RandLANet/.gitignore @@ -0,0 +1,3 @@ +.idea +*.dll +*.ply \ No newline at end of file diff --git 
a/competing_methods/my_RandLANet/LICENSE b/competing_methods/my_RandLANet/LICENSE new file mode 100644 index 00000000..6d37ae89 --- /dev/null +++ b/competing_methods/my_RandLANet/LICENSE @@ -0,0 +1,176 @@ + +# Attribution-NonCommercial-ShareAlike 4.0 International + +Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible. + +### Using Creative Commons Public Licenses + +Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses. + +* __Considerations for licensors:__ Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. 
[More considerations for licensors](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensors). + +* __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees). + +## Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License + +By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. + +### Section 1 – Definitions. + +a. 
__Adapted Material__ means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. + +b. __Adapter's License__ means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. + +c. __BY-NC-SA Compatible License__ means a license listed at [creativecommons.org/compatiblelicenses](http://creativecommons.org/compatiblelicenses), approved by Creative Commons as essentially the equivalent of this Public License. + +d. __Copyright and Similar Rights__ means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. + +e. __Effective Technological Measures__ means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. + +f. __Exceptions and Limitations__ means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. + +g. __License Elements__ means the license attributes listed in the name of a Creative Commons Public License. 
The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike.
+
+h. __Licensed Material__ means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
+
+i. __Licensed Rights__ means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
+
+j. __Licensor__ means the individual(s) or entity(ies) granting rights under this Public License.
+
+k. __NonCommercial__ means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
+
+l. __Share__ means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
+
+m. __Sui Generis Database Rights__ means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
+
+n. __You__ means the individual or entity exercising the Licensed Rights under this Public License. __Your__ has a corresponding meaning.
+
+### Section 2 – Scope.
+
+a. ___License grant.___
+
+ 1. 
Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: + + A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and + + B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only. + + 2. __Exceptions and Limitations.__ For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. + + 3. __Term.__ The term of this Public License is specified in Section 6(a). + + 4. __Media and formats; technical modifications allowed.__ The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. + + 5. __Downstream recipients.__ + + A. __Offer from the Licensor – Licensed Material.__ Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. + + B. __Additional offer from the Licensor – Adapted Material.__ Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply. + + C. 
__No downstream restrictions.__ You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. + + 6. __No endorsement.__ Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). + +b. ___Other rights.___ + + 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. + + 2. Patent and trademark rights are not licensed under this Public License. + + 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes. + +### Section 3 – License Conditions. + +Your exercise of the Licensed Rights is expressly made subject to the following conditions. + +a. ___Attribution.___ + + 1. If You Share the Licensed Material (including in modified form), You must: + + A. retain the following if it is supplied by the Licensor with the Licensed Material: + + i. 
identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); + + ii. a copyright notice; + + iii. a notice that refers to this Public License; + + iv. a notice that refers to the disclaimer of warranties; + + v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; + + B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and + + C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. + + 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. + + 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. + +b. ___ShareAlike.___ + +In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply. + + 1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License. + + 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material. + + 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply. + +### Section 4 – Sui Generis Database Rights. 
+ +Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: + +a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only; + +b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and + +c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. + +For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. + +### Section 5 – Disclaimer of Warranties and Limitation of Liability. + +a. __Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.__ + +b. 
__To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.__
+
+c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
+
+### Section 6 – Term and Termination.
+
+a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
+
+b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
+
+ 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
+
+ 2. upon express reinstatement by the Licensor.
+
+ For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
+
+c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
+
+d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
+
+### Section 7 – Other Terms and Conditions.
+
+a. 
The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. + +b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. + +### Section 8 – Interpretation. + +a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. + +b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. + +c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. + +d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. + +``` +Creative Commons is not a party to its public licenses. 
Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at [creativecommons.org/policies](http://creativecommons.org/policies), Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. + +Creative Commons may be contacted at [creativecommons.org](http://creativecommons.org/). +``` diff --git a/competing_methods/my_RandLANet/README.md b/competing_methods/my_RandLANet/README.md new file mode 100644 index 00000000..ddbcae42 --- /dev/null +++ b/competing_methods/my_RandLANet/README.md @@ -0,0 +1,163 @@ +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/191111236/semantic-segmentation-on-semantic3d)](https://paperswithcode.com/sota/semantic-segmentation-on-semantic3d?p=191111236) +[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/191111236/3d-semantic-segmentation-on-semantickitti)](https://paperswithcode.com/sota/3d-semantic-segmentation-on-semantickitti?p=191111236) +[![License CC BY-NC-SA 4.0](https://img.shields.io/badge/license-CC4.0-blue.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) + +# RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds (CVPR 2020) + +This is the official implementation of **RandLA-Net** (CVPR2020, Oral presentation), a simple and efficient neural architecture for semantic 
segmentation of large-scale 3D point clouds. For technical details, please refer to: + +**RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds**
+[Qingyong Hu](https://www.cs.ox.ac.uk/people/qingyong.hu/), [Bo Yang*](https://yang7879.github.io/), [Linhai Xie](https://www.cs.ox.ac.uk/people/linhai.xie/), [Stefano Rosa](https://www.cs.ox.ac.uk/people/stefano.rosa/), [Yulan Guo](http://yulanguo.me/), [Zhihua Wang](https://www.cs.ox.ac.uk/people/zhihua.wang/), [Niki Trigoni](https://www.cs.ox.ac.uk/people/niki.trigoni/), [Andrew Markham](https://www.cs.ox.ac.uk/people/andrew.markham/).
+**[[Paper](https://arxiv.org/abs/1911.11236)] [[Video](https://youtu.be/Ar3eY_lwzMk)] [[Blog](https://zhuanlan.zhihu.com/p/105433460)] [[Project page](http://randla-net.cs.ox.ac.uk/)]**
+ + +

+
+
+
+### (1) Setup
+This code has been tested with Python 3.5, Tensorflow 1.11, CUDA 9.0 and cuDNN 7.4.1 on Ubuntu 16.04.
+
+- Clone the repository
+```
+git clone --depth=1 https://github.com/QingyongHu/RandLA-Net && cd RandLA-Net
+```
+- Setup python environment
+```
+conda create -n randlanet python=3.5
+source activate randlanet
+pip install -r helper_requirements.txt
+sh compile_op.sh
+```
+
+**Update 03/21/2020: pre-trained models and results are now available.**
+You can download the pre-trained models and results [here](https://drive.google.com/open?id=1iU8yviO3TP87-IexBXsu13g6NklwEkXB).
+Note: please specify the model path in the main function (e.g., `main_S3DIS.py`) if you want to use a pre-trained model for a quick try of our RandLA-Net.
+
+### (2) S3DIS
+S3DIS dataset can be found
+here.
+Download the files named "Stanford3dDataset_v1.2_Aligned_Version.zip". Uncompress the folder and move it to
+`/data/S3DIS`.
+
+- Preparing the dataset:
+```
+python utils/data_prepare_s3dis.py
+```
+- Start 6-fold cross validation:
+```
+sh jobs_6_fold_cv_s3dis.sh
+```
+- Move all the generated results (*.ply) in the `/test` folder to `/data/S3DIS/results`, then calculate the final mean IoU results:
+```
+python utils/6_fold_cv.py
+```
+
+Quantitative results of different approaches on the S3DIS dataset (6-fold cross-validation):
+
+![a](http://randla-net.cs.ox.ac.uk/imgs/S3DIS_table.png)
+
+Qualitative results of our RandLA-Net:
+
+| ![2](imgs/S3DIS_area2.gif) | ![z](imgs/S3DIS_area3.gif) |
+| ------------------------------ | ---------------------------- |
+
+
+
+### (3) Semantic3D
+7-Zip is required to uncompress the raw data in this dataset. To install p7zip:
+```
+sudo apt-get install p7zip-full
+```
+- Download and extract the dataset. 
First, please specify the path of the dataset by changing the `BASE_DIR` in "download_semantic3d.sh"
+```
+sh utils/download_semantic3d.sh
+```
+- Preparing the dataset:
+```
+python utils/data_prepare_semantic3d.py
+```
+- Start training:
+```
+python main_Semantic3D.py --mode train --gpu 0
+```
+- Evaluation:
+```
+python main_Semantic3D.py --mode test --gpu 0
+```
+Quantitative results of different approaches on Semantic3D (reduced-8):
+
+![a](http://randla-net.cs.ox.ac.uk/imgs/Semantic3D_table.png)
+
+Qualitative results of our RandLA-Net:
+
+| ![z](imgs/Semantic3D-1.gif) | ![z](http://randla-net.cs.ox.ac.uk/imgs/Semantic3D-2.gif) |
+| -------------------------------- | ------------------------------- |
+| ![z](imgs/Semantic3D-3.gif) | ![z](imgs/Semantic3D-4.gif) |
+
+
+
+**Note:**
+- More than 64 GB of RAM is recommended for processing this dataset due to the large volume of point clouds
+
+
+### (4) SemanticKITTI
+
+SemanticKITTI dataset can be found here. Download the files
+ related to semantic segmentation and extract everything into the same folder. Then move the folder to
+`/data/semantic_kitti/dataset`.
+
+- Preparing the dataset:
+```
+python utils/data_prepare_semantickitti.py
+```
+
+- Start training:
+```
+python main_SemanticKITTI.py --mode train --gpu 0
+```
+
+- Evaluation:
+```
+sh jobs_test_semantickitti.sh
+```
+
+Quantitative results of different approaches on the SemanticKITTI dataset:
+
+![s](http://randla-net.cs.ox.ac.uk/imgs/SemanticKITTI_table.png)
+
+Qualitative results of our RandLA-Net:
+
+![zzz](imgs/SemanticKITTI-2.gif)
+
+
+### (5) Demo
+
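The demo highlights RandLA-Net's efficiency, which comes from random point downsampling combined with local feature aggregation. As an illustrative sketch only (the actual implementation in `RandLANet.py` uses `tf.batch_gather` and `tf.reduce_max` in TensorFlow 1.x), the two core gather operations can be written in NumPy as follows; all shapes and names here are for demonstration:

```python
# Illustrative NumPy sketch of RandLA-Net's two core gather operations
# (mirrors gather_neighbour / random_sample in RandLANet.py; not repo code).
import numpy as np

def gather_neighbour(pc, neighbor_idx):
    # pc: (B, N, d) per-point features; neighbor_idx: (B, N, K) neighbour ids
    B, N, d = pc.shape
    K = neighbor_idx.shape[-1]
    flat_idx = neighbor_idx.reshape(B, N * K)
    # take_along_axis broadcasts over the trailing feature axis
    gathered = np.take_along_axis(pc, flat_idx[..., None], axis=1)
    return gathered.reshape(B, N, K, d)  # neighbour features per point

def random_sample(feature, pool_idx):
    # feature: (B, N, d); pool_idx: (B, N', K) neighbours of the N' kept points
    B, n_kept, K = pool_idx.shape
    d = feature.shape[-1]
    flat_idx = pool_idx.reshape(B, n_kept * K)
    pooled = np.take_along_axis(feature, flat_idx[..., None], axis=1)
    # max-pool each kept point's neighbourhood, as in the encoder
    return pooled.reshape(B, n_kept, K, d).max(axis=2)  # (B, N', d)
```

Random sampling keeps the downsampling cost constant per point, which is what lets the network scale to large point clouds.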

+
+
+### Citation
+If you find our work useful in your research, please consider citing:
+
+    @article{hu2019randla,
+      title={RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds},
+      author={Hu, Qingyong and Yang, Bo and Xie, Linhai and Rosa, Stefano and Guo, Yulan and Wang, Zhihua and Trigoni, Niki and Markham, Andrew},
+      journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
+      year={2020}
+    }
+
+
+### Acknowledgment
+- Part of our code refers to the nanoflann library and the recent work KPConv.
+- We use Blender to make the video demo.
+
+
+### License
+Licensed under the CC BY-NC-SA 4.0 license, see [LICENSE](./LICENSE).
+
+
+### Updates
+* 21/03/2020: Updated all experimental results
+* 21/03/2020: Added pretrained models and results
+* 02/03/2020: Code available!
+* 15/11/2019: Initial release!
diff --git a/competing_methods/my_RandLANet/RandLANet.py b/competing_methods/my_RandLANet/RandLANet.py
new file mode 100644
index 00000000..a4091359
--- /dev/null
+++ b/competing_methods/my_RandLANet/RandLANet.py
@@ -0,0 +1,361 @@
+from os.path import exists, join
+from os import makedirs
+from sklearn.metrics import confusion_matrix
+from helper_tool import DataProcessing as DP
+import tensorflow as tf
+import numpy as np
+import helper_tf_util
+import time
+
+
+def log_out(out_str, f_out):
+    f_out.write(out_str + '\n')
+    f_out.flush()
+    print(out_str)
+
+
+class Network:
+    def __init__(self, dataset, config):
+        flat_inputs = dataset.flat_inputs
+        self.config = config
+        # Path of the result folder
+        if self.config.saving:
+            if self.config.saving_path is None:
+                self.saving_path = time.strftime('results/Log_%Y-%m-%d_%H-%M-%S', time.gmtime())
+            else:
+                self.saving_path = self.config.saving_path
+            makedirs(self.saving_path) if not exists(self.saving_path) else None
+
+        with tf.variable_scope('inputs'):
+            self.inputs = dict()
+            num_layers = self.config.num_layers
+            self.inputs['xyz'] = flat_inputs[:num_layers]
+ 
self.inputs['neigh_idx'] = flat_inputs[num_layers: 2 * num_layers] + self.inputs['sub_idx'] = flat_inputs[2 * num_layers:3 * num_layers] + self.inputs['interp_idx'] = flat_inputs[3 * num_layers:4 * num_layers] + self.inputs['features'] = flat_inputs[4 * num_layers] + self.inputs['labels'] = flat_inputs[4 * num_layers + 1] + self.inputs['input_inds'] = flat_inputs[4 * num_layers + 2] + self.inputs['cloud_inds'] = flat_inputs[4 * num_layers + 3] + + self.labels = self.inputs['labels'] + self.is_training = tf.placeholder(tf.bool, shape=()) + self.training_step = 1 + self.training_epoch = 0 + self.correct_prediction = 0 + self.accuracy = 0 + self.mIou_list = [0] + self.class_weights = DP.get_class_weights(dataset.name) + self.Log_file = open('log_train_' + dataset.name + str(dataset.val_split) + '.txt', 'a') + + with tf.variable_scope('layers'): + self.logits = self.inference(self.inputs, self.is_training) + + ##################################################################### + # Ignore the invalid point (unlabeled) when calculating the loss # + ##################################################################### + with tf.variable_scope('loss'): + self.logits = tf.reshape(self.logits, [-1, config.num_classes]) + self.labels = tf.reshape(self.labels, [-1]) + + # Boolean mask of points that should be ignored + ignored_bool = tf.zeros_like(self.labels, dtype=tf.bool) + for ign_label in self.config.ignored_label_inds: + ignored_bool = tf.logical_or(ignored_bool, tf.equal(self.labels, ign_label)) + + # Collect logits and labels that are not ignored + valid_idx = tf.squeeze(tf.where(tf.logical_not(ignored_bool))) + valid_logits = tf.gather(self.logits, valid_idx, axis=0) + valid_labels_init = tf.gather(self.labels, valid_idx, axis=0) + + # Reduce label values in the range of logit shape + reducing_list = tf.range(self.config.num_classes, dtype=tf.int32) + inserted_value = tf.zeros((1,), dtype=tf.int32) + for ign_label in self.config.ignored_label_inds: + reducing_list = 
tf.concat([reducing_list[:ign_label], inserted_value, reducing_list[ign_label:]], 0) + valid_labels = tf.gather(reducing_list, valid_labels_init) + + self.loss = self.get_loss(valid_logits, valid_labels, self.class_weights) + + with tf.variable_scope('optimizer'): + self.learning_rate = tf.Variable(config.learning_rate, trainable=False, name='learning_rate') + self.train_op = tf.train.AdamOptimizer(self.learning_rate).minimize(self.loss) + self.extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) + + with tf.variable_scope('results'): + self.correct_prediction = tf.nn.in_top_k(valid_logits, valid_labels, 1) + self.accuracy = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32)) + self.prob_logits = tf.nn.softmax(self.logits) + + tf.summary.scalar('learning_rate', self.learning_rate) + tf.summary.scalar('loss', self.loss) + tf.summary.scalar('accuracy', self.accuracy) + + my_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) + self.saver = tf.train.Saver(my_vars, max_to_keep=100) + c_proto = tf.ConfigProto() + c_proto.gpu_options.allow_growth = True + self.sess = tf.Session(config=c_proto) + self.merged = tf.summary.merge_all() + self.train_writer = tf.summary.FileWriter(config.train_sum_dir, self.sess.graph) + self.sess.run(tf.global_variables_initializer()) + + def inference(self, inputs, is_training): + + d_out = self.config.d_out + feature = inputs['features'] + feature = tf.layers.dense(feature, 8, activation=None, name='fc0') + feature = tf.nn.leaky_relu(tf.layers.batch_normalization(feature, -1, 0.99, 1e-6, training=is_training)) + feature = tf.expand_dims(feature, axis=2) + + # ###########################Encoder############################ + f_encoder_list = [] + for i in range(self.config.num_layers): + f_encoder_i = self.dilated_res_block(feature, inputs['xyz'][i], inputs['neigh_idx'][i], d_out[i], + 'Encoder_layer_' + str(i), is_training) + f_sampled_i = self.random_sample(f_encoder_i, inputs['sub_idx'][i]) + feature = f_sampled_i + 
if i == 0: + f_encoder_list.append(f_encoder_i) + f_encoder_list.append(f_sampled_i) + # ###########################Encoder############################ + + feature = helper_tf_util.conv2d(f_encoder_list[-1], f_encoder_list[-1].get_shape()[3].value, [1, 1], + 'decoder_0', + [1, 1], 'VALID', True, is_training) + + # ###########################Decoder############################ + f_decoder_list = [] + for j in range(self.config.num_layers): + f_interp_i = self.nearest_interpolation(feature, inputs['interp_idx'][-j - 1]) + f_decoder_i = helper_tf_util.conv2d_transpose(tf.concat([f_encoder_list[-j - 2], f_interp_i], axis=3), + f_encoder_list[-j - 2].get_shape()[-1].value, [1, 1], + 'Decoder_layer_' + str(j), [1, 1], 'VALID', bn=True, + is_training=is_training) + feature = f_decoder_i + f_decoder_list.append(f_decoder_i) + # ###########################Decoder############################ + + f_layer_fc1 = helper_tf_util.conv2d(f_decoder_list[-1], 64, [1, 1], 'fc1', [1, 1], 'VALID', True, is_training) + f_layer_fc2 = helper_tf_util.conv2d(f_layer_fc1, 32, [1, 1], 'fc2', [1, 1], 'VALID', True, is_training) + f_layer_drop = helper_tf_util.dropout(f_layer_fc2, keep_prob=0.5, is_training=is_training, scope='dp1') + f_layer_fc3 = helper_tf_util.conv2d(f_layer_drop, self.config.num_classes, [1, 1], 'fc', [1, 1], 'VALID', False, + is_training, activation_fn=None) + f_out = tf.squeeze(f_layer_fc3, [2]) + return f_out + + def train(self, dataset): + log_out('****EPOCH {}****'.format(self.training_epoch), self.Log_file) + self.sess.run(dataset.train_init_op) + while self.training_epoch < self.config.max_epoch: + t_start = time.time() + try: + ops = [self.train_op, + self.extra_update_ops, + self.merged, + self.loss, + self.logits, + self.labels, + self.accuracy] + _, _, summary, l_out, probs, labels, acc = self.sess.run(ops, {self.is_training: True}) + self.train_writer.add_summary(summary, self.training_step) + t_end = time.time() + if self.training_step % 50 == 0: + message = 
'Step {:08d} L_out={:5.3f} Acc={:4.2f} ''---{:8.2f} ms/batch' + log_out(message.format(self.training_step, l_out, acc, 1000 * (t_end - t_start)), self.Log_file) + self.training_step += 1 + + except tf.errors.OutOfRangeError: + + m_iou = self.evaluate(dataset) + if m_iou > np.max(self.mIou_list): + # Save the best model + snapshot_directory = join(self.saving_path, 'snapshots') + makedirs(snapshot_directory) if not exists(snapshot_directory) else None + self.saver.save(self.sess, snapshot_directory + '/snap', global_step=self.training_step) + self.mIou_list.append(m_iou) + log_out('Best m_IoU is: {:5.3f}'.format(max(self.mIou_list)), self.Log_file) + + self.training_epoch += 1 + self.sess.run(dataset.train_init_op) + # Update learning rate + op = self.learning_rate.assign(tf.multiply(self.learning_rate, + self.config.lr_decays[self.training_epoch])) + self.sess.run(op) + log_out('****EPOCH {}****'.format(self.training_epoch), self.Log_file) + + except tf.errors.InvalidArgumentError as e: + + print('Caught a NaN error :') + print(e.error_code) + print(e.message) + print(e.op) + print(e.op.name) + print([t.name for t in e.op.inputs]) + print([t.name for t in e.op.outputs]) + + a = 1 / 0 + + print('finished') + self.sess.close() + + def evaluate(self, dataset): + + # Initialise iterator with validation data + self.sess.run(dataset.val_init_op) + + gt_classes = [0 for _ in range(self.config.num_classes)] + positive_classes = [0 for _ in range(self.config.num_classes)] + true_positive_classes = [0 for _ in range(self.config.num_classes)] + val_total_correct = 0 + val_total_seen = 0 + + for step_id in range(self.config.val_steps): + if step_id % 50 == 0: + print(str(step_id) + ' / ' + str(self.config.val_steps)) + try: + ops = (self.prob_logits, self.labels, self.accuracy) + stacked_prob, labels, acc = self.sess.run(ops, {self.is_training: False}) + pred = np.argmax(stacked_prob, 1) + if not self.config.ignored_label_inds: + pred_valid = pred + labels_valid = labels + 
else: + invalid_idx = np.where(labels == self.config.ignored_label_inds)[0] + labels_valid = np.delete(labels, invalid_idx) + labels_valid = labels_valid - 1 + pred_valid = np.delete(pred, invalid_idx) + + correct = np.sum(pred_valid == labels_valid) + val_total_correct += correct + val_total_seen += len(labels_valid) + + conf_matrix = confusion_matrix(labels_valid, pred_valid, np.arange(0, self.config.num_classes, 1)) + gt_classes += np.sum(conf_matrix, axis=1) + positive_classes += np.sum(conf_matrix, axis=0) + true_positive_classes += np.diagonal(conf_matrix) + + except tf.errors.OutOfRangeError: + break + + iou_list = [] + for n in range(0, self.config.num_classes, 1): + denom = float(gt_classes[n] + positive_classes[n] - true_positive_classes[n]) + if denom != 0: + iou = true_positive_classes[n] / denom + else: + iou = 0.0 + iou_list.append(iou) + mean_iou = sum(iou_list) / float(self.config.num_classes) + + log_out('eval accuracy: {}'.format(val_total_correct / float(val_total_seen)), self.Log_file) + log_out('mean IOU:{}'.format(mean_iou), self.Log_file) + + mean_iou = 100 * mean_iou + log_out('Mean IoU = {:.1f}%'.format(mean_iou), self.Log_file) + s = '{:5.2f} | '.format(mean_iou) + for IoU in iou_list: + s += '{:5.2f} '.format(100 * IoU) + log_out('-' * len(s), self.Log_file) + log_out(s, self.Log_file) + log_out('-' * len(s) + '\n', self.Log_file) + return mean_iou + + def get_loss(self, logits, labels, pre_cal_weights): + # calculate the weighted cross entropy according to the inverse frequency + class_weights = tf.convert_to_tensor(pre_cal_weights, dtype=tf.float32) + one_hot_labels = tf.one_hot(labels, depth=self.config.num_classes) + weights = tf.reduce_sum(class_weights * one_hot_labels, axis=1) + unweighted_losses = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_labels) + weighted_losses = unweighted_losses * weights + output_loss = tf.reduce_mean(weighted_losses) + return output_loss + + def dilated_res_block(self, feature, 
xyz, neigh_idx, d_out, name, is_training): + f_pc = helper_tf_util.conv2d(feature, d_out // 2, [1, 1], name + 'mlp1', [1, 1], 'VALID', True, is_training) + f_pc = self.building_block(xyz, f_pc, neigh_idx, d_out, name + 'LFA', is_training) + f_pc = helper_tf_util.conv2d(f_pc, d_out * 2, [1, 1], name + 'mlp2', [1, 1], 'VALID', True, is_training, + activation_fn=None) + shortcut = helper_tf_util.conv2d(feature, d_out * 2, [1, 1], name + 'shortcut', [1, 1], 'VALID', + activation_fn=None, bn=True, is_training=is_training) + return tf.nn.leaky_relu(f_pc + shortcut) + + def building_block(self, xyz, feature, neigh_idx, d_out, name, is_training): + d_in = feature.get_shape()[-1].value + f_xyz = self.relative_pos_encoding(xyz, neigh_idx) + f_xyz = helper_tf_util.conv2d(f_xyz, d_in, [1, 1], name + 'mlp1', [1, 1], 'VALID', True, is_training) + f_neighbours = self.gather_neighbour(tf.squeeze(feature, axis=2), neigh_idx) + f_concat = tf.concat([f_neighbours, f_xyz], axis=-1) + f_pc_agg = self.att_pooling(f_concat, d_out // 2, name + 'att_pooling_1', is_training) + + f_xyz = helper_tf_util.conv2d(f_xyz, d_out // 2, [1, 1], name + 'mlp2', [1, 1], 'VALID', True, is_training) + f_neighbours = self.gather_neighbour(tf.squeeze(f_pc_agg, axis=2), neigh_idx) + f_concat = tf.concat([f_neighbours, f_xyz], axis=-1) + f_pc_agg = self.att_pooling(f_concat, d_out, name + 'att_pooling_2', is_training) + return f_pc_agg + + def relative_pos_encoding(self, xyz, neigh_idx): + neighbor_xyz = self.gather_neighbour(xyz, neigh_idx) + xyz_tile = tf.tile(tf.expand_dims(xyz, axis=2), [1, 1, tf.shape(neigh_idx)[-1], 1]) + relative_xyz = xyz_tile - neighbor_xyz + relative_dis = tf.sqrt(tf.reduce_sum(tf.square(relative_xyz), axis=-1, keepdims=True)) + relative_feature = tf.concat([relative_dis, relative_xyz, xyz_tile, neighbor_xyz], axis=-1) + return relative_feature + + @staticmethod + def random_sample(feature, pool_idx): + """ + :param feature: [B, N, d] input features matrix + :param pool_idx: [B, N', 
max_num] N' < N, N' is the selected position after pooling + :return: pool_features = [B, N', d] pooled features matrix + """ + feature = tf.squeeze(feature, axis=2) + num_neigh = tf.shape(pool_idx)[-1] + d = feature.get_shape()[-1] + batch_size = tf.shape(pool_idx)[0] + pool_idx = tf.reshape(pool_idx, [batch_size, -1]) + pool_features = tf.batch_gather(feature, pool_idx) + pool_features = tf.reshape(pool_features, [batch_size, -1, num_neigh, d]) + pool_features = tf.reduce_max(pool_features, axis=2, keepdims=True) + return pool_features + + @staticmethod + def nearest_interpolation(feature, interp_idx): + """ + :param feature: [B, N, d] input features matrix + :param interp_idx: [B, up_num_points, 1] nearest neighbour index + :return: [B, up_num_points, d] interpolated features matrix + """ + feature = tf.squeeze(feature, axis=2) + batch_size = tf.shape(interp_idx)[0] + up_num_points = tf.shape(interp_idx)[1] + interp_idx = tf.reshape(interp_idx, [batch_size, up_num_points]) + interpolated_features = tf.batch_gather(feature, interp_idx) + interpolated_features = tf.expand_dims(interpolated_features, axis=2) + return interpolated_features + + @staticmethod + def gather_neighbour(pc, neighbor_idx): + # gather the coordinates or features of neighboring points + batch_size = tf.shape(pc)[0] + num_points = tf.shape(pc)[1] + d = pc.get_shape()[2].value + index_input = tf.reshape(neighbor_idx, shape=[batch_size, -1]) + features = tf.batch_gather(pc, index_input) + features = tf.reshape(features, [batch_size, num_points, tf.shape(neighbor_idx)[-1], d]) + return features + + @staticmethod + def att_pooling(feature_set, d_out, name, is_training): + batch_size = tf.shape(feature_set)[0] + num_points = tf.shape(feature_set)[1] + num_neigh = tf.shape(feature_set)[2] + d = feature_set.get_shape()[3].value + f_reshaped = tf.reshape(feature_set, shape=[-1, num_neigh, d]) + att_activation = tf.layers.dense(f_reshaped, d, activation=None, use_bias=False, name=name + 'fc') + 
att_scores = tf.nn.softmax(att_activation, axis=1) + f_agg = f_reshaped * att_scores + f_agg = tf.reduce_sum(f_agg, axis=1) + f_agg = tf.reshape(f_agg, [batch_size, num_points, 1, d]) + f_agg = helper_tf_util.conv2d(f_agg, d_out, [1, 1], name + 'mlp', [1, 1], 'VALID', True, is_training) + return f_agg diff --git a/competing_methods/my_RandLANet/compile_op.sh b/competing_methods/my_RandLANet/compile_op.sh new file mode 100644 index 00000000..bd6a3b27 --- /dev/null +++ b/competing_methods/my_RandLANet/compile_op.sh @@ -0,0 +1,7 @@ +cd utils/nearest_neighbors +python setup.py install --home="." +cd ../../ + +cd utils/cpp_wrappers +sh compile_wrappers.sh +cd ../../../ \ No newline at end of file diff --git a/competing_methods/my_RandLANet/helper_ply.py b/competing_methods/my_RandLANet/helper_ply.py new file mode 100644 index 00000000..4a2aeabd --- /dev/null +++ b/competing_methods/my_RandLANet/helper_ply.py @@ -0,0 +1,356 @@ +# +# +# 0===============================0 +# | PLY files reader/writer | +# 0===============================0 +# +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# function to read/write .ply files +# +# ---------------------------------------------------------------------------------------------------------------------- +# +# Hugues THOMAS - 10/02/2017 +# + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Imports and global variables +# \**********************************/ +# + + +# Basic libs +import numpy as np +import sys + + +# Define PLY types +ply_dtypes = dict([ + (b'int8', 'i1'), + (b'char', 'i1'), + (b'uint8', 'u1'), + (b'uchar', 'u1'), + (b'int16', 'i2'), + (b'short', 'i2'), + (b'uint16', 'u2'), + (b'ushort', 'u2'), + (b'int32', 'i4'), + (b'int', 'i4'), + (b'uint32', 'u4'), + (b'uint', 'u4'), + (b'float32', 'f4'), + (b'float', 'f4'), + (b'float64', 'f8'), + (b'double', 'f8') +]) 
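The `ply_dtypes` table above maps PLY header type tokens (read as bytes) to NumPy type codes, which the reader later prefixes with an endianness marker. A minimal sketch of that lookup; the `property_dtype` helper and the reduced dict here are illustrative, not part of the module:

```python
import numpy as np

# Subset of the ply_dtypes table above: PLY header token (bytes) -> NumPy type code
ply_dtypes = {b'float': 'f4', b'double': 'f8', b'uchar': 'u1', b'int': 'i4'}

def property_dtype(token, fmt_prefix='<'):
    """Build the dtype for one property: endianness prefix + type code."""
    return np.dtype(fmt_prefix + ply_dtypes[token])

# 'property float x' in a binary_little_endian file is a little-endian float32
assert property_dtype(b'float') == np.dtype('<f4')
assert property_dtype(b'double', fmt_prefix='>').itemsize == 8
```

The same prefixing is what `valid_formats` provides to `parse_header` below ('' for ascii, '>' big endian, '<' little endian).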
+ +# Numpy reader format +valid_formats = {'ascii': '', 'binary_big_endian': '>', + 'binary_little_endian': '<'} + + +# ---------------------------------------------------------------------------------------------------------------------- +# +# Functions +# \***************/ +# + + +def parse_header(plyfile, ext): + # Variables + line = [] + properties = [] + num_points = None + + while b'end_header' not in line and line != b'': + line = plyfile.readline() + + if b'element' in line: + line = line.split() + num_points = int(line[2]) + + elif b'property' in line: + line = line.split() + properties.append((line[2].decode(), ext + ply_dtypes[line[1]])) + + return num_points, properties + + +def parse_mesh_header(plyfile, ext): + # Variables + line = [] + vertex_properties = [] + num_points = None + num_faces = None + current_element = None + + + while b'end_header' not in line and line != b'': + line = plyfile.readline() + + # Find point element + if b'element vertex' in line: + current_element = 'vertex' + line = line.split() + num_points = int(line[2]) + + elif b'element face' in line: + current_element = 'face' + line = line.split() + num_faces = int(line[2]) + + elif b'property' in line: + if current_element == 'vertex': + line = line.split() + vertex_properties.append((line[2].decode(), ext + ply_dtypes[line[1]])) + elif current_element == 'face': + if not line.startswith(b'property list uchar int'): + raise ValueError('Unsupported faces property : ' + line.decode()) + + return num_points, num_faces, vertex_properties + + +def read_ply(filename, triangular_mesh=False): + """ + Read ".ply" files + + Parameters + ---------- + filename : string + the name of the file to read.
+ + Returns + ------- + result : array + data stored in the file + + Examples + -------- + Store data in file + + >>> points = np.random.rand(5, 3) + >>> values = np.random.randint(2, size=5) + >>> write_ply('example.ply', [points, values], ['x', 'y', 'z', 'values']) + + Read the file + + >>> data = read_ply('example.ply') + >>> values = data['values'] + array([0, 0, 1, 1, 0]) + + >>> points = np.vstack((data['x'], data['y'], data['z'])).T + array([[ 0.466 0.595 0.324] + [ 0.538 0.407 0.654] + [ 0.850 0.018 0.988] + [ 0.395 0.394 0.363] + [ 0.873 0.996 0.092]]) + + """ + + with open(filename, 'rb') as plyfile: + + + # Check if the file starts with ply + if b'ply' not in plyfile.readline(): + raise ValueError('The file does not start with the word ply') + + # get binary_little/big or ascii + fmt = plyfile.readline().split()[1].decode() + if fmt == "ascii": + raise ValueError('The file is not binary') + + # get extension for building the numpy dtypes + ext = valid_formats[fmt] + + # PointCloud reader vs mesh reader + if triangular_mesh: + + # Parse header + num_points, num_faces, properties = parse_mesh_header(plyfile, ext) + + # Get point data + vertex_data = np.fromfile(plyfile, dtype=properties, count=num_points) + + # Get face data + face_properties = [('k', ext + 'u1'), + ('v1', ext + 'i4'), + ('v2', ext + 'i4'), + ('v3', ext + 'i4')] + faces_data = np.fromfile(plyfile, dtype=face_properties, count=num_faces) + + # Return vertex data and concatenated faces + faces = np.vstack((faces_data['v1'], faces_data['v2'], faces_data['v3'])).T + data = [vertex_data, faces] + + else: + + # Parse header + num_points, properties = parse_header(plyfile, ext) + + # Get data + data = np.fromfile(plyfile, dtype=properties, count=num_points) + + return data + + +def header_properties(field_list, field_names): + + # List of lines to write + lines = [] + + # First line describing element vertex + lines.append('element vertex %d' % field_list[0].shape[0]) + + # Properties lines + i
= 0 + for fields in field_list: + for field in fields.T: + lines.append('property %s %s' % (field.dtype.name, field_names[i])) + i += 1 + + return lines + + +def write_ply(filename, field_list, field_names, triangular_faces=None): + """ + Write ".ply" files + + Parameters + ---------- + filename : string + the name of the file to which the data is saved. A '.ply' extension will be appended to the + file name if it does not already have one. + + field_list : list, tuple, numpy array + the fields to be saved in the ply file. Either a numpy array, a list of numpy arrays or a + tuple of numpy arrays. Each 1D numpy array and each column of 2D numpy arrays are considered + as one field. + + field_names : list + the name of each field as a list of strings. Has to be the same length as the number of + fields. + + Examples + -------- + >>> points = np.random.rand(10, 3) + >>> write_ply('example1.ply', points, ['x', 'y', 'z']) + + >>> values = np.random.randint(2, size=10) + >>> write_ply('example2.ply', [points, values], ['x', 'y', 'z', 'values']) + + >>> colors = np.random.randint(255, size=(10,3), dtype=np.uint8) + >>> field_names = ['x', 'y', 'z', 'red', 'green', 'blue', 'values'] + >>> write_ply('example3.ply', [points, colors, values], field_names) + + """ + + # Format list input to the right form + field_list = list(field_list) if (type(field_list) == list or type(field_list) == tuple) else list((field_list,)) + for i, field in enumerate(field_list): + if field.ndim < 2: + field_list[i] = field.reshape(-1, 1) + if field.ndim > 2: + print('fields have more than 2 dimensions') + return False + + # check all fields have the same number of data + n_points = [field.shape[0] for field in field_list] + if not np.all(np.equal(n_points, n_points[0])): + print('wrong field dimensions') + return False + + # Check if field_names and field_list have same nb of column + n_fields = np.sum([field.shape[1] for field in field_list]) + if (n_fields != len(field_names)): + print('wrong
number of field names') + return False + + # Add extension if not there + if not filename.endswith('.ply'): + filename += '.ply' + + # open in text mode to write the header + with open(filename, 'w') as plyfile: + + # First magical word + header = ['ply'] + + # Encoding format + header.append('format binary_' + sys.byteorder + '_endian 1.0') + + # Points properties description + header.extend(header_properties(field_list, field_names)) + + # Add faces if needed + if triangular_faces is not None: + header.append('element face {:d}'.format(triangular_faces.shape[0])) + header.append('property list uchar int vertex_indices') + + # End of header + header.append('end_header') + + # Write all lines + for line in header: + plyfile.write("%s\n" % line) + + # open in binary/append to use tofile + with open(filename, 'ab') as plyfile: + + # Create a structured array + i = 0 + type_list = [] + for fields in field_list: + for field in fields.T: + type_list += [(field_names[i], field.dtype.str)] + i += 1 + data = np.empty(field_list[0].shape[0], dtype=type_list) + i = 0 + for fields in field_list: + for field in fields.T: + data[field_names[i]] = field + i += 1 + + data.tofile(plyfile) + + if triangular_faces is not None: + triangular_faces = triangular_faces.astype(np.int32) + type_list = [('k', 'uint8')] + [(str(ind), 'int32') for ind in range(3)] + data = np.empty(triangular_faces.shape[0], dtype=type_list) + data['k'] = np.full((triangular_faces.shape[0],), 3, dtype=np.uint8) + data['0'] = triangular_faces[:, 0] + data['1'] = triangular_faces[:, 1] + data['2'] = triangular_faces[:, 2] + data.tofile(plyfile) + + return True + + +def describe_element(name, df): + """ Takes the columns of the dataframe and builds a ply-like description + + Parameters + ---------- + name: str + df: pandas DataFrame + + Returns + ------- + element: list[str] + """ + property_formats = {'f': 'float', 'u': 'uchar', 'i': 'int'} + element = ['element ' + name + ' ' + str(len(df))] + + if name ==
'face': + element.append("property list uchar int points_indices") + + else: + for i in range(len(df.columns)): + # get first letter of dtype to infer format + f = property_formats[str(df.dtypes[i])[0]] + element.append('property ' + f + ' ' + df.columns.values[i]) + + return element + diff --git a/competing_methods/my_RandLANet/helper_requirements.txt b/competing_methods/my_RandLANet/helper_requirements.txt new file mode 100644 index 00000000..c6ef57ea --- /dev/null +++ b/competing_methods/my_RandLANet/helper_requirements.txt @@ -0,0 +1,8 @@ +numpy==1.16.1 +h5py==2.10.0 +cython==0.29.15 +open3d-python==0.3.0 +pandas==0.25.3 +scikit-learn==0.21.3 +scipy==1.4.1 +PyYAML==5.1.2 diff --git a/competing_methods/my_RandLANet/helper_tf_util.py b/competing_methods/my_RandLANet/helper_tf_util.py new file mode 100644 index 00000000..1c48ea3b --- /dev/null +++ b/competing_methods/my_RandLANet/helper_tf_util.py @@ -0,0 +1,574 @@ +""" Wrapper functions for TensorFlow layers. + +Author: Charles R. Qi +Date: November 2016 +""" + +import numpy as np +import tensorflow as tf + + +def _variable_on_cpu(name, shape, initializer, use_fp16=False): + """Helper to create a Variable stored on CPU memory. + Args: + name: name of the variable + shape: list of ints + initializer: initializer for Variable + Returns: + Variable Tensor + """ + with tf.device('/cpu:0'): + dtype = tf.float16 if use_fp16 else tf.float32 + var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype) + return var + + +def _variable_with_weight_decay(name, shape, stddev, wd, use_xavier=True): + """Helper to create an initialized Variable with weight decay. + + Note that the Variable is initialized with a truncated normal distribution. + A weight decay is added only if one is specified. + + Args: + name: name of the variable + shape: list of ints + stddev: standard deviation of a truncated Gaussian + wd: add L2Loss weight decay multiplied by this float. If None, weight + decay is not added for this Variable. 
+ use_xavier: bool, whether to use xavier initializer + + Returns: + Variable Tensor + """ + if use_xavier: + initializer = tf.contrib.layers.xavier_initializer() + var = _variable_on_cpu(name, shape, initializer) + else: + # initializer = tf.truncated_normal_initializer(stddev=stddev) + with tf.device('/cpu:0'): + var = tf.truncated_normal(shape, stddev=np.sqrt(2 / shape[-1])) + var = tf.round(var * tf.constant(1000, dtype=tf.float32)) / tf.constant(1000, dtype=tf.float32) + var = tf.Variable(var, name='weights') + if wd is not None: + weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss') + tf.add_to_collection('losses', weight_decay) + return var + + +def conv1d(inputs, + num_output_channels, + kernel_size, + scope, + stride=1, + padding='SAME', + use_xavier=True, + stddev=1e-3, + weight_decay=0.0, + activation_fn=tf.nn.relu, + bn=False, + bn_decay=None, + is_training=None): + """ 1D convolution with non-linear operation. + + Args: + inputs: 3-D tensor variable BxLxC + num_output_channels: int + kernel_size: int + scope: string + stride: int + padding: 'SAME' or 'VALID' + use_xavier: bool, use xavier_initializer if true + stddev: float, stddev for truncated_normal init + weight_decay: float + activation_fn: function + bn: bool, whether to use batch norm + bn_decay: float or float tensor variable in [0,1] + is_training: bool Tensor variable + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + num_in_channels = inputs.get_shape()[-1].value + kernel_shape = [kernel_size, + num_in_channels, num_output_channels] + kernel = _variable_with_weight_decay('weights', + shape=kernel_shape, + use_xavier=use_xavier, + stddev=stddev, + wd=weight_decay) + outputs = tf.nn.conv1d(inputs, kernel, + stride=stride, + padding=padding) + biases = _variable_on_cpu('biases', [num_output_channels], + tf.constant_initializer(0.0)) + outputs = tf.nn.bias_add(outputs, biases) + + if bn: + outputs = batch_norm_for_conv1d(outputs, is_training, + 
bn_decay=bn_decay, scope='bn') + if activation_fn is not None: + outputs = activation_fn(outputs) + return outputs + + +def conv2d(inputs, + num_output_channels, + kernel_size, + scope, + stride=[1, 1], + padding='SAME', + bn=False, + is_training=None, + use_xavier=False, + stddev=1e-3, + weight_decay=0.0, + activation_fn=tf.nn.relu, + bn_decay=None): + """ 2D convolution with non-linear operation. + + Args: + inputs: 4-D tensor variable BxHxWxC + num_output_channels: int + kernel_size: a list of 2 ints + scope: string + stride: a list of 2 ints + padding: 'SAME' or 'VALID' + use_xavier: bool, use xavier_initializer if true + stddev: float, stddev for truncated_normal init + weight_decay: float + activation_fn: function + bn: bool, whether to use batch norm + bn_decay: float or float tensor variable in [0,1] + is_training: bool Tensor variable + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + kernel_h, kernel_w = kernel_size + num_in_channels = inputs.get_shape()[-1].value + kernel_shape = [kernel_h, kernel_w, + num_in_channels, num_output_channels] + kernel = _variable_with_weight_decay('weights', + shape=kernel_shape, + use_xavier=use_xavier, + stddev=stddev, + wd=weight_decay) + stride_h, stride_w = stride + outputs = tf.nn.conv2d(inputs, kernel, + [1, stride_h, stride_w, 1], + padding=padding) + biases = _variable_on_cpu('biases', [num_output_channels], + tf.constant_initializer(0.0)) + outputs = tf.nn.bias_add(outputs, biases) + + if bn: + outputs = tf.layers.batch_normalization(outputs, momentum=0.99, epsilon=1e-6, training=is_training) + if activation_fn is not None: + outputs = tf.nn.leaky_relu(outputs, alpha=0.2) + return outputs + + +def conv2d_transpose(inputs, + num_output_channels, + kernel_size, + scope, + stride=[1, 1], + padding='SAME', + use_xavier=False, + stddev=1e-3, + weight_decay=0.0, + activation_fn=tf.nn.relu, + bn=False, + bn_decay=None, + is_training=None): + """ 2D convolution transpose with non-linear 
operation. + + Args: + inputs: 4-D tensor variable BxHxWxC + num_output_channels: int + kernel_size: a list of 2 ints + scope: string + stride: a list of 2 ints + padding: 'SAME' or 'VALID' + use_xavier: bool, use xavier_initializer if true + stddev: float, stddev for truncated_normal init + weight_decay: float + activation_fn: function + bn: bool, whether to use batch norm + bn_decay: float or float tensor variable in [0,1] + is_training: bool Tensor variable + + Returns: + Variable tensor + + Note: conv2d(conv2d_transpose(a, num_out, ksize, stride), a.shape[-1], ksize, stride) == a + """ + with tf.variable_scope(scope) as sc: + kernel_h, kernel_w = kernel_size + num_in_channels = inputs.get_shape()[-1].value + kernel_shape = [kernel_h, kernel_w, + num_output_channels, num_in_channels] # reversed compared to conv2d + kernel = _variable_with_weight_decay('weights', + shape=kernel_shape, + use_xavier=use_xavier, + stddev=stddev, + wd=weight_decay) + stride_h, stride_w = stride + + # from slim.convolution2d_transpose + def get_deconv_dim(dim_size, stride_size, kernel_size, padding): + dim_size *= stride_size + + if padding == 'VALID' and dim_size is not None: + dim_size += max(kernel_size - stride_size, 0) + return dim_size + + # calculate output shape + batch_size = tf.shape(inputs)[0] + height = tf.shape(inputs)[1] + width = tf.shape(inputs)[2] + out_height = get_deconv_dim(height, stride_h, kernel_h, padding) + out_width = get_deconv_dim(width, stride_w, kernel_w, padding) + output_shape = tf.stack([batch_size, out_height, out_width, num_output_channels], axis=0) + + outputs = tf.nn.conv2d_transpose(inputs, kernel, output_shape, + [1, stride_h, stride_w, 1], + padding=padding) + biases = _variable_on_cpu('biases', [num_output_channels], + tf.constant_initializer(0.0)) + outputs = tf.nn.bias_add(outputs, biases) + + if bn: + # outputs = batch_norm_for_conv2d(outputs, is_training, + # bn_decay=bn_decay, scope='bn') + outputs = tf.layers.batch_normalization(outputs,
momentum=0.99, epsilon=1e-6, training=is_training) + if activation_fn is not None: + # outputs = activation_fn(outputs) + outputs = tf.nn.leaky_relu(outputs, alpha=0.2) + return outputs + + +def conv3d(inputs, + num_output_channels, + kernel_size, + scope, + stride=[1, 1, 1], + padding='SAME', + use_xavier=True, + stddev=1e-3, + weight_decay=0.0, + activation_fn=tf.nn.relu, + bn=False, + bn_decay=None, + is_training=None): + """ 3D convolution with non-linear operation. + + Args: + inputs: 5-D tensor variable BxDxHxWxC + num_output_channels: int + kernel_size: a list of 3 ints + scope: string + stride: a list of 3 ints + padding: 'SAME' or 'VALID' + use_xavier: bool, use xavier_initializer if true + stddev: float, stddev for truncated_normal init + weight_decay: float + activation_fn: function + bn: bool, whether to use batch norm + bn_decay: float or float tensor variable in [0,1] + is_training: bool Tensor variable + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + kernel_d, kernel_h, kernel_w = kernel_size + num_in_channels = inputs.get_shape()[-1].value + kernel_shape = [kernel_d, kernel_h, kernel_w, + num_in_channels, num_output_channels] + kernel = _variable_with_weight_decay('weights', + shape=kernel_shape, + use_xavier=use_xavier, + stddev=stddev, + wd=weight_decay) + stride_d, stride_h, stride_w = stride + outputs = tf.nn.conv3d(inputs, kernel, + [1, stride_d, stride_h, stride_w, 1], + padding=padding) + biases = _variable_on_cpu('biases', [num_output_channels], + tf.constant_initializer(0.0)) + outputs = tf.nn.bias_add(outputs, biases) + + if bn: + outputs = batch_norm_for_conv3d(outputs, is_training, + bn_decay=bn_decay, scope='bn') + + if activation_fn is not None: + outputs = activation_fn(outputs) + return outputs + + +def fully_connected(inputs, + num_outputs, + scope, + use_xavier=True, + stddev=1e-3, + weight_decay=0.0, + activation_fn=tf.nn.relu, + bn=False, + bn_decay=None, + is_training=None): + """ Fully connected 
layer with non-linear operation. + + Args: + inputs: 2-D tensor BxN + num_outputs: int + + Returns: + Variable tensor of size B x num_outputs. + """ + with tf.variable_scope(scope) as sc: + num_input_units = inputs.get_shape()[-1].value + weights = _variable_with_weight_decay('weights', + shape=[num_input_units, num_outputs], + use_xavier=use_xavier, + stddev=stddev, + wd=weight_decay) + outputs = tf.matmul(inputs, weights) + biases = _variable_on_cpu('biases', [num_outputs], + tf.constant_initializer(0.0)) + outputs = tf.nn.bias_add(outputs, biases) + + if bn: + outputs = batch_norm_for_fc(outputs, is_training, bn_decay, 'bn') + + if activation_fn is not None: + # outputs = activation_fn(outputs) + outputs = tf.nn.leaky_relu(outputs, alpha=0.2) + return outputs + + +def max_pool2d(inputs, + kernel_size, + scope, + stride=[2, 2], + padding='VALID'): + """ 2D max pooling. + + Args: + inputs: 4-D tensor BxHxWxC + kernel_size: a list of 2 ints + stride: a list of 2 ints + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + kernel_h, kernel_w = kernel_size + stride_h, stride_w = stride + outputs = tf.nn.max_pool(inputs, + ksize=[1, kernel_h, kernel_w, 1], + strides=[1, stride_h, stride_w, 1], + padding=padding, + name=sc.name) + return outputs + + +def avg_pool2d(inputs, + kernel_size, + scope, + stride=[2, 2], + padding='VALID'): + """ 2D avg pooling. + + Args: + inputs: 4-D tensor BxHxWxC + kernel_size: a list of 2 ints + stride: a list of 2 ints + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + kernel_h, kernel_w = kernel_size + stride_h, stride_w = stride + outputs = tf.nn.avg_pool(inputs, + ksize=[1, kernel_h, kernel_w, 1], + strides=[1, stride_h, stride_w, 1], + padding=padding, + name=sc.name) + return outputs + + +def max_pool3d(inputs, + kernel_size, + scope, + stride=[2, 2, 2], + padding='VALID'): + """ 3D max pooling. 
+ + Args: + inputs: 5-D tensor BxDxHxWxC + kernel_size: a list of 3 ints + stride: a list of 3 ints + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + kernel_d, kernel_h, kernel_w = kernel_size + stride_d, stride_h, stride_w = stride + outputs = tf.nn.max_pool3d(inputs, + ksize=[1, kernel_d, kernel_h, kernel_w, 1], + strides=[1, stride_d, stride_h, stride_w, 1], + padding=padding, + name=sc.name) + return outputs + + +def avg_pool3d(inputs, + kernel_size, + scope, + stride=[2, 2, 2], + padding='VALID'): + """ 3D avg pooling. + + Args: + inputs: 5-D tensor BxDxHxWxC + kernel_size: a list of 3 ints + stride: a list of 3 ints + + Returns: + Variable tensor + """ + with tf.variable_scope(scope) as sc: + kernel_d, kernel_h, kernel_w = kernel_size + stride_d, stride_h, stride_w = stride + outputs = tf.nn.avg_pool3d(inputs, + ksize=[1, kernel_d, kernel_h, kernel_w, 1], + strides=[1, stride_d, stride_h, stride_w, 1], + padding=padding, + name=sc.name) + return outputs + + +def batch_norm_template(inputs, is_training, scope, moments_dims, bn_decay): + """ Batch normalization on convolutional maps and beyond... + Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow + + Args: + inputs: Tensor, k-D input ... 
x C could be BC or BHWC or BDHWC + is_training: boolean tf.Variable, true indicates training phase + scope: string, variable scope + moments_dims: a list of ints, indicating dimensions for moments calculation + bn_decay: float or float tensor variable, controlling moving average weight + Return: + normed: batch-normalized maps + """ + with tf.variable_scope(scope) as sc: + num_channels = inputs.get_shape()[-1].value + beta = tf.Variable(tf.constant(0.0, shape=[num_channels]), + name='beta', trainable=True) + gamma = tf.Variable(tf.constant(1.0, shape=[num_channels]), + name='gamma', trainable=True) + batch_mean, batch_var = tf.nn.moments(inputs, moments_dims, name='moments') + decay = bn_decay if bn_decay is not None else 0.9 + ema = tf.train.ExponentialMovingAverage(decay=decay) + # Operator that maintains moving averages of variables. + ema_apply_op = tf.cond(is_training, + lambda: ema.apply([batch_mean, batch_var]), + lambda: tf.no_op()) + + # Update moving average and return current batch's avg and var. + def mean_var_with_update(): + with tf.control_dependencies([ema_apply_op]): + return tf.identity(batch_mean), tf.identity(batch_var) + + # ema.average returns the Variable holding the average of var. + mean, var = tf.cond(is_training, + mean_var_with_update, + lambda: (ema.average(batch_mean), ema.average(batch_var))) + normed = tf.nn.batch_normalization(inputs, mean, var, beta, gamma, 1e-3) + return normed + + +def batch_norm_for_fc(inputs, is_training, bn_decay, scope): + """ Batch normalization on FC data. + + Args: + inputs: Tensor, 2D BxC input + is_training: boolean tf.Variable, true indicates training phase + bn_decay: float or float tensor variable, controlling moving average weight + scope: string, variable scope + Return: + normed: batch-normalized maps + """ + return batch_norm_template(inputs, is_training, scope, [0, ], bn_decay) + + +def batch_norm_for_conv1d(inputs, is_training, bn_decay, scope): + """ Batch normalization on 1D convolutional maps.
+ + Args: + inputs: Tensor, 3D BLC input maps + is_training: boolean tf.Variable, true indicates training phase + bn_decay: float or float tensor variable, controlling moving average weight + scope: string, variable scope + Return: + normed: batch-normalized maps + """ + return batch_norm_template(inputs, is_training, scope, [0, 1], bn_decay) + + +def batch_norm_for_conv2d(inputs, is_training, bn_decay, scope): + """ Batch normalization on 2D convolutional maps. + + Args: + inputs: Tensor, 4D BHWC input maps + is_training: boolean tf.Variable, true indicates training phase + bn_decay: float or float tensor variable, controlling moving average weight + scope: string, variable scope + Return: + normed: batch-normalized maps + """ + return batch_norm_template(inputs, is_training, scope, [0, 1, 2], bn_decay) + + +def batch_norm_for_conv3d(inputs, is_training, bn_decay, scope): + """ Batch normalization on 3D convolutional maps. + + Args: + inputs: Tensor, 5D BDHWC input maps + is_training: boolean tf.Variable, true indicates training phase + bn_decay: float or float tensor variable, controlling moving average weight + scope: string, variable scope + Return: + normed: batch-normalized maps + """ + return batch_norm_template(inputs, is_training, scope, [0, 1, 2, 3], bn_decay) + + +def dropout(inputs, + is_training, + scope, + keep_prob=0.5, + noise_shape=None): + """ Dropout layer.
+ + Args: + inputs: tensor + is_training: boolean tf.Variable + scope: string + keep_prob: float in [0,1] + noise_shape: list of ints + + Returns: + tensor variable + """ + with tf.variable_scope(scope) as sc: + outputs = tf.cond(is_training, + lambda: tf.nn.dropout(inputs, keep_prob, noise_shape), + lambda: inputs) + return outputs diff --git a/competing_methods/my_RandLANet/helper_tool.py b/competing_methods/my_RandLANet/helper_tool.py new file mode 100644 index 00000000..33509a7a --- /dev/null +++ b/competing_methods/my_RandLANet/helper_tool.py @@ -0,0 +1,367 @@ +import open3d #from open3d import linux as open3d +from os.path import join +import numpy as np +import colorsys, random, os, sys +import pandas as pd + +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' + +BASE_DIR = os.path.dirname(os.path.abspath(__file__)) + +sys.path.append(BASE_DIR) +sys.path.append(os.path.join(BASE_DIR, 'utils')) + +import cpp_wrappers.cpp_subsampling.grid_subsampling as cpp_subsampling +import nearest_neighbors.lib.python.nearest_neighbors as nearest_neighbors + + +class ConfigSemanticKITTI: + k_n = 16 # KNN + num_layers = 4 # Number of layers + num_points = 4096 * 11 # Number of input points + num_classes = 19 # Number of valid classes + sub_grid_size = 0.06 # preprocess_parameter + + batch_size = 6 # batch_size during training + val_batch_size = 20 # batch_size during validation and test + train_steps = 500 # Number of steps per epochs + val_steps = 100 # Number of validation steps per epoch + + sub_sampling_ratio = [4, 4, 4, 4] # sampling ratio of random sampling at each layer + d_out = [16, 64, 128, 256] # feature dimension + num_sub_points = [num_points // 4, num_points // 16, num_points // 64, num_points // 256] + + noise_init = 3.5 # noise initial parameter + max_epoch = 100 # maximum epoch during training + learning_rate = 1e-2 # initial learning rate + lr_decays = {i: 0.95 for i in range(0, 500)} # decay rate of learning rate + + train_sum_dir = 'train_log' + saving = True + 
saving_path = None + + +class ConfigS3DIS: + k_n = 16 # KNN + num_layers = 5 # Number of layers + num_points = 40960 # Number of input points + num_classes = 13 # Number of valid classes + sub_grid_size = 0.04 # preprocess_parameter + + batch_size = 6 # batch_size during training + val_batch_size = 20 # batch_size during validation and test + train_steps = 500 # Number of steps per epochs + val_steps = 100 # Number of validation steps per epoch + + sub_sampling_ratio = [4, 4, 4, 4, 2] # sampling ratio of random sampling at each layer + d_out = [16, 64, 128, 256, 512] # feature dimension + + noise_init = 3.5 # noise initial parameter + max_epoch = 100 # maximum epoch during training + learning_rate = 1e-2 # initial learning rate + lr_decays = {i: 0.95 for i in range(0, 500)} # decay rate of learning rate + + train_sum_dir = 'train_log' + saving = True + saving_path = None + + +class ConfigSemantic3D: + k_n = 16 # KNN + num_layers = 5 # Number of layers + num_points = 65536 # Number of input points + num_classes = 8 # Number of valid classes + sub_grid_size = 0.06 # preprocess_parameter + + batch_size = 4 # batch_size during training + val_batch_size = 16 # batch_size during validation and test + train_steps = 500 # Number of steps per epochs + val_steps = 100 # Number of validation steps per epoch + + sub_sampling_ratio = [4, 4, 4, 4, 2] # sampling ratio of random sampling at each layer + d_out = [16, 64, 128, 256, 512] # feature dimension + + noise_init = 3.5 # noise initial parameter + max_epoch = 100 # maximum epoch during training + learning_rate = 1e-2 # initial learning rate + lr_decays = {i: 0.95 for i in range(0, 500)} # decay rate of learning rate + + train_sum_dir = 'train_log' + saving = True + saving_path = None + + augment_scale_anisotropic = True + augment_symmetries = [True, False, False] + augment_rotation = 'vertical' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_occlusion = 'none' + augment_color = 0.8 + + 
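In the config classes above, `sub_sampling_ratio` fixes how many points survive random sampling at each encoder layer, so the per-layer counts follow by cumulative integer division (this is exactly how `num_sub_points` is derived in `ConfigSemanticKITTI`). A small sketch with the Semantic3D values; `layer_point_counts` is a hypothetical helper for illustration:

```python
def layer_point_counts(num_points, sub_sampling_ratio):
    """Points remaining after each encoder layer's random sampling."""
    counts = []
    n = num_points
    for ratio in sub_sampling_ratio:
        n = n // ratio          # each layer keeps 1/ratio of its input points
        counts.append(n)
    return counts

# ConfigSemantic3D: 65536 input points, ratios [4, 4, 4, 4, 2]
assert layer_point_counts(65536, [4, 4, 4, 4, 2]) == [16384, 4096, 1024, 256, 128]
```

The same computation with `num_points = 4096 * 11` and ratios `[4, 4, 4, 4]` reproduces the `num_sub_points` list hard-coded in the SemanticKITTI config.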
+class ConfigUrbanMesh: + k_n = 16 # 16 # KNN + num_layers = 5 # Number of layers + num_points = 200000 # Number of input points, depends on the GPU memory + num_classes = 6 # Number of valid classes + sub_grid_size = 0.01 # preprocess_parameter + + batch_size = 1 #4 # batch_size during training + val_batch_size = 1 # 16 # batch_size during validation and test + train_steps = 500 # 500 # Number of steps per epochs + val_steps = 100 # 100 # Number of validation steps per epoch + + sub_sampling_ratio = [4, 4, 4, 4, 2] # sampling ratio of random sampling at each layer + d_out = [16, 64, 128, 256, 512] # feature dimension + + noise_init = 3.5 # noise initial parameter + max_epoch = 100 # 100 # maximum epoch during training + learning_rate = 1e-2 # initial learning rate + lr_decays = {i: 0.95 for i in range(0, 500)} # decay rate of learning rate + + train_sum_dir = 'train_log' + saving = True + saving_path = None + + augment_scale_anisotropic = True + augment_symmetries = [True, False, False] + augment_rotation = 'vertical' + augment_scale_min = 0.8 + augment_scale_max = 1.2 + augment_noise = 0.001 + augment_occlusion = 'none' + augment_color = 0.8 + + +class DataProcessing: + @staticmethod + def load_pc_semantic3d(filename): + pc_pd = pd.read_csv(filename, header=None, delim_whitespace=True, dtype=np.float16) + pc = pc_pd.values + return pc + + @staticmethod + def load_label_semantic3d(filename): + label_pd = pd.read_csv(filename, header=None, delim_whitespace=True, dtype=np.uint8) + cloud_labels = label_pd.values + return cloud_labels + + @staticmethod + def load_pc_kitti(pc_path): + scan = np.fromfile(pc_path, dtype=np.float32) + scan = scan.reshape((-1, 4)) + points = scan[:, 0:3] # get xyz + return points + + @staticmethod + def load_label_kitti(label_path, remap_lut): + label = np.fromfile(label_path, dtype=np.uint32) + label = label.reshape((-1)) + sem_label = label & 0xFFFF # semantic label in lower half + inst_label = label >> 16 # instance id in upper half + 
assert ((sem_label + (inst_label << 16) == label).all()) + sem_label = remap_lut[sem_label] + return sem_label.astype(np.int32) + + @staticmethod + def get_file_list(dataset_path, test_scan_num): + seq_list = np.sort(os.listdir(dataset_path)) + + train_file_list = [] + test_file_list = [] + val_file_list = [] + for seq_id in seq_list: + seq_path = join(dataset_path, seq_id) + pc_path = join(seq_path, 'velodyne') + if seq_id == '08': + val_file_list.append([join(pc_path, f) for f in np.sort(os.listdir(pc_path))]) + if seq_id == test_scan_num: + test_file_list.append([join(pc_path, f) for f in np.sort(os.listdir(pc_path))]) + elif int(seq_id) >= 11 and seq_id == test_scan_num: + test_file_list.append([join(pc_path, f) for f in np.sort(os.listdir(pc_path))]) + elif seq_id in ['00', '01', '02', '03', '04', '05', '06', '07', '09', '10']: + train_file_list.append([join(pc_path, f) for f in np.sort(os.listdir(pc_path))]) + + train_file_list = np.concatenate(train_file_list, axis=0) + val_file_list = np.concatenate(val_file_list, axis=0) + test_file_list = np.concatenate(test_file_list, axis=0) + return train_file_list, val_file_list, test_file_list + + @staticmethod + def knn_search(support_pts, query_pts, k): + """ + :param support_pts: points you have, B*N1*3 + :param query_pts: points you want to know the neighbour index, B*N2*3 + :param k: Number of neighbours in knn search + :return: neighbor_idx: neighboring points indexes, B*N2*k + """ + + neighbor_idx = nearest_neighbors.knn_batch(support_pts, query_pts, k, omp=True) + return neighbor_idx.astype(np.int32) + + @staticmethod + def data_aug(xyz, color, labels, idx, num_out): + num_in = len(xyz) + dup = np.random.choice(num_in, num_out - num_in) + xyz_dup = xyz[dup, ...] + xyz_aug = np.concatenate([xyz, xyz_dup], 0) + color_dup = color[dup, ...] 
+ color_aug = np.concatenate([color, color_dup], 0) + idx_dup = list(range(num_in)) + list(dup) + idx_aug = idx[idx_dup] + label_aug = labels[idx_dup] + return xyz_aug, color_aug, idx_aug, label_aug + + @staticmethod + def shuffle_idx(x): + # random shuffle the index + idx = np.arange(len(x)) + np.random.shuffle(idx) + return x[idx] + + @staticmethod + def shuffle_list(data_list): + indices = np.arange(np.shape(data_list)[0]) + np.random.shuffle(indices) + data_list = data_list[indices] + return data_list + + @staticmethod + def grid_sub_sampling(points, features=None, labels=None, grid_size=0.1, verbose=0): + """ + CPP wrapper for a grid sub_sampling (method = barycenter for points and features + :param points: (N, 3) matrix of input points + :param features: optional (N, d) matrix of features (floating number) + :param labels: optional (N,) matrix of integer labels + :param grid_size: parameter defining the size of grid voxels + :param verbose: 1 to display + :return: sub_sampled points, with features and/or labels depending of the input + """ + + if (features is None) and (labels is None): + return cpp_subsampling.compute(points, sampleDl=grid_size, verbose=verbose) + elif labels is None: + return cpp_subsampling.compute(points, features=features, sampleDl=grid_size, verbose=verbose) + elif features is None: + return cpp_subsampling.compute(points, classes=labels, sampleDl=grid_size, verbose=verbose) + else: + return cpp_subsampling.compute(points, features=features, classes=labels, sampleDl=grid_size, + verbose=verbose) + + @staticmethod + def IoU_from_confusions(confusions): + """ + Computes IoU from confusion matrices. + :param confusions: ([..., n_c, n_c] np.int32). Can be any dimension, the confusion matrices should be described by + the last axes. n_c = number of classes + :return: ([..., n_c] np.float32) IoU score + """ + + # Compute TP, FP, FN. 
This assumes that the second to last axis counts the truths (like the first axis of a
+        # confusion matrix), and that the last axis counts the predictions (like the second axis of a confusion matrix)
+        TP = np.diagonal(confusions, axis1=-2, axis2=-1)
+        TP_plus_FN = np.sum(confusions, axis=-1)
+        TP_plus_FP = np.sum(confusions, axis=-2)
+
+        # Compute IoU
+        IoU = TP / (TP_plus_FP + TP_plus_FN - TP + 1e-6)
+
+        # Compute mIoU with only the actual classes
+        mask = TP_plus_FN < 1e-3
+        counts = np.sum(1 - mask, axis=-1, keepdims=True)
+        mIoU = np.sum(IoU, axis=-1, keepdims=True) / (counts + 1e-6)
+
+        # If class is absent, place mIoU in place of 0 IoU to get the actual mean later
+        IoU += mask * mIoU
+        return IoU
+
+    @staticmethod
+    def get_class_weights(dataset_name):
+        # pre-calculated number of points in each category
+        num_per_class = []
+        if dataset_name == 'S3DIS':  # use '==' for string comparison; 'is' checks object identity
+            num_per_class = np.array([3370714, 2856755, 4919229, 318158, 375640, 478001, 974733,
+                                      650464, 791496, 88727, 1284130, 229758, 2272837], dtype=np.int32)
+        elif dataset_name == 'Semantic3D':
+            num_per_class = np.array([5181602, 5012952, 6830086, 1311528, 10476365, 946982, 334860, 269353],
+                                     dtype=np.int32)
+        elif dataset_name == 'SemanticKITTI':
+            num_per_class = np.array([55437630, 320797, 541736, 2578735, 3274484, 552662, 184064, 78858,
+                                      240942562, 17294618, 170599734, 6369672, 230413074, 101130274, 476491114,
+                                      9833174, 129609852, 4506626, 1168181])
+        elif dataset_name == 'UrbanMesh':
+            num_per_class = np.array([47134956, 33425255, 105259026, 15148602, 2378792, 952302], dtype=np.int32)  # np.array([1, 1, 1, 1, 1, 1], dtype=np.int32)
+
+        weight = num_per_class / float(sum(num_per_class))
+        ce_label_weight = 1 / (weight + 0.02)
+        return np.expand_dims(ce_label_weight, axis=0)
+
+
+class Plot:
+    @staticmethod
+    def random_colors(N, bright=True, seed=0):
+        brightness = 1.0 if bright else 0.7
+        hsv = [(0.15 + i / float(N), 1, brightness) for i in range(N)]
+        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c),
hsv)) + random.seed(seed) + random.shuffle(colors) + return colors + + @staticmethod + def draw_pc(pc_xyzrgb): + pc = open3d.PointCloud() + pc.points = open3d.Vector3dVector(pc_xyzrgb[:, 0:3]) + if pc_xyzrgb.shape[1] == 3: + open3d.draw_geometries([pc]) + return 0 + if np.max(pc_xyzrgb[:, 3:6]) > 20: ## 0-255 + pc.colors = open3d.Vector3dVector(pc_xyzrgb[:, 3:6] / 255.) + else: + pc.colors = open3d.Vector3dVector(pc_xyzrgb[:, 3:6]) + open3d.draw_geometries([pc]) + return 0 + + @staticmethod + def draw_pc_sem_ins(pc_xyz, pc_sem_ins, plot_colors=None): + """ + pc_xyz: 3D coordinates of point clouds + pc_sem_ins: semantic or instance labels + plot_colors: custom color list + """ + if plot_colors is not None: + ins_colors = plot_colors + else: + ins_colors = Plot.random_colors(len(np.unique(pc_sem_ins)) + 1, seed=2) + + ############################## + sem_ins_labels = np.unique(pc_sem_ins) + sem_ins_bbox = [] + Y_colors = np.zeros((pc_sem_ins.shape[0], 3)) + for id, semins in enumerate(sem_ins_labels): + valid_ind = np.argwhere(pc_sem_ins == semins)[:, 0] + if semins <= -1: + tp = [0, 0, 0] + else: + if plot_colors is not None: + tp = ins_colors[semins] + else: + tp = ins_colors[id] + + Y_colors[valid_ind] = tp + + ### bbox + valid_xyz = pc_xyz[valid_ind] + + xmin = np.min(valid_xyz[:, 0]); + xmax = np.max(valid_xyz[:, 0]) + ymin = np.min(valid_xyz[:, 1]); + ymax = np.max(valid_xyz[:, 1]) + zmin = np.min(valid_xyz[:, 2]); + zmax = np.max(valid_xyz[:, 2]) + sem_ins_bbox.append( + [[xmin, ymin, zmin], [xmax, ymax, zmax], [min(tp[0], 1.), min(tp[1], 1.), min(tp[2], 1.)]]) + + Y_semins = np.concatenate([pc_xyz[:, 0:3], Y_colors], axis=-1) + Plot.draw_pc(Y_semins) + return Y_semins diff --git a/competing_methods/my_RandLANet/imgs/S3DIS_area2.gif b/competing_methods/my_RandLANet/imgs/S3DIS_area2.gif new file mode 100644 index 00000000..1df595be Binary files /dev/null and b/competing_methods/my_RandLANet/imgs/S3DIS_area2.gif differ diff --git 
a/competing_methods/my_RandLANet/imgs/S3DIS_area3.gif b/competing_methods/my_RandLANet/imgs/S3DIS_area3.gif new file mode 100644 index 00000000..c754f9d3 Binary files /dev/null and b/competing_methods/my_RandLANet/imgs/S3DIS_area3.gif differ diff --git a/competing_methods/my_RandLANet/imgs/Semantic3D-1.gif b/competing_methods/my_RandLANet/imgs/Semantic3D-1.gif new file mode 100644 index 00000000..02f1a95c Binary files /dev/null and b/competing_methods/my_RandLANet/imgs/Semantic3D-1.gif differ diff --git a/competing_methods/my_RandLANet/imgs/Semantic3D-3.gif b/competing_methods/my_RandLANet/imgs/Semantic3D-3.gif new file mode 100644 index 00000000..a93c4b31 Binary files /dev/null and b/competing_methods/my_RandLANet/imgs/Semantic3D-3.gif differ diff --git a/competing_methods/my_RandLANet/imgs/Semantic3D-4.gif b/competing_methods/my_RandLANet/imgs/Semantic3D-4.gif new file mode 100644 index 00000000..628f7185 Binary files /dev/null and b/competing_methods/my_RandLANet/imgs/Semantic3D-4.gif differ diff --git a/competing_methods/my_RandLANet/imgs/SemanticKITTI-2.gif b/competing_methods/my_RandLANet/imgs/SemanticKITTI-2.gif new file mode 100644 index 00000000..006fadd6 Binary files /dev/null and b/competing_methods/my_RandLANet/imgs/SemanticKITTI-2.gif differ diff --git a/competing_methods/my_RandLANet/jobs_6_fold_cv_s3dis.sh b/competing_methods/my_RandLANet/jobs_6_fold_cv_s3dis.sh new file mode 100644 index 00000000..eb14c8cf --- /dev/null +++ b/competing_methods/my_RandLANet/jobs_6_fold_cv_s3dis.sh @@ -0,0 +1,14 @@ +python -B main_S3DIS.py --gpu 0 --mode train --test_area 1 +python -B main_S3DIS.py --gpu 0 --mode test --test_area 1 +python -B main_S3DIS.py --gpu 0 --mode train --test_area 2 +python -B main_S3DIS.py --gpu 0 --mode test --test_area 2 +python -B main_S3DIS.py --gpu 0 --mode train --test_area 3 +python -B main_S3DIS.py --gpu 0 --mode test --test_area 3 +python -B main_S3DIS.py --gpu 0 --mode train --test_area 4 +python -B main_S3DIS.py --gpu 0 --mode test 
--test_area 4 +python -B main_S3DIS.py --gpu 0 --mode train --test_area 5 +python -B main_S3DIS.py --gpu 0 --mode test --test_area 5 +python -B main_S3DIS.py --gpu 0 --mode train --test_area 6 +python -B main_S3DIS.py --gpu 0 --mode test --test_area 6 + + diff --git a/competing_methods/my_RandLANet/jobs_test_semantickitti.sh b/competing_methods/my_RandLANet/jobs_test_semantickitti.sh new file mode 100644 index 00000000..340c66c4 --- /dev/null +++ b/competing_methods/my_RandLANet/jobs_test_semantickitti.sh @@ -0,0 +1,12 @@ +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 11 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 12 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 13 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 14 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 15 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 16 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 17 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 18 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 19 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 20 +python -B main_SemanticKITTI.py --gpu 0 --mode test --test_area 21 + diff --git a/competing_methods/my_RandLANet/main_S3DIS.py b/competing_methods/my_RandLANet/main_S3DIS.py new file mode 100644 index 00000000..7e218342 --- /dev/null +++ b/competing_methods/my_RandLANet/main_S3DIS.py @@ -0,0 +1,269 @@ +from os.path import join +from RandLANet import Network +from tester_S3DIS import ModelTester +from helper_ply import read_ply +from helper_tool import ConfigS3DIS as cfg +from helper_tool import DataProcessing as DP +from helper_tool import Plot +import tensorflow as tf +import numpy as np +import time, pickle, argparse, glob, os + + +class S3DIS: + def __init__(self, test_area_idx): + self.name = 'S3DIS' + self.path = '/data/S3DIS' + self.label_to_names = {0: 'ceiling', + 1: 
'floor', + 2: 'wall', + 3: 'beam', + 4: 'column', + 5: 'window', + 6: 'door', + 7: 'table', + 8: 'chair', + 9: 'sofa', + 10: 'bookcase', + 11: 'board', + 12: 'clutter'} + self.num_classes = len(self.label_to_names) + self.label_values = np.sort([k for k, v in self.label_to_names.items()]) + self.label_to_idx = {l: i for i, l in enumerate(self.label_values)} + self.ignored_labels = np.array([]) + + self.val_split = 'Area_' + str(test_area_idx) + self.all_files = glob.glob(join(self.path, 'original_ply', '*.ply')) + + # Initiate containers + self.val_proj = [] + self.val_labels = [] + self.possibility = {} + self.min_possibility = {} + self.input_trees = {'training': [], 'validation': []} + self.input_colors = {'training': [], 'validation': []} + self.input_labels = {'training': [], 'validation': []} + self.input_names = {'training': [], 'validation': []} + self.load_sub_sampled_clouds(cfg.sub_grid_size) + + def load_sub_sampled_clouds(self, sub_grid_size): + tree_path = join(self.path, 'input_{:.3f}'.format(sub_grid_size)) + for i, file_path in enumerate(self.all_files): + t0 = time.time() + cloud_name = file_path.split('/')[-1][:-4] + if self.val_split in cloud_name: + cloud_split = 'validation' + else: + cloud_split = 'training' + + # Name of the input files + kd_tree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name)) + sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name)) + + data = read_ply(sub_ply_file) + sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T + sub_labels = data['class'] + + # Read pkl with search tree + with open(kd_tree_file, 'rb') as f: + search_tree = pickle.load(f) + + self.input_trees[cloud_split] += [search_tree] + self.input_colors[cloud_split] += [sub_colors] + self.input_labels[cloud_split] += [sub_labels] + self.input_names[cloud_split] += [cloud_name] + + size = sub_colors.shape[0] * 4 * 7 + print('{:s} {:.1f} MB loaded in {:.1f}s'.format(kd_tree_file.split('/')[-1], size * 1e-6, time.time() - t0)) + 
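The per-cloud size printed above comes from `size = sub_colors.shape[0] * 4 * 7`: 4 bytes per value and, presumably, 7 float32 values per point (xyz, rgb, and the class label — that breakdown is an inference, not stated in the code). A quick sanity check of the estimate:

```python
# Rough per-cloud memory estimate, as in load_sub_sampled_clouds:
# 4 bytes per value, 7 values per point (assumed: x, y, z, r, g, b, class).
n_points = 1_000_000
size_bytes = n_points * 4 * 7
print('{:.1f} MB'.format(size_bytes * 1e-6))  # 28.0 MB
```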
+ print('\nPreparing reprojected indices for testing') + + # Get validation and test reprojected indices + for i, file_path in enumerate(self.all_files): + t0 = time.time() + cloud_name = file_path.split('/')[-1][:-4] + + # Validation projection and labels + if self.val_split in cloud_name: + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + with open(proj_file, 'rb') as f: + proj_idx, labels = pickle.load(f) + self.val_proj += [proj_idx] + self.val_labels += [labels] + print('{:s} done in {:.1f}s'.format(cloud_name, time.time() - t0)) + + # Generate the input data flow + def get_batch_gen(self, split): + if split == 'training': + num_per_epoch = cfg.train_steps * cfg.batch_size + elif split == 'validation': + num_per_epoch = cfg.val_steps * cfg.val_batch_size + + self.possibility[split] = [] + self.min_possibility[split] = [] + # Random initialize + for i, tree in enumerate(self.input_colors[split]): + self.possibility[split] += [np.random.rand(tree.data.shape[0]) * 1e-3] + self.min_possibility[split] += [float(np.min(self.possibility[split][-1]))] + + def spatially_regular_gen(): + # Generator loop + for i in range(num_per_epoch): + + # Choose the cloud with the lowest probability + cloud_idx = int(np.argmin(self.min_possibility[split])) + + # choose the point with the minimum of possibility in the cloud as query point + point_ind = np.argmin(self.possibility[split][cloud_idx]) + + # Get all points within the cloud from tree structure + points = np.array(self.input_trees[split][cloud_idx].data, copy=False) + + # Center point of input region + center_point = points[point_ind, :].reshape(1, -1) + + # Add noise to the center point + noise = np.random.normal(scale=cfg.noise_init / 10, size=center_point.shape) + pick_point = center_point + noise.astype(center_point.dtype) + + # Check if the number of points in the selected cloud is less than the predefined num_points + if len(points) < cfg.num_points: + # Query all points within the cloud + queried_idx 
= self.input_trees[split][cloud_idx].query(pick_point, k=len(points))[1][0] + else: + # Query the predefined number of points + queried_idx = self.input_trees[split][cloud_idx].query(pick_point, k=cfg.num_points)[1][0] + + # Shuffle index + queried_idx = DP.shuffle_idx(queried_idx) + # Get corresponding points and colors based on the index + queried_pc_xyz = points[queried_idx] + queried_pc_xyz = queried_pc_xyz - pick_point + queried_pc_colors = self.input_colors[split][cloud_idx][queried_idx] + queried_pc_labels = self.input_labels[split][cloud_idx][queried_idx] + + # Update the possibility of the selected points + dists = np.sum(np.square((points[queried_idx] - pick_point).astype(np.float32)), axis=1) + delta = np.square(1 - dists / np.max(dists)) + self.possibility[split][cloud_idx][queried_idx] += delta + self.min_possibility[split][cloud_idx] = float(np.min(self.possibility[split][cloud_idx])) + + # up_sampled with replacement + if len(points) < cfg.num_points: + queried_pc_xyz, queried_pc_colors, queried_idx, queried_pc_labels = \ + DP.data_aug(queried_pc_xyz, queried_pc_colors, queried_pc_labels, queried_idx, cfg.num_points) + + if True: + yield (queried_pc_xyz.astype(np.float32), + queried_pc_colors.astype(np.float32), + queried_pc_labels, + queried_idx.astype(np.int32), + np.array([cloud_idx], dtype=np.int32)) + + gen_func = spatially_regular_gen + gen_types = (tf.float32, tf.float32, tf.int32, tf.int32, tf.int32) + gen_shapes = ([None, 3], [None, 3], [None], [None], [None]) + return gen_func, gen_types, gen_shapes + + @staticmethod + def get_tf_mapping2(): + # Collect flat inputs + def tf_map(batch_xyz, batch_features, batch_labels, batch_pc_idx, batch_cloud_idx): + batch_features = tf.concat([batch_xyz, batch_features], axis=-1) + input_points = [] + input_neighbors = [] + input_pools = [] + input_up_samples = [] + + for i in range(cfg.num_layers): + neighbour_idx = tf.py_func(DP.knn_search, [batch_xyz, batch_xyz, cfg.k_n], tf.int32) + sub_points = 
batch_xyz[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :] + pool_i = neighbour_idx[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :] + up_i = tf.py_func(DP.knn_search, [sub_points, batch_xyz, 1], tf.int32) + input_points.append(batch_xyz) + input_neighbors.append(neighbour_idx) + input_pools.append(pool_i) + input_up_samples.append(up_i) + batch_xyz = sub_points + + input_list = input_points + input_neighbors + input_pools + input_up_samples + input_list += [batch_features, batch_labels, batch_pc_idx, batch_cloud_idx] + + return input_list + + return tf_map + + def init_input_pipeline(self): + print('Initiating input pipelines') + cfg.ignored_label_inds = [self.label_to_idx[ign_label] for ign_label in self.ignored_labels] + gen_function, gen_types, gen_shapes = self.get_batch_gen('training') + gen_function_val, _, _ = self.get_batch_gen('validation') + self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes) + self.val_data = tf.data.Dataset.from_generator(gen_function_val, gen_types, gen_shapes) + + self.batch_train_data = self.train_data.batch(cfg.batch_size) + self.batch_val_data = self.val_data.batch(cfg.val_batch_size) + map_func = self.get_tf_mapping2() + + self.batch_train_data = self.batch_train_data.map(map_func=map_func) + self.batch_val_data = self.batch_val_data.map(map_func=map_func) + + self.batch_train_data = self.batch_train_data.prefetch(cfg.batch_size) + self.batch_val_data = self.batch_val_data.prefetch(cfg.val_batch_size) + + iter = tf.data.Iterator.from_structure(self.batch_train_data.output_types, self.batch_train_data.output_shapes) + self.flat_inputs = iter.get_next() + self.train_init_op = iter.make_initializer(self.batch_train_data) + self.val_init_op = iter.make_initializer(self.batch_val_data) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument('--gpu', type=int, default=0, help='the number of GPUs to use [default: 0]') + 
parser.add_argument('--test_area', type=int, default=5, help='Which area to use for test, option: 1-6 [default: 5]')
+    parser.add_argument('--mode', type=str, default='train', help='options: train, test, vis')
+    parser.add_argument('--model_path', type=str, default='None', help='pretrained model path')
+    FLAGS = parser.parse_args()
+
+    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+    os.environ['CUDA_VISIBLE_DEVICES'] = str(FLAGS.gpu)
+    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
+    Mode = FLAGS.mode
+
+    test_area = FLAGS.test_area
+    dataset = S3DIS(test_area)
+    dataset.init_input_pipeline()
+
+    if Mode == 'train':
+        model = Network(dataset, cfg)
+        model.train(dataset)
+    elif Mode == 'test':
+        cfg.saving = False
+        model = Network(dataset, cfg)
+        if FLAGS.model_path != 'None':  # '!=' for string comparison, not 'is not'
+            chosen_snap = FLAGS.model_path
+        else:
+            logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')])
+            chosen_folder = logs[-1]
+            snap_path = join(chosen_folder, 'snapshots')
+            snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta']
+            chosen_step = np.sort(snap_steps)[-1]
+            chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step))
+        tester = ModelTester(model, dataset, restore_snap=chosen_snap)
+        tester.test(model, dataset)
+    else:
+        ##################
+        # Visualize data #
+        ##################
+
+        with tf.Session() as sess:
+            sess.run(tf.global_variables_initializer())
+            sess.run(dataset.train_init_op)
+            while True:
+                flat_inputs = sess.run(dataset.flat_inputs)
+                pc_xyz = flat_inputs[0]
+                sub_pc_xyz = flat_inputs[1]
+                labels = flat_inputs[21]
+                Plot.draw_pc_sem_ins(pc_xyz[0, :, :], labels[0, :])
+                Plot.draw_pc_sem_ins(sub_pc_xyz[0, :, :], labels[0, 0:np.shape(sub_pc_xyz)[1]])
diff --git a/competing_methods/my_RandLANet/main_Semantic3D.py b/competing_methods/my_RandLANet/main_Semantic3D.py
new file mode 100644
index 00000000..6eb9c7bd
--- /dev/null
+++ 
b/competing_methods/my_RandLANet/main_Semantic3D.py @@ -0,0 +1,385 @@ +from os.path import join, exists +from RandLANet import Network +from tester_Semantic3D import ModelTester +from helper_ply import read_ply +from helper_tool import Plot +from helper_tool import DataProcessing as DP +from helper_tool import ConfigSemantic3D as cfg +import tensorflow as tf +import numpy as np +import pickle, argparse, os + + +class Semantic3D: + def __init__(self): + self.name = 'Semantic3D' + self.path = '/data/semantic3d' + self.label_to_names = {0: 'unlabeled', + 1: 'man-made terrain', + 2: 'natural terrain', + 3: 'high vegetation', + 4: 'low vegetation', + 5: 'buildings', + 6: 'hard scape', + 7: 'scanning artefacts', + 8: 'cars'} + self.num_classes = len(self.label_to_names) + self.label_values = np.sort([k for k, v in self.label_to_names.items()]) + self.label_to_idx = {l: i for i, l in enumerate(self.label_values)} + self.ignored_labels = np.sort([0]) + + self.original_folder = join(self.path, 'original_data') + self.full_pc_folder = join(self.path, 'original_ply') + self.sub_pc_folder = join(self.path, 'input_{:.3f}'.format(cfg.sub_grid_size)) + + # Following KPConv to do the train-validation split + self.all_splits = [0, 1, 4, 5, 3, 4, 3, 0, 1, 2, 3, 4, 2, 0, 5] + self.val_split = 1 + + # Initial training-validation-testing files + self.train_files = [] + self.val_files = [] + self.test_files = [] + cloud_names = [file_name[:-4] for file_name in os.listdir(self.original_folder) if file_name[-4:] == '.txt'] + for pc_name in cloud_names: + if exists(join(self.original_folder, pc_name + '.labels')): + self.train_files.append(join(self.sub_pc_folder, pc_name + '.ply')) + else: + self.test_files.append(join(self.full_pc_folder, pc_name + '.ply')) + + self.train_files = np.sort(self.train_files) + self.test_files = np.sort(self.test_files) + + for i, file_path in enumerate(self.train_files): + if self.all_splits[i] == self.val_split: + self.val_files.append(file_path) + + 
self.train_files = np.sort([x for x in self.train_files if x not in self.val_files]) + + # Initiate containers + self.val_proj = [] + self.val_labels = [] + self.test_proj = [] + self.test_labels = [] + + self.possibility = {} + self.min_possibility = {} + self.class_weight = {} + self.input_trees = {'training': [], 'validation': [], 'test': []} + self.input_colors = {'training': [], 'validation': [], 'test': []} + self.input_labels = {'training': [], 'validation': []} + + # Ascii files dict for testing + self.ascii_files = { + 'MarketplaceFeldkirch_Station4_rgb_intensity-reduced.ply': 'marketsquarefeldkirch4-reduced.labels', + 'sg27_station10_rgb_intensity-reduced.ply': 'sg27_10-reduced.labels', + 'sg28_Station2_rgb_intensity-reduced.ply': 'sg28_2-reduced.labels', + 'StGallenCathedral_station6_rgb_intensity-reduced.ply': 'stgallencathedral6-reduced.labels', + 'birdfountain_station1_xyz_intensity_rgb.ply': 'birdfountain1.labels', + 'castleblatten_station1_intensity_rgb.ply': 'castleblatten1.labels', + 'castleblatten_station5_xyz_intensity_rgb.ply': 'castleblatten5.labels', + 'marketplacefeldkirch_station1_intensity_rgb.ply': 'marketsquarefeldkirch1.labels', + 'marketplacefeldkirch_station4_intensity_rgb.ply': 'marketsquarefeldkirch4.labels', + 'marketplacefeldkirch_station7_intensity_rgb.ply': 'marketsquarefeldkirch7.labels', + 'sg27_station10_intensity_rgb.ply': 'sg27_10.labels', + 'sg27_station3_intensity_rgb.ply': 'sg27_3.labels', + 'sg27_station6_intensity_rgb.ply': 'sg27_6.labels', + 'sg27_station8_intensity_rgb.ply': 'sg27_8.labels', + 'sg28_station2_intensity_rgb.ply': 'sg28_2.labels', + 'sg28_station5_xyz_intensity_rgb.ply': 'sg28_5.labels', + 'stgallencathedral_station1_intensity_rgb.ply': 'stgallencathedral1.labels', + 'stgallencathedral_station3_intensity_rgb.ply': 'stgallencathedral3.labels', + 'stgallencathedral_station6_intensity_rgb.ply': 'stgallencathedral6.labels'} + + self.load_sub_sampled_clouds(cfg.sub_grid_size) + + def 
load_sub_sampled_clouds(self, sub_grid_size): + + tree_path = join(self.path, 'input_{:.3f}'.format(sub_grid_size)) + files = np.hstack((self.train_files, self.val_files, self.test_files)) + + for i, file_path in enumerate(files): + cloud_name = file_path.split('/')[-1][:-4] + print('Load_pc_' + str(i) + ': ' + cloud_name) + if file_path in self.val_files: + cloud_split = 'validation' + elif file_path in self.train_files: + cloud_split = 'training' + else: + cloud_split = 'test' + + # Name of the input files + kd_tree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name)) + sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name)) + + # read ply with data + data = read_ply(sub_ply_file) + sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T + if cloud_split == 'test': + sub_labels = None + else: + sub_labels = data['class'] + + # Read pkl with search tree + with open(kd_tree_file, 'rb') as f: + search_tree = pickle.load(f) + + self.input_trees[cloud_split] += [search_tree] + self.input_colors[cloud_split] += [sub_colors] + if cloud_split in ['training', 'validation']: + self.input_labels[cloud_split] += [sub_labels] + + # Get validation and test re_projection indices + print('\nPreparing reprojection indices for validation and test') + + for i, file_path in enumerate(files): + + # get cloud name and split + cloud_name = file_path.split('/')[-1][:-4] + + # Validation projection and labels + if file_path in self.val_files: + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + with open(proj_file, 'rb') as f: + proj_idx, labels = pickle.load(f) + self.val_proj += [proj_idx] + self.val_labels += [labels] + + # Test projection + if file_path in self.test_files: + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + with open(proj_file, 'rb') as f: + proj_idx, labels = pickle.load(f) + self.test_proj += [proj_idx] + self.test_labels += [labels] + print('finished') + return + + # Generate the input data flow + def 
get_batch_gen(self, split): + if split == 'training': + num_per_epoch = cfg.train_steps * cfg.batch_size + elif split == 'validation': + num_per_epoch = cfg.val_steps * cfg.val_batch_size + elif split == 'test': + num_per_epoch = cfg.val_steps * cfg.val_batch_size + + # Reset possibility + self.possibility[split] = [] + self.min_possibility[split] = [] + self.class_weight[split] = [] + + # Random initialize + for i, tree in enumerate(self.input_trees[split]): + self.possibility[split] += [np.random.rand(tree.data.shape[0]) * 1e-3] + self.min_possibility[split] += [float(np.min(self.possibility[split][-1]))] + + if split != 'test': + _, num_class_total = np.unique(np.hstack(self.input_labels[split]), return_counts=True) + self.class_weight[split] += [np.squeeze([num_class_total / np.sum(num_class_total)], axis=0)] + + def spatially_regular_gen(): + + # Generator loop + for i in range(num_per_epoch): # num_per_epoch + + # Choose the cloud with the lowest probability + cloud_idx = int(np.argmin(self.min_possibility[split])) + + # choose the point with the minimum of possibility in the cloud as query point + point_ind = np.argmin(self.possibility[split][cloud_idx]) + + # Get all points within the cloud from tree structure + points = np.array(self.input_trees[split][cloud_idx].data, copy=False) + + # Center point of input region + center_point = points[point_ind, :].reshape(1, -1) + + # Add noise to the center point + noise = np.random.normal(scale=cfg.noise_init / 10, size=center_point.shape) + pick_point = center_point + noise.astype(center_point.dtype) + query_idx = self.input_trees[split][cloud_idx].query(pick_point, k=cfg.num_points)[1][0] + + # Shuffle index + query_idx = DP.shuffle_idx(query_idx) + + # Get corresponding points and colors based on the index + queried_pc_xyz = points[query_idx] + queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2] + queried_pc_colors = self.input_colors[split][cloud_idx][query_idx] + if split == 'test': + 
queried_pc_labels = np.zeros(queried_pc_xyz.shape[0]) + queried_pt_weight = 1 + else: + queried_pc_labels = self.input_labels[split][cloud_idx][query_idx] + queried_pc_labels = np.array([self.label_to_idx[l] for l in queried_pc_labels]) + queried_pt_weight = np.array([self.class_weight[split][0][n] for n in queried_pc_labels]) + + # Update the possibility of the selected points + dists = np.sum(np.square((points[query_idx] - pick_point).astype(np.float32)), axis=1) + delta = np.square(1 - dists / np.max(dists)) * queried_pt_weight + self.possibility[split][cloud_idx][query_idx] += delta + self.min_possibility[split][cloud_idx] = float(np.min(self.possibility[split][cloud_idx])) + + if True: + yield (queried_pc_xyz, + queried_pc_colors.astype(np.float32), + queried_pc_labels, + query_idx.astype(np.int32), + np.array([cloud_idx], dtype=np.int32)) + + gen_func = spatially_regular_gen + gen_types = (tf.float32, tf.float32, tf.int32, tf.int32, tf.int32) + gen_shapes = ([None, 3], [None, 3], [None], [None], [None]) + return gen_func, gen_types, gen_shapes + + def get_tf_mapping(self): + # Collect flat inputs + def tf_map(batch_xyz, batch_features, batch_labels, batch_pc_idx, batch_cloud_idx): + batch_features = tf.map_fn(self.tf_augment_input, [batch_xyz, batch_features], dtype=tf.float32) + input_points = [] + input_neighbors = [] + input_pools = [] + input_up_samples = [] + + for i in range(cfg.num_layers): + neigh_idx = tf.py_func(DP.knn_search, [batch_xyz, batch_xyz, cfg.k_n], tf.int32) + sub_points = batch_xyz[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :] + pool_i = neigh_idx[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :] + up_i = tf.py_func(DP.knn_search, [sub_points, batch_xyz, 1], tf.int32) + input_points.append(batch_xyz) + input_neighbors.append(neigh_idx) + input_pools.append(pool_i) + input_up_samples.append(up_i) + batch_xyz = sub_points + + input_list = input_points + input_neighbors + input_pools + input_up_samples + input_list 
+= [batch_features, batch_labels, batch_pc_idx, batch_cloud_idx] + + return input_list + + return tf_map + + # data augmentation + @staticmethod + def tf_augment_input(inputs): + xyz = inputs[0] + features = inputs[1] + theta = tf.random_uniform((1,), minval=0, maxval=2 * np.pi) + # Rotation matrices + c, s = tf.cos(theta), tf.sin(theta) + cs0 = tf.zeros_like(c) + cs1 = tf.ones_like(c) + R = tf.stack([c, -s, cs0, s, c, cs0, cs0, cs0, cs1], axis=1) + stacked_rots = tf.reshape(R, (3, 3)) + + # Apply rotations + transformed_xyz = tf.reshape(tf.matmul(xyz, stacked_rots), [-1, 3]) + # Choose random scales for each example + min_s = cfg.augment_scale_min + max_s = cfg.augment_scale_max + if cfg.augment_scale_anisotropic: + s = tf.random_uniform((1, 3), minval=min_s, maxval=max_s) + else: + s = tf.random_uniform((1, 1), minval=min_s, maxval=max_s) + + symmetries = [] + for i in range(3): + if cfg.augment_symmetries[i]: + symmetries.append(tf.round(tf.random_uniform((1, 1))) * 2 - 1) + else: + symmetries.append(tf.ones([1, 1], dtype=tf.float32)) + s *= tf.concat(symmetries, 1) + + # Create N x 3 vector of scales to multiply with stacked_points + stacked_scales = tf.tile(s, [tf.shape(transformed_xyz)[0], 1]) + + # Apply scales + transformed_xyz = transformed_xyz * stacked_scales + + noise = tf.random_normal(tf.shape(transformed_xyz), stddev=cfg.augment_noise) + transformed_xyz = transformed_xyz + noise + rgb = features[:, :3] + stacked_features = tf.concat([transformed_xyz, rgb], axis=-1) + return stacked_features + + def init_input_pipeline(self): + print('Initiating input pipelines') + cfg.ignored_label_inds = [self.label_to_idx[ign_label] for ign_label in self.ignored_labels] + gen_function, gen_types, gen_shapes = self.get_batch_gen('training') + gen_function_val, _, _ = self.get_batch_gen('validation') + gen_function_test, _, _ = self.get_batch_gen('test') + self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes) + self.val_data = 
tf.data.Dataset.from_generator(gen_function_val, gen_types, gen_shapes) + self.test_data = tf.data.Dataset.from_generator(gen_function_test, gen_types, gen_shapes) + + self.batch_train_data = self.train_data.batch(cfg.batch_size) + self.batch_val_data = self.val_data.batch(cfg.val_batch_size) + self.batch_test_data = self.test_data.batch(cfg.val_batch_size) + map_func = self.get_tf_mapping() + + self.batch_train_data = self.batch_train_data.map(map_func=map_func) + self.batch_val_data = self.batch_val_data.map(map_func=map_func) + self.batch_test_data = self.batch_test_data.map(map_func=map_func) + + self.batch_train_data = self.batch_train_data.prefetch(cfg.batch_size) + self.batch_val_data = self.batch_val_data.prefetch(cfg.val_batch_size) + self.batch_test_data = self.batch_test_data.prefetch(cfg.val_batch_size) + + iter = tf.data.Iterator.from_structure(self.batch_train_data.output_types, self.batch_train_data.output_shapes) + self.flat_inputs = iter.get_next() + self.train_init_op = iter.make_initializer(self.batch_train_data) + self.val_init_op = iter.make_initializer(self.batch_val_data) + self.test_init_op = iter.make_initializer(self.batch_test_data) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument('--gpu', type=int, default=0, help='ID of the GPU to use [default: 0]') + parser.add_argument('--mode', type=str, default='train', help='options: train, test, vis') + parser.add_argument('--model_path', type=str, default='None', help='pretrained model path') + FLAGS = parser.parse_args() + + GPU_ID = FLAGS.gpu + os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" + os.environ['CUDA_VISIBLE_DEVICES'] = str(GPU_ID) + os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' + + Mode = FLAGS.mode + dataset = Semantic3D() + dataset.init_input_pipeline() + + if Mode == 'train': + model = Network(dataset, cfg) + model.train(dataset) + elif Mode == 'test': + cfg.saving = False + model = Network(dataset, cfg) + if FLAGS.model_path != 'None':
+ chosen_snap = FLAGS.model_path + else: + chosen_snapshot = -1 + logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')]) + chosen_folder = logs[-1] + snap_path = join(chosen_folder, 'snapshots') + snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta'] + chosen_step = np.sort(snap_steps)[-1] + chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step)) + tester = ModelTester(model, dataset, restore_snap=chosen_snap) + tester.test(model, dataset) + + else: + ################## + # Visualize data # + ################## + + with tf.Session() as sess: + sess.run(tf.global_variables_initializer()) + sess.run(dataset.train_init_op) + while True: + flat_inputs = sess.run(dataset.flat_inputs) + pc_xyz = flat_inputs[0] + sub_pc_xyz = flat_inputs[1] + labels = flat_inputs[21] + Plot.draw_pc_sem_ins(pc_xyz[0, :, :], labels[0, :]) + Plot.draw_pc_sem_ins(sub_pc_xyz[0, :, :], labels[0, 0:np.shape(sub_pc_xyz)[1]]) diff --git a/competing_methods/my_RandLANet/main_SemanticKITTI.py b/competing_methods/my_RandLANet/main_SemanticKITTI.py new file mode 100644 index 00000000..07fd6c58 --- /dev/null +++ b/competing_methods/my_RandLANet/main_SemanticKITTI.py @@ -0,0 +1,240 @@ +from helper_tool import DataProcessing as DP +from helper_tool import ConfigSemanticKITTI as cfg +from helper_tool import Plot +from os.path import join +from RandLANet import Network +from tester_SemanticKITTI import ModelTester +import tensorflow as tf +import numpy as np +import os, argparse, pickle + + +class SemanticKITTI: + def __init__(self, test_id): + self.name = 'SemanticKITTI' + self.dataset_path = '/data/semantic_kitti/dataset/sequences_0.06' + self.label_to_names = {0: 'unlabeled', + 1: 'car', + 2: 'bicycle', + 3: 'motorcycle', + 4: 'truck', + 5: 'other-vehicle', + 6: 'person', + 7: 'bicyclist', + 8: 'motorcyclist', + 9: 'road', + 10: 'parking', + 11: 'sidewalk', + 12: 'other-ground', + 13: 'building', + 14: 
'fence', + 15: 'vegetation', + 16: 'trunk', + 17: 'terrain', + 18: 'pole', + 19: 'traffic-sign'} + self.num_classes = len(self.label_to_names) + self.label_values = np.sort([k for k, v in self.label_to_names.items()]) + self.label_to_idx = {l: i for i, l in enumerate(self.label_values)} + self.ignored_labels = np.sort([0]) + + self.val_split = '08' + + self.seq_list = np.sort(os.listdir(self.dataset_path)) + self.test_scan_number = str(test_id) + self.train_list, self.val_list, self.test_list = DP.get_file_list(self.dataset_path, + self.test_scan_number) + self.train_list = DP.shuffle_list(self.train_list) + self.val_list = DP.shuffle_list(self.val_list) + + self.possibility = [] + self.min_possibility = [] + + # Generate the input data flow + def get_batch_gen(self, split): + if split == 'training': + num_per_epoch = int(len(self.train_list) / cfg.batch_size) * cfg.batch_size + path_list = self.train_list + elif split == 'validation': + num_per_epoch = int(len(self.val_list) / cfg.val_batch_size) * cfg.val_batch_size + cfg.val_steps = int(len(self.val_list) / cfg.batch_size) + path_list = self.val_list + elif split == 'test': + num_per_epoch = int(len(self.test_list) / cfg.val_batch_size) * cfg.val_batch_size * 4 + path_list = self.test_list + for test_file_name in path_list: + points = np.load(test_file_name) + self.possibility += [np.random.rand(points.shape[0]) * 1e-3] + self.min_possibility += [float(np.min(self.possibility[-1]))] + + def spatially_regular_gen(): + # Generator loop + for i in range(num_per_epoch): + if split != 'test': + cloud_ind = i + pc_path = path_list[cloud_ind] + pc, tree, labels = self.get_data(pc_path) + # crop a small point cloud + pick_idx = np.random.choice(len(pc), 1) + selected_pc, selected_labels, selected_idx = self.crop_pc(pc, labels, tree, pick_idx) + else: + cloud_ind = int(np.argmin(self.min_possibility)) + pick_idx = np.argmin(self.possibility[cloud_ind]) + pc_path = path_list[cloud_ind] + pc, tree, labels = 
self.get_data(pc_path) + selected_pc, selected_labels, selected_idx = self.crop_pc(pc, labels, tree, pick_idx) + + # update the possibility of the selected pc + dists = np.sum(np.square((selected_pc - pc[pick_idx]).astype(np.float32)), axis=1) + delta = np.square(1 - dists / np.max(dists)) + self.possibility[cloud_ind][selected_idx] += delta + self.min_possibility[cloud_ind] = np.min(self.possibility[cloud_ind]) + + if True: + yield (selected_pc.astype(np.float32), + selected_labels.astype(np.int32), + selected_idx.astype(np.int32), + np.array([cloud_ind], dtype=np.int32)) + + gen_func = spatially_regular_gen + gen_types = (tf.float32, tf.int32, tf.int32, tf.int32) + gen_shapes = ([None, 3], [None], [None], [None]) + + return gen_func, gen_types, gen_shapes + + def get_data(self, file_path): + seq_id = file_path.split('/')[-3] + frame_id = file_path.split('/')[-1][:-4] + kd_tree_path = join(self.dataset_path, seq_id, 'KDTree', frame_id + '.pkl') + # Read pkl with search tree + with open(kd_tree_path, 'rb') as f: + search_tree = pickle.load(f) + points = np.array(search_tree.data, copy=False) + # Load labels + if int(seq_id) >= 11: + labels = np.zeros(np.shape(points)[0], dtype=np.uint8) + else: + label_path = join(self.dataset_path, seq_id, 'labels', frame_id + '.npy') + labels = np.squeeze(np.load(label_path)) + return points, search_tree, labels + + @staticmethod + def crop_pc(points, labels, search_tree, pick_idx): + # crop a fixed size point cloud for training + center_point = points[pick_idx, :].reshape(1, -1) + select_idx = search_tree.query(center_point, k=cfg.num_points)[1][0] + select_idx = DP.shuffle_idx(select_idx) + select_points = points[select_idx] + select_labels = labels[select_idx] + return select_points, select_labels, select_idx + + @staticmethod + def get_tf_mapping2(): + + def tf_map(batch_pc, batch_label, batch_pc_idx, batch_cloud_idx): + features = batch_pc + input_points = [] + input_neighbors = [] + input_pools = [] + input_up_samples = [] 
+ + for i in range(cfg.num_layers): + neighbour_idx = tf.py_func(DP.knn_search, [batch_pc, batch_pc, cfg.k_n], tf.int32) + sub_points = batch_pc[:, :tf.shape(batch_pc)[1] // cfg.sub_sampling_ratio[i], :] + pool_i = neighbour_idx[:, :tf.shape(batch_pc)[1] // cfg.sub_sampling_ratio[i], :] + up_i = tf.py_func(DP.knn_search, [sub_points, batch_pc, 1], tf.int32) + input_points.append(batch_pc) + input_neighbors.append(neighbour_idx) + input_pools.append(pool_i) + input_up_samples.append(up_i) + batch_pc = sub_points + + input_list = input_points + input_neighbors + input_pools + input_up_samples + input_list += [features, batch_label, batch_pc_idx, batch_cloud_idx] + + return input_list + + return tf_map + + def init_input_pipeline(self): + print('Initiating input pipelines') + cfg.ignored_label_inds = [self.label_to_idx[ign_label] for ign_label in self.ignored_labels] + gen_function, gen_types, gen_shapes = self.get_batch_gen('training') + gen_function_val, _, _ = self.get_batch_gen('validation') + gen_function_test, _, _ = self.get_batch_gen('test') + + self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes) + self.val_data = tf.data.Dataset.from_generator(gen_function_val, gen_types, gen_shapes) + self.test_data = tf.data.Dataset.from_generator(gen_function_test, gen_types, gen_shapes) + + self.batch_train_data = self.train_data.batch(cfg.batch_size) + self.batch_val_data = self.val_data.batch(cfg.val_batch_size) + self.batch_test_data = self.test_data.batch(cfg.val_batch_size) + + map_func = self.get_tf_mapping2() + + self.batch_train_data = self.batch_train_data.map(map_func=map_func) + self.batch_val_data = self.batch_val_data.map(map_func=map_func) + self.batch_test_data = self.batch_test_data.map(map_func=map_func) + + self.batch_train_data = self.batch_train_data.prefetch(cfg.batch_size) + self.batch_val_data = self.batch_val_data.prefetch(cfg.val_batch_size) + self.batch_test_data = 
self.batch_test_data.prefetch(cfg.val_batch_size) + + iter = tf.data.Iterator.from_structure(self.batch_train_data.output_types, self.batch_train_data.output_shapes) + self.flat_inputs = iter.get_next() + self.train_init_op = iter.make_initializer(self.batch_train_data) + self.val_init_op = iter.make_initializer(self.batch_val_data) + self.test_init_op = iter.make_initializer(self.batch_test_data) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument('--gpu', type=int, default=0, help='ID of the GPU to use [default: 0]') + parser.add_argument('--mode', type=str, default='train', help='options: train, test, vis') + parser.add_argument('--test_area', type=str, default='14', help='options: 08, 11,12,13,14,15,16,17,18,19,20,21') + parser.add_argument('--model_path', type=str, default='None', help='pretrained model path') + FLAGS = parser.parse_args() + + os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" + os.environ['CUDA_VISIBLE_DEVICES'] = str(FLAGS.gpu) + os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' + Mode = FLAGS.mode + + test_area = FLAGS.test_area + dataset = SemanticKITTI(test_area) + dataset.init_input_pipeline() + + if Mode == 'train': + model = Network(dataset, cfg) + model.train(dataset) + elif Mode == 'test': + cfg.saving = False + model = Network(dataset, cfg) + if FLAGS.model_path != 'None': + chosen_snap = FLAGS.model_path + else: + chosen_snapshot = -1 + logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')]) + chosen_folder = logs[-1] + snap_path = join(chosen_folder, 'snapshots') + snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta'] + chosen_step = np.sort(snap_steps)[-1] + chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step)) + tester = ModelTester(model, dataset, restore_snap=chosen_snap) + tester.test(model, dataset) + else: + ################## + # Visualize data # + ################## + + with tf.Session()
as sess: + sess.run(tf.global_variables_initializer()) + sess.run(dataset.train_init_op) + while True: + flat_inputs = sess.run(dataset.flat_inputs) + pc_xyz = flat_inputs[0] + sub_pc_xyz = flat_inputs[1] + labels = flat_inputs[17] + Plot.draw_pc_sem_ins(pc_xyz[0, :, :], labels[0, :]) + Plot.draw_pc_sem_ins(sub_pc_xyz[0, :, :], labels[0, 0:np.shape(sub_pc_xyz)[1]]) diff --git a/competing_methods/my_RandLANet/main_UrbanMesh.py b/competing_methods/my_RandLANet/main_UrbanMesh.py new file mode 100644 index 00000000..ae607fe9 --- /dev/null +++ b/competing_methods/my_RandLANet/main_UrbanMesh.py @@ -0,0 +1,363 @@ +from os.path import join, exists, dirname, abspath +from RandLANet import Network +from tester_UrbanMesh import ModelTester +from helper_ply import read_ply +from helper_tool import Plot +from helper_tool import DataProcessing as DP +from helper_tool import ConfigUrbanMesh as cfg +import tensorflow as tf +import numpy as np +import pickle, argparse, os +import glob +import sys + +BASE_DIR = dirname(abspath(__file__)) +sys.path.append(BASE_DIR) + +class UrbanMesh: + def __init__(self): + self.name = 'UrbanMesh' + self.path = BASE_DIR + '/data' + self.label_to_names = {0: 'unlabelled', + 1: 'ground', + 2: 'vegetation', + 3: 'building', + 4: 'water', + 5: 'car', + 6: 'boat'} + self.num_classes = len(self.label_to_names) + self.label_values = np.sort([k for k, v in self.label_to_names.items()]) + self.label_to_idx = {l: i for i, l in enumerate(self.label_values)} + self.ignored_labels = np.sort([0]) + self.val_split = 1 + + self.original_folder = join(self.path, 'original_data') + self.full_pc_folder = join(self.path, 'original_ply') + self.sub_pc_folder = join(self.path, 'input_{:.3f}'.format(cfg.sub_grid_size)) + + # Initial training-validation-testing files + self.train_files = [] + self.val_files = [] + self.test_files = [] + + folders = ["train/", "test/", "validate/"] + for folder in folders: + data_folder = self.path + "/raw/" + folder + files = 
glob.glob(data_folder + "*.ply") + for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + if folder == "train/": + self.train_files.append(join(self.sub_pc_folder, file_name + '.ply')) + elif folder == "test/": + self.test_files.append(join(self.full_pc_folder, file_name + '.ply')) + elif folder == "validate/": + self.val_files.append(join(self.sub_pc_folder, file_name + '.ply')) + + self.train_files = np.sort(self.train_files) + self.test_files = np.sort(self.test_files) + + # Initiate containers + self.val_proj = [] + self.val_labels = [] + self.test_proj = [] + self.test_labels = [] + + self.possibility = {} + self.min_possibility = {} + self.class_weight = {} + self.input_trees = {'training': [], 'validation': [], 'test': []} + self.input_colors = {'training': [], 'validation': [], 'test': []} + self.input_labels = {'training': [], 'validation': []} + + self.load_sub_sampled_clouds(cfg.sub_grid_size) + + def load_sub_sampled_clouds(self, sub_grid_size): + + tree_path = join(self.path, 'input_{:.3f}'.format(sub_grid_size)) + files = np.hstack((self.train_files, self.val_files, self.test_files)) + + n_files = len(files) + for i, file_path in enumerate(files): + cloud_name = os.path.splitext(os.path.basename(file_path))[0] + print(str(i) + " / " + str(n_files) + "---> " + cloud_name) + if file_path in self.val_files: + cloud_split = 'validation' + elif file_path in self.train_files: + cloud_split = 'training' + else: + cloud_split = 'test' + + # Name of the input files + kd_tree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name)) + sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name)) + + # read ply with data + data = read_ply(sub_ply_file) + sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T + if cloud_split == 'test': + sub_labels = None + else: + sub_labels = data['class'] + + # Read pkl with search tree + with open(kd_tree_file, 'rb') as f: + search_tree = pickle.load(f) + + 
self.input_trees[cloud_split] += [search_tree] + self.input_colors[cloud_split] += [sub_colors] + if cloud_split in ['training', 'validation']: + self.input_labels[cloud_split] += [sub_labels] + + # Get validation and test re_projection indices + print('\nPreparing reprojection indices for validation and test') + + for i, file_path in enumerate(files): + + # get cloud name and split + cloud_name = os.path.splitext(os.path.basename(file_path))[0] + # Validation projection and labels + if file_path in self.val_files: + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + with open(proj_file, 'rb') as f: + proj_idx, labels = pickle.load(f) + self.val_proj += [proj_idx] + self.val_labels += [labels] + + # Test projection + if file_path in self.test_files: + proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name)) + with open(proj_file, 'rb') as f: + proj_idx, labels = pickle.load(f) + self.test_proj += [proj_idx] + self.test_labels += [labels] + print('finished') + return + + # Generate the input data flow + def get_batch_gen(self, split): + if split == 'training': + num_per_epoch = cfg.train_steps * cfg.batch_size + elif split == 'validation': + num_per_epoch = cfg.val_steps * cfg.val_batch_size + elif split == 'test': + num_per_epoch = cfg.val_steps * cfg.val_batch_size + + # Reset possibility + self.possibility[split] = [] + self.min_possibility[split] = [] + self.class_weight[split] = [] + + # Random initialize + for i, tree in enumerate(self.input_trees[split]): + self.possibility[split] += [np.random.rand(tree.data.shape[0]) * 1e-3] + self.min_possibility[split] += [float(np.min(self.possibility[split][-1]))] + + if split != 'test': + _, num_class_total = np.unique(np.hstack(self.input_labels[split]), return_counts=True) + self.class_weight[split] += [np.squeeze([num_class_total / np.sum(num_class_total)], axis=0)] + + def spatially_regular_gen(): + + # Generator loop + for i in range(num_per_epoch): # num_per_epoch + + # Choose the cloud with 
the lowest probability + cloud_idx = int(np.argmin(self.min_possibility[split])) + + # choose the point with the minimum of possibility in the cloud as query point + point_ind = np.argmin(self.possibility[split][cloud_idx]) + + # Get all points within the cloud from tree structure + points = np.array(self.input_trees[split][cloud_idx].data, copy=False) + + # Center point of input region + center_point = points[point_ind, :].reshape(1, -1) + + # Add noise to the center point + noise = np.random.normal(scale=cfg.noise_init / 10, size=center_point.shape) + pick_point = center_point + noise.astype(center_point.dtype) + query_idx = self.input_trees[split][cloud_idx].query(pick_point, k=cfg.num_points)[1][0] + + # Shuffle index + query_idx = DP.shuffle_idx(query_idx) + + # Get corresponding points and colors based on the index + queried_pc_xyz = points[query_idx] + queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2] + queried_pc_colors = self.input_colors[split][cloud_idx][query_idx] + if split == 'test': + queried_pc_labels = np.zeros(queried_pc_xyz.shape[0]) + queried_pt_weight = 1 + else: + queried_pc_labels = self.input_labels[split][cloud_idx][query_idx] + queried_pc_labels = np.array([self.label_to_idx[l] for l in queried_pc_labels]) + queried_pt_weight = np.array([self.class_weight[split][0][n] for n in queried_pc_labels]) # 1 + + # Update the possibility of the selected points + dists = np.sum(np.square((points[query_idx] - pick_point).astype(np.float32)), axis=1) + delta = np.square(1 - dists / np.max(dists)) * queried_pt_weight + self.possibility[split][cloud_idx][query_idx] += delta + self.min_possibility[split][cloud_idx] = float(np.min(self.possibility[split][cloud_idx])) + + if True: + yield (queried_pc_xyz, + queried_pc_colors.astype(np.float32), + queried_pc_labels, + query_idx.astype(np.int32), + np.array([cloud_idx], dtype=np.int32)) + + gen_func = spatially_regular_gen + gen_types = (tf.float32, tf.float32, tf.int32, tf.int32, 
tf.int32) + gen_shapes = ([None, 3], [None, 3], [None], [None], [None]) + return gen_func, gen_types, gen_shapes + + def get_tf_mapping(self): + # Collect flat inputs + def tf_map(batch_xyz, batch_features, batch_labels, batch_pc_idx, batch_cloud_idx): + batch_features = tf.map_fn(self.tf_augment_input, [batch_xyz, batch_features], dtype=tf.float32) + input_points = [] + input_neighbors = [] + input_pools = [] + input_up_samples = [] + + for i in range(cfg.num_layers): + neigh_idx = tf.py_func(DP.knn_search, [batch_xyz, batch_xyz, cfg.k_n], tf.int32) + sub_points = batch_xyz[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :] + pool_i = neigh_idx[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :] + up_i = tf.py_func(DP.knn_search, [sub_points, batch_xyz, 1], tf.int32) + input_points.append(batch_xyz) + input_neighbors.append(neigh_idx) + input_pools.append(pool_i) + input_up_samples.append(up_i) + batch_xyz = sub_points + + input_list = input_points + input_neighbors + input_pools + input_up_samples + input_list += [batch_features, batch_labels, batch_pc_idx, batch_cloud_idx] + + return input_list + + return tf_map + + # data augmentation + @staticmethod + def tf_augment_input(inputs): + xyz = inputs[0] + features = inputs[1] + theta = tf.random_uniform((1,), minval=0, maxval=2 * np.pi) + # Rotation matrices + c, s = tf.cos(theta), tf.sin(theta) + cs0 = tf.zeros_like(c) + cs1 = tf.ones_like(c) + R = tf.stack([c, -s, cs0, s, c, cs0, cs0, cs0, cs1], axis=1) + stacked_rots = tf.reshape(R, (3, 3)) + + # Apply rotations + transformed_xyz = tf.reshape(tf.matmul(xyz, stacked_rots), [-1, 3]) + # Choose random scales for each example + min_s = cfg.augment_scale_min + max_s = cfg.augment_scale_max + if cfg.augment_scale_anisotropic: + s = tf.random_uniform((1, 3), minval=min_s, maxval=max_s) + else: + s = tf.random_uniform((1, 1), minval=min_s, maxval=max_s) + + symmetries = [] + for i in range(3): + if cfg.augment_symmetries[i]: + 
symmetries.append(tf.round(tf.random_uniform((1, 1))) * 2 - 1) + else: + symmetries.append(tf.ones([1, 1], dtype=tf.float32)) + s *= tf.concat(symmetries, 1) + + # Create N x 3 vector of scales to multiply with stacked_points + stacked_scales = tf.tile(s, [tf.shape(transformed_xyz)[0], 1]) + + # Apply scales + transformed_xyz = transformed_xyz * stacked_scales + + noise = tf.random_normal(tf.shape(transformed_xyz), stddev=cfg.augment_noise) + transformed_xyz = transformed_xyz + noise + rgb = features[:, :3] + stacked_features = tf.concat([transformed_xyz, rgb], axis=-1) + return stacked_features + + def init_input_pipeline(self): + print('Initiating input pipelines') + cfg.ignored_label_inds = [self.label_to_idx[ign_label] for ign_label in self.ignored_labels] + gen_function, gen_types, gen_shapes = self.get_batch_gen('training') + gen_function_val, _, _ = self.get_batch_gen('validation') + gen_function_test, _, _ = self.get_batch_gen('test') + self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes) + self.val_data = tf.data.Dataset.from_generator(gen_function_val, gen_types, gen_shapes) + self.test_data = tf.data.Dataset.from_generator(gen_function_test, gen_types, gen_shapes) + + self.batch_train_data = self.train_data.batch(cfg.batch_size) + self.batch_val_data = self.val_data.batch(cfg.val_batch_size) + self.batch_test_data = self.test_data.batch(cfg.val_batch_size) + map_func = self.get_tf_mapping() + + self.batch_train_data = self.batch_train_data.map(map_func=map_func) + self.batch_val_data = self.batch_val_data.map(map_func=map_func) + self.batch_test_data = self.batch_test_data.map(map_func=map_func) + + self.batch_train_data = self.batch_train_data.prefetch(cfg.batch_size) + self.batch_val_data = self.batch_val_data.prefetch(cfg.val_batch_size) + self.batch_test_data = self.batch_test_data.prefetch(cfg.val_batch_size) + + iter = tf.data.Iterator.from_structure(self.batch_train_data.output_types, 
self.batch_train_data.output_shapes) + self.flat_inputs = iter.get_next() + self.train_init_op = iter.make_initializer(self.batch_train_data) + self.val_init_op = iter.make_initializer(self.batch_val_data) + self.test_init_op = iter.make_initializer(self.batch_test_data) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument('--gpu', type=int, default=0, help='ID of the GPU to use [default: 0]') + parser.add_argument('--mode', type=str, default='test', help='options: train, test, vis') + parser.add_argument('--model_path', type=str, default='None', help='pretrained model path') + FLAGS = parser.parse_args() + + GPU_ID = FLAGS.gpu + os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" + os.environ['CUDA_VISIBLE_DEVICES'] = str(GPU_ID) + os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' + + Mode = FLAGS.mode + dataset = UrbanMesh() + dataset.init_input_pipeline() + + if Mode == 'train': + model = Network(dataset, cfg) + model.train(dataset) + elif Mode == 'test': + cfg.saving = False + model = Network(dataset, cfg) + if FLAGS.model_path != 'None': + chosen_snap = FLAGS.model_path + else: + chosen_snapshot = -1 + logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')]) + chosen_folder = logs[-1] + snap_path = join(chosen_folder, 'snapshots') + snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta'] + chosen_step = np.sort(snap_steps)[-1] + chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step)) + tester = ModelTester(model, dataset, restore_snap=chosen_snap) + tester.test(model, dataset) + + else: + ################## + # Visualize data # + ################## + + with tf.Session() as sess: + sess.run(tf.global_variables_initializer()) + sess.run(dataset.train_init_op) + while True: + flat_inputs = sess.run(dataset.flat_inputs) + pc_xyz = flat_inputs[0] + sub_pc_xyz = flat_inputs[1] + labels = flat_inputs[21] + Plot.draw_pc_sem_ins(pc_xyz[0, :,
:], labels[0, :]) + Plot.draw_pc_sem_ins(sub_pc_xyz[0, :, :], labels[0, 0:np.shape(sub_pc_xyz)[1]]) diff --git a/competing_methods/my_RandLANet/set_up_GPU_on_Windows_with_CUDA11_and_TF1.15.txt b/competing_methods/my_RandLANet/set_up_GPU_on_Windows_with_CUDA11_and_TF1.15.txt new file mode 100644 index 00000000..ee57c536 --- /dev/null +++ b/competing_methods/my_RandLANet/set_up_GPU_on_Windows_with_CUDA11_and_TF1.15.txt @@ -0,0 +1,15 @@ +python 3.6 +tensorflow 1.15 works with CUDA 11.0 + +conda create -n randlanet python=3.6 + +# pip install -U tensorflow-gpu +pip install tensorflow-gpu==1.15 +pip install -r helper_requirements.txt +pip install plyfile + +conda install cudatoolkit=10.0 +conda install cudnn=7.6.0 + +DLL link: https://drive.google.com/file/d/1ETKljz7aWyPyXeBKKc23yqHW5_j_o4bg/view?usp=sharing +put the CUDA DLLs in folder "C:\ProgramData\Anaconda3\envs\randlanet\Library\bin" \ No newline at end of file diff --git a/competing_methods/my_RandLANet/tester_S3DIS.py b/competing_methods/my_RandLANet/tester_S3DIS.py new file mode 100644 index 00000000..47a8183d --- /dev/null +++ b/competing_methods/my_RandLANet/tester_S3DIS.py @@ -0,0 +1,178 @@ +from os import makedirs +from os.path import exists, join +from helper_ply import write_ply +from sklearn.metrics import confusion_matrix +from helper_tool import DataProcessing as DP +import tensorflow as tf +import numpy as np +import time + + +def log_out(out_str, log_f_out): + log_f_out.write(out_str + '\n') + log_f_out.flush() + print(out_str) + + +class ModelTester: + def __init__(self, model, dataset, restore_snap=None): + my_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) + self.saver = tf.train.Saver(my_vars, max_to_keep=100) + self.Log_file = open('log_test_' + str(dataset.val_split) + '.txt', 'a') + + # Create a session for running Ops on the Graph.
+ on_cpu = False + if on_cpu: + c_proto = tf.ConfigProto(device_count={'GPU': 0}) + else: + c_proto = tf.ConfigProto() + c_proto.gpu_options.allow_growth = True + self.sess = tf.Session(config=c_proto) + self.sess.run(tf.global_variables_initializer()) + + # Load trained model + if restore_snap is not None: + self.saver.restore(self.sess, restore_snap) + print("Model restored from " + restore_snap) + + self.prob_logits = tf.nn.softmax(model.logits) + + # Initiate global prediction over all test clouds + self.test_probs = [np.zeros(shape=[l.shape[0], model.config.num_classes], dtype=np.float32) + for l in dataset.input_labels['validation']] + + def test(self, model, dataset, num_votes=100): + + # Smoothing parameter for votes + test_smooth = 0.95 + + # Initialise iterator with validation/test data + self.sess.run(dataset.val_init_op) + + # Number of points per class in validation set + val_proportions = np.zeros(model.config.num_classes, dtype=np.float32) + i = 0 + for label_val in dataset.label_values: + if label_val not in dataset.ignored_labels: + val_proportions[i] = np.sum([np.sum(labels == label_val) for labels in dataset.val_labels]) + i += 1 + + # Test saving path + saving_path = time.strftime('results/Log_%Y-%m-%d_%H-%M-%S', time.gmtime()) + test_path = join('test', saving_path.split('/')[-1]) + makedirs(test_path) if not exists(test_path) else None + makedirs(join(test_path, 'val_preds')) if not exists(join(test_path, 'val_preds')) else None + + step_id = 0 + epoch_id = 0 + last_min = -0.5 + + while last_min < num_votes: + try: + ops = (self.prob_logits, + model.labels, + model.inputs['input_inds'], + model.inputs['cloud_inds'], + ) + + stacked_probs, stacked_labels, point_idx, cloud_idx = self.sess.run(ops, {model.is_training: False}) + correct = np.sum(np.argmax(stacked_probs, axis=1) == stacked_labels) + acc = correct / float(np.prod(np.shape(stacked_labels))) + print('step' + str(step_id) + ' acc:' + str(acc)) + stacked_probs = 
np.reshape(stacked_probs, [model.config.val_batch_size, model.config.num_points, + model.config.num_classes]) + + for j in range(np.shape(stacked_probs)[0]): + probs = stacked_probs[j, :, :] + p_idx = point_idx[j, :] + c_i = cloud_idx[j][0] + self.test_probs[c_i][p_idx] = test_smooth * self.test_probs[c_i][p_idx] + (1 - test_smooth) * probs + step_id += 1 + + except tf.errors.OutOfRangeError: + + new_min = np.min(dataset.min_possibility['validation']) + log_out('Epoch {:3d}, end. Min possibility = {:.1f}'.format(epoch_id, new_min), self.Log_file) + + if last_min + 1 < new_min: + + # Update last_min + last_min += 1 + + # Show vote results (computed on the sub-sampled clouds, so these are not the final values) + log_out('\nConfusion on sub clouds', self.Log_file) + confusion_list = [] + + num_val = len(dataset.input_labels['validation']) + + for i_test in range(num_val): + probs = self.test_probs[i_test] + preds = dataset.label_values[np.argmax(probs, axis=1)].astype(np.int32) + labels = dataset.input_labels['validation'][i_test] + + # Confs + confusion_list += [confusion_matrix(labels, preds, labels=dataset.label_values)] + + # Regroup confusions + C = np.sum(np.stack(confusion_list), axis=0).astype(np.float32) + + # Rescale with the right number of points per class + C *= np.expand_dims(val_proportions / (np.sum(C, axis=1) + 1e-6), 1) + + # Compute IoUs + IoUs = DP.IoU_from_confusions(C) + m_IoU = np.mean(IoUs) + s = '{:5.2f} | '.format(100 * m_IoU) + for IoU in IoUs: + s += '{:5.2f} '.format(100 * IoU) + log_out(s + '\n', self.Log_file) + + if int(np.ceil(new_min)) % 1 == 0: + + # Project predictions + log_out('\nReproject Vote #{:d}'.format(int(np.floor(new_min))), self.Log_file) + proj_probs_list = [] + + for i_val in range(num_val): + # Reproject probs back to the evaluation points + proj_idx = dataset.val_proj[i_val] + probs = self.test_probs[i_val][proj_idx, :] + proj_probs_list += [probs] + + # Show vote results + log_out('Confusion on full clouds', self.Log_file) + confusion_list = [] +
for i_test in range(num_val): + # Get the predicted labels + preds = dataset.label_values[np.argmax(proj_probs_list[i_test], axis=1)].astype(np.uint8) + + # Confusion + labels = dataset.val_labels[i_test] + acc = np.sum(preds == labels) / len(labels) + log_out(dataset.input_names['validation'][i_test] + ' Acc:' + str(acc), self.Log_file) + + confusion_list += [confusion_matrix(labels, preds, dataset.label_values)] + name = dataset.input_names['validation'][i_test] + '.ply' + write_ply(join(test_path, 'val_preds', name), [preds, labels], ['pred', 'label']) + + # Regroup confusions + C = np.sum(np.stack(confusion_list), axis=0) + + IoUs = DP.IoU_from_confusions(C) + m_IoU = np.mean(IoUs) + s = '{:5.2f} | '.format(100 * m_IoU) + for IoU in IoUs: + s += '{:5.2f} '.format(100 * IoU) + log_out('-' * len(s), self.Log_file) + log_out(s, self.Log_file) + log_out('-' * len(s) + '\n', self.Log_file) + print('finished \n') + self.sess.close() + return + + self.sess.run(dataset.val_init_op) + epoch_id += 1 + step_id = 0 + continue + + return diff --git a/competing_methods/my_RandLANet/tester_Semantic3D.py b/competing_methods/my_RandLANet/tester_Semantic3D.py new file mode 100644 index 00000000..ebcc900f --- /dev/null +++ b/competing_methods/my_RandLANet/tester_Semantic3D.py @@ -0,0 +1,147 @@ +from os import makedirs +from os.path import exists, join +from helper_ply import read_ply, write_ply +import tensorflow as tf +import numpy as np +import time + + +def log_string(out_str, log_out): + log_out.write(out_str + '\n') + log_out.flush() + print(out_str) + + +class ModelTester: + def __init__(self, model, dataset, restore_snap=None): + # Tensorflow Saver definition + my_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) + self.saver = tf.train.Saver(my_vars, max_to_keep=100) + + # Create a session for running Ops on the Graph. 
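`DP.IoU_from_confusions(C)` above reduces a confusion matrix to per-class intersection-over-union. A sketch of the standard formula it implements (the exact helper lives in `helper_tool.py`; this toy version is an assumption about its behaviour):

```python
import numpy as np

def iou_from_confusion(C):
    """Per-class IoU from a confusion matrix (rows: ground truth, cols: prediction)."""
    tp = np.diagonal(C).astype(np.float64)
    gt = C.sum(axis=1).astype(np.float64)     # ground-truth points per class
    pred = C.sum(axis=0).astype(np.float64)   # predicted points per class
    union = gt + pred - tp
    return tp / np.maximum(union, 1e-6)       # guard against empty classes

# Toy confusion: class 0 has 3 true positives, 1 point misread as class 1
C = np.array([[3, 1],
              [0, 2]])
per_class_iou = iou_from_confusion(C)
mean_iou = per_class_iou.mean()
```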
+ on_cpu = False + if on_cpu: + c_proto = tf.ConfigProto(device_count={'GPU': 0}) + else: + c_proto = tf.ConfigProto() + c_proto.gpu_options.allow_growth = True + self.sess = tf.Session(config=c_proto) + self.sess.run(tf.global_variables_initializer()) + + if restore_snap is not None: + self.saver.restore(self.sess, restore_snap) + print("Model restored from " + restore_snap) + + # Add a softmax operation for predictions + self.prob_logits = tf.nn.softmax(model.logits) + self.test_probs = [np.zeros((l.data.shape[0], model.config.num_classes), dtype=np.float16) + for l in dataset.input_trees['test']] + + self.log_out = open('log_test_' + dataset.name + '.txt', 'a') + + def test(self, model, dataset, num_votes=100): + + # Smoothing parameter for votes + test_smooth = 0.98 + + # Initialise iterator with train data + self.sess.run(dataset.test_init_op) + + # Test saving path + saving_path = time.strftime('results/Log_%Y-%m-%d_%H-%M-%S', time.gmtime()) + test_path = join('test', saving_path.split('/')[-1]) + makedirs(test_path) if not exists(test_path) else None + makedirs(join(test_path, 'predictions')) if not exists(join(test_path, 'predictions')) else None + makedirs(join(test_path, 'probs')) if not exists(join(test_path, 'probs')) else None + + ##################### + # Network predictions + ##################### + + step_id = 0 + epoch_id = 0 + last_min = -0.5 + + while last_min < num_votes: + + try: + ops = (self.prob_logits, + model.labels, + model.inputs['input_inds'], + model.inputs['cloud_inds'],) + + stacked_probs, stacked_labels, point_idx, cloud_idx = self.sess.run(ops, {model.is_training: False}) + stacked_probs = np.reshape(stacked_probs, [model.config.val_batch_size, model.config.num_points, + model.config.num_classes]) + + for j in range(np.shape(stacked_probs)[0]): + probs = stacked_probs[j, :, :] + inds = point_idx[j, :] + c_i = cloud_idx[j][0] + self.test_probs[c_i][inds] = test_smooth * self.test_probs[c_i][inds] + (1 - test_smooth) * probs + 
step_id += 1 + log_string('Epoch {:3d}, step {:3d}. min possibility = {:.1f}'.format(epoch_id, step_id, np.min( + dataset.min_possibility['test'])), self.log_out) + + except tf.errors.OutOfRangeError: + + # Save predicted cloud + new_min = np.min(dataset.min_possibility['test']) + log_string('Epoch {:3d}, end. Min possibility = {:.1f}'.format(epoch_id, new_min), self.log_out) + + if last_min + 4 < new_min: + + print('Saving clouds') + + # Update last_min + last_min = new_min + + # Project predictions + print('\nReproject Vote #{:d}'.format(int(np.floor(new_min)))) + t1 = time.time() + files = dataset.test_files + i_test = 0 + for i, file_path in enumerate(files): + # Get file + points = self.load_evaluation_points(file_path) + points = points.astype(np.float16) + + # Reproject probs + probs = np.zeros(shape=[np.shape(points)[0], 8], dtype=np.float16) + proj_index = dataset.test_proj[i_test] + + probs = self.test_probs[i_test][proj_index, :] + + # Insert false columns for ignored labels + probs2 = probs + for l_ind, label_value in enumerate(dataset.label_values): + if label_value in dataset.ignored_labels: + probs2 = np.insert(probs2, l_ind, 0, axis=1) + + # Get the predicted labels + preds = dataset.label_values[np.argmax(probs2, axis=1)].astype(np.uint8) + + # Save plys + cloud_name = file_path.split('/')[-1] + + # Save ascii preds + ascii_name = join(test_path, 'predictions', dataset.ascii_files[cloud_name]) + np.savetxt(ascii_name, preds, fmt='%d') + log_string(ascii_name + 'has saved', self.log_out) + i_test += 1 + + t2 = time.time() + print('Done in {:.1f} s\n'.format(t2 - t1)) + self.sess.close() + return + + self.sess.run(dataset.test_init_op) + epoch_id += 1 + step_id = 0 + continue + return + + @staticmethod + def load_evaluation_points(file_path): + data = read_ply(file_path) + return np.vstack((data['x'], data['y'], data['z'])).T diff --git a/competing_methods/my_RandLANet/tester_SemanticKITTI.py b/competing_methods/my_RandLANet/tester_SemanticKITTI.py 
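Once voting converges, the Semantic3D tester carries the subsampled probabilities back to the full-resolution cloud through precomputed nearest-neighbour indices (`dataset.test_proj`) and re-inserts zero columns for labels that were ignored during training, so the argmax index lines up with `dataset.label_values` again. A toy numpy sketch of that pipeline (all shapes and label values hypothetical):

```python
import numpy as np

# Toy setup: 2 subsampled points, 4 full-resolution points,
# label values [0, 1, 2] with value 0 ignored during training.
label_values = np.array([0, 1, 2])
ignored_labels = {0}

sub_probs = np.array([[0.9, 0.1], [0.2, 0.8]])  # (n_sub, n_trained_classes)
proj_idx = np.array([0, 0, 1, 1])               # nearest subsampled point per full point

full_probs = sub_probs[proj_idx, :]             # reproject by fancy indexing

# Re-insert a zero column per ignored label so indices align with label_values
for l_ind, label_value in enumerate(label_values):
    if label_value in ignored_labels:
        full_probs = np.insert(full_probs, l_ind, 0, axis=1)

preds = label_values[np.argmax(full_probs, axis=1)].astype(np.uint8)
```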
new file mode 100644 index 00000000..6f3a99b1 --- /dev/null +++ b/competing_methods/my_RandLANet/tester_SemanticKITTI.py @@ -0,0 +1,167 @@ +from os import makedirs +from os.path import exists, join, isfile, dirname, abspath +from helper_tool import DataProcessing as DP +from sklearn.metrics import confusion_matrix +import tensorflow as tf +import numpy as np +import yaml +import pickle + +BASE_DIR = dirname(abspath(__file__)) + +data_config = join(BASE_DIR, 'utils', 'semantic-kitti.yaml') +DATA = yaml.safe_load(open(data_config, 'r')) +remap_dict = DATA["learning_map_inv"] + +# make lookup table for mapping +max_key = max(remap_dict.keys()) +remap_lut = np.zeros((max_key + 100), dtype=np.int32) +remap_lut[list(remap_dict.keys())] = list(remap_dict.values()) + +remap_dict_val = DATA["learning_map"] +max_key = max(remap_dict_val.keys()) +remap_lut_val = np.zeros((max_key + 100), dtype=np.int32) +remap_lut_val[list(remap_dict_val.keys())] = list(remap_dict_val.values()) + + +def log_out(out_str, f_out): + f_out.write(out_str + '\n') + f_out.flush() + print(out_str) + + +class ModelTester: + def __init__(self, model, dataset, restore_snap=None): + my_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) + self.saver = tf.train.Saver(my_vars, max_to_keep=100) + self.Log_file = open('log_test_' + dataset.name + '.txt', 'a') + + # Create a session for running Ops on the Graph. 
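`tester_SemanticKITTI.py` turns the YAML `learning_map` / `learning_map_inv` dictionaries into flat numpy lookup tables so an entire label array can be remapped with a single indexing operation. A miniature sketch with a hypothetical map:

```python
import numpy as np

# Hypothetical miniature of the YAML learning_map: raw sensor id -> training id
learning_map = {0: 0, 10: 1, 40: 2, 44: 2}

max_key = max(learning_map.keys())
lut = np.zeros(max_key + 100, dtype=np.int32)   # +100 headroom, as in the tester
lut[list(learning_map.keys())] = list(learning_map.values())

raw = np.array([10, 44, 40, 0])
remapped = lut[raw]                              # vectorised remapping in one step
```

Unmapped ids simply fall through to 0, which is why the table is zero-initialised.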
+ on_cpu = False + if on_cpu: + c_proto = tf.ConfigProto(device_count={'GPU': 0}) + else: + c_proto = tf.ConfigProto() + c_proto.gpu_options.allow_growth = True + self.sess = tf.Session(config=c_proto) + self.sess.run(tf.global_variables_initializer()) + + # Name of the snapshot to restore to (None if you want to start from beginning) + if restore_snap is not None: + self.saver.restore(self.sess, restore_snap) + print("Model restored from " + restore_snap) + + self.prob_logits = tf.nn.softmax(model.logits) + self.test_probs = 0 + self.idx = 0 + + def test(self, model, dataset): + + # Initialise iterator with train data + self.sess.run(dataset.test_init_op) + self.test_probs = [np.zeros(shape=[len(l), model.config.num_classes], dtype=np.float16) + for l in dataset.possibility] + + test_path = join('test', 'sequences') + makedirs(test_path) if not exists(test_path) else None + save_path = join(test_path, dataset.test_scan_number, 'predictions') + makedirs(save_path) if not exists(save_path) else None + test_smooth = 0.98 + epoch_ind = 0 + + while True: + try: + ops = (self.prob_logits, + model.labels, + model.inputs['input_inds'], + model.inputs['cloud_inds']) + stacked_probs, labels, point_inds, cloud_inds = self.sess.run(ops, {model.is_training: False}) + if self.idx % 10 == 0: + print('step ' + str(self.idx)) + self.idx += 1 + stacked_probs = np.reshape(stacked_probs, [model.config.val_batch_size, + model.config.num_points, + model.config.num_classes]) + for j in range(np.shape(stacked_probs)[0]): + probs = stacked_probs[j, :, :] + inds = point_inds[j, :] + c_i = cloud_inds[j][0] + self.test_probs[c_i][inds] = test_smooth * self.test_probs[c_i][inds] + (1 - test_smooth) * probs + + except tf.errors.OutOfRangeError: + new_min = np.min(dataset.min_possibility) + log_out('Epoch {:3d}, end. 
Min possibility = {:.1f}'.format(epoch_ind, new_min), self.Log_file) + if np.min(dataset.min_possibility) > 0.5: # 0.5 + log_out(' Min possibility = {:.1f}'.format(np.min(dataset.min_possibility)), self.Log_file) + print('\nReproject Vote #{:d}'.format(int(np.floor(new_min)))) + + # For validation set + num_classes = 19 + gt_classes = [0 for _ in range(num_classes)] + positive_classes = [0 for _ in range(num_classes)] + true_positive_classes = [0 for _ in range(num_classes)] + val_total_correct = 0 + val_total_seen = 0 + + for j in range(len(self.test_probs)): + test_file_name = dataset.test_list[j] + frame = test_file_name.split('/')[-1][:-4] + proj_path = join(dataset.dataset_path, dataset.test_scan_number, 'proj') + proj_file = join(proj_path, str(frame) + '_proj.pkl') + if isfile(proj_file): + with open(proj_file, 'rb') as f: + proj_inds = pickle.load(f) + probs = self.test_probs[j][proj_inds[0], :] + pred = np.argmax(probs, 1) + if dataset.test_scan_number == '08': + label_path = join(dirname(dataset.dataset_path), 'sequences', dataset.test_scan_number, + 'labels') + label_file = join(label_path, str(frame) + '.label') + labels = DP.load_label_kitti(label_file, remap_lut_val) + invalid_idx = np.where(labels == 0)[0] + labels_valid = np.delete(labels, invalid_idx) + pred_valid = np.delete(pred, invalid_idx) + labels_valid = labels_valid - 1 + correct = np.sum(pred_valid == labels_valid) + val_total_correct += correct + val_total_seen += len(labels_valid) + conf_matrix = confusion_matrix(labels_valid, pred_valid, np.arange(0, num_classes, 1)) + gt_classes += np.sum(conf_matrix, axis=1) + positive_classes += np.sum(conf_matrix, axis=0) + true_positive_classes += np.diagonal(conf_matrix) + else: + store_path = join(test_path, dataset.test_scan_number, 'predictions', + str(frame) + '.label') + pred = pred + 1 + pred = pred.astype(np.uint32) + upper_half = pred >> 16 # get upper half for instances + lower_half = pred & 0xFFFF # get lower half for semantics + 
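For the benchmark sequences, predictions are written in SemanticKITTI's 32-bit `.label` layout: instance id in the upper 16 bits, semantic id in the lower 16, which is what the shift-and-mask arithmetic in this tester reconstructs. A small round-trip sketch:

```python
import numpy as np

def pack_labels(semantics, instances):
    """Pack 16-bit semantic and instance ids into the 32-bit .label format."""
    return (instances.astype(np.uint32) << 16) | semantics.astype(np.uint32)

def unpack_labels(packed):
    """Inverse: (semantics, instances) from packed 32-bit labels."""
    return packed & 0xFFFF, packed >> 16

packed = pack_labels(np.array([13]), np.array([7]))
sem, inst = unpack_labels(packed)
```

The tester only remaps the lower (semantic) half through `remap_lut` and leaves the instance half untouched before `pred.tofile(store_path)`.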
lower_half = remap_lut[lower_half] # do the remapping of semantics + pred = (upper_half << 16) + lower_half # reconstruct full label + pred = pred.astype(np.uint32) + pred.tofile(store_path) + log_out(str(dataset.test_scan_number) + ' finished', self.Log_file) + if dataset.test_scan_number=='08': + iou_list = [] + for n in range(0, num_classes, 1): + iou = true_positive_classes[n] / float( + gt_classes[n] + positive_classes[n] - true_positive_classes[n]) + iou_list.append(iou) + mean_iou = sum(iou_list) / float(num_classes) + + log_out('eval accuracy: {}'.format(val_total_correct / float(val_total_seen)), self.Log_file) + log_out('mean IOU:{}'.format(mean_iou), self.Log_file) + + mean_iou = 100 * mean_iou + print('Mean IoU = {:.1f}%'.format(mean_iou)) + s = '{:5.2f} | '.format(mean_iou) + for IoU in iou_list: + s += '{:5.2f} '.format(100 * IoU) + print('-' * len(s)) + print(s) + print('-' * len(s) + '\n') + self.sess.close() + return + self.sess.run(dataset.test_init_op) + epoch_ind += 1 + continue diff --git a/competing_methods/my_RandLANet/tester_UrbanMesh.py b/competing_methods/my_RandLANet/tester_UrbanMesh.py new file mode 100644 index 00000000..ff6f8400 --- /dev/null +++ b/competing_methods/my_RandLANet/tester_UrbanMesh.py @@ -0,0 +1,211 @@ +import os, glob, pickle +from os import makedirs +from os.path import exists, join, dirname, abspath +from helper_ply import read_ply, write_ply +import tensorflow as tf +import numpy as np +import time + +from plyfile import PlyData, PlyElement +################################### UTILS Functions ####################################### +COLOR_MAP = { + 0: (170, 85, 0), # 'ground' -> brown + 1: (0, 255, 0), # 'vegetation' -> green + 2: (255, 255, 0),# 'building' -> yellow + 3: (0, 255, 255),# 'water' -> blue + 4: (255, 0, 255),# 'vehicle'/'car' -> pink + 5: (0, 0, 153) # 'boat' -> purple +} + +def read_ply_with_plyfilelib(filename): + """convert from a ply file. 
include the label and the object number""" + # ---read the ply file-------- + plydata = PlyData.read(filename) + xyz = np.stack([plydata['vertex'][n] for n in ['x', 'y', 'z']], axis=1) + try: + rgb = np.stack([plydata['vertex'][n] + for n in ['red', 'green', 'blue']] + , axis=1).astype(np.uint8) + except ValueError: + rgb = np.stack([plydata['vertex'][n] + for n in ['r', 'g', 'b']] + , axis=1).astype(np.float32) + if np.max(rgb) > 1: + rgb = rgb + try: + object_indices = plydata['vertex']['object_index'] + labels = plydata['vertex']['label'] + return xyz, rgb, labels, object_indices + except ValueError: + try: + labels = plydata['vertex']['label'] + return xyz, rgb, labels + except ValueError: + return xyz, rgb + +def write_ply_with_plyfilelib(filename, xyz, rgb, labels): + """write into a ply file""" + prop = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('red', 'u1'), ('green', 'u1'), ('blue', 'u1'), ('label', 'u1')] + vertex_all = np.empty(len(xyz), dtype=prop) + for i_prop in range(0, 3): + vertex_all[prop[i_prop][0]] = xyz[:, i_prop] + for i_prop in range(0, 3): + vertex_all[prop[i_prop + 3][0]] = rgb[:, i_prop] + vertex_all[prop[6][0]] = labels + ply = PlyData([PlyElement.describe(vertex_all, 'vertex')], text=True) + ply.write(filename) + +def log_string(out_str, log_out): + log_out.write(out_str + '\n') + log_out.flush() + print(out_str) + + +class ModelTester: + def __init__(self, model, dataset, restore_snap=None): + # Tensorflow Saver definition + my_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES) + self.saver = tf.train.Saver(my_vars, max_to_keep=100) + + # Create a session for running Ops on the Graph. 
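`write_ply_with_plyfilelib` packs coordinates, colours and labels into a single numpy structured array before handing it to plyfile. The packing step itself needs no external dependency and can be sketched with a two-point toy cloud:

```python
import numpy as np

# Structured vertex layout matching the prop list used above
prop = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
        ('red', 'u1'), ('green', 'u1'), ('blue', 'u1'), ('label', 'u1')]

xyz = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]], dtype=np.float32)
rgb = np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8)
labels = np.array([2, 1], dtype=np.uint8)

vertex = np.empty(len(xyz), dtype=prop)
for i, name in enumerate(('x', 'y', 'z')):
    vertex[name] = xyz[:, i]          # column-by-column copy into named fields
for i, name in enumerate(('red', 'green', 'blue')):
    vertex[name] = rgb[:, i]
vertex['label'] = labels
```

`PlyElement.describe(vertex, 'vertex')` then serialises this array directly, one PLY property per field.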
+ on_cpu = False + if on_cpu: + c_proto = tf.ConfigProto(device_count={'GPU': 0}) + else: + c_proto = tf.ConfigProto() + c_proto.gpu_options.allow_growth = True + self.sess = tf.Session(config=c_proto) + self.sess.run(tf.global_variables_initializer()) + + if restore_snap is not None: + self.saver.restore(self.sess, restore_snap) + print("Model restored from " + restore_snap) + + # Add a softmax operation for predictions + self.prob_logits = tf.nn.softmax(model.logits) + self.test_probs = [np.zeros((l.data.shape[0], model.config.num_classes), dtype=np.float16) + for l in dataset.input_trees['test']] + + self.log_out = open('log_test_' + dataset.name + '.txt', 'a') + + def test(self, model, dataset, num_votes=100): + + # Smoothing parameter for votes + test_smooth = 0.98 + + # Initialise iterator with train data + self.sess.run(dataset.test_init_op) + + # Test saving path + saving_path = time.strftime('results/Log_%Y-%m-%d_%H-%M-%S', time.gmtime()) + test_path = join('test', saving_path.split('/')[-1]) + makedirs(test_path) if not exists(test_path) else None + makedirs(join(test_path, 'predictions')) if not exists(join(test_path, 'predictions')) else None + makedirs(join(test_path, 'probs')) if not exists(join(test_path, 'probs')) else None + + ##################### + # Network predictions + ##################### + + step_id = 0 + epoch_id = 0 + last_min = -0.5 + + while last_min < num_votes: + + try: + ops = (self.prob_logits, + model.labels, + model.inputs['input_inds'], + model.inputs['cloud_inds'],) + + stacked_probs, stacked_labels, point_idx, cloud_idx = self.sess.run(ops, {model.is_training: False}) + stacked_probs = np.reshape(stacked_probs, [model.config.val_batch_size, model.config.num_points, + model.config.num_classes]) + + for j in range(np.shape(stacked_probs)[0]): + probs = stacked_probs[j, :, :] + inds = point_idx[j, :] + c_i = cloud_idx[j][0] + self.test_probs[c_i][inds] = test_smooth * self.test_probs[c_i][inds] + (1 - test_smooth) * probs + 
step_id += 1 + log_string('Epoch {:3d}, step {:3d}. min possibility = {:.1f}'.format(epoch_id, step_id, np.min( + dataset.min_possibility['test'])), self.log_out) + + except tf.errors.OutOfRangeError: + + # Save predicted cloud + new_min = np.min(dataset.min_possibility['test']) + log_string('Epoch {:3d}, end. Min possibility = {:.1f}'.format(epoch_id, new_min), self.log_out) + + if last_min + 4 < new_min: + + print('Saving clouds') + + # Update last_min + last_min = new_min + + # Project predictions + print('\nReproject Vote #{:d}'.format(int(np.floor(new_min)))) + t1 = time.time() + files = dataset.test_files + i_test = 0 + for i, file_path in enumerate(files): + # Get file + points = self.load_evaluation_points(file_path) + points = points.astype(np.float16) + + # Reproject probs + probs = np.zeros(shape=[np.shape(points)[0], 8], dtype=np.float16) + proj_index = dataset.test_proj[i_test] + + probs = self.test_probs[i_test][proj_index, :] + + # Insert false columns for ignored labels + probs2 = probs + for l_ind, label_value in enumerate(dataset.label_values): + if label_value in dataset.ignored_labels: + probs2 = np.insert(probs2, l_ind, 0, axis=1) + + # Get the predicted labels + preds = dataset.label_values[np.argmax(probs2, axis=1)].astype(np.uint8) + + # Save plys + xyz, rgb = read_ply_with_plyfilelib(file_path) + labels = np.squeeze(preds) + + for li, ind in zip(labels, range(len(labels))): + for c in COLOR_MAP: + if li == c: + rgb[ind] = COLOR_MAP[c] + break + + # Save to ply + file_name = os.path.splitext(os.path.basename(file_path))[0] + "_pred.ply" + ply_file = join(test_path, 'predictions', file_name) + write_ply_with_plyfilelib(ply_file, xyz, rgb, labels) + + # Subsample to save space + cloud_name = file_path.split('/')[-1] + + # # Save ascii preds + # ascii_name = join(test_path, 'predictions', dataset.ascii_files[cloud_name]) + # np.savetxt(ascii_name, preds, fmt='%d') + log_string(cloud_name + ' has saved', self.log_out) + i_test += 1 + + t2 = 
time.time() + print('Done in {:.1f} s\n'.format(t2 - t1)) + self.sess.close() + return + + self.sess.run(dataset.test_init_op) + epoch_id += 1 + step_id = 0 + continue + return + + @staticmethod + def load_evaluation_points(file_path): + data = read_ply(file_path) + return np.vstack((data['x'], data['y'], data['z'])).T diff --git a/competing_methods/my_RandLANet/utils/6_fold_cv.py b/competing_methods/my_RandLANet/utils/6_fold_cv.py new file mode 100644 index 00000000..7393efdb --- /dev/null +++ b/competing_methods/my_RandLANet/utils/6_fold_cv.py @@ -0,0 +1,66 @@ +import numpy as np +import glob, os, sys + +BASE_DIR = os.path.dirname(os.path.abspath(__file__)) +ROOT_DIR = os.path.dirname(BASE_DIR) +sys.path.append(ROOT_DIR) +from helper_ply import read_ply +from helper_tool import Plot + +if __name__ == '__main__': + base_dir = '/data/S3DIS/results' + original_data_dir = '/data/S3DIS/original_ply' + data_path = glob.glob(os.path.join(base_dir, '*.ply')) + data_path = np.sort(data_path) + + test_total_correct = 0 + test_total_seen = 0 + gt_classes = [0 for _ in range(13)] + positive_classes = [0 for _ in range(13)] + true_positive_classes = [0 for _ in range(13)] + visualization = False + + for file_name in data_path: + pred_data = read_ply(file_name) + pred = pred_data['pred'] + original_data = read_ply(os.path.join(original_data_dir, file_name.split('/')[-1][:-4] + '.ply')) + labels = original_data['class'] + points = np.vstack((original_data['x'], original_data['y'], original_data['z'])).T + + ################## + # Visualize data # + ################## + if visualization: + colors = np.vstack((original_data['red'], original_data['green'], original_data['blue'])).T + xyzrgb = np.concatenate([points, colors], axis=-1) + Plot.draw_pc(xyzrgb) # visualize raw point clouds + Plot.draw_pc_sem_ins(points, labels) # visualize ground-truth + Plot.draw_pc_sem_ins(points, pred) # visualize prediction + + correct = np.sum(pred == labels) + 
print(str(file_name.split('/')[-1][:-4]) + '_acc:' + str(correct / float(len(labels)))) + test_total_correct += correct + test_total_seen += len(labels) + + for j in range(len(labels)): + gt_l = int(labels[j]) + pred_l = int(pred[j]) + gt_classes[gt_l] += 1 + positive_classes[pred_l] += 1 + true_positive_classes[gt_l] += int(gt_l == pred_l) + + iou_list = [] + for n in range(13): + iou = true_positive_classes[n] / float(gt_classes[n] + positive_classes[n] - true_positive_classes[n]) + iou_list.append(iou) + mean_iou = sum(iou_list) / 13.0 + print('eval accuracy: {}'.format(test_total_correct / float(test_total_seen))) + print('mean IOU:{}'.format(mean_iou)) + print(iou_list) + + acc_list = [] + for n in range(13): + acc = true_positive_classes[n] / float(gt_classes[n]) + acc_list.append(acc) + mean_acc = sum(acc_list) / 13.0 + print('mAcc value is :{}'.format(mean_acc)) diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/compile_wrappers.sh b/competing_methods/my_RandLANet/utils/cpp_wrappers/compile_wrappers.sh new file mode 100644 index 00000000..55848128 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/compile_wrappers.sh @@ -0,0 +1,7 @@ +#!/bin/bash + +# Compile cpp subsampling +cd cpp_subsampling +python setup.py build_ext --inplace +cd .. 
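The per-class counters accumulated by `6_fold_cv.py` (`gt_classes`, `positive_classes`, `true_positive_classes`) are all that is needed for mIoU and mAcc. A compact numpy restatement of those final reductions, with toy counts for 3 classes:

```python
import numpy as np

# Toy per-class totals in the same form as the script
gt_classes = np.array([4, 2, 2])             # ground-truth points per class
positive_classes = np.array([3, 3, 2])       # predicted points per class
true_positive_classes = np.array([3, 2, 1])  # correct predictions per class

iou = true_positive_classes / (gt_classes + positive_classes - true_positive_classes)
mean_iou = iou.mean()
mean_acc = (true_positive_classes / gt_classes).mean()
```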
+ diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/CMakeLists.txt b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/CMakeLists.txt new file mode 100644 index 00000000..aa1496e3 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/CMakeLists.txt @@ -0,0 +1,57 @@ +# Graph for Cut Pursuit +# author: Loic Landrieu +# date: 2017 + +CMAKE_MINIMUM_REQUIRED(VERSION 3.5) + +PROJECT(grid_subsampling) + +SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -std=c++11 -fopenmp -O3") + +############################## +### Find required packages ### +############################## +set(CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_MODULE_PATH}) + +find_package(PythonLibs) +find_package(PythonInterp) +find_package(NumPy 1.5 REQUIRED) + +find_package(Boost 1.65.0 COMPONENTS graph REQUIRED) #system filesystem thread serialization +if (${Boost_MINOR_VERSION} LESS 67 ) + find_package(Boost 1.65.0 COMPONENTS numpy${PYTHON_VERSION_MAJOR} REQUIRED) #system filesystem thread serialization +else() + set(PYTHONVERSION ${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR}) + find_package(Boost 1.67.0 COMPONENTS numpy${PYTHONVERSION} REQUIRED) +endif() + +include_directories(C:/dev/boost_numpy/Boost.NumPy) +include_directories(C:/dev/eigen3) + +include_directories(${Boost_INCLUDE_DIRS}) +link_directories(${Boost_LIBRARY_DIRS}) + +message("Boost includes ARE " ${Boost_INCLUDE_DIRS}) +message("Boost LIBRARIES ARE " ${Boost_LIBRARY_DIRS}) + +find_package(Eigen3 REQUIRED NO_MODULE) +INCLUDE_DIRECTORIES(${EIGEN3_INCLUDE_DIR}) +#LINK_DIRECTORIES(${EIGEN3_LIBRARY_DIRS}) + +#SET(PYTHON_LIBRARIES /usr/lib/x86_64-linux-gnu/libpython2.7.so) +#SET(PYTHON_INCLUDE_DIRS /usr/include/python2.7) + +message("PYTHON LIBRARIES ARE " ${PYTHON_LIBRARIES}) +INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_DIRS} ${PYTHON_NUMPY_INCLUDE_DIR}) +LINK_DIRECTORIES(${PYTHON_LIBRARY_DIRS}) +############################## +### Build target library ### 
+############################## + +set(CMAKE_LD_FLAG "${CMAKE_LD_FLAGS} -shared -Wl -fPIC --export-dynamic -fopenmp -O3 -Wall") + +add_library(wrapper SHARED wrapper.cpp) +target_link_libraries(ply_c + ${Boost_LIBRARIES} + ${PYTHON_LIBRARIES} + ) diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling.cp36-win_amd64.pyd b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling.cp36-win_amd64.pyd new file mode 100644 index 00000000..8ea8e7d5 Binary files /dev/null and b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling.cp36-win_amd64.pyd differ diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling.cp37-win_amd64.pyd b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling.cp37-win_amd64.pyd new file mode 100644 index 00000000..b2c44c51 Binary files /dev/null and b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling.cp37-win_amd64.pyd differ diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.cpp b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.cpp new file mode 100644 index 00000000..7c00396f --- /dev/null +++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.cpp @@ -0,0 +1,106 @@ + +#include "grid_subsampling.h" + + +void grid_subsampling(vector& original_points, + vector& subsampled_points, + vector& original_features, + vector& subsampled_features, + vector& original_classes, + vector& subsampled_classes, + float sampleDl, + int verbose) { + + // Initiate variables + // ****************** + + // Number of points in the cloud + size_t N = original_points.size(); + + // Dimension of the features + size_t fdim = original_features.size() / N; + size_t ldim = original_classes.size() / N; + + // Limits of the 
cloud + PointXYZ minCorner = min_point(original_points); + PointXYZ maxCorner = max_point(original_points); + PointXYZ originCorner = floor(minCorner * (1/sampleDl)) * sampleDl; + + // Dimensions of the grid + size_t sampleNX = (size_t)floor((maxCorner.x - originCorner.x) / sampleDl) + 1; + size_t sampleNY = (size_t)floor((maxCorner.y - originCorner.y) / sampleDl) + 1; + //size_t sampleNZ = (size_t)floor((maxCorner.z - originCorner.z) / sampleDl) + 1; + + // Check if features and classes need to be processed + bool use_feature = original_features.size() > 0; + bool use_classes = original_classes.size() > 0; + + + // Create the sampled map + // ********************** + + // Verbose parameters + int i = 0; + int nDisp = N / 100; + + // Initiate variables + size_t iX, iY, iZ, mapIdx; + unordered_map data; + + for (auto& p : original_points) + { + // Position of point in sample map + iX = (size_t)floor((p.x - originCorner.x) / sampleDl); + iY = (size_t)floor((p.y - originCorner.y) / sampleDl); + iZ = (size_t)floor((p.z - originCorner.z) / sampleDl); + mapIdx = iX + sampleNX*iY + sampleNX*sampleNY*iZ; + + // If not already created, create key + if (data.count(mapIdx) < 1) + data.emplace(mapIdx, SampledData(fdim, ldim)); + + // Fill the sample map + if (use_feature && use_classes) + data[mapIdx].update_all(p, original_features.begin() + i * fdim, original_classes.begin() + i * ldim); + else if (use_feature) + data[mapIdx].update_features(p, original_features.begin() + i * fdim); + else if (use_classes) + data[mapIdx].update_classes(p, original_classes.begin() + i * ldim); + else + data[mapIdx].update_points(p); + + // Display + i++; + if (verbose > 1 && i%nDisp == 0) + std::cout << "\rSampled Map : " << std::setw(3) << i / nDisp << "%"; + + } + + // Divide for barycentre and transfer to a vector + subsampled_points.reserve(data.size()); + if (use_feature) + subsampled_features.reserve(data.size() * fdim); + if (use_classes) + subsampled_classes.reserve(data.size() * 
ldim); + for (auto& v : data) + { + subsampled_points.push_back(v.second.point * (1.0 / v.second.count)); + if (use_feature) + { + float count = (float)v.second.count; + transform(v.second.features.begin(), + v.second.features.end(), + v.second.features.begin(), + [count](float f) { return f / count;}); + subsampled_features.insert(subsampled_features.end(),v.second.features.begin(),v.second.features.end()); + } + if (use_classes) + { + for (int i = 0; i < ldim; i++) + subsampled_classes.push_back(max_element(v.second.labels[i].begin(), v.second.labels[i].end(), + [](const pair&a, const pair&b){return a.second < b.second;})->first); + } + } + + return; +} diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h new file mode 100644 index 00000000..b1c84d1b --- /dev/null +++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/grid_subsampling/grid_subsampling.h @@ -0,0 +1,92 @@ + + +#include "../../cpp_utils/cloud/cloud.h" + +#include +#include + +using namespace std; + +class SampledData +{ +public: + + // Elements + // ******** + + int count; + PointXYZ point; + vector features; + vector> labels; + + + // Methods + // ******* + + // Constructor + SampledData() + { + count = 0; + point = PointXYZ(); + } + + SampledData(const size_t fdim, const size_t ldim) + { + count = 0; + point = PointXYZ(); + features = vector(fdim); + labels = vector>(ldim); + } + + // Method Update + void update_all(const PointXYZ p, vector::iterator f_begin, vector::iterator l_begin) + { + count += 1; + point += p; + transform (features.begin(), features.end(), f_begin, features.begin(), plus()); + int i = 0; + for(vector::iterator it = l_begin; it != l_begin + labels.size(); ++it) + { + labels[i][*it] += 1; + i++; + } + return; + } + void update_features(const PointXYZ p, vector::iterator f_begin) + { + count += 1; 
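The C++ `grid_subsampling` above hashes each point into a scalar voxel key (`mapIdx = iX + sampleNX*iY + sampleNX*sampleNY*iZ`) and, in barycenter mode, averages the points sharing a key. The same logic can be sketched in numpy for illustration (not the compiled extension, just its algorithm):

```python
import numpy as np

def grid_subsample(points, sample_dl):
    """Barycenter grid subsampling: average all points sharing a voxel of size sample_dl."""
    origin = np.floor(points.min(axis=0) / sample_dl) * sample_dl
    voxel = np.floor((points - origin) / sample_dl).astype(np.int64)
    # Scalar voxel key, mirroring mapIdx = iX + nX*iY + nX*nY*iZ in the C++ code
    nx, ny = voxel[:, 0].max() + 1, voxel[:, 1].max() + 1
    keys = voxel[:, 0] + nx * voxel[:, 1] + nx * ny * voxel[:, 2]
    uniq, inv, counts = np.unique(keys, return_inverse=True, return_counts=True)
    sums = np.zeros((len(uniq), points.shape[1]))
    np.add.at(sums, inv, points)          # accumulate per-voxel sums
    return sums / counts[:, None]         # divide for the barycenter

pts = np.array([[0.00, 0.0, 0.0],
                [0.04, 0.0, 0.0],   # same 0.1 m voxel as the first point
                [1.00, 0.0, 0.0]])
sub = grid_subsample(pts, sample_dl=0.1)  # two voxels survive
```

Features are averaged the same way, while classes are resolved by majority vote over the per-voxel label histograms, as the `labels` maps in `SampledData` show.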
+ point += p; + transform (features.begin(), features.end(), f_begin, features.begin(), plus()); + return; + } + void update_classes(const PointXYZ p, vector::iterator l_begin) + { + count += 1; + point += p; + int i = 0; + for(vector::iterator it = l_begin; it != l_begin + labels.size(); ++it) + { + labels[i][*it] += 1; + i++; + } + return; + } + void update_points(const PointXYZ p) + { + count += 1; + point += p; + return; + } +}; + + + +void grid_subsampling(vector& original_points, + vector& subsampled_points, + vector& original_features, + vector& subsampled_features, + vector& original_classes, + vector& subsampled_classes, + float sampleDl, + int verbose); + diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/setup.py b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/setup.py new file mode 100644 index 00000000..083d9d4f --- /dev/null +++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/setup.py @@ -0,0 +1,29 @@ +from distutils.core import setup, Extension +import numpy.distutils.misc_util + +# Adding OpenCV to project +# ************************ + +# Adding sources of the project +# ***************************** + +m_name = "grid_subsampling" + +SOURCES = ["../cpp_utils/cloud/cloud.cpp", + "grid_subsampling/grid_subsampling.cpp", + "wrapper.cpp"] + +module = Extension(m_name, + sources=SOURCES, + extra_compile_args=['-std=c++11', + '-D_GLIBCXX_USE_CXX11_ABI=0']) + +setup(ext_modules=[module], include_dirs=numpy.distutils.misc_util.get_numpy_include_dirs()) + + + + + + + + diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/wrapper.cpp b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/wrapper.cpp new file mode 100644 index 00000000..f879059e --- /dev/null +++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_subsampling/wrapper.cpp @@ -0,0 +1,286 @@ +#include +#include +#include "grid_subsampling/grid_subsampling.h" +#include + + + +// 
docstrings for our module +// ************************* + +static char module_docstring[] = "This module provides an interface for the subsampling of a pointcloud"; + +static char compute_docstring[] = "function subsampling a pointcloud"; + + +// Declare the functions +// ********************* + +static PyObject *grid_subsampling_compute(PyObject *self, PyObject *args, PyObject *keywds); + + +// Specify the members of the module +// ********************************* + +static PyMethodDef module_methods[] = +{ + { "compute", (PyCFunction)grid_subsampling_compute, METH_VARARGS | METH_KEYWORDS, compute_docstring }, + {NULL, NULL, 0, NULL} +}; + + +// Initialize the module +// ********************* + +static struct PyModuleDef moduledef = +{ + PyModuleDef_HEAD_INIT, + "grid_subsampling", // m_name + module_docstring, // m_doc + -1, // m_size + module_methods, // m_methods + NULL, // m_reload + NULL, // m_traverse + NULL, // m_clear + NULL, // m_free +}; + +PyMODINIT_FUNC PyInit_grid_subsampling(void) +{ + import_array(); + return PyModule_Create(&moduledef); +} + + +// Actual wrapper +// ************** + +static PyObject *grid_subsampling_compute(PyObject *self, PyObject *args, PyObject *keywds) +{ + + // Manage inputs + // ************* + + // Args containers + PyObject *points_obj = NULL; + PyObject *features_obj = NULL; + PyObject *classes_obj = NULL; + + // Keywords containers + static char *kwlist[] = {"points", "features", "classes", "sampleDl", "method", "verbose", NULL }; + float sampleDl = 0.1; + const char *method_buffer = "barycenters"; + int verbose = 0; + + // Parse the input + if (!PyArg_ParseTupleAndKeywords(args, keywds, "O|$OOfsi", kwlist, &points_obj, &features_obj, &classes_obj, &sampleDl, &method_buffer, &verbose)) + { + PyErr_SetString(PyExc_RuntimeError, "Error parsing arguments"); + return NULL; + } + + // Get the method argument + string method(method_buffer); + + // Interpret method + if (method.compare("barycenters") && 
method.compare("voxelcenters")) + { + PyErr_SetString(PyExc_RuntimeError, "Error parsing method. Valid method names are \"barycenters\" and \"voxelcenters\" "); + return NULL; + } + + // Check if using features or classes + bool use_feature = true, use_classes = true; + if (features_obj == NULL) + use_feature = false; + if (classes_obj == NULL) + use_classes = false; + + // Interpret the input objects as numpy arrays. + PyObject *points_array = PyArray_FROM_OTF(points_obj, NPY_FLOAT, NPY_IN_ARRAY); + PyObject *features_array = NULL; + PyObject *classes_array = NULL; + if (use_feature) + features_array = PyArray_FROM_OTF(features_obj, NPY_FLOAT, NPY_IN_ARRAY); + if (use_classes) + classes_array = PyArray_FROM_OTF(classes_obj, NPY_INT, NPY_IN_ARRAY); + + // Verify data was load correctly. + if (points_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input points to numpy arrays of type float32"); + return NULL; + } + if (use_feature && features_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input features to numpy arrays of type float32"); + return NULL; + } + if (use_classes && classes_array == NULL) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Error converting input classes to numpy arrays of type int32"); + return NULL; + } + + // Check that the input array respect the dims + if ((int)PyArray_NDIM(points_array) != 2 || (int)PyArray_DIM(points_array, 1) != 3) + { + Py_XDECREF(points_array); + Py_XDECREF(classes_array); + Py_XDECREF(features_array); + PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : points.shape is not (N, 3)"); + return NULL; + } + if (use_feature && ((int)PyArray_NDIM(features_array) != 2)) + { + Py_XDECREF(points_array); + 
Py_XDECREF(classes_array);
+		Py_XDECREF(features_array);
+		PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : features.shape is not (N, d)");
+		return NULL;
+	}
+
+	if (use_classes && (int)PyArray_NDIM(classes_array) > 2)
+	{
+		Py_XDECREF(points_array);
+		Py_XDECREF(classes_array);
+		Py_XDECREF(features_array);
+		PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : classes.shape is not (N,) or (N, d)");
+		return NULL;
+	}
+
+	// Number of points
+	int N = (int)PyArray_DIM(points_array, 0);
+
+	// Dimension of the features
+	int fdim = 0;
+	if (use_feature)
+		fdim = (int)PyArray_DIM(features_array, 1);
+
+	// Dimension of the labels
+	int ldim = 1;
+	if (use_classes && (int)PyArray_NDIM(classes_array) == 2)
+		ldim = (int)PyArray_DIM(classes_array, 1);
+
+	// Check that the input arrays respect the number of points
+	if (use_feature && (int)PyArray_DIM(features_array, 0) != N)
+	{
+		Py_XDECREF(points_array);
+		Py_XDECREF(classes_array);
+		Py_XDECREF(features_array);
+		PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : features.shape is not (N, d)");
+		return NULL;
+	}
+	if (use_classes && (int)PyArray_DIM(classes_array, 0) != N)
+	{
+		Py_XDECREF(points_array);
+		Py_XDECREF(classes_array);
+		Py_XDECREF(features_array);
+		PyErr_SetString(PyExc_RuntimeError, "Wrong dimensions : classes.shape is not (N,) or (N, d)");
+		return NULL;
+	}
+
+
+	// Call the C++ function
+	// *********************
+
+	// Create pyramid
+	if (verbose > 0)
+		cout << "Computing cloud pyramid with support points: " << endl;
+
+
+	// Convert PyArray to Cloud C++ class
+	vector<PointXYZ> original_points;
+	vector<float> original_features;
+	vector<int> original_classes;
+	original_points = vector<PointXYZ>((PointXYZ*)PyArray_DATA(points_array), (PointXYZ*)PyArray_DATA(points_array) + N);
+	if (use_feature)
+		original_features = vector<float>((float*)PyArray_DATA(features_array), (float*)PyArray_DATA(features_array) + N*fdim);
+	if (use_classes)
+		original_classes = vector<int>((int*)PyArray_DATA(classes_array), 
(int*)PyArray_DATA(classes_array) + N*ldim);
+
+	// Subsample
+	vector<PointXYZ> subsampled_points;
+	vector<float> subsampled_features;
+	vector<int> subsampled_classes;
+	grid_subsampling(original_points,
+	                 subsampled_points,
+	                 original_features,
+	                 subsampled_features,
+	                 original_classes,
+	                 subsampled_classes,
+	                 sampleDl,
+	                 verbose);
+
+	// Check result
+	if (subsampled_points.size() < 1)
+	{
+		PyErr_SetString(PyExc_RuntimeError, "Error");
+		return NULL;
+	}
+
+	// Manage outputs
+	// **************
+
+	// Dimensions of the output containers
+	npy_intp* point_dims = new npy_intp[2];
+	point_dims[0] = subsampled_points.size();
+	point_dims[1] = 3;
+	npy_intp* feature_dims = new npy_intp[2];
+	feature_dims[0] = subsampled_points.size();
+	feature_dims[1] = fdim;
+	npy_intp* classes_dims = new npy_intp[2];
+	classes_dims[0] = subsampled_points.size();
+	classes_dims[1] = ldim;
+
+	// Create output array
+	PyObject *res_points_obj = PyArray_SimpleNew(2, point_dims, NPY_FLOAT);
+	PyObject *res_features_obj = NULL;
+	PyObject *res_classes_obj = NULL;
+	PyObject *ret = NULL;
+
+	// Fill output array with values
+	size_t size_in_bytes = subsampled_points.size() * 3 * sizeof(float);
+	memcpy(PyArray_DATA(res_points_obj), subsampled_points.data(), size_in_bytes);
+	if (use_feature)
+	{
+		size_in_bytes = subsampled_points.size() * fdim * sizeof(float);
+		res_features_obj = PyArray_SimpleNew(2, feature_dims, NPY_FLOAT);
+		memcpy(PyArray_DATA(res_features_obj), subsampled_features.data(), size_in_bytes);
+	}
+	if (use_classes)
+	{
+		size_in_bytes = subsampled_points.size() * ldim * sizeof(int);
+		res_classes_obj = PyArray_SimpleNew(2, classes_dims, NPY_INT);
+		memcpy(PyArray_DATA(res_classes_obj), subsampled_classes.data(), size_in_bytes);
+	}
+
+
+	// Merge results
+	if (use_feature && use_classes)
+		ret = Py_BuildValue("NNN", res_points_obj, res_features_obj, res_classes_obj);
+	else if (use_feature)
+		ret = Py_BuildValue("NN", res_points_obj, res_features_obj);
+	else if (use_classes)
+		ret = 
Py_BuildValue("NN", res_points_obj, res_classes_obj);
+	else
+		ret = Py_BuildValue("N", res_points_obj);
+
+	// Clean up
+	// ********
+
+	Py_DECREF(points_array);
+	Py_XDECREF(features_array);
+	Py_XDECREF(classes_array);
+
+	return ret;
+}
\ No newline at end of file
diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/cloud/cloud.cpp b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/cloud/cloud.cpp
new file mode 100644
index 00000000..bdb65679
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/cloud/cloud.cpp
@@ -0,0 +1,67 @@
+//
+//
+//		0==========================0
+//		|    Local feature test    |
+//		0==========================0
+//
+//		version 1.0 :
+//			>
+//
+//---------------------------------------------------
+//
+//		Cloud source :
+//		Define useful Functions/Methods
+//
+//----------------------------------------------------
+//
+//		Hugues THOMAS - 10/02/2017
+//
+
+
+#include "cloud.h"
+
+
+// Getters
+// *******
+
+PointXYZ max_point(std::vector<PointXYZ> points)
+{
+	// Initialize limits
+	PointXYZ maxP(points[0]);
+
+	// Loop over all points
+	for (auto p : points)
+	{
+		if (p.x > maxP.x)
+			maxP.x = p.x;
+
+		if (p.y > maxP.y)
+			maxP.y = p.y;
+
+		if (p.z > maxP.z)
+			maxP.z = p.z;
+	}
+
+	return maxP;
+}
+
+PointXYZ min_point(std::vector<PointXYZ> points)
+{
+	// Initialize limits
+	PointXYZ minP(points[0]);
+
+	// Loop over all points
+	for (auto p : points)
+	{
+		if (p.x < minP.x)
+			minP.x = p.x;
+
+		if (p.y < minP.y)
+			minP.y = p.y;
+
+		if (p.z < minP.z)
+			minP.z = p.z;
+	}
+
+	return minP;
+}
\ No newline at end of file
diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/cloud/cloud.h b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/cloud/cloud.h
new file mode 100644
index 00000000..39ab05b6
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/cloud/cloud.h
@@ -0,0 +1,158 @@
+//
+//
+//		0==========================0
+//		|    Local feature test    |
+// 
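For reference, the "barycenters" behaviour implemented by the wrapper above can be sketched in plain Python (an illustrative stand-in with a hypothetical `grid_subsample` helper, not code from this repository): points falling into the same voxel of side `sample_dl` are replaced by their barycenter.

```python
# Illustrative pure-Python sketch of barycenter grid subsampling
# (hypothetical helper, not part of the repo).
from collections import defaultdict

def grid_subsample(points, sample_dl):
    """points: list of (x, y, z) tuples; returns one barycenter per voxel."""
    # Shift by the minimum corner so voxel indices are non-negative,
    # mirroring the min_point() origin used by the C++ implementation.
    origin = tuple(min(p[i] for p in points) for i in range(3))
    voxels = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # sum_x, sum_y, sum_z, count
    for x, y, z in points:
        key = (int((x - origin[0]) / sample_dl),
               int((y - origin[1]) / sample_dl),
               int((z - origin[2]) / sample_dl))
        acc = voxels[key]
        acc[0] += x; acc[1] += y; acc[2] += z; acc[3] += 1
    # Average the accumulated sums: one point per occupied voxel.
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in voxels.values()]
```

Against the compiled extension, the equivalent call would look like `grid_subsampling.compute(points, sampleDl=0.1, method="barycenters")`, with subsampled features and classes returned as extra tuple entries when the `features`/`classes` keyword arguments are passed.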
0==========================0
+//
+//		version 1.0 :
+//			>
+//
+//---------------------------------------------------
+//
+//		Cloud header
+//
+//----------------------------------------------------
+//
+//		Hugues THOMAS - 10/02/2017
+//
+
+
+#pragma once
+
+#include <vector>
+#include <unordered_map>
+#include <map>
+#include <algorithm>
+#include <numeric>
+#include <iostream>
+#include <iomanip>
+#include <cmath>
+
+#include <time.h>
+
+
+
+
+// Point class
+// ***********
+
+
+class PointXYZ
+{
+public:
+
+	// Elements
+	// ********
+
+	float x, y, z;
+
+
+	// Methods
+	// *******
+
+	// Constructors
+	PointXYZ() { x = 0; y = 0; z = 0; }
+	PointXYZ(float x0, float y0, float z0) { x = x0; y = y0; z = z0; }
+
+	// Array-style accessor
+	float operator [] (int i) const
+	{
+		if (i == 0) return x;
+		else if (i == 1) return y;
+		else return z;
+	}
+
+	// Operations
+	float dot(const PointXYZ P) const
+	{
+		return x * P.x + y * P.y + z * P.z;
+	}
+
+	float sq_norm()
+	{
+		return x*x + y*y + z*z;
+	}
+
+	PointXYZ cross(const PointXYZ P) const
+	{
+		return PointXYZ(y*P.z - z*P.y, z*P.x - x*P.z, x*P.y - y*P.x);
+	}
+
+	PointXYZ& operator+=(const PointXYZ& P)
+	{
+		x += P.x;
+		y += P.y;
+		z += P.z;
+		return *this;
+	}
+
+	PointXYZ& operator-=(const PointXYZ& P)
+	{
+		x -= P.x;
+		y -= P.y;
+		z -= P.z;
+		return *this;
+	}
+
+	PointXYZ& operator*=(const float& a)
+	{
+		x *= a;
+		y *= a;
+		z *= a;
+		return *this;
+	}
+};
+
+
+// Point operations
+// ****************
+
+inline PointXYZ operator + (const PointXYZ A, const PointXYZ B)
+{
+	return PointXYZ(A.x + B.x, A.y + B.y, A.z + B.z);
+}
+
+inline PointXYZ operator - (const PointXYZ A, const PointXYZ B)
+{
+	return PointXYZ(A.x - B.x, A.y - B.y, A.z - B.z);
+}
+
+inline PointXYZ operator * (const PointXYZ P, const float a)
+{
+	return PointXYZ(P.x * a, P.y * a, P.z * a);
+}
+
+inline PointXYZ operator * (const float a, const PointXYZ P)
+{
+	return PointXYZ(P.x * a, P.y * a, P.z * a);
+}
+
+inline std::ostream& operator << (std::ostream& os, const PointXYZ P)
+{
+	return os << "[" << P.x << ", " << P.y << ", " << 
P.z << "]";
+}
+
+inline bool operator == (const PointXYZ A, const PointXYZ B)
+{
+	return A.x == B.x && A.y == B.y && A.z == B.z;
+}
+
+inline PointXYZ floor(const PointXYZ P)
+{
+	return PointXYZ(std::floor(P.x), std::floor(P.y), std::floor(P.z));
+}
+
+
+PointXYZ max_point(std::vector<PointXYZ> points);
+PointXYZ min_point(std::vector<PointXYZ> points);
+
+
+
+
+
+
+
+
+
+
+
diff --git a/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/nanoflann/nanoflann.hpp b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/nanoflann/nanoflann.hpp
new file mode 100644
index 00000000..8d2ab6cc
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/cpp_wrappers/cpp_utils/nanoflann/nanoflann.hpp
@@ -0,0 +1,2043 @@
+/***********************************************************************
+ * Software License Agreement (BSD License)
+ *
+ * Copyright 2008-2009  Marius Muja (mariusm@cs.ubc.ca). All rights reserved.
+ * Copyright 2008-2009  David G. Lowe (lowe@cs.ubc.ca). All rights reserved.
+ * Copyright 2011-2016  Jose Luis Blanco (joseluisblancoc@gmail.com).
+ *   All rights reserved.
+ *
+ * THE BSD LICENSE
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *************************************************************************/
+
+/** \mainpage nanoflann C++ API documentation
+ *  nanoflann is a C++ header-only library for building KD-Trees, mostly
+ *  optimized for 2D or 3D point clouds.
+ *
+ *  nanoflann does not require compiling or installing, just an
+ *  #include <nanoflann.hpp> in your code.
+ *
+ *  See:
+ *   - C++ API organized by modules
+ *   - Online README
+ *   - Doxygen documentation
+ */
+
+#ifndef NANOFLANN_HPP_
+#define NANOFLANN_HPP_
+
+#include <algorithm>
+#include <array>
+#include <cassert>
+#include <cmath>   // for abs()
+#include <cstdio>  // for fwrite()
+#include <cstdlib> // for abs()
+#include <functional>
+#include <limits> // std::reference_wrapper
+#include <stdexcept>
+#include <vector>
+
+/** Library version: 0xMmP (M=Major,m=minor,P=patch) */
+#define NANOFLANN_VERSION 0x130
+
+// Avoid conflicting declaration of min/max macros in windows headers
+#if !defined(NOMINMAX) &&                                                      \
+    (defined(_WIN32) || defined(_WIN32_) || defined(WIN32) || defined(_WIN64))
+#define NOMINMAX
+#ifdef max
+#undef max
+#undef min
+#endif
+#endif
+
+namespace nanoflann {
+/** @addtogroup nanoflann_grp nanoflann C++ library for ANN
+ *  @{ */
+
+/** the PI constant (required to avoid MSVC missing symbols) */
+template <typename T> T pi_const() {
+  return static_cast<T>(3.14159265358979323846);
+}
+
+/**
+ * Traits if object is resizable and assignable (typically has a resize | assign
+ * method)
+ */
+template <typename T, typename = int> struct has_resize : std::false_type {};
+
+template <typename T>
+struct has_resize<T, decltype((void)std::declval<T>().resize(1), 0)>
+    : std::true_type {};
+
+template <typename T, typename = int> 
struct has_assign : std::false_type {};
+
+template <typename T>
+struct has_assign<T, decltype((void)std::declval<T>().assign(1, 0), 0)>
+    : std::true_type {};
+
+/**
+ * Free function to resize a resizable object
+ */
+template <typename Container>
+inline typename std::enable_if<has_resize<Container>::value, void>::type
+resize(Container &c, const size_t nElements) {
+  c.resize(nElements);
+}
+
+/**
+ * Free function that has no effects on non resizable containers (e.g.
+ * std::array) It raises an exception if the expected size does not match
+ */
+template <typename Container>
+inline typename std::enable_if<!has_resize<Container>::value, void>::type
+resize(Container &c, const size_t nElements) {
+  if (nElements != c.size())
+    throw std::logic_error("Try to change the size of a std::array.");
+}
+
+/**
+ * Free function to assign to a container
+ */
+template <typename Container, typename T>
+inline typename std::enable_if<has_assign<Container>::value, void>::type
+assign(Container &c, const size_t nElements, const T &value) {
+  c.assign(nElements, value);
+}
+
+/**
+ * Free function to assign to a std::array
+ */
+template <typename Container, typename T>
+inline typename std::enable_if<!has_assign<Container>::value, void>::type
+assign(Container &c, const size_t nElements, const T &value) {
+  for (size_t i = 0; i < nElements; i++)
+    c[i] = value;
+}
+
+/** @addtogroup result_sets_grp Result set classes
+ *  @{ */
+template <typename _DistanceType, typename _IndexType = size_t,
+          typename _CountType = size_t>
+class KNNResultSet {
+public:
+  typedef _DistanceType DistanceType;
+  typedef _IndexType IndexType;
+  typedef _CountType CountType;
+
+private:
+  IndexType *indices;
+  DistanceType *dists;
+  CountType capacity;
+  CountType count;
+
+public:
+  inline KNNResultSet(CountType capacity_)
+      : indices(0), dists(0), capacity(capacity_), count(0) {}
+
+  inline void init(IndexType *indices_, DistanceType *dists_) {
+    indices = indices_;
+    dists = dists_;
+    count = 0;
+    if (capacity)
+      dists[capacity - 1] = (std::numeric_limits<DistanceType>::max)();
+  }
+
+  inline CountType size() const { return count; }
+
+  inline bool full() const { return count == capacity; }
+
+  /**
+   * Called during search to add an element matching the criteria. 
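A rough Python analogue of this bounded k-NN result set may help (illustrative only; the class below is a sketch, not the nanoflann API): keep at most `capacity` `(distance, index)` pairs sorted by distance, and expose the current worst distance so the tree search can prune subtrees that cannot improve the results.

```python
import bisect

class KNNResultSet:
    """Sketch of a bounded, sorted k-NN candidate set."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pairs = []  # kept sorted as (dist, index)

    def add_point(self, dist, index):
        # Insert in sorted position, then drop the worst entry if over capacity.
        bisect.insort(self.pairs, (dist, index))
        if len(self.pairs) > self.capacity:
            self.pairs.pop()
        return True  # the search always continues (as in the C++ addPoint)

    def worst_dist(self):
        # Until the set is full, every candidate must be accepted.
        if len(self.pairs) < self.capacity:
            return float("inf")
        return self.pairs[-1][0]
```

The C++ version achieves the same effect in place over two preallocated arrays (`indices`, `dists`), shifting worse entries down instead of allocating.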
+ * @return true if the search should be continued, false if the results are + * sufficient + */ + inline bool addPoint(DistanceType dist, IndexType index) { + CountType i; + for (i = count; i > 0; --i) { +#ifdef NANOFLANN_FIRST_MATCH // If defined and two points have the same + // distance, the one with the lowest-index will be + // returned first. + if ((dists[i - 1] > dist) || + ((dist == dists[i - 1]) && (indices[i - 1] > index))) { +#else + if (dists[i - 1] > dist) { +#endif + if (i < capacity) { + dists[i] = dists[i - 1]; + indices[i] = indices[i - 1]; + } + } else + break; + } + if (i < capacity) { + dists[i] = dist; + indices[i] = index; + } + if (count < capacity) + count++; + + // tell caller that the search shall continue + return true; + } + + inline DistanceType worstDist() const { return dists[capacity - 1]; } +}; + +/** operator "<" for std::sort() */ +struct IndexDist_Sorter { + /** PairType will be typically: std::pair */ + template + inline bool operator()(const PairType &p1, const PairType &p2) const { + return p1.second < p2.second; + } +}; + +/** + * A result-set class used when performing a radius based search. + */ +template +class RadiusResultSet { +public: + typedef _DistanceType DistanceType; + typedef _IndexType IndexType; + +public: + const DistanceType radius; + + std::vector> &m_indices_dists; + + inline RadiusResultSet( + DistanceType radius_, + std::vector> &indices_dists) + : radius(radius_), m_indices_dists(indices_dists) { + init(); + } + + inline void init() { clear(); } + inline void clear() { m_indices_dists.clear(); } + + inline size_t size() const { return m_indices_dists.size(); } + + inline bool full() const { return true; } + + /** + * Called during search to add an element matching the criteria. 
+ * @return true if the search should be continued, false if the results are + * sufficient + */ + inline bool addPoint(DistanceType dist, IndexType index) { + if (dist < radius) + m_indices_dists.push_back(std::make_pair(index, dist)); + return true; + } + + inline DistanceType worstDist() const { return radius; } + + /** + * Find the worst result (furtherest neighbor) without copying or sorting + * Pre-conditions: size() > 0 + */ + std::pair worst_item() const { + if (m_indices_dists.empty()) + throw std::runtime_error("Cannot invoke RadiusResultSet::worst_item() on " + "an empty list of results."); + typedef + typename std::vector>::const_iterator + DistIt; + DistIt it = std::max_element(m_indices_dists.begin(), m_indices_dists.end(), + IndexDist_Sorter()); + return *it; + } +}; + +/** @} */ + +/** @addtogroup loadsave_grp Load/save auxiliary functions + * @{ */ +template +void save_value(FILE *stream, const T &value, size_t count = 1) { + fwrite(&value, sizeof(value), count, stream); +} + +template +void save_value(FILE *stream, const std::vector &value) { + size_t size = value.size(); + fwrite(&size, sizeof(size_t), 1, stream); + fwrite(&value[0], sizeof(T), size, stream); +} + +template +void load_value(FILE *stream, T &value, size_t count = 1) { + size_t read_cnt = fread(&value, sizeof(value), count, stream); + if (read_cnt != count) { + throw std::runtime_error("Cannot read from file"); + } +} + +template void load_value(FILE *stream, std::vector &value) { + size_t size; + size_t read_cnt = fread(&size, sizeof(size_t), 1, stream); + if (read_cnt != 1) { + throw std::runtime_error("Cannot read from file"); + } + value.resize(size); + read_cnt = fread(&value[0], sizeof(T), size, stream); + if (read_cnt != size) { + throw std::runtime_error("Cannot read from file"); + } +} +/** @} */ + +/** @addtogroup metric_grp Metric (distance) classes + * @{ */ + +struct Metric {}; + +/** Manhattan distance functor (generic version, optimized for + * high-dimensionality 
data sets). Corresponding distance traits: + * nanoflann::metric_L1 \tparam T Type of the elements (e.g. double, float, + * uint8_t) \tparam _DistanceType Type of distance variables (must be signed) + * (e.g. float, double, int64_t) + */ +template +struct L1_Adaptor { + typedef T ElementType; + typedef _DistanceType DistanceType; + + const DataSource &data_source; + + L1_Adaptor(const DataSource &_data_source) : data_source(_data_source) {} + + inline DistanceType evalMetric(const T *a, const size_t b_idx, size_t size, + DistanceType worst_dist = -1) const { + DistanceType result = DistanceType(); + const T *last = a + size; + const T *lastgroup = last - 3; + size_t d = 0; + + /* Process 4 items with each loop for efficiency. */ + while (a < lastgroup) { + const DistanceType diff0 = + std::abs(a[0] - data_source.kdtree_get_pt(b_idx, d++)); + const DistanceType diff1 = + std::abs(a[1] - data_source.kdtree_get_pt(b_idx, d++)); + const DistanceType diff2 = + std::abs(a[2] - data_source.kdtree_get_pt(b_idx, d++)); + const DistanceType diff3 = + std::abs(a[3] - data_source.kdtree_get_pt(b_idx, d++)); + result += diff0 + diff1 + diff2 + diff3; + a += 4; + if ((worst_dist > 0) && (result > worst_dist)) { + return result; + } + } + /* Process last 0-3 components. Not needed for standard vector lengths. */ + while (a < last) { + result += std::abs(*a++ - data_source.kdtree_get_pt(b_idx, d++)); + } + return result; + } + + template + inline DistanceType accum_dist(const U a, const V b, const size_t) const { + return std::abs(a - b); + } +}; + +/** Squared Euclidean distance functor (generic version, optimized for + * high-dimensionality data sets). Corresponding distance traits: + * nanoflann::metric_L2 \tparam T Type of the elements (e.g. double, float, + * uint8_t) \tparam _DistanceType Type of distance variables (must be signed) + * (e.g. 
float, double, int64_t) + */ +template +struct L2_Adaptor { + typedef T ElementType; + typedef _DistanceType DistanceType; + + const DataSource &data_source; + + L2_Adaptor(const DataSource &_data_source) : data_source(_data_source) {} + + inline DistanceType evalMetric(const T *a, const size_t b_idx, size_t size, + DistanceType worst_dist = -1) const { + DistanceType result = DistanceType(); + const T *last = a + size; + const T *lastgroup = last - 3; + size_t d = 0; + + /* Process 4 items with each loop for efficiency. */ + while (a < lastgroup) { + const DistanceType diff0 = a[0] - data_source.kdtree_get_pt(b_idx, d++); + const DistanceType diff1 = a[1] - data_source.kdtree_get_pt(b_idx, d++); + const DistanceType diff2 = a[2] - data_source.kdtree_get_pt(b_idx, d++); + const DistanceType diff3 = a[3] - data_source.kdtree_get_pt(b_idx, d++); + result += diff0 * diff0 + diff1 * diff1 + diff2 * diff2 + diff3 * diff3; + a += 4; + if ((worst_dist > 0) && (result > worst_dist)) { + return result; + } + } + /* Process last 0-3 components. Not needed for standard vector lengths. */ + while (a < last) { + const DistanceType diff0 = *a++ - data_source.kdtree_get_pt(b_idx, d++); + result += diff0 * diff0; + } + return result; + } + + template + inline DistanceType accum_dist(const U a, const V b, const size_t) const { + return (a - b) * (a - b); + } +}; + +/** Squared Euclidean (L2) distance functor (suitable for low-dimensionality + * datasets, like 2D or 3D point clouds) Corresponding distance traits: + * nanoflann::metric_L2_Simple \tparam T Type of the elements (e.g. double, + * float, uint8_t) \tparam _DistanceType Type of distance variables (must be + * signed) (e.g. 
float, double, int64_t) + */ +template +struct L2_Simple_Adaptor { + typedef T ElementType; + typedef _DistanceType DistanceType; + + const DataSource &data_source; + + L2_Simple_Adaptor(const DataSource &_data_source) + : data_source(_data_source) {} + + inline DistanceType evalMetric(const T *a, const size_t b_idx, + size_t size) const { + DistanceType result = DistanceType(); + for (size_t i = 0; i < size; ++i) { + const DistanceType diff = a[i] - data_source.kdtree_get_pt(b_idx, i); + result += diff * diff; + } + return result; + } + + template + inline DistanceType accum_dist(const U a, const V b, const size_t) const { + return (a - b) * (a - b); + } +}; + +/** SO2 distance functor + * Corresponding distance traits: nanoflann::metric_SO2 + * \tparam T Type of the elements (e.g. double, float) + * \tparam _DistanceType Type of distance variables (must be signed) (e.g. + * float, double) orientation is constrained to be in [-pi, pi] + */ +template +struct SO2_Adaptor { + typedef T ElementType; + typedef _DistanceType DistanceType; + + const DataSource &data_source; + + SO2_Adaptor(const DataSource &_data_source) : data_source(_data_source) {} + + inline DistanceType evalMetric(const T *a, const size_t b_idx, + size_t size) const { + return accum_dist(a[size - 1], data_source.kdtree_get_pt(b_idx, size - 1), + size - 1); + } + + /** Note: this assumes that input angles are already in the range [-pi,pi] */ + template + inline DistanceType accum_dist(const U a, const V b, const size_t) const { + DistanceType result = DistanceType(), PI = pi_const(); + result = b - a; + if (result > PI) + result -= 2 * PI; + else if (result < -PI) + result += 2 * PI; + return result; + } +}; + +/** SO3 distance functor (Uses L2_Simple) + * Corresponding distance traits: nanoflann::metric_SO3 + * \tparam T Type of the elements (e.g. double, float) + * \tparam _DistanceType Type of distance variables (must be signed) (e.g. 
+ * float, double)
+ */
+template <class T, class DataSource, typename _DistanceType = T>
+struct SO3_Adaptor {
+  typedef T ElementType;
+  typedef _DistanceType DistanceType;
+
+  L2_Simple_Adaptor<T, DataSource> distance_L2_Simple;
+
+  SO3_Adaptor(const DataSource &_data_source)
+      : distance_L2_Simple(_data_source) {}
+
+  inline DistanceType evalMetric(const T *a, const size_t b_idx,
+                                 size_t size) const {
+    return distance_L2_Simple.evalMetric(a, b_idx, size);
+  }
+
+  template <typename U, typename V>
+  inline DistanceType accum_dist(const U a, const V b, const size_t idx) const {
+    return distance_L2_Simple.accum_dist(a, b, idx);
+  }
+};
+
+/** Metaprogramming helper traits class for the L1 (Manhattan) metric */
+struct metric_L1 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef L1_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the L2 (Euclidean) metric */
+struct metric_L2 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef L2_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the L2_simple (Euclidean) metric */
+struct metric_L2_Simple : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef L2_Simple_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the SO3_InnerProdQuat metric */
+struct metric_SO2 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef SO2_Adaptor<T, DataSource> distance_t;
+  };
+};
+/** Metaprogramming helper traits class for the SO3_InnerProdQuat metric */
+struct metric_SO3 : public Metric {
+  template <class T, class DataSource> struct traits {
+    typedef SO3_Adaptor<T, DataSource> distance_t;
+  };
+};
+
+/** @} */
+
+/** @addtogroup param_grp Parameter structs
+ *  @{ */
+
+/** Parameters (see README.md) */
+struct KDTreeSingleIndexAdaptorParams {
+  KDTreeSingleIndexAdaptorParams(size_t _leaf_max_size = 10)
+      : leaf_max_size(_leaf_max_size) {}
+
+  size_t leaf_max_size;
+};
+
+/** Search options for KDTreeSingleIndexAdaptor::findNeighbors() */
+struct SearchParams {
+  /** Note: The first argument (checks_IGNORED_) is ignored, but kept for
+   * compatibility with the FLANN interface */
+  SearchParams(int 
checks_IGNORED_ = 32, float eps_ = 0, bool sorted_ = true) + : checks(checks_IGNORED_), eps(eps_), sorted(sorted_) {} + + int checks; //!< Ignored parameter (Kept for compatibility with the FLANN + //!< interface). + float eps; //!< search for eps-approximate neighbours (default: 0) + bool sorted; //!< only for radius search, require neighbours sorted by + //!< distance (default: true) +}; +/** @} */ + +/** @addtogroup memalloc_grp Memory allocation + * @{ */ + +/** + * Allocates (using C's malloc) a generic type T. + * + * Params: + * count = number of instances to allocate. + * Returns: pointer (of type T*) to memory buffer + */ +template inline T *allocate(size_t count = 1) { + T *mem = static_cast(::malloc(sizeof(T) * count)); + return mem; +} + +/** + * Pooled storage allocator + * + * The following routines allow for the efficient allocation of storage in + * small chunks from a specified pool. Rather than allowing each structure + * to be freed individually, an entire pool of storage is freed at once. + * This method has two advantages over just using malloc() and free(). First, + * it is far more efficient for allocating small objects, as there is + * no overhead for remembering all the information needed to free each + * object or consolidating fragmented memory. Second, the decision about + * how long to keep an object is made at the time of allocation, and there + * is no need to track down all the objects to free them. + * + */ + +const size_t WORDSIZE = 16; +const size_t BLOCKSIZE = 8192; + +class PooledAllocator { + /* We maintain memory alignment to word boundaries by requiring that all + allocations be in multiples of the machine wordsize. */ + /* Size of machine word in bytes. Must be power of 2. */ + /* Minimum number of bytes requested at a time from the system. Must be + * multiple of WORDSIZE. */ + + size_t remaining; /* Number of bytes left in current block of storage. */ + void *base; /* Pointer to base of current block of storage. 
*/ + void *loc; /* Current location in block to next allocate memory. */ + + void internal_init() { + remaining = 0; + base = NULL; + usedMemory = 0; + wastedMemory = 0; + } + +public: + size_t usedMemory; + size_t wastedMemory; + + /** + Default constructor. Initializes a new pool. + */ + PooledAllocator() { internal_init(); } + + /** + * Destructor. Frees all the memory allocated in this pool. + */ + ~PooledAllocator() { free_all(); } + + /** Frees all allocated memory chunks */ + void free_all() { + while (base != NULL) { + void *prev = + *(static_cast(base)); /* Get pointer to prev block. */ + ::free(base); + base = prev; + } + internal_init(); + } + + /** + * Returns a pointer to a piece of new memory of the given size in bytes + * allocated from the pool. + */ + void *malloc(const size_t req_size) { + /* Round size up to a multiple of wordsize. The following expression + only works for WORDSIZE that is a power of 2, by masking last bits of + incremented size to zero. + */ + const size_t size = (req_size + (WORDSIZE - 1)) & ~(WORDSIZE - 1); + + /* Check whether a new block must be allocated. Note that the first word + of a block is reserved for a pointer to the previous block. + */ + if (size > remaining) { + + wastedMemory += remaining; + + /* Allocate new storage. */ + const size_t blocksize = + (size + sizeof(void *) + (WORDSIZE - 1) > BLOCKSIZE) + ? size + sizeof(void *) + (WORDSIZE - 1) + : BLOCKSIZE; + + // use the standard C malloc to allocate memory + void *m = ::malloc(blocksize); + if (!m) { + fprintf(stderr, "Failed to allocate memory.\n"); + return NULL; + } + + /* Fill first word of new block with pointer to previous block. 
*/ + static_cast(m)[0] = base; + base = m; + + size_t shift = 0; + // int size_t = (WORDSIZE - ( (((size_t)m) + sizeof(void*)) & + // (WORDSIZE-1))) & (WORDSIZE-1); + + remaining = blocksize - sizeof(void *) - shift; + loc = (static_cast(m) + sizeof(void *) + shift); + } + void *rloc = loc; + loc = static_cast(loc) + size; + remaining -= size; + + usedMemory += size; + + return rloc; + } + + /** + * Allocates (using this pool) a generic type T. + * + * Params: + * count = number of instances to allocate. + * Returns: pointer (of type T*) to memory buffer + */ + template T *allocate(const size_t count = 1) { + T *mem = static_cast(this->malloc(sizeof(T) * count)); + return mem; + } +}; +/** @} */ + +/** @addtogroup nanoflann_metaprog_grp Auxiliary metaprogramming stuff + * @{ */ + +/** Used to declare fixed-size arrays when DIM>0, dynamically-allocated vectors + * when DIM=-1. Fixed size version for a generic DIM: + */ +template struct array_or_vector_selector { + typedef std::array container_t; +}; +/** Dynamic size version */ +template struct array_or_vector_selector<-1, T> { + typedef std::vector container_t; +}; + +/** @} */ + +/** kd-tree base-class + * + * Contains the member functions common to the classes KDTreeSingleIndexAdaptor + * and KDTreeSingleIndexDynamicAdaptor_. + * + * \tparam Derived The name of the class which inherits this class. + * \tparam DatasetAdaptor The user-provided adaptor (see comments above). + * \tparam Distance The distance metric to use, these are all classes derived + * from nanoflann::Metric \tparam DIM Dimensionality of data points (e.g. 3 for + * 3D points) \tparam IndexType Will be typically size_t or int + */ + +template +class KDTreeBaseClass { + +public: + /** Frees the previously-built index. Automatically called within + * buildIndex(). 
*/
+  void freeIndex(Derived &obj) {
+    obj.pool.free_all();
+    obj.root_node = NULL;
+    obj.m_size_at_index_build = 0;
+  }
+
+  typedef typename Distance::ElementType ElementType;
+  typedef typename Distance::DistanceType DistanceType;
+
+  /*--------------------- Internal Data Structures --------------------------*/
+  struct Node {
+    /** Union used because a node can be either a LEAF node or a non-leaf node,
+     * so both data fields are never used simultaneously */
+    union {
+      struct leaf {
+        IndexType left, right; //!< Indices of points in leaf node
+      } lr;
+      struct nonleaf {
+        int divfeat;                  //!< Dimension used for subdivision.
+        DistanceType divlow, divhigh; //!< The values used for subdivision.
+      } sub;
+    } node_type;
+    Node *child1, *child2; //!< Child nodes (both=NULL mean it's a leaf node)
+  };
+
+  typedef Node *NodePtr;
+
+  struct Interval {
+    ElementType low, high;
+  };
+
+  /**
+   * Array of indices to vectors in the dataset.
+   */
+  std::vector<IndexType> vind;
+
+  NodePtr root_node;
+
+  size_t m_leaf_max_size;
+
+  size_t m_size;                //!< Number of current points in the dataset
+  size_t m_size_at_index_build; //!< Number of points in the dataset when the
+                                //!< index was built
+  int dim;                      //!< Dimensionality of each data point
+
+  /** Define "BoundingBox" as a fixed-size or variable-size container depending
+   * on "DIM" */
+  typedef
+      typename array_or_vector_selector<DIM, Interval>::container_t BoundingBox;
+
+  /** Define "distance_vector_t" as a fixed-size or variable-size container
+   * depending on "DIM" */
+  typedef typename array_or_vector_selector<DIM, DistanceType>::container_t
+      distance_vector_t;
+
+  /** The KD-tree used to find neighbours */
+
+  BoundingBox root_bbox;
+
+  /**
+   * Pooled memory allocator.
+   *
+   * Using a pooled memory allocator is more efficient
+   * than allocating memory directly when there is a large
+   * number of small memory allocations.
+   */
+  PooledAllocator pool;
+
+  /** Returns number of points in dataset */
+  size_t size(const Derived &obj) const { return obj.m_size; }
+
+  /** Returns the length of each point in the dataset */
+  size_t veclen(const Derived &obj) {
+    return static_cast<size_t>(DIM > 0 ? DIM : obj.dim);
+  }
+
+  /// Helper accessor to the dataset points:
+  inline ElementType dataset_get(const Derived &obj, size_t idx,
+                                 int component) const {
+    return obj.dataset.kdtree_get_pt(idx, component);
+  }
+
+  /**
+   * Computes the index memory usage
+   * Returns: memory used by the index
+   */
+  size_t usedMemory(Derived &obj) {
+    return obj.pool.usedMemory + obj.pool.wastedMemory +
+           obj.dataset.kdtree_get_point_count() *
+               sizeof(IndexType); // pool memory and vind array memory
+  }
+
+  void computeMinMax(const Derived &obj, IndexType *ind, IndexType count,
+                     int element, ElementType &min_elem,
+                     ElementType &max_elem) {
+    min_elem = dataset_get(obj, ind[0], element);
+    max_elem = dataset_get(obj, ind[0], element);
+    for (IndexType i = 1; i < count; ++i) {
+      ElementType val = dataset_get(obj, ind[i], element);
+      if (val < min_elem)
+        min_elem = val;
+      if (val > max_elem)
+        max_elem = val;
+    }
+  }
+
+  /**
+   * Create a tree node that subdivides the list of vecs from vind[first]
+   * to vind[last]. The routine is called recursively on each sublist.
+   *
+   * @param left index of the first vector
+   * @param right index of the last vector
+   */
+  NodePtr divideTree(Derived &obj, const IndexType left, const IndexType right,
+                     BoundingBox &bbox) {
+    NodePtr node = obj.pool.template allocate<Node>(); // allocate memory
+
+    /* If too few exemplars remain, then make this a leaf node. */
+    if ((right - left) <= static_cast<IndexType>(obj.m_leaf_max_size)) {
+      node->child1 = node->child2 = NULL; /* Mark as leaf node. */
+      node->node_type.lr.left = left;
+      node->node_type.lr.right = right;
+
+      // compute bounding-box of leaf points
+      for (int i = 0; i < (DIM > 0 ?
DIM : obj.dim); ++i) {
+        bbox[i].low = dataset_get(obj, obj.vind[left], i);
+        bbox[i].high = dataset_get(obj, obj.vind[left], i);
+      }
+      for (IndexType k = left + 1; k < right; ++k) {
+        for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+          if (bbox[i].low > dataset_get(obj, obj.vind[k], i))
+            bbox[i].low = dataset_get(obj, obj.vind[k], i);
+          if (bbox[i].high < dataset_get(obj, obj.vind[k], i))
+            bbox[i].high = dataset_get(obj, obj.vind[k], i);
+        }
+      }
+    } else {
+      IndexType idx;
+      int cutfeat;
+      DistanceType cutval;
+      middleSplit_(obj, &obj.vind[0] + left, right - left, idx, cutfeat, cutval,
+                   bbox);
+
+      node->node_type.sub.divfeat = cutfeat;
+
+      BoundingBox left_bbox(bbox);
+      left_bbox[cutfeat].high = cutval;
+      node->child1 = divideTree(obj, left, left + idx, left_bbox);
+
+      BoundingBox right_bbox(bbox);
+      right_bbox[cutfeat].low = cutval;
+      node->child2 = divideTree(obj, left + idx, right, right_bbox);
+
+      node->node_type.sub.divlow = left_bbox[cutfeat].high;
+      node->node_type.sub.divhigh = right_bbox[cutfeat].low;
+
+      for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+        bbox[i].low = std::min(left_bbox[i].low, right_bbox[i].low);
+        bbox[i].high = std::max(left_bbox[i].high, right_bbox[i].high);
+      }
+    }
+
+    return node;
+  }
+
+  void middleSplit_(Derived &obj, IndexType *ind, IndexType count,
+                    IndexType &index, int &cutfeat, DistanceType &cutval,
+                    const BoundingBox &bbox) {
+    const DistanceType EPS = static_cast<DistanceType>(0.00001);
+    ElementType max_span = bbox[0].high - bbox[0].low;
+    for (int i = 1; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+      ElementType span = bbox[i].high - bbox[i].low;
+      if (span > max_span) {
+        max_span = span;
+      }
+    }
+    ElementType max_spread = -1;
+    cutfeat = 0;
+    for (int i = 0; i < (DIM > 0 ?
DIM : obj.dim); ++i) {
+      ElementType span = bbox[i].high - bbox[i].low;
+      if (span > (1 - EPS) * max_span) {
+        ElementType min_elem, max_elem;
+        computeMinMax(obj, ind, count, i, min_elem, max_elem);
+        ElementType spread = max_elem - min_elem;
+        if (spread > max_spread) {
+          cutfeat = i;
+          max_spread = spread;
+        }
+      }
+    }
+    // split in the middle
+    DistanceType split_val = (bbox[cutfeat].low + bbox[cutfeat].high) / 2;
+    ElementType min_elem, max_elem;
+    computeMinMax(obj, ind, count, cutfeat, min_elem, max_elem);
+
+    if (split_val < min_elem)
+      cutval = min_elem;
+    else if (split_val > max_elem)
+      cutval = max_elem;
+    else
+      cutval = split_val;
+
+    IndexType lim1, lim2;
+    planeSplit(obj, ind, count, cutfeat, cutval, lim1, lim2);
+
+    if (lim1 > count / 2)
+      index = lim1;
+    else if (lim2 < count / 2)
+      index = lim2;
+    else
+      index = count / 2;
+  }
+
+  /**
+   * Subdivide the list of points by a plane perpendicular to the axis
+   * corresponding to the 'cutfeat' dimension at 'cutval' position.
+   *
+   * On return:
+   * dataset[ind[0..lim1-1]][cutfeat] < cutval
+   * dataset[ind[lim1..lim2-1]][cutfeat] == cutval
+   * dataset[ind[lim2..count-1]][cutfeat] > cutval
+   */
+  void planeSplit(Derived &obj, IndexType *ind, const IndexType count,
+                  int cutfeat, DistanceType &cutval, IndexType &lim1,
+                  IndexType &lim2) {
+    /* Move vector indices for left subtree to front of list. */
+    IndexType left = 0;
+    IndexType right = count - 1;
+    for (;;) {
+      while (left <= right && dataset_get(obj, ind[left], cutfeat) < cutval)
+        ++left;
+      while (right && left <= right &&
+             dataset_get(obj, ind[right], cutfeat) >= cutval)
+        --right;
+      if (left > right || !right)
+        break; // "!right" was added to support unsigned Index types
+      std::swap(ind[left], ind[right]);
+      ++left;
+      --right;
+    }
+    /* If either list is empty, it means that all remaining features
+     * are identical. Split in the middle to maintain a balanced tree.
+     */
+    lim1 = left;
+    right = count - 1;
+    for (;;) {
+      while (left <= right && dataset_get(obj, ind[left], cutfeat) <= cutval)
+        ++left;
+      while (right && left <= right &&
+             dataset_get(obj, ind[right], cutfeat) > cutval)
+        --right;
+      if (left > right || !right)
+        break; // "!right" was added to support unsigned Index types
+      std::swap(ind[left], ind[right]);
+      ++left;
+      --right;
+    }
+    lim2 = left;
+  }
+
+  DistanceType computeInitialDistances(const Derived &obj,
+                                       const ElementType *vec,
+                                       distance_vector_t &dists) const {
+    assert(vec);
+    DistanceType distsq = DistanceType();
+
+    for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+      if (vec[i] < obj.root_bbox[i].low) {
+        dists[i] = obj.distance.accum_dist(vec[i], obj.root_bbox[i].low, i);
+        distsq += dists[i];
+      }
+      if (vec[i] > obj.root_bbox[i].high) {
+        dists[i] = obj.distance.accum_dist(vec[i], obj.root_bbox[i].high, i);
+        distsq += dists[i];
+      }
+    }
+    return distsq;
+  }
+
+  void save_tree(Derived &obj, FILE *stream, NodePtr tree) {
+    save_value(stream, *tree);
+    if (tree->child1 != NULL) {
+      save_tree(obj, stream, tree->child1);
+    }
+    if (tree->child2 != NULL) {
+      save_tree(obj, stream, tree->child2);
+    }
+  }
+
+  void load_tree(Derived &obj, FILE *stream, NodePtr &tree) {
+    tree = obj.pool.template allocate<Node>();
+    load_value(stream, *tree);
+    if (tree->child1 != NULL) {
+      load_tree(obj, stream, tree->child1);
+    }
+    if (tree->child2 != NULL) {
+      load_tree(obj, stream, tree->child2);
+    }
+  }
+
+  /** Stores the index in a binary file.
+   * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when
+   * loading the index object it must be constructed associated to the same
+   * source of data points used while building it.
See the example:
+   * examples/saveload_example.cpp
+   * \sa loadIndex */
+  void saveIndex_(Derived &obj, FILE *stream) {
+    save_value(stream, obj.m_size);
+    save_value(stream, obj.dim);
+    save_value(stream, obj.root_bbox);
+    save_value(stream, obj.m_leaf_max_size);
+    save_value(stream, obj.vind);
+    save_tree(obj, stream, obj.root_node);
+  }
+
+  /** Loads a previous index from a binary file.
+   * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the
+   * index object must be constructed associated to the same source of data
+   * points used while building the index. See the example:
+   * examples/saveload_example.cpp
+   * \sa loadIndex */
+  void loadIndex_(Derived &obj, FILE *stream) {
+    load_value(stream, obj.m_size);
+    load_value(stream, obj.dim);
+    load_value(stream, obj.root_bbox);
+    load_value(stream, obj.m_leaf_max_size);
+    load_value(stream, obj.vind);
+    load_tree(obj, stream, obj.root_node);
+  }
+};
+
+/** @addtogroup kdtrees_grp KD-tree classes and adaptors
+ * @{ */
+
+/** kd-tree static index
+ *
+ * Contains the k-d trees and other information for indexing a set of points
+ * for nearest-neighbor matching.
+ *
+ * The class "DatasetAdaptor" must provide the following interface (can be
+ * non-virtual, inlined methods):
+ *
+ *  \code
+ *  // Must return the number of data points
+ *  inline size_t kdtree_get_point_count() const { ... }
+ *
+ *  // Must return the dim'th component of the idx'th point in the class:
+ *  inline T kdtree_get_pt(const size_t idx, const size_t dim) const { ... }
+ *
+ *  // Optional bounding-box computation: return false to default to a standard
+ *  // bbox computation loop.
+ *  // Return true if the BBOX was already computed by the class and returned
+ *  // in "bb" so it can be avoided to redo it again.
+ *  // Look at bb.size() to find out the expected dimensionality (e.g.
2 or 3
+ *  // for point clouds)
+ *  template <class BBOX>
+ *  bool kdtree_get_bbox(BBOX &bb) const {
+ *    bb[0].low = ...; bb[0].high = ...; // 0th dimension limits
+ *    bb[1].low = ...; bb[1].high = ...; // 1st dimension limits
+ *    ...
+ *    return true;
+ *  }
+ *
+ *  \endcode
+ *
+ * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+ * \tparam Distance The distance metric to use: nanoflann::metric_L1,
+ * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc.
+ * \tparam DIM Dimensionality of data points (e.g. 3 for 3D points)
+ * \tparam IndexType Will be typically size_t or int
+ */
+template <typename Distance, class DatasetAdaptor, int DIM = -1,
+          typename IndexType = size_t>
+class KDTreeSingleIndexAdaptor
+    : public KDTreeBaseClass<
+          KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>,
+          Distance, DatasetAdaptor, DIM, IndexType> {
+public:
+  /** Deleted copy constructor*/
+  KDTreeSingleIndexAdaptor(
+      const KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>
+          &) = delete;
+
+  /**
+   * The dataset used by this index
+   */
+  const DatasetAdaptor &dataset; //!< The source of our data
+
+  const KDTreeSingleIndexAdaptorParams index_params;
+
+  Distance distance;
+
+  typedef typename nanoflann::KDTreeBaseClass<
+      nanoflann::KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM,
+                                          IndexType>,
+      Distance, DatasetAdaptor, DIM, IndexType>
+      BaseClassRef;
+
+  typedef typename BaseClassRef::ElementType ElementType;
+  typedef typename BaseClassRef::DistanceType DistanceType;
+
+  typedef typename BaseClassRef::Node Node;
+  typedef Node *NodePtr;
+
+  typedef typename BaseClassRef::Interval Interval;
+  /** Define "BoundingBox" as a fixed-size or variable-size container depending
+   * on "DIM" */
+  typedef typename BaseClassRef::BoundingBox BoundingBox;
+
+  /** Define "distance_vector_t" as a fixed-size or variable-size container
+   * depending on "DIM" */
+  typedef typename BaseClassRef::distance_vector_t distance_vector_t;
+
+  /**
+   * KDTree constructor
+   *
+   * Refer to docs in README.md or online in
+   * https://github.com/jlblancoc/nanoflann
+   *
+   * The KD-Tree point dimension (the length of each point in the dataset, e.g.
3
+   * for 3D points) is determined by means of:
+   *  - The \a DIM template parameter if >0 (highest priority)
+   *  - Otherwise, the \a dimensionality parameter of this constructor.
+   *
+   * @param inputData Dataset with the input features
+   * @param params Basically, the maximum leaf node size
+   */
+  KDTreeSingleIndexAdaptor(const int dimensionality,
+                           const DatasetAdaptor &inputData,
+                           const KDTreeSingleIndexAdaptorParams &params =
+                               KDTreeSingleIndexAdaptorParams())
+      : dataset(inputData), index_params(params), distance(inputData) {
+    BaseClassRef::root_node = NULL;
+    BaseClassRef::m_size = dataset.kdtree_get_point_count();
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    BaseClassRef::dim = dimensionality;
+    if (DIM > 0)
+      BaseClassRef::dim = DIM;
+    BaseClassRef::m_leaf_max_size = params.leaf_max_size;
+
+    // Create a permutable array of indices to the input vectors.
+    init_vind();
+  }
+
+  /**
+   * Builds the index
+   */
+  void buildIndex() {
+    BaseClassRef::m_size = dataset.kdtree_get_point_count();
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    init_vind();
+    this->freeIndex(*this);
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    if (BaseClassRef::m_size == 0)
+      return;
+    computeBoundingBox(BaseClassRef::root_bbox);
+    BaseClassRef::root_node =
+        this->divideTree(*this, 0, BaseClassRef::m_size,
+                         BaseClassRef::root_bbox); // construct the tree
+  }
+
+  /** \name Query methods
+   * @{ */
+
+  /**
+   * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored
+   * inside the result object.
+   *
+   * Params:
+   *     result = the result object in which the indices of the
+   * nearest-neighbors are stored
+   *     vec = the vector for which to search the nearest neighbors
+   *
+   * \tparam RESULTSET Should be any ResultSet<DistanceType>
+   * \return  True if the requested neighbors could be found.
+   * \sa knnSearch, radiusSearch
+   */
+  template <typename RESULTSET>
+  bool findNeighbors(RESULTSET &result, const ElementType *vec,
+                     const SearchParams &searchParams) const {
+    assert(vec);
+    if (this->size(*this) == 0)
+      return false;
+    if (!BaseClassRef::root_node)
+      throw std::runtime_error(
+          "[nanoflann] findNeighbors() called before building the index.");
+    float epsError = 1 + searchParams.eps;
+
+    distance_vector_t
+        dists; // fixed or variable-sized container (depending on DIM)
+    auto zero = static_cast<DistanceType>(0);
+    assign(dists, (DIM > 0 ? DIM : BaseClassRef::dim),
+           zero); // Fill it with zeros.
+    DistanceType distsq = this->computeInitialDistances(*this, vec, dists);
+
+    searchLevel(result, vec, BaseClassRef::root_node, distsq, dists,
+                epsError); // "count_leaf" parameter removed since was neither
+                           // used nor returned to the user.
+
+    return result.full();
+  }
+
+  /**
+   * Find the "num_closest" nearest neighbors to the \a query_point[0:dim-1].
+   * Their indices are stored inside the result object. \sa radiusSearch,
+   * findNeighbors
+   * \note nChecks_IGNORED is ignored but kept for compatibility
+   * with the original FLANN interface.
+   * \return Number `N` of valid points in the result set. Only the first `N`
+   * entries in `out_indices` and `out_distances_sq` will be valid. Return may
+   * be less than `num_closest` only if the number of elements in the tree is
+   * less than `num_closest`.
+   */
+  size_t knnSearch(const ElementType *query_point, const size_t num_closest,
+                   IndexType *out_indices, DistanceType *out_distances_sq,
+                   const int /* nChecks_IGNORED */ = 10) const {
+    nanoflann::KNNResultSet<DistanceType, IndexType> resultSet(num_closest);
+    resultSet.init(out_indices, out_distances_sq);
+    this->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+    return resultSet.size();
+  }
+
+  /**
+   * Find all the neighbors to \a query_point[0:dim-1] within a maximum radius.
+   * The output is given as a vector of pairs, of which the first element is a
+   * point index and the second the corresponding distance. Previous contents
+   * of \a IndicesDists are cleared.
+   *
+   * If searchParams.sorted==true, the output list is sorted by ascending
+   * distances.
+   *
+   * For a better performance, it is advisable to do a .reserve() on the vector
+   * if you have any wild guess about the number of expected matches.
+   *
+   * \sa knnSearch, findNeighbors, radiusSearchCustomCallback
+   * \return The number of points within the given radius (i.e. indices.size()
+   * or dists.size() )
+   */
+  size_t
+  radiusSearch(const ElementType *query_point, const DistanceType &radius,
+               std::vector<std::pair<IndexType, DistanceType>> &IndicesDists,
+               const SearchParams &searchParams) const {
+    RadiusResultSet<DistanceType, IndexType> resultSet(radius, IndicesDists);
+    const size_t nFound =
+        radiusSearchCustomCallback(query_point, resultSet, searchParams);
+    if (searchParams.sorted)
+      std::sort(IndicesDists.begin(), IndicesDists.end(), IndexDist_Sorter());
+    return nFound;
+  }
+
+  /**
+   * Just like radiusSearch() but with a custom callback class for each point
+   * found in the radius of the query. See the source of RadiusResultSet<> as a
+   * start point for your own classes. \sa radiusSearch
+   */
+  template <class SEARCH_CALLBACK>
+  size_t radiusSearchCustomCallback(
+      const ElementType *query_point, SEARCH_CALLBACK &resultSet,
+      const SearchParams &searchParams = SearchParams()) const {
+    this->findNeighbors(resultSet, query_point, searchParams);
+    return resultSet.size();
+  }
+
+  /** @} */
+
+public:
+  /** Make sure the auxiliary list \a vind has the same size as the current
+   * dataset, and re-generate it if the size has changed. */
+  void init_vind() {
+    // Create a permutable array of indices to the input vectors.
+    BaseClassRef::m_size = dataset.kdtree_get_point_count();
+    if (BaseClassRef::vind.size() != BaseClassRef::m_size)
+      BaseClassRef::vind.resize(BaseClassRef::m_size);
+    for (size_t i = 0; i < BaseClassRef::m_size; i++)
+      BaseClassRef::vind[i] = i;
+  }
+
+  void computeBoundingBox(BoundingBox &bbox) {
+    resize(bbox, (DIM > 0 ? DIM : BaseClassRef::dim));
+    if (dataset.kdtree_get_bbox(bbox)) {
+      // Done! It was implemented in derived class
+    } else {
+      const size_t N = dataset.kdtree_get_point_count();
+      if (!N)
+        throw std::runtime_error("[nanoflann] computeBoundingBox() called but "
+                                 "no data points found.");
+      for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+        bbox[i].low = bbox[i].high = this->dataset_get(*this, 0, i);
+      }
+      for (size_t k = 1; k < N; ++k) {
+        for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+          if (this->dataset_get(*this, k, i) < bbox[i].low)
+            bbox[i].low = this->dataset_get(*this, k, i);
+          if (this->dataset_get(*this, k, i) > bbox[i].high)
+            bbox[i].high = this->dataset_get(*this, k, i);
+        }
+      }
+    }
+  }
+
+  /**
+   * Performs an exact search in the tree starting from a node.
+   * \tparam RESULTSET Should be any ResultSet<DistanceType>
+   * \return true if the search should be continued, false if the results are
+   * sufficient
+   */
+  template <class RESULTSET>
+  bool searchLevel(RESULTSET &result_set, const ElementType *vec,
+                   const NodePtr node, DistanceType mindistsq,
+                   distance_vector_t &dists, const float epsError) const {
+    /* If this is a leaf node, then do check and return. */
+    if ((node->child1 == NULL) && (node->child2 == NULL)) {
+      // count_leaf += (node->lr.right-node->lr.left); // Removed since was
+      // neither used nor returned to the user.
+      DistanceType worst_dist = result_set.worstDist();
+      for (IndexType i = node->node_type.lr.left; i < node->node_type.lr.right;
+           ++i) {
+        const IndexType index = BaseClassRef::vind[i]; // reorder... : i;
+        DistanceType dist = distance.evalMetric(
+            vec, index, (DIM > 0 ?
DIM : BaseClassRef::dim)); + if (dist < worst_dist) { + if (!result_set.addPoint(dist, BaseClassRef::vind[i])) { + // the resultset doesn't want to receive any more points, we're done + // searching! + return false; + } + } + } + return true; + } + + /* Which child branch should be taken first? */ + int idx = node->node_type.sub.divfeat; + ElementType val = vec[idx]; + DistanceType diff1 = val - node->node_type.sub.divlow; + DistanceType diff2 = val - node->node_type.sub.divhigh; + + NodePtr bestChild; + NodePtr otherChild; + DistanceType cut_dist; + if ((diff1 + diff2) < 0) { + bestChild = node->child1; + otherChild = node->child2; + cut_dist = distance.accum_dist(val, node->node_type.sub.divhigh, idx); + } else { + bestChild = node->child2; + otherChild = node->child1; + cut_dist = distance.accum_dist(val, node->node_type.sub.divlow, idx); + } + + /* Call recursively to search next level down. */ + if (!searchLevel(result_set, vec, bestChild, mindistsq, dists, epsError)) { + // the resultset doesn't want to receive any more points, we're done + // searching! + return false; + } + + DistanceType dst = dists[idx]; + mindistsq = mindistsq + cut_dist - dst; + dists[idx] = cut_dist; + if (mindistsq * epsError <= result_set.worstDist()) { + if (!searchLevel(result_set, vec, otherChild, mindistsq, dists, + epsError)) { + // the resultset doesn't want to receive any more points, we're done + // searching! + return false; + } + } + dists[idx] = dst; + return true; + } + +public: + /** Stores the index in a binary file. + * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when + * loading the index object it must be constructed associated to the same + * source of data points used while building it. See the example: + * examples/saveload_example.cpp \sa loadIndex */ + void saveIndex(FILE *stream) { this->saveIndex_(*this, stream); } + + /** Loads a previous index from a binary file. 
+   * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the
+   * index object must be constructed associated to the same source of data
+   * points used while building the index. See the example:
+   * examples/saveload_example.cpp
+   * \sa loadIndex */
+  void loadIndex(FILE *stream) { this->loadIndex_(*this, stream); }
+
+}; // class KDTree
+
+/** kd-tree dynamic index
+ *
+ * Contains the k-d trees and other information for indexing a set of points
+ * for nearest-neighbor matching.
+ *
+ * The class "DatasetAdaptor" must provide the following interface (can be
+ * non-virtual, inlined methods):
+ *
+ *  \code
+ *  // Must return the number of data points
+ *  inline size_t kdtree_get_point_count() const { ... }
+ *
+ *  // Must return the dim'th component of the idx'th point in the class:
+ *  inline T kdtree_get_pt(const size_t idx, const size_t dim) const { ... }
+ *
+ *  // Optional bounding-box computation: return false to default to a standard
+ *  // bbox computation loop.
+ *  // Return true if the BBOX was already computed by the class and returned
+ *  // in "bb" so it can be avoided to redo it again.
+ *  // Look at bb.size() to find out the expected dimensionality (e.g. 2 or 3
+ *  // for point clouds)
+ *  template <class BBOX>
+ *  bool kdtree_get_bbox(BBOX &bb) const {
+ *    bb[0].low = ...; bb[0].high = ...; // 0th dimension limits
+ *    bb[1].low = ...; bb[1].high = ...; // 1st dimension limits
+ *    ...
+ *    return true;
+ *  }
+ *
+ *  \endcode
+ *
+ * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+ * \tparam Distance The distance metric to use: nanoflann::metric_L1,
+ * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc.
+ * \tparam DIM Dimensionality of data points (e.g.
3 for 3D points)
+ * \tparam IndexType Will be typically size_t or int
+ */
+template <typename Distance, class DatasetAdaptor, int DIM = -1,
+          typename IndexType = size_t>
+class KDTreeSingleIndexDynamicAdaptor_
+    : public KDTreeBaseClass<KDTreeSingleIndexDynamicAdaptor_<
+                                 Distance, DatasetAdaptor, DIM, IndexType>,
+                             Distance, DatasetAdaptor, DIM, IndexType> {
+public:
+  /**
+   * The dataset used by this index
+   */
+  const DatasetAdaptor &dataset; //!< The source of our data
+
+  KDTreeSingleIndexAdaptorParams index_params;
+
+  std::vector<int> &treeIndex;
+
+  Distance distance;
+
+  typedef typename nanoflann::KDTreeBaseClass<
+      nanoflann::KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor,
+                                                  DIM, IndexType>,
+      Distance, DatasetAdaptor, DIM, IndexType>
+      BaseClassRef;
+
+  typedef typename BaseClassRef::ElementType ElementType;
+  typedef typename BaseClassRef::DistanceType DistanceType;
+
+  typedef typename BaseClassRef::Node Node;
+  typedef Node *NodePtr;
+
+  typedef typename BaseClassRef::Interval Interval;
+  /** Define "BoundingBox" as a fixed-size or variable-size container depending
+   * on "DIM" */
+  typedef typename BaseClassRef::BoundingBox BoundingBox;
+
+  /** Define "distance_vector_t" as a fixed-size or variable-size container
+   * depending on "DIM" */
+  typedef typename BaseClassRef::distance_vector_t distance_vector_t;
+
+  /**
+   * KDTree constructor
+   *
+   * Refer to docs in README.md or online in
+   * https://github.com/jlblancoc/nanoflann
+   *
+   * The KD-Tree point dimension (the length of each point in the dataset,
+   * e.g. 3 for 3D points) is determined by means of:
+   *  - The \a DIM template parameter if >0 (highest priority)
+   *  - Otherwise, the \a dimensionality parameter of this constructor.
+   *
+   * @param inputData Dataset with the input features
+   * @param params Basically, the maximum leaf node size
+   */
+  KDTreeSingleIndexDynamicAdaptor_(
+      const int dimensionality, const DatasetAdaptor &inputData,
+      std::vector<int> &treeIndex_,
+      const KDTreeSingleIndexAdaptorParams &params =
+          KDTreeSingleIndexAdaptorParams())
+      : dataset(inputData), index_params(params), treeIndex(treeIndex_),
+        distance(inputData) {
+    BaseClassRef::root_node = NULL;
+    BaseClassRef::m_size = 0;
+    BaseClassRef::m_size_at_index_build = 0;
+    BaseClassRef::dim = dimensionality;
+    if (DIM > 0)
+      BaseClassRef::dim = DIM;
+    BaseClassRef::m_leaf_max_size = params.leaf_max_size;
+  }
+
+  /** Assignment operator definition */
+  KDTreeSingleIndexDynamicAdaptor_
+  operator=(const KDTreeSingleIndexDynamicAdaptor_ &rhs) {
+    KDTreeSingleIndexDynamicAdaptor_ tmp(rhs);
+    std::swap(BaseClassRef::vind, tmp.BaseClassRef::vind);
+    std::swap(BaseClassRef::m_leaf_max_size, tmp.BaseClassRef::m_leaf_max_size);
+    std::swap(index_params, tmp.index_params);
+    std::swap(treeIndex, tmp.treeIndex);
+    std::swap(BaseClassRef::m_size, tmp.BaseClassRef::m_size);
+    std::swap(BaseClassRef::m_size_at_index_build,
+              tmp.BaseClassRef::m_size_at_index_build);
+    std::swap(BaseClassRef::root_node, tmp.BaseClassRef::root_node);
+    std::swap(BaseClassRef::root_bbox, tmp.BaseClassRef::root_bbox);
+    std::swap(BaseClassRef::pool, tmp.BaseClassRef::pool);
+    return *this;
+  }
+
+  /**
+   * Builds the index
+   */
+  void buildIndex() {
+    BaseClassRef::m_size = BaseClassRef::vind.size();
+    this->freeIndex(*this);
+    BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+    if (BaseClassRef::m_size == 0)
+      return;
+    computeBoundingBox(BaseClassRef::root_bbox);
+    BaseClassRef::root_node =
+        this->divideTree(*this, 0, BaseClassRef::m_size,
+                         BaseClassRef::root_bbox); // construct the tree
+  }
+
+  /** \name Query methods
+   * @{ */
+
+  /**
+   * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored
+   * inside the result object.
+   *
+   * Params:
+   *     result = the result object in which the indices of the
+   * nearest-neighbors are stored
+   *     vec = the vector for which to search the nearest neighbors
+   *
+   * \tparam RESULTSET Should be any ResultSet<DistanceType>
+   * \return  True if the requested neighbors could be found.
+   * \sa knnSearch, radiusSearch
+   */
+  template <typename RESULTSET>
+  bool findNeighbors(RESULTSET &result, const ElementType *vec,
+                     const SearchParams &searchParams) const {
+    assert(vec);
+    if (this->size(*this) == 0)
+      return false;
+    if (!BaseClassRef::root_node)
+      return false;
+    float epsError = 1 + searchParams.eps;
+
+    // fixed or variable-sized container (depending on DIM)
+    distance_vector_t dists;
+    // Fill it with zeros.
+    assign(dists, (DIM > 0 ? DIM : BaseClassRef::dim),
+           static_cast<DistanceType>(0));
+    DistanceType distsq = this->computeInitialDistances(*this, vec, dists);
+
+    searchLevel(result, vec, BaseClassRef::root_node, distsq, dists,
+                epsError); // "count_leaf" parameter removed since was neither
+                           // used nor returned to the user.
+
+    return result.full();
+  }
+
+  /**
+   * Find the "num_closest" nearest neighbors to the \a query_point[0:dim-1].
+   * Their indices are stored inside the result object. \sa radiusSearch,
+   * findNeighbors
+   * \note nChecks_IGNORED is ignored but kept for compatibility
+   * with the original FLANN interface.
+   * \return Number `N` of valid points in the result set. Only the first `N`
+   * entries in `out_indices` and `out_distances_sq` will be valid. Return may
+   * be less than `num_closest` only if the number of elements in the tree is
+   * less than `num_closest`.
+   */
+  size_t knnSearch(const ElementType *query_point, const size_t num_closest,
+                   IndexType *out_indices, DistanceType *out_distances_sq,
+                   const int /* nChecks_IGNORED */ = 10) const {
+    nanoflann::KNNResultSet<DistanceType, IndexType> resultSet(num_closest);
+    resultSet.init(out_indices, out_distances_sq);
+    this->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+    return resultSet.size();
+  }
+
+  /**
+   * Find all the neighbors to \a query_point[0:dim-1] within a maximum radius.
+   * The output is given as a vector of pairs, of which the first element is a
+   * point index and the second the corresponding distance. Previous contents
+   * of \a IndicesDists are cleared.
+   *
+   * If searchParams.sorted==true, the output list is sorted by ascending
+   * distances.
+   *
+   * For a better performance, it is advisable to do a .reserve() on the vector
+   * if you have any wild guess about the number of expected matches.
+   *
+   * \sa knnSearch, findNeighbors, radiusSearchCustomCallback
+   * \return The number of points within the given radius (i.e. indices.size()
+   * or dists.size() )
+   */
+  size_t
+  radiusSearch(const ElementType *query_point, const DistanceType &radius,
+               std::vector<std::pair<IndexType, DistanceType>> &IndicesDists,
+               const SearchParams &searchParams) const {
+    RadiusResultSet<DistanceType, IndexType> resultSet(radius, IndicesDists);
+    const size_t nFound =
+        radiusSearchCustomCallback(query_point, resultSet, searchParams);
+    if (searchParams.sorted)
+      std::sort(IndicesDists.begin(), IndicesDists.end(), IndexDist_Sorter());
+    return nFound;
+  }
+
+  /**
+   * Just like radiusSearch() but with a custom callback class for each point
+   * found in the radius of the query. See the source of RadiusResultSet<> as a
+   * start point for your own classes.
\sa radiusSearch
+   */
+  template <class SEARCH_CALLBACK>
+  size_t radiusSearchCustomCallback(
+      const ElementType *query_point, SEARCH_CALLBACK &resultSet,
+      const SearchParams &searchParams = SearchParams()) const {
+    this->findNeighbors(resultSet, query_point, searchParams);
+    return resultSet.size();
+  }
+
+  /** @} */
+
+public:
+  void computeBoundingBox(BoundingBox &bbox) {
+    resize(bbox, (DIM > 0 ? DIM : BaseClassRef::dim));
+
+    if (dataset.kdtree_get_bbox(bbox)) {
+      // Done! It was implemented in derived class
+    } else {
+      const size_t N = BaseClassRef::m_size;
+      if (!N)
+        throw std::runtime_error("[nanoflann] computeBoundingBox() called but "
+                                 "no data points found.");
+      for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+        bbox[i].low = bbox[i].high =
+            this->dataset_get(*this, BaseClassRef::vind[0], i);
+      }
+      for (size_t k = 1; k < N; ++k) {
+        for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+          if (this->dataset_get(*this, BaseClassRef::vind[k], i) < bbox[i].low)
+            bbox[i].low = this->dataset_get(*this, BaseClassRef::vind[k], i);
+          if (this->dataset_get(*this, BaseClassRef::vind[k], i) > bbox[i].high)
+            bbox[i].high = this->dataset_get(*this, BaseClassRef::vind[k], i);
+        }
+      }
+    }
+  }
+
+  /**
+   * Performs an exact search in the tree starting from a node.
+   * \tparam RESULTSET Should be any ResultSet<DistanceType>
+   */
+  template <class RESULTSET>
+  void searchLevel(RESULTSET &result_set, const ElementType *vec,
+                   const NodePtr node, DistanceType mindistsq,
+                   distance_vector_t &dists, const float epsError) const {
+    /* If this is a leaf node, then do check and return. */
+    if ((node->child1 == NULL) && (node->child2 == NULL)) {
+      // count_leaf += (node->lr.right-node->lr.left); // Removed since was
+      // neither used nor returned to the user.
+      DistanceType worst_dist = result_set.worstDist();
+      for (IndexType i = node->node_type.lr.left; i < node->node_type.lr.right;
+           ++i) {
+        const IndexType index = BaseClassRef::vind[i]; // reorder... : i;
: i; + if (treeIndex[index] == -1) + continue; + DistanceType dist = distance.evalMetric( + vec, index, (DIM > 0 ? DIM : BaseClassRef::dim)); + if (dist < worst_dist) { + if (!result_set.addPoint( + static_cast<DistanceType>(dist), + static_cast<IndexType>( + BaseClassRef::vind[i]))) { + // the resultset doesn't want to receive any more points, we're done + // searching! + return; // false; + } + } + } + return; + } + + /* Which child branch should be taken first? */ + int idx = node->node_type.sub.divfeat; + ElementType val = vec[idx]; + DistanceType diff1 = val - node->node_type.sub.divlow; + DistanceType diff2 = val - node->node_type.sub.divhigh; + + NodePtr bestChild; + NodePtr otherChild; + DistanceType cut_dist; + if ((diff1 + diff2) < 0) { + bestChild = node->child1; + otherChild = node->child2; + cut_dist = distance.accum_dist(val, node->node_type.sub.divhigh, idx); + } else { + bestChild = node->child2; + otherChild = node->child1; + cut_dist = distance.accum_dist(val, node->node_type.sub.divlow, idx); + } + + /* Call recursively to search next level down. */ + searchLevel(result_set, vec, bestChild, mindistsq, dists, epsError); + + DistanceType dst = dists[idx]; + mindistsq = mindistsq + cut_dist - dst; + dists[idx] = cut_dist; + if (mindistsq * epsError <= result_set.worstDist()) { + searchLevel(result_set, vec, otherChild, mindistsq, dists, epsError); + } + dists[idx] = dst; + } + +public: + /** Stores the index in a binary file. + * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when + * loading the index object it must be constructed associated to the same + * source of data points used while building it. See the example: + * examples/saveload_example.cpp \sa loadIndex */ + void saveIndex(FILE *stream) { this->saveIndex_(*this, stream); } + + /** Loads a previous index from a binary file.
+ * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the + * index object must be constructed associated to the same source of data + * points used while building the index. See the example: + * examples/saveload_example.cpp \sa loadIndex */ + void loadIndex(FILE *stream) { this->loadIndex_(*this, stream); } +}; + +/** kd-tree dynamic index + * + * Class to create multiple static indices and merge their results, so that the + * whole behaves as a single dynamic index, as proposed in the Logarithmic Approach. + * + * Example of usage: + * examples/dynamic_pointcloud_example.cpp + * + * \tparam DatasetAdaptor The user-provided adaptor (see comments above). + * \tparam Distance The distance metric to use: nanoflann::metric_L1, + * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. \tparam DIM + * Dimensionality of data points (e.g. 3 for 3D points) \tparam IndexType Will + * be typically size_t or int + */ +template <typename Distance, class DatasetAdaptor, int DIM = -1, + typename IndexType = size_t> +class KDTreeSingleIndexDynamicAdaptor { +public: + typedef typename Distance::ElementType ElementType; + typedef typename Distance::DistanceType DistanceType; + +protected: + size_t m_leaf_max_size; + size_t treeCount; + size_t pointCount; + + /** + * The dataset used by this index + */ + const DatasetAdaptor &dataset; //!< The source of our data + + std::vector<int> treeIndex; //!< treeIndex[idx] is the index of the tree in + //!< which the point at idx is stored. treeIndex[idx]=-1 + //!< means that the point has been removed. + + KDTreeSingleIndexAdaptorParams index_params; + + int dim; //!< Dimensionality of each data point + + typedef KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM> + index_container_t; + std::vector<index_container_t> index; + +public: + /** Get a const ref to the internal list of indices; the number of indices is + * adapted dynamically as the dataset grows in size.
*/ + const std::vector<index_container_t> &getAllIndices() const { return index; } + +private: + /** finds position of least significant unset bit */ + int First0Bit(IndexType num) { + int pos = 0; + while (num & 1) { + num = num >> 1; + pos++; + } + return pos; + } + + /** Creates multiple empty trees to handle dynamic support */ + void init() { + typedef KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM> + my_kd_tree_t; + std::vector<my_kd_tree_t> index_( + treeCount, my_kd_tree_t(dim /*dim*/, dataset, treeIndex, index_params)); + index = index_; + } + +public: + Distance distance; + + /** + * KDTree constructor + * + * Refer to docs in README.md or online in + * https://github.com/jlblancoc/nanoflann + * + * The KD-Tree point dimension (the length of each point in the dataset, e.g. 3 + * for 3D points) is determined by means of: + * - The \a DIM template parameter if >0 (highest priority) + * - Otherwise, the \a dimensionality parameter of this constructor. + * + * @param inputData Dataset with the input features + * @param params Basically, the maximum leaf node size + */ + KDTreeSingleIndexDynamicAdaptor(const int dimensionality, + const DatasetAdaptor &inputData, + const KDTreeSingleIndexAdaptorParams &params = + KDTreeSingleIndexAdaptorParams(), + const size_t maximumPointCount = 1000000000U) + : dataset(inputData), index_params(params), distance(inputData) { + treeCount = static_cast<size_t>(std::log2(maximumPointCount)); + pointCount = 0U; + dim = dimensionality; + treeIndex.clear(); + if (DIM > 0) + dim = DIM; + m_leaf_max_size = params.leaf_max_size; + init(); + const size_t num_initial_points = dataset.kdtree_get_point_count(); + if (num_initial_points > 0) { + addPoints(0, num_initial_points - 1); + } + } + + /** Deleted copy constructor*/ + KDTreeSingleIndexDynamicAdaptor( + const KDTreeSingleIndexDynamicAdaptor &) = delete; + + /** Add points to the set; inserts all points from [start, end] */ + void addPoints(IndexType start, IndexType end) { + size_t count = end - start + 1; + treeIndex.resize(treeIndex.size()
+ count); + for (IndexType idx = start; idx <= end; idx++) { + int pos = First0Bit(pointCount); + index[pos].vind.clear(); + treeIndex[pointCount] = pos; + for (int i = 0; i < pos; i++) { + for (int j = 0; j < static_cast<int>(index[i].vind.size()); j++) { + index[pos].vind.push_back(index[i].vind[j]); + if (treeIndex[index[i].vind[j]] != -1) + treeIndex[index[i].vind[j]] = pos; + } + index[i].vind.clear(); + index[i].freeIndex(index[i]); + } + index[pos].vind.push_back(idx); + index[pos].buildIndex(); + pointCount++; + } + } + + /** Remove a point from the set (Lazy Deletion) */ + void removePoint(size_t idx) { + if (idx >= pointCount) + return; + treeIndex[idx] = -1; + } + + /** + * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored + * inside the result object. + * + * Params: + * result = the result object in which the indices of the + * nearest-neighbors are stored vec = the vector for which to search the + * nearest neighbors + * + * \tparam RESULTSET Should be any ResultSet<DistanceType> + * \return True if the requested neighbors could be found. + * \sa knnSearch, radiusSearch + */ + template <typename RESULTSET> + bool findNeighbors(RESULTSET &result, const ElementType *vec, + const SearchParams &searchParams) const { + for (size_t i = 0; i < treeCount; i++) { + index[i].findNeighbors(result, &vec[0], searchParams); + } + return result.full(); + } +}; + +/** An L2-metric KD-tree adaptor for working with data directly stored in an + * Eigen Matrix, without duplicating the data storage. Each row in the matrix + * represents a point in the state space. + * + * Example of usage: + * \code + * Eigen::Matrix<num_t, Dynamic, Dynamic> mat; + * // Fill out "mat"... + * + * typedef KDTreeEigenMatrixAdaptor<Eigen::Matrix<num_t, Dynamic, Dynamic>> + * my_kd_tree_t; + * const int max_leaf = 10; + * my_kd_tree_t mat_index(mat, max_leaf); + * mat_index.index->buildIndex(); + * mat_index.index->...
\endcode + * + * \tparam DIM If set to >0, it specifies a compile-time fixed dimensionality + * for the points in the data set, allowing more compiler optimizations. \tparam + * Distance The distance metric to use: nanoflann::metric_L1, + * nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. + */ +template <class MatrixType, int DIM = -1, class Distance = nanoflann::metric_L2> +struct KDTreeEigenMatrixAdaptor { + typedef KDTreeEigenMatrixAdaptor<MatrixType, DIM, Distance> self_t; + typedef typename MatrixType::Scalar num_t; + typedef typename MatrixType::Index IndexType; + typedef + typename Distance::template traits<num_t, self_t>::distance_t metric_t; + typedef KDTreeSingleIndexAdaptor<metric_t, self_t, DIM, IndexType> + index_t; + + index_t *index; //! The kd-tree index for the user to call its methods as + //! usual with any other FLANN index. + + /// Constructor: takes a const ref to the matrix object with the data points + KDTreeEigenMatrixAdaptor(const size_t dimensionality, + const std::reference_wrapper<MatrixType> &mat, + const int leaf_max_size = 10) + : m_data_matrix(mat) { + const auto dims = mat.get().cols(); + if (size_t(dims) != dimensionality) + throw std::runtime_error( + "Error: 'dimensionality' must match column count in data matrix"); + if (DIM > 0 && int(dims) != DIM) + throw std::runtime_error( + "Data set dimensionality does not match the 'DIM' template argument"); + index = + new index_t(static_cast<int>(dims), *this /* adaptor */, + nanoflann::KDTreeSingleIndexAdaptorParams(leaf_max_size)); + index->buildIndex(); + } + +public: + /** Deleted copy constructor */ + KDTreeEigenMatrixAdaptor(const self_t &) = delete; + + ~KDTreeEigenMatrixAdaptor() { delete index; } + + const std::reference_wrapper<MatrixType> m_data_matrix; + + /** Query for the \a num_closest closest points to a given point (entered as + * query_point[0:dim-1]). Note that this is a short-cut method for + * index->findNeighbors(). The user can also call index->... methods as + * desired. \note nChecks_IGNORED is ignored but kept for compatibility with + * the original FLANN interface.
+ */ + inline void query(const num_t *query_point, const size_t num_closest, + IndexType *out_indices, num_t *out_distances_sq, + const int /* nChecks_IGNORED */ = 10) const { + nanoflann::KNNResultSet<num_t, IndexType> resultSet(num_closest); + resultSet.init(out_indices, out_distances_sq); + index->findNeighbors(resultSet, query_point, nanoflann::SearchParams()); + } + + /** @name Interface expected by KDTreeSingleIndexAdaptor + * @{ */ + + const self_t &derived() const { return *this; } + self_t &derived() { return *this; } + + // Must return the number of data points + inline size_t kdtree_get_point_count() const { + return m_data_matrix.get().rows(); + } + + // Returns the dim'th component of the idx'th point in the class: + inline num_t kdtree_get_pt(const IndexType idx, size_t dim) const { + return m_data_matrix.get().coeff(idx, IndexType(dim)); + } + + // Optional bounding-box computation: return false to default to a standard + // bbox computation loop. + // Return true if the BBOX was already computed by the class and returned in + // "bb" so it can be avoided to redo it again. Look at bb.size() to find out + // the expected dimensionality (e.g.
2 or 3 for point clouds) + template <class BBOX> bool kdtree_get_bbox(BBOX & /*bb*/) const { + return false; + } + + /** @} */ + +}; // end of KDTreeEigenMatrixAdaptor + /** @} */ + +/** @} */ // end of grouping +} // namespace nanoflann + +#endif /* NANOFLANN_HPP_ */ diff --git a/competing_methods/my_RandLANet/utils/data_prepare_s3dis.py b/competing_methods/my_RandLANet/utils/data_prepare_s3dis.py new file mode 100644 index 00000000..f0394a5a --- /dev/null +++ b/competing_methods/my_RandLANet/utils/data_prepare_s3dis.py @@ -0,0 +1,80 @@ +from sklearn.neighbors import KDTree +from os.path import join, exists, dirname, abspath +import numpy as np +import pandas as pd +import os, sys, glob, pickle + +BASE_DIR = dirname(abspath(__file__)) +ROOT_DIR = dirname(BASE_DIR) +sys.path.append(BASE_DIR) +sys.path.append(ROOT_DIR) +from helper_ply import write_ply +from helper_tool import DataProcessing as DP + +dataset_path = '/data/S3DIS/Stanford3dDataset_v1.2_Aligned_Version' +anno_paths = [line.rstrip() for line in open(join(BASE_DIR, 'meta/anno_paths.txt'))] +anno_paths = [join(dataset_path, p) for p in anno_paths] + +gt_class = [x.rstrip() for x in open(join(BASE_DIR, 'meta/class_names.txt'))] +gt_class2label = {cls: i for i, cls in enumerate(gt_class)} + +sub_grid_size = 0.04 +original_pc_folder = join(dirname(dataset_path), 'original_ply') +sub_pc_folder = join(dirname(dataset_path), 'input_{:.3f}'.format(sub_grid_size)) +os.mkdir(original_pc_folder) if not exists(original_pc_folder) else None +os.mkdir(sub_pc_folder) if not exists(sub_pc_folder) else None +out_format = '.ply' + + +def convert_pc2ply(anno_path, save_path): + """ + Convert original dataset files to a ply file (each line is XYZRGBL). + We aggregate all the points from each instance in the room. + :param anno_path: path to annotations. e.g.
Area_1/office_2/Annotations/ + :param save_path: path to save original point clouds (each line is XYZRGBL) + :return: None + """ + data_list = [] + + for f in glob.glob(join(anno_path, '*.txt')): + class_name = os.path.basename(f).split('_')[0] + if class_name not in gt_class: # note: in some room there is 'staris' class.. + class_name = 'clutter' + pc = pd.read_csv(f, header=None, delim_whitespace=True).values + labels = np.ones((pc.shape[0], 1)) * gt_class2label[class_name] + data_list.append(np.concatenate([pc, labels], 1)) # Nx7 + + pc_label = np.concatenate(data_list, 0) + xyz_min = np.amin(pc_label, axis=0)[0:3] + pc_label[:, 0:3] -= xyz_min + + xyz = pc_label[:, :3].astype(np.float32) + colors = pc_label[:, 3:6].astype(np.uint8) + labels = pc_label[:, 6].astype(np.uint8) + write_ply(save_path, (xyz, colors, labels), ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + + # save sub_cloud and KDTree file + sub_xyz, sub_colors, sub_labels = DP.grid_sub_sampling(xyz, colors, labels, sub_grid_size) + sub_colors = sub_colors / 255.0 + sub_ply_file = join(sub_pc_folder, save_path.split('/')[-1][:-4] + '.ply') + write_ply(sub_ply_file, [sub_xyz, sub_colors, sub_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + + search_tree = KDTree(sub_xyz) + kd_tree_file = join(sub_pc_folder, str(save_path.split('/')[-1][:-4]) + '_KDTree.pkl') + with open(kd_tree_file, 'wb') as f: + pickle.dump(search_tree, f) + + proj_idx = np.squeeze(search_tree.query(xyz, return_distance=False)) + proj_idx = proj_idx.astype(np.int32) + proj_save = join(sub_pc_folder, str(save_path.split('/')[-1][:-4]) + '_proj.pkl') + with open(proj_save, 'wb') as f: + pickle.dump([proj_idx, labels], f) + + +if __name__ == '__main__': + # Note: there is an extra character in the v1.2 data in Area_5/hallway_6. It's fixed manually. 
+ for annotation_path in anno_paths: + print(annotation_path) + elements = str(annotation_path).split('/') + out_file_name = elements[-3] + '_' + elements[-2] + out_format + convert_pc2ply(annotation_path, join(original_pc_folder, out_file_name)) diff --git a/competing_methods/my_RandLANet/utils/data_prepare_semantic3d.py b/competing_methods/my_RandLANet/utils/data_prepare_semantic3d.py new file mode 100644 index 00000000..31bc1a3e --- /dev/null +++ b/competing_methods/my_RandLANet/utils/data_prepare_semantic3d.py @@ -0,0 +1,83 @@ +from sklearn.neighbors import KDTree +from os.path import join, exists, dirname, abspath +import numpy as np +import os, glob, pickle +import sys + +BASE_DIR = dirname(abspath(__file__)) +ROOT_DIR = dirname(BASE_DIR) +sys.path.append(BASE_DIR) +sys.path.append(ROOT_DIR) +from helper_ply import write_ply +from helper_tool import DataProcessing as DP + +grid_size = 0.06 +dataset_path = '/data/semantic3d/original_data' +original_pc_folder = join(dirname(dataset_path), 'original_ply') +sub_pc_folder = join(dirname(dataset_path), 'input_{:.3f}'.format(grid_size)) +os.mkdir(original_pc_folder) if not exists(original_pc_folder) else None +os.mkdir(sub_pc_folder) if not exists(sub_pc_folder) else None + +for pc_path in glob.glob(join(dataset_path, '*.txt')): + print(pc_path) + file_name = pc_path.split('/')[-1][:-4] + + # check if it has already calculated + if exists(join(sub_pc_folder, file_name + '_KDTree.pkl')): + continue + + pc = DP.load_pc_semantic3d(pc_path) + # check if label exists + label_path = pc_path[:-4] + '.labels' + if exists(label_path): + labels = DP.load_label_semantic3d(label_path) + full_ply_path = join(original_pc_folder, file_name + '.ply') + + #  Subsample to save space + sub_points, sub_colors, sub_labels = DP.grid_sub_sampling(pc[:, :3].astype(np.float32), + pc[:, 4:7].astype(np.uint8), labels, 0.01) + sub_labels = np.squeeze(sub_labels) + + write_ply(full_ply_path, (sub_points, sub_colors, sub_labels), ['x', 'y', 'z', 
'red', 'green', 'blue', 'class']) + + # save sub_cloud and KDTree file + sub_xyz, sub_colors, sub_labels = DP.grid_sub_sampling(sub_points, sub_colors, sub_labels, grid_size) + sub_colors = sub_colors / 255.0 + sub_labels = np.squeeze(sub_labels) + sub_ply_file = join(sub_pc_folder, file_name + '.ply') + write_ply(sub_ply_file, [sub_xyz, sub_colors, sub_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + + search_tree = KDTree(sub_xyz, leaf_size=50) + kd_tree_file = join(sub_pc_folder, file_name + '_KDTree.pkl') + with open(kd_tree_file, 'wb') as f: + pickle.dump(search_tree, f) + + proj_idx = np.squeeze(search_tree.query(sub_points, return_distance=False)) + proj_idx = proj_idx.astype(np.int32) + proj_save = join(sub_pc_folder, file_name + '_proj.pkl') + with open(proj_save, 'wb') as f: + pickle.dump([proj_idx, labels], f) + + else: + full_ply_path = join(original_pc_folder, file_name + '.ply') + write_ply(full_ply_path, (pc[:, :3].astype(np.float32), pc[:, 4:7].astype(np.uint8)), + ['x', 'y', 'z', 'red', 'green', 'blue']) + + # save sub_cloud and KDTree file + sub_xyz, sub_colors = DP.grid_sub_sampling(pc[:, :3].astype(np.float32), pc[:, 4:7].astype(np.uint8), + grid_size=grid_size) + sub_colors = sub_colors / 255.0 + sub_ply_file = join(sub_pc_folder, file_name + '.ply') + write_ply(sub_ply_file, [sub_xyz, sub_colors], ['x', 'y', 'z', 'red', 'green', 'blue']) + labels = np.zeros(pc.shape[0], dtype=np.uint8) + + search_tree = KDTree(sub_xyz, leaf_size=50) + kd_tree_file = join(sub_pc_folder, file_name + '_KDTree.pkl') + with open(kd_tree_file, 'wb') as f: + pickle.dump(search_tree, f) + + proj_idx = np.squeeze(search_tree.query(pc[:, :3].astype(np.float32), return_distance=False)) + proj_idx = proj_idx.astype(np.int32) + proj_save = join(sub_pc_folder, file_name + '_proj.pkl') + with open(proj_save, 'wb') as f: + pickle.dump([proj_idx, labels], f) diff --git a/competing_methods/my_RandLANet/utils/data_prepare_semantickitti.py 
b/competing_methods/my_RandLANet/utils/data_prepare_semantickitti.py new file mode 100644 index 00000000..9ba76405 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/data_prepare_semantickitti.py @@ -0,0 +1,76 @@ +import pickle, yaml, os, sys +import numpy as np +from os.path import join, exists, dirname, abspath +from sklearn.neighbors import KDTree + +BASE_DIR = dirname(abspath(__file__)) +ROOT_DIR = dirname(BASE_DIR) +sys.path.append(BASE_DIR) +sys.path.append(ROOT_DIR) +from helper_tool import DataProcessing as DP + +data_config = os.path.join(BASE_DIR, 'semantic-kitti.yaml') +DATA = yaml.safe_load(open(data_config, 'r')) +remap_dict = DATA["learning_map"] +max_key = max(remap_dict.keys()) +remap_lut = np.zeros((max_key + 100), dtype=np.int32) +remap_lut[list(remap_dict.keys())] = list(remap_dict.values()) + +grid_size = 0.06 +dataset_path = '/data/semantic_kitti/dataset/sequences' +output_path = '/data/semantic_kitti/dataset/sequences' + '_' + str(grid_size) +seq_list = np.sort(os.listdir(dataset_path)) + +for seq_id in seq_list: + print('sequence' + seq_id + ' start') + seq_path = join(dataset_path, seq_id) + seq_path_out = join(output_path, seq_id) + pc_path = join(seq_path, 'velodyne') + pc_path_out = join(seq_path_out, 'velodyne') + KDTree_path_out = join(seq_path_out, 'KDTree') + os.makedirs(seq_path_out) if not exists(seq_path_out) else None + os.makedirs(pc_path_out) if not exists(pc_path_out) else None + os.makedirs(KDTree_path_out) if not exists(KDTree_path_out) else None + + if int(seq_id) < 11: + label_path = join(seq_path, 'labels') + label_path_out = join(seq_path_out, 'labels') + os.makedirs(label_path_out) if not exists(label_path_out) else None + scan_list = np.sort(os.listdir(pc_path)) + for scan_id in scan_list: + print(scan_id) + points = DP.load_pc_kitti(join(pc_path, scan_id)) + labels = DP.load_label_kitti(join(label_path, str(scan_id[:-4]) + '.label'), remap_lut) + sub_points, sub_labels = DP.grid_sub_sampling(points, 
labels=labels, grid_size=grid_size) + search_tree = KDTree(sub_points) + KDTree_save = join(KDTree_path_out, str(scan_id[:-4]) + '.pkl') + np.save(join(pc_path_out, scan_id)[:-4], sub_points) + np.save(join(label_path_out, scan_id)[:-4], sub_labels) + with open(KDTree_save, 'wb') as f: + pickle.dump(search_tree, f) + if seq_id == '08': + proj_path = join(seq_path_out, 'proj') + os.makedirs(proj_path) if not exists(proj_path) else None + proj_inds = np.squeeze(search_tree.query(points, return_distance=False)) + proj_inds = proj_inds.astype(np.int32) + proj_save = join(proj_path, str(scan_id[:-4]) + '_proj.pkl') + with open(proj_save, 'wb') as f: + pickle.dump([proj_inds], f) + else: + proj_path = join(seq_path_out, 'proj') + os.makedirs(proj_path) if not exists(proj_path) else None + scan_list = np.sort(os.listdir(pc_path)) + for scan_id in scan_list: + print(scan_id) + points = DP.load_pc_kitti(join(pc_path, scan_id)) + sub_points = DP.grid_sub_sampling(points, grid_size=0.06) + search_tree = KDTree(sub_points) + proj_inds = np.squeeze(search_tree.query(points, return_distance=False)) + proj_inds = proj_inds.astype(np.int32) + KDTree_save = join(KDTree_path_out, str(scan_id[:-4]) + '.pkl') + proj_save = join(proj_path, str(scan_id[:-4]) + '_proj.pkl') + np.save(join(pc_path_out, scan_id)[:-4], sub_points) + with open(KDTree_save, 'wb') as f: + pickle.dump(search_tree, f) + with open(proj_save, 'wb') as f: + pickle.dump([proj_inds], f) diff --git a/competing_methods/my_RandLANet/utils/data_prepare_urbanmesh.py b/competing_methods/my_RandLANet/utils/data_prepare_urbanmesh.py new file mode 100644 index 00000000..5cd878ef --- /dev/null +++ b/competing_methods/my_RandLANet/utils/data_prepare_urbanmesh.py @@ -0,0 +1,101 @@ +from sklearn.neighbors import KDTree +from os.path import join, exists, dirname, abspath +import numpy as np +import os, glob, pickle +import sys + +BASE_DIR = dirname(abspath(__file__)) +ROOT_DIR = dirname(BASE_DIR) +sys.path.append(BASE_DIR) 
+sys.path.append(ROOT_DIR) +from plyfile import PlyData, PlyElement +from helper_ply import write_ply +from helper_tool import DataProcessing as DP + + +################################### UTILS Functions ####################################### +def read_ply_with_plyfilelib(filename): + """Read a ply file; also return the label and the object index when present.""" + # ---read the ply file-------- + plydata = PlyData.read(filename) + xyz = np.stack([plydata['vertex'][n] for n in ['x', 'y', 'z']], axis=1) + try: + rgb = np.stack([plydata['vertex'][n] + for n in ['red', 'green', 'blue']] + , axis=1).astype(np.uint8) + except ValueError: + rgb = np.stack([plydata['vertex'][n] + for n in ['r', 'g', 'b']] + , axis=1).astype(np.float32) + if np.max(rgb) > 1: + rgb = rgb / 255.0  # normalize 0-255 float colors to [0, 1] + try: + object_indices = plydata['vertex']['object_index'] + labels = plydata['vertex']['label'] + return xyz, rgb, labels, object_indices + except ValueError: + try: + labels = plydata['vertex']['label'] + return xyz, rgb, labels + except ValueError: + return xyz, rgb + + +grid_size = 0.01 +dataset_path = ROOT_DIR + '/data/raw' +original_pc_folder = join(dirname(dataset_path), 'original_ply') +sub_pc_folder = join(dirname(dataset_path), 'input_{:.3f}'.format(grid_size)) +os.mkdir(original_pc_folder) if not exists(original_pc_folder) else None +os.mkdir(sub_pc_folder) if not exists(sub_pc_folder) else None +label_statistics = np.zeros((6,), dtype=int) +min_input_points = np.iinfo(np.int32).max +folders = ["train/", "test/", "validate/"] +print("Prepare urbanmesh point cloud ...") +for folder in folders: + print("=================\n " + folder + "\n=================") + data_folder = dataset_path + "/" + folder + files = glob.glob(data_folder + "*.ply") + n_files = len(files) + i_file = 1 + for file in files: + file_name = os.path.splitext(os.path.basename(file))[0] + print(str(i_file) + " / " + str(n_files) + "---> " + file_name) + i_file += 1 + # check if it has already been computed + if
exists(join(sub_pc_folder, file_name + '_KDTree.pkl')): + continue + + xyz, rgb, rawlabels = read_ply_with_plyfilelib(file) + labels = np.array([x if x > 0 else x + 1 for x in rawlabels]) + rgb = 255 * rgb # Now scale by 255 + full_ply_path = join(original_pc_folder, file_name + '.ply') + + #  Subsample to save space, save sub_cloud and KDTree file + sub_points, sub_colors, sub_labels = DP.grid_sub_sampling(xyz[:, :].astype(np.float32), + rgb[:, :].astype(np.uint8), labels.astype(np.uint8), grid_size) + + sub_colors = sub_colors / 255.0 + sub_labels = np.squeeze(sub_labels) + sub_ply_file = join(sub_pc_folder, file_name + '.ply') + write_ply(sub_ply_file, [sub_points, sub_colors, sub_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + write_ply(full_ply_path, [sub_points, sub_colors, sub_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class']) + if folder != "test/": + for sub_l in sub_labels: + if sub_l != 0: + label_statistics[sub_l - 1] += 1 + if len(sub_points) < min_input_points: + min_input_points = len(sub_points) + + search_tree = KDTree(sub_points, leaf_size=50) + kd_tree_file = join(sub_pc_folder, file_name + '_KDTree.pkl') + with open(kd_tree_file, 'wb') as f: + pickle.dump(search_tree, f) + + proj_idx = np.squeeze(search_tree.query(sub_points, return_distance=False)) + proj_idx = proj_idx.astype(np.int32) + proj_save = join(sub_pc_folder, file_name + '_proj.pkl') + with open(proj_save, 'wb') as f: + pickle.dump([proj_idx, labels], f) + +print("Minimum input points per tile: " + str(min_input_points)) +print("Num pts per class: " + str(label_statistics)) \ No newline at end of file diff --git a/competing_methods/my_RandLANet/utils/download_semantic3d.sh b/competing_methods/my_RandLANet/utils/download_semantic3d.sh new file mode 100644 index 00000000..806bd39a --- /dev/null +++ b/competing_methods/my_RandLANet/utils/download_semantic3d.sh @@ -0,0 +1,57 @@ +BASE_DIR=${1-/data/semantic3d/original_data} + +# Training data +wget -c -N
http://semantic3d.net/data/point-clouds/training1/bildstein_station1_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/bildstein_station3_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/bildstein_station5_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/domfountain_station1_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/domfountain_station2_xyz_intensity_rgb.7z -P $BASE_DIR/ +wget -c -N http://semantic3d.net/data/point-clouds/training1/domfountain_station3_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/neugasse_station1_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/sg27_station1_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/sg27_station2_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/sg27_station4_intensity_rgb.7z -P $BASE_DIR/ +wget -c -N http://semantic3d.net/data/point-clouds/training1/sg27_station5_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/sg27_station9_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/sg28_station4_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/untermaederbrunnen_station1_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/training1/untermaederbrunnen_station3_xyz_intensity_rgb.7z -P $BASE_DIR/ +wget -c -N http://semantic3d.net/data/sem8_labels_training.7z -P $BASE_DIR + + +# Test data +wget -c -N http://semantic3d.net/data/point-clouds/testing1/birdfountain_station1_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/castleblatten_station1_intensity_rgb.7z -P 
$BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/castleblatten_station5_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/marketplacefeldkirch_station1_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/marketplacefeldkirch_station4_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/marketplacefeldkirch_station7_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/sg27_station10_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/sg27_station3_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/sg27_station6_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/sg27_station8_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/sg28_station2_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/sg28_station5_xyz_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/stgallencathedral_station1_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/stgallencathedral_station3_intensity_rgb.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing1/stgallencathedral_station6_intensity_rgb.7z -P $BASE_DIR + +# reduced-8 +wget -c -N http://semantic3d.net/data/point-clouds/testing2/MarketplaceFeldkirch_Station4_rgb_intensity-reduced.txt.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing2/StGallenCathedral_station6_rgb_intensity-reduced.txt.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing2/sg27_station10_rgb_intensity-reduced.txt.7z -P $BASE_DIR +wget -c -N http://semantic3d.net/data/point-clouds/testing2/sg28_Station2_rgb_intensity-reduced.txt.7z -P $BASE_DIR + + + 
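The data-preparation scripts in this patch all share one pattern: grid-subsample the cloud, build a KDTree over the subsampled points, and store, for every full-resolution point, the index of its nearest subsampled neighbor so that predictions can be projected back to full resolution at test time. A minimal sketch of that pattern on synthetic points, where a simple one-point-per-voxel pick stands in for the repo's `DP.grid_sub_sampling` (an assumption; the real helper also averages colors and votes on labels):

```python
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3)).astype(np.float32)  # synthetic full-resolution cloud

# 1) Grid subsampling: keep one representative point per voxel.
grid_size = 0.1
voxels = np.floor(points / grid_size).astype(np.int32)
_, keep = np.unique(voxels, axis=0, return_index=True)
sub_points = points[np.sort(keep)]

# 2) KDTree over the subsampled cloud (the real scripts pickle this tree).
search_tree = KDTree(sub_points, leaf_size=50)

# 3) Projection indices: nearest subsampled point for each original point,
#    used to upsample per-point predictions back to full resolution.
proj_idx = np.squeeze(search_tree.query(points, return_distance=False))
proj_idx = proj_idx.astype(np.int32)
print(sub_points.shape, proj_idx.shape)
```

Storing `proj_idx` once at preparation time means inference never has to run a nearest-neighbor search over the full cloud; upsampling a prediction is just `pred[proj_idx]`.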
+for entry in "$BASE_DIR"/* +do + 7z x "$entry" -o$(dirname "$entry") -y +done + +mv $BASE_DIR/station1_xyz_intensity_rgb.txt $BASE_DIR/neugasse_station1_xyz_intensity_rgb.txt + +for entry in "$BASE_DIR"/*.7z +do + rm "$entry" +done \ No newline at end of file diff --git a/competing_methods/my_RandLANet/utils/meta/anno_paths.txt b/competing_methods/my_RandLANet/utils/meta/anno_paths.txt new file mode 100644 index 00000000..0ad2f259 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/meta/anno_paths.txt @@ -0,0 +1,272 @@ +Area_1/conferenceRoom_1/Annotations +Area_1/conferenceRoom_2/Annotations +Area_1/copyRoom_1/Annotations +Area_1/hallway_1/Annotations +Area_1/hallway_2/Annotations +Area_1/hallway_3/Annotations +Area_1/hallway_4/Annotations +Area_1/hallway_5/Annotations +Area_1/hallway_6/Annotations +Area_1/hallway_7/Annotations +Area_1/hallway_8/Annotations +Area_1/office_10/Annotations +Area_1/office_11/Annotations +Area_1/office_12/Annotations +Area_1/office_13/Annotations +Area_1/office_14/Annotations +Area_1/office_15/Annotations +Area_1/office_16/Annotations +Area_1/office_17/Annotations +Area_1/office_18/Annotations +Area_1/office_19/Annotations +Area_1/office_1/Annotations +Area_1/office_20/Annotations +Area_1/office_21/Annotations +Area_1/office_22/Annotations +Area_1/office_23/Annotations +Area_1/office_24/Annotations +Area_1/office_25/Annotations +Area_1/office_26/Annotations +Area_1/office_27/Annotations +Area_1/office_28/Annotations +Area_1/office_29/Annotations +Area_1/office_2/Annotations +Area_1/office_30/Annotations +Area_1/office_31/Annotations +Area_1/office_3/Annotations +Area_1/office_4/Annotations +Area_1/office_5/Annotations +Area_1/office_6/Annotations +Area_1/office_7/Annotations +Area_1/office_8/Annotations +Area_1/office_9/Annotations +Area_1/pantry_1/Annotations +Area_1/WC_1/Annotations +Area_2/auditorium_1/Annotations +Area_2/auditorium_2/Annotations +Area_2/conferenceRoom_1/Annotations +Area_2/hallway_10/Annotations 
+Area_2/hallway_11/Annotations +Area_2/hallway_12/Annotations +Area_2/hallway_1/Annotations +Area_2/hallway_2/Annotations +Area_2/hallway_3/Annotations +Area_2/hallway_4/Annotations +Area_2/hallway_5/Annotations +Area_2/hallway_6/Annotations +Area_2/hallway_7/Annotations +Area_2/hallway_8/Annotations +Area_2/hallway_9/Annotations +Area_2/office_10/Annotations +Area_2/office_11/Annotations +Area_2/office_12/Annotations +Area_2/office_13/Annotations +Area_2/office_14/Annotations +Area_2/office_1/Annotations +Area_2/office_2/Annotations +Area_2/office_3/Annotations +Area_2/office_4/Annotations +Area_2/office_5/Annotations +Area_2/office_6/Annotations +Area_2/office_7/Annotations +Area_2/office_8/Annotations +Area_2/office_9/Annotations +Area_2/storage_1/Annotations +Area_2/storage_2/Annotations +Area_2/storage_3/Annotations +Area_2/storage_4/Annotations +Area_2/storage_5/Annotations +Area_2/storage_6/Annotations +Area_2/storage_7/Annotations +Area_2/storage_8/Annotations +Area_2/storage_9/Annotations +Area_2/WC_1/Annotations +Area_2/WC_2/Annotations +Area_3/conferenceRoom_1/Annotations +Area_3/hallway_1/Annotations +Area_3/hallway_2/Annotations +Area_3/hallway_3/Annotations +Area_3/hallway_4/Annotations +Area_3/hallway_5/Annotations +Area_3/hallway_6/Annotations +Area_3/lounge_1/Annotations +Area_3/lounge_2/Annotations +Area_3/office_10/Annotations +Area_3/office_1/Annotations +Area_3/office_2/Annotations +Area_3/office_3/Annotations +Area_3/office_4/Annotations +Area_3/office_5/Annotations +Area_3/office_6/Annotations +Area_3/office_7/Annotations +Area_3/office_8/Annotations +Area_3/office_9/Annotations +Area_3/storage_1/Annotations +Area_3/storage_2/Annotations +Area_3/WC_1/Annotations +Area_3/WC_2/Annotations +Area_4/conferenceRoom_1/Annotations +Area_4/conferenceRoom_2/Annotations +Area_4/conferenceRoom_3/Annotations +Area_4/hallway_10/Annotations +Area_4/hallway_11/Annotations +Area_4/hallway_12/Annotations +Area_4/hallway_13/Annotations 
+Area_4/hallway_14/Annotations +Area_4/hallway_1/Annotations +Area_4/hallway_2/Annotations +Area_4/hallway_3/Annotations +Area_4/hallway_4/Annotations +Area_4/hallway_5/Annotations +Area_4/hallway_6/Annotations +Area_4/hallway_7/Annotations +Area_4/hallway_8/Annotations +Area_4/hallway_9/Annotations +Area_4/lobby_1/Annotations +Area_4/lobby_2/Annotations +Area_4/office_10/Annotations +Area_4/office_11/Annotations +Area_4/office_12/Annotations +Area_4/office_13/Annotations +Area_4/office_14/Annotations +Area_4/office_15/Annotations +Area_4/office_16/Annotations +Area_4/office_17/Annotations +Area_4/office_18/Annotations +Area_4/office_19/Annotations +Area_4/office_1/Annotations +Area_4/office_20/Annotations +Area_4/office_21/Annotations +Area_4/office_22/Annotations +Area_4/office_2/Annotations +Area_4/office_3/Annotations +Area_4/office_4/Annotations +Area_4/office_5/Annotations +Area_4/office_6/Annotations +Area_4/office_7/Annotations +Area_4/office_8/Annotations +Area_4/office_9/Annotations +Area_4/storage_1/Annotations +Area_4/storage_2/Annotations +Area_4/storage_3/Annotations +Area_4/storage_4/Annotations +Area_4/WC_1/Annotations +Area_4/WC_2/Annotations +Area_4/WC_3/Annotations +Area_4/WC_4/Annotations +Area_5/conferenceRoom_1/Annotations +Area_5/conferenceRoom_2/Annotations +Area_5/conferenceRoom_3/Annotations +Area_5/hallway_10/Annotations +Area_5/hallway_11/Annotations +Area_5/hallway_12/Annotations +Area_5/hallway_13/Annotations +Area_5/hallway_14/Annotations +Area_5/hallway_15/Annotations +Area_5/hallway_1/Annotations +Area_5/hallway_2/Annotations +Area_5/hallway_3/Annotations +Area_5/hallway_4/Annotations +Area_5/hallway_5/Annotations +Area_5/hallway_6/Annotations +Area_5/hallway_7/Annotations +Area_5/hallway_8/Annotations +Area_5/hallway_9/Annotations +Area_5/lobby_1/Annotations +Area_5/office_10/Annotations +Area_5/office_11/Annotations +Area_5/office_12/Annotations +Area_5/office_13/Annotations +Area_5/office_14/Annotations 
+Area_5/office_15/Annotations +Area_5/office_16/Annotations +Area_5/office_17/Annotations +Area_5/office_18/Annotations +Area_5/office_19/Annotations +Area_5/office_1/Annotations +Area_5/office_20/Annotations +Area_5/office_21/Annotations +Area_5/office_22/Annotations +Area_5/office_23/Annotations +Area_5/office_24/Annotations +Area_5/office_25/Annotations +Area_5/office_26/Annotations +Area_5/office_27/Annotations +Area_5/office_28/Annotations +Area_5/office_29/Annotations +Area_5/office_2/Annotations +Area_5/office_30/Annotations +Area_5/office_31/Annotations +Area_5/office_32/Annotations +Area_5/office_33/Annotations +Area_5/office_34/Annotations +Area_5/office_35/Annotations +Area_5/office_36/Annotations +Area_5/office_37/Annotations +Area_5/office_38/Annotations +Area_5/office_39/Annotations +Area_5/office_3/Annotations +Area_5/office_40/Annotations +Area_5/office_41/Annotations +Area_5/office_42/Annotations +Area_5/office_4/Annotations +Area_5/office_5/Annotations +Area_5/office_6/Annotations +Area_5/office_7/Annotations +Area_5/office_8/Annotations +Area_5/office_9/Annotations +Area_5/pantry_1/Annotations +Area_5/storage_1/Annotations +Area_5/storage_2/Annotations +Area_5/storage_3/Annotations +Area_5/storage_4/Annotations +Area_5/WC_1/Annotations +Area_5/WC_2/Annotations +Area_6/conferenceRoom_1/Annotations +Area_6/copyRoom_1/Annotations +Area_6/hallway_1/Annotations +Area_6/hallway_2/Annotations +Area_6/hallway_3/Annotations +Area_6/hallway_4/Annotations +Area_6/hallway_5/Annotations +Area_6/hallway_6/Annotations +Area_6/lounge_1/Annotations +Area_6/office_10/Annotations +Area_6/office_11/Annotations +Area_6/office_12/Annotations +Area_6/office_13/Annotations +Area_6/office_14/Annotations +Area_6/office_15/Annotations +Area_6/office_16/Annotations +Area_6/office_17/Annotations +Area_6/office_18/Annotations +Area_6/office_19/Annotations +Area_6/office_1/Annotations +Area_6/office_20/Annotations +Area_6/office_21/Annotations +Area_6/office_22/Annotations 
+Area_6/office_23/Annotations +Area_6/office_24/Annotations +Area_6/office_25/Annotations +Area_6/office_26/Annotations +Area_6/office_27/Annotations +Area_6/office_28/Annotations +Area_6/office_29/Annotations +Area_6/office_2/Annotations +Area_6/office_30/Annotations +Area_6/office_31/Annotations +Area_6/office_32/Annotations +Area_6/office_33/Annotations +Area_6/office_34/Annotations +Area_6/office_35/Annotations +Area_6/office_36/Annotations +Area_6/office_37/Annotations +Area_6/office_3/Annotations +Area_6/office_4/Annotations +Area_6/office_5/Annotations +Area_6/office_6/Annotations +Area_6/office_7/Annotations +Area_6/office_8/Annotations +Area_6/office_9/Annotations +Area_6/openspace_1/Annotations +Area_6/pantry_1/Annotations diff --git a/competing_methods/my_RandLANet/utils/meta/class_names.txt b/competing_methods/my_RandLANet/utils/meta/class_names.txt new file mode 100644 index 00000000..ca1d1788 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/meta/class_names.txt @@ -0,0 +1,13 @@ +ceiling +floor +wall +beam +column +window +door +table +chair +sofa +bookcase +board +clutter diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/KDTreeTableAdaptor.h b/competing_methods/my_RandLANet/utils/nearest_neighbors/KDTreeTableAdaptor.h new file mode 100644 index 00000000..8cdc787f --- /dev/null +++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/KDTreeTableAdaptor.h @@ -0,0 +1,189 @@ +/*********************************************************************** + * Software License Agreement (BSD License) + * + * Copyright 2011-16 Jose Luis Blanco (joseluisblancoc@gmail.com). + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. 
Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. + * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, + * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT + * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + *************************************************************************/ + +#pragma once + +#include "nanoflann.hpp" + +// #include + +// ===== This example shows how to use nanoflann with these types of containers: ======= +//typedef std::vector > my_vector_of_vectors_t; +//typedef std::vector my_vector_of_vectors_t; // This requires #include +// ===================================================================================== + + +/** A simple vector-of-vectors adaptor for nanoflann, without duplicating the storage. + * The i'th vector represents a point in the state space. + * + * \tparam DIM If set to >0, it specifies a compile-time fixed dimensionality for the points in the data set, allowing more compiler optimizations. + * \tparam num_t The type of the point coordinates (typically, double or float). + * \tparam Distance The distance metric to use: nanoflann::metric_L1, nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. 
+ * \tparam IndexType The type for indices in the KD-tree index (typically, size_t or int)
+ */
+// template <class VectorType, typename num_t = double, int DIM = -1, class Distance = nanoflann::metric_L2, typename IndexType = size_t>
+// struct KDTreeVectorAdaptor
+// {
+//	typedef KDTreeVectorAdaptor<VectorType, num_t, DIM, Distance> self_t;
+//	typedef typename Distance::template traits<num_t, self_t>::distance_t metric_t;
+//	typedef nanoflann::KDTreeSingleIndexAdaptor< metric_t,self_t,DIM,IndexType> index_t;
+
+//	index_t* index; //! The kd-tree index for the user to call its methods as usual with any other FLANN index.
+//	size_t dims;
+
+//	/// Constructor: takes a const ref to the vector of vectors object with the data points
+//	KDTreeVectorAdaptor(const size_t dims /* dimensionality */, const VectorType &mat, const int leaf_max_size = 10) : m_data(mat)
+//	{
+//		assert(mat.size() != 0);
+//		this->dims = dims;
+//		index = new index_t( static_cast<int>(dims), *this /* adaptor */, nanoflann::KDTreeSingleIndexAdaptorParams(leaf_max_size ) );
+//		index->buildIndex();
+//	}
+
+//	~KDTreeVectorAdaptor() {
+//		delete index;
+//	}
+
+//	const VectorType &m_data;
+
+//	/** Query for the \a num_closest closest points to a given point (entered as query_point[0:dim-1]).
+//	 *  Note that this is a short-cut method for index->findNeighbors().
+//	 *  The user can also call index->... methods as desired.
+//	 *  \note nChecks_IGNORED is ignored but kept for compatibility with the original FLANN interface.
+//	 */
+//	inline void query(const num_t *query_point, const size_t num_closest, IndexType *out_indices, num_t *out_distances_sq, const int nChecks_IGNORED = 10) const
+//	{
+//		nanoflann::KNNResultSet<num_t, IndexType> resultSet(num_closest);
+//		resultSet.init(out_indices, out_distances_sq);
+//		index->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+//	}
+
+//	/** @name Interface expected by KDTreeSingleIndexAdaptor
+//	 * @{ */
+
+//	const self_t & derived() const {
+//		return *this;
+//	}
+//	self_t & derived() {
+//		return *this;
+//	}
+
+//	// Must return the number of data points
+//	inline size_t kdtree_get_point_count() const {
+//		return m_data.size()/this->dims;
+//	}
+
+//	// Returns the dim'th component of the idx'th point in the class:
+//	inline num_t kdtree_get_pt(const size_t idx, const size_t dim) const {
+//		return m_data[idx*this->dims + dim];
+//	}
+
+//	// Optional bounding-box computation: return false to default to a standard bbox computation loop.
+//	// Return true if the BBOX was already computed by the class and returned in "bb" so it can be avoided to redo it again.
+//	// Look at bb.size() to find out the expected dimensionality (e.g. 2 or 3 for point clouds)
+//	template <class BBOX>
+//	bool kdtree_get_bbox(BBOX & /*bb*/) const {
+//		return false;
+//	}
+
+//	/** @} */
+
+// }; // end of KDTreeVectorAdaptor
+
+
+
+
+template <typename TableType, typename num_t = float, int DIM = -1, class Distance = nanoflann::metric_L2, typename IndexType = size_t>
+struct KDTreeTableAdaptor
+{
+	typedef KDTreeTableAdaptor<TableType, num_t, DIM, Distance> self_t;
+	typedef typename Distance::template traits<num_t, self_t>::distance_t metric_t;
+	typedef nanoflann::KDTreeSingleIndexAdaptor< metric_t,self_t,DIM,IndexType> index_t;
+
+	index_t* index; //! The kd-tree index for the user to call its methods as usual with any other FLANN index.
+	size_t dim;
+	size_t npts;
+	const TableType* m_data;
+
+	/// Constructor: takes a const pointer to the flat table of data points
+	KDTreeTableAdaptor(const size_t npts, const size_t dim, const TableType* mat, const int leaf_max_size = 10) : m_data(mat), dim(dim), npts(npts)
+	{
+		assert(npts != 0);
+		index = new index_t( static_cast<int>(dim), *this /* adaptor */, nanoflann::KDTreeSingleIndexAdaptorParams(leaf_max_size ) );
+		index->buildIndex();
+	}
+
+	~KDTreeTableAdaptor() {
+		delete index;
+	}
+
+
+	/** Query for the \a num_closest closest points to a given point (entered as query_point[0:dim-1]).
+	 *  Note that this is a short-cut method for index->findNeighbors().
+	 *  The user can also call index->... methods as desired.
+	 *  \note nChecks_IGNORED is ignored but kept for compatibility with the original FLANN interface.
+	 */
+	inline void query(const num_t *query_point, const size_t num_closest, IndexType *out_indices, num_t *out_distances_sq, const int nChecks_IGNORED = 10) const
+	{
+		nanoflann::KNNResultSet<num_t, IndexType> resultSet(num_closest);
+		resultSet.init(out_indices, out_distances_sq);
+		index->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+	}
+
+	/** @name Interface expected by KDTreeSingleIndexAdaptor
+	 * @{ */
+
+	const self_t & derived() const {
+		return *this;
+	}
+	self_t & derived() {
+		return *this;
+	}
+
+	// Must return the number of data points
+	inline size_t kdtree_get_point_count() const {
+		return this->npts;
+	}
+
+	// Returns the coord_id'th component of the pts_id'th point in the class:
+	inline num_t kdtree_get_pt(const size_t pts_id, const size_t coord_id) const {
+		return m_data[pts_id*this->dim + coord_id];
+	}
+
+	// Optional bounding-box computation: return false to default to a standard bbox computation loop.
+	// Return true if the BBOX was already computed by the class and returned in "bb" so it can be avoided to redo it again.
+	// Look at bb.size() to find out the expected dimensionality (e.g.
2 or 3 for point clouds) + template + bool kdtree_get_bbox(BBOX & /*bb*/) const { + return false; + } + + /** @} */ + +}; // end of KDTreeVectorOfVectorsAdaptor + diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/knn.cpp b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn.cpp new file mode 100644 index 00000000..39236cd1 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn.cpp @@ -0,0 +1,7610 @@ +/* Generated by Cython 0.29.15 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. +#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) + #error Cython requires Python 2.6+ or Python 3.3+. +#else +#define CYTHON_ABI "0_29_15" +#define CYTHON_HEX_VERSION 0x001D0FF0 +#define CYTHON_FUTURE_DIVISION 0 +#include +#ifndef offsetof + #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) +#endif +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#endif +#ifndef DL_IMPORT + #define DL_IMPORT(t) t +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#define __PYX_COMMA , +#ifndef HAVE_LONG_LONG + #if PY_VERSION_HEX >= 0x02070000 + #define HAVE_LONG_LONG + #endif +#endif +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef Py_HUGE_VAL + #define Py_HUGE_VAL HUGE_VAL +#endif +#ifdef PYPY_VERSION + #define CYTHON_COMPILING_IN_PYPY 1 + #define CYTHON_COMPILING_IN_PYSTON 0 + #define CYTHON_COMPILING_IN_CPYTHON 0 + #undef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 0 + #undef CYTHON_USE_PYTYPE_LOOKUP + #define CYTHON_USE_PYTYPE_LOOKUP 0 + #if PY_VERSION_HEX < 0x03050000 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #elif !defined(CYTHON_USE_ASYNC_SLOTS) + 
#define CYTHON_USE_ASYNC_SLOTS 1 + #endif + #undef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 0 + #undef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 0 + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #undef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 1 + #undef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 0 + #undef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 0 + #undef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 0 + #undef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 0 + #undef CYTHON_PEP489_MULTI_PHASE_INIT + #define CYTHON_PEP489_MULTI_PHASE_INIT 0 + #undef CYTHON_USE_TP_FINALIZE + #define CYTHON_USE_TP_FINALIZE 0 + #undef CYTHON_USE_DICT_VERSIONS + #define CYTHON_USE_DICT_VERSIONS 0 + #undef CYTHON_USE_EXC_INFO_STACK + #define CYTHON_USE_EXC_INFO_STACK 0 +#elif defined(PYSTON_VERSION) + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_PYSTON 1 + #define CYTHON_COMPILING_IN_CPYTHON 0 + #ifndef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 1 + #endif + #undef CYTHON_USE_PYTYPE_LOOKUP + #define CYTHON_USE_PYTYPE_LOOKUP 0 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #undef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 0 + #ifndef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 1 + #endif + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #ifndef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 0 + #endif + #ifndef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 1 + #endif + #ifndef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 1 + #endif + #undef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 0 + #undef 
CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 0 + #undef CYTHON_PEP489_MULTI_PHASE_INIT + #define CYTHON_PEP489_MULTI_PHASE_INIT 0 + #undef CYTHON_USE_TP_FINALIZE + #define CYTHON_USE_TP_FINALIZE 0 + #undef CYTHON_USE_DICT_VERSIONS + #define CYTHON_USE_DICT_VERSIONS 0 + #undef CYTHON_USE_EXC_INFO_STACK + #define CYTHON_USE_EXC_INFO_STACK 0 +#else + #define CYTHON_COMPILING_IN_PYPY 0 + #define CYTHON_COMPILING_IN_PYSTON 0 + #define CYTHON_COMPILING_IN_CPYTHON 1 + #ifndef CYTHON_USE_TYPE_SLOTS + #define CYTHON_USE_TYPE_SLOTS 1 + #endif + #if PY_VERSION_HEX < 0x02070000 + #undef CYTHON_USE_PYTYPE_LOOKUP + #define CYTHON_USE_PYTYPE_LOOKUP 0 + #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) + #define CYTHON_USE_PYTYPE_LOOKUP 1 + #endif + #if PY_MAJOR_VERSION < 3 + #undef CYTHON_USE_ASYNC_SLOTS + #define CYTHON_USE_ASYNC_SLOTS 0 + #elif !defined(CYTHON_USE_ASYNC_SLOTS) + #define CYTHON_USE_ASYNC_SLOTS 1 + #endif + #if PY_VERSION_HEX < 0x02070000 + #undef CYTHON_USE_PYLONG_INTERNALS + #define CYTHON_USE_PYLONG_INTERNALS 0 + #elif !defined(CYTHON_USE_PYLONG_INTERNALS) + #define CYTHON_USE_PYLONG_INTERNALS 1 + #endif + #ifndef CYTHON_USE_PYLIST_INTERNALS + #define CYTHON_USE_PYLIST_INTERNALS 1 + #endif + #ifndef CYTHON_USE_UNICODE_INTERNALS + #define CYTHON_USE_UNICODE_INTERNALS 1 + #endif + #if PY_VERSION_HEX < 0x030300F0 + #undef CYTHON_USE_UNICODE_WRITER + #define CYTHON_USE_UNICODE_WRITER 0 + #elif !defined(CYTHON_USE_UNICODE_WRITER) + #define CYTHON_USE_UNICODE_WRITER 1 + #endif + #ifndef CYTHON_AVOID_BORROWED_REFS + #define CYTHON_AVOID_BORROWED_REFS 0 + #endif + #ifndef CYTHON_ASSUME_SAFE_MACROS + #define CYTHON_ASSUME_SAFE_MACROS 1 + #endif + #ifndef CYTHON_UNPACK_METHODS + #define CYTHON_UNPACK_METHODS 1 + #endif + #ifndef CYTHON_FAST_THREAD_STATE + #define CYTHON_FAST_THREAD_STATE 1 + #endif + #ifndef CYTHON_FAST_PYCALL + #define CYTHON_FAST_PYCALL 1 + #endif + #ifndef CYTHON_PEP489_MULTI_PHASE_INIT + #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 
0x03050000) + #endif + #ifndef CYTHON_USE_TP_FINALIZE + #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) + #endif + #ifndef CYTHON_USE_DICT_VERSIONS + #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) + #endif + #ifndef CYTHON_USE_EXC_INFO_STACK + #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) + #endif +#endif +#if !defined(CYTHON_FAST_PYCCALL) +#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) +#endif +#if CYTHON_USE_PYLONG_INTERNALS + #include "longintrepr.h" + #undef SHIFT + #undef BASE + #undef MASK + #ifdef SIZEOF_VOID_P + enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; + #endif +#endif +#ifndef __has_attribute + #define __has_attribute(x) 0 +#endif +#ifndef __has_cpp_attribute + #define __has_cpp_attribute(x) 0 +#endif +#ifndef CYTHON_RESTRICT + #if defined(__GNUC__) + #define CYTHON_RESTRICT __restrict__ + #elif defined(_MSC_VER) && _MSC_VER >= 1400 + #define CYTHON_RESTRICT __restrict + #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define CYTHON_RESTRICT restrict + #else + #define CYTHON_RESTRICT + #endif +#endif +#ifndef CYTHON_UNUSED +# if defined(__GNUC__) +# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) +# define CYTHON_UNUSED __attribute__ ((__unused__)) +# else +# define CYTHON_UNUSED +# endif +#endif +#ifndef CYTHON_MAYBE_UNUSED_VAR +# if defined(__cplusplus) + template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } +# else +# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) +# endif +#endif +#ifndef CYTHON_NCP_UNUSED +# if CYTHON_COMPILING_IN_CPYTHON +# define CYTHON_NCP_UNUSED +# else +# define CYTHON_NCP_UNUSED CYTHON_UNUSED +# endif +#endif +#define __Pyx_void_to_None(void_result) ((void)(void_result), 
Py_INCREF(Py_None), Py_None) +#ifdef _MSC_VER + #ifndef _MSC_STDINT_H_ + #if _MSC_VER < 1300 + typedef unsigned char uint8_t; + typedef unsigned int uint32_t; + #else + typedef unsigned __int8 uint8_t; + typedef unsigned __int32 uint32_t; + #endif + #endif +#else + #include +#endif +#ifndef CYTHON_FALLTHROUGH + #if defined(__cplusplus) && __cplusplus >= 201103L + #if __has_cpp_attribute(fallthrough) + #define CYTHON_FALLTHROUGH [[fallthrough]] + #elif __has_cpp_attribute(clang::fallthrough) + #define CYTHON_FALLTHROUGH [[clang::fallthrough]] + #elif __has_cpp_attribute(gnu::fallthrough) + #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] + #endif + #endif + #ifndef CYTHON_FALLTHROUGH + #if __has_attribute(fallthrough) + #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) + #else + #define CYTHON_FALLTHROUGH + #endif + #endif + #if defined(__clang__ ) && defined(__apple_build_version__) + #if __apple_build_version__ < 7000000 + #undef CYTHON_FALLTHROUGH + #define CYTHON_FALLTHROUGH + #endif + #endif +#endif + +#ifndef __cplusplus + #error "Cython files generated with the C++ option must be compiled with a C++ compiler." 
+#endif +#ifndef CYTHON_INLINE + #if defined(__clang__) + #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) + #else + #define CYTHON_INLINE inline + #endif +#endif +template +void __Pyx_call_destructor(T& x) { + x.~T(); +} +template +class __Pyx_FakeReference { + public: + __Pyx_FakeReference() : ptr(NULL) { } + __Pyx_FakeReference(const T& ref) : ptr(const_cast(&ref)) { } + T *operator->() { return ptr; } + T *operator&() { return ptr; } + operator T&() { return *ptr; } + template bool operator ==(U other) { return *ptr == other; } + template bool operator !=(U other) { return *ptr != other; } + private: + T *ptr; +}; + +#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) + #define Py_OptimizeFlag 0 +#endif +#define __PYX_BUILD_PY_SSIZE_T "n" +#define CYTHON_FORMAT_SSIZE_T "z" +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) + #define __Pyx_DefaultClassType PyClass_Type +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" +#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2 + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) +#else + #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ + PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) +#endif + #define __Pyx_DefaultClassType PyType_Type +#endif +#ifndef Py_TPFLAGS_CHECKTYPES + #define Py_TPFLAGS_CHECKTYPES 0 +#endif +#ifndef Py_TPFLAGS_HAVE_INDEX + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif +#ifndef Py_TPFLAGS_HAVE_NEWBUFFER + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif +#ifndef Py_TPFLAGS_HAVE_FINALIZE + #define Py_TPFLAGS_HAVE_FINALIZE 0 +#endif +#ifndef METH_STACKLESS + #define 
METH_STACKLESS 0 +#endif +#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) + #ifndef METH_FASTCALL + #define METH_FASTCALL 0x80 + #endif + typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); + typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, + Py_ssize_t nargs, PyObject *kwnames); +#else + #define __Pyx_PyCFunctionFast _PyCFunctionFast + #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords +#endif +#if CYTHON_FAST_PYCCALL +#define __Pyx_PyFastCFunction_Check(func)\ + ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) +#else +#define __Pyx_PyFastCFunction_Check(func) 0 +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) + #define PyObject_Malloc(s) PyMem_Malloc(s) + #define PyObject_Free(p) PyMem_Free(p) + #define PyObject_Realloc(p) PyMem_Realloc(p) +#endif +#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 + #define PyMem_RawMalloc(n) PyMem_Malloc(n) + #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) + #define PyMem_RawFree(p) PyMem_Free(p) +#endif +#if CYTHON_COMPILING_IN_PYSTON + #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) + #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) +#else + #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) + #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) +#endif +#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 + #define __Pyx_PyThreadState_Current PyThreadState_GET() +#elif PY_VERSION_HEX >= 0x03060000 + #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() +#elif PY_VERSION_HEX >= 0x03000000 + #define __Pyx_PyThreadState_Current PyThreadState_GET() +#else + #define __Pyx_PyThreadState_Current _PyThreadState_Current +#endif +#if PY_VERSION_HEX < 
0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) +#include "pythread.h" +#define Py_tss_NEEDS_INIT 0 +typedef int Py_tss_t; +static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { + *key = PyThread_create_key(); + return 0; +} +static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { + Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); + *key = Py_tss_NEEDS_INIT; + return key; +} +static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { + PyObject_Free(key); +} +static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { + return *key != Py_tss_NEEDS_INIT; +} +static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { + PyThread_delete_key(*key); + *key = Py_tss_NEEDS_INIT; +} +static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { + return PyThread_set_key_value(*key, value); +} +static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { + return PyThread_get_key_value(*key); +} +#endif +#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) +#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) +#else +#define __Pyx_PyDict_NewPresized(n) PyDict_New() +#endif +#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) +#endif +#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS +#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) +#else +#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) +#endif +#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) + #define CYTHON_PEP393_ENABLED 1 + #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ + 0 : _PyUnicode_Ready((PyObject *)(op))) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) + #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) + #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) + #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) + #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) + #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) +#else + #define CYTHON_PEP393_ENABLED 0 + #define PyUnicode_1BYTE_KIND 1 + #define PyUnicode_2BYTE_KIND 2 + #define PyUnicode_4BYTE_KIND 4 + #define __Pyx_PyUnicode_READY(op) (0) + #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) + #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) + #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) + #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) + #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) + #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) + #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) + #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) +#endif +#if CYTHON_COMPILING_IN_PYPY + #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) +#else + #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) + #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ + PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) + #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) + #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) +#endif +#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) + #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) +#endif +#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) +#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) +#else + #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) +#endif +#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) + #define PyObject_ASCII(o) PyObject_Repr(o) +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyStringObject PyUnicodeObject + #define PyString_Type PyUnicode_Type + #define PyString_Check PyUnicode_Check + #define PyString_CheckExact PyUnicode_CheckExact + #define PyObject_Unicode PyObject_Str +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) + #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) +#else + #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) + #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) +#endif +#ifndef PySet_CheckExact + #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) +#endif +#if CYTHON_ASSUME_SAFE_MACROS + #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) +#else + #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyIntObject PyLongObject + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define PyNumber_Int PyNumber_Long +#endif +#if PY_MAJOR_VERSION >= 3 + #define PyBoolObject PyLongObject 
+#endif +#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY + #ifndef PyUnicode_InternFromString + #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) + #endif +#endif +#if PY_VERSION_HEX < 0x030200A4 + typedef long Py_hash_t; + #define __Pyx_PyInt_FromHash_t PyInt_FromLong + #define __Pyx_PyInt_AsHash_t PyInt_AsLong +#else + #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t + #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t +#endif +#if PY_MAJOR_VERSION >= 3 + #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : (Py_INCREF(func), func)) +#else + #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) +#endif +#if CYTHON_USE_ASYNC_SLOTS + #if PY_VERSION_HEX >= 0x030500B1 + #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods + #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) + #else + #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) + #endif +#else + #define __Pyx_PyType_AsAsync(obj) NULL +#endif +#ifndef __Pyx_PyAsyncMethodsStruct + typedef struct { + unaryfunc am_await; + unaryfunc am_aiter; + unaryfunc am_anext; + } __Pyx_PyAsyncMethodsStruct; +#endif + +#if defined(WIN32) || defined(MS_WINDOWS) + #define _USE_MATH_DEFINES +#endif +#include <math.h> +#ifdef NAN +#define __PYX_NAN() ((float) NAN) +#else +static CYTHON_INLINE float __PYX_NAN() { + float value; + memset(&value, 0xFF, sizeof(value)); + return value; +} +#endif +#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) +#define __Pyx_truncl trunc +#else +#define __Pyx_truncl truncl +#endif + + +#define __PYX_ERR(f_index, lineno, Ln_error) \ +{ \ + __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ +} + +#ifndef __PYX_EXTERN_C + #ifdef __cplusplus + #define __PYX_EXTERN_C extern "C" + #else + #define __PYX_EXTERN_C extern + #endif +#endif + +#define __PYX_HAVE__nearest_neighbors +#define __PYX_HAVE_API__nearest_neighbors +/* Early includes */ +#include <string.h>
+#include <stdio.h> +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" + + /* NumPy API declarations from "numpy/__init__.pxd" */ + +#include "knn_.h" +#ifdef _OPENMP +#include <omp.h> +#endif /* _OPENMP */ + +#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) +#define CYTHON_WITHOUT_ASSERTIONS +#endif + +typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; + const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; + +#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 +#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 +#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) +#define __PYX_DEFAULT_STRING_ENCODING "" +#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString +#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#define __Pyx_uchar_cast(c) ((unsigned char)c) +#define __Pyx_long_cast(x) ((long)x) +#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ + (sizeof(type) < sizeof(Py_ssize_t)) ||\ + (sizeof(type) > sizeof(Py_ssize_t) &&\ + likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX) &&\ + (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ + v == (type)PY_SSIZE_T_MIN))) ||\ + (sizeof(type) == sizeof(Py_ssize_t) &&\ + (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ + v == (type)PY_SSIZE_T_MAX))) ) +static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { + return (size_t) i < (size_t) limit; +} +#if defined (__cplusplus) && __cplusplus >= 201103L + #include <cstdlib> + #define __Pyx_sst_abs(value) std::abs(value) +#elif SIZEOF_INT >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) abs(value) +#elif SIZEOF_LONG >= SIZEOF_SIZE_T + #define __Pyx_sst_abs(value) labs(value) +#elif defined (_MSC_VER) + #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) +#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L + #define __Pyx_sst_abs(value) llabs(value) +#elif
defined (__GNUC__) + #define __Pyx_sst_abs(value) __builtin_llabs(value) +#else + #define __Pyx_sst_abs(value) ((value<0) ? -value : value) +#endif +static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); +static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); +#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) +#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); +#if PY_MAJOR_VERSION < 3 + #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize +#else + #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString + #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize +#endif +#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) +#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) +#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) +#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) +#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) +#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) +#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) +#define __Pyx_PyObject_FromCString(s) 
__Pyx_PyObject_FromString((const char*)s) +#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) +#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) +#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) +#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) +static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { + const Py_UNICODE *u_end = u; + while (*u_end++) ; + return (size_t)(u_end - u - 1); +} +#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) +#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode +#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode +#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) +#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) +static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); +#define __Pyx_PySequence_Tuple(obj)\ + (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +#if CYTHON_ASSUME_SAFE_MACROS +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) +#else +#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) +#endif +#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) +#if PY_MAJOR_VERSION >= 3 +#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) +#else +#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) +#endif +#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x)) +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII +static int __Pyx_sys_getdefaultencoding_not_ascii; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + PyObject* ascii_chars_u = NULL; + PyObject* ascii_chars_b = NULL; + const char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + if (strcmp(default_encoding_c, "ascii") == 0) { + __Pyx_sys_getdefaultencoding_not_ascii = 0; + } else { + char ascii_chars[128]; + int c; + for (c = 0; c < 128; c++) { + ascii_chars[c] = c; + } + __Pyx_sys_getdefaultencoding_not_ascii = 1; + ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); + if (!ascii_chars_u) goto bad; + ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); + if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { + PyErr_Format( + PyExc_ValueError, + "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", + default_encoding_c); + goto bad; + } + Py_DECREF(ascii_chars_u); + Py_DECREF(ascii_chars_b); + } + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + Py_XDECREF(ascii_chars_u); + Py_XDECREF(ascii_chars_b); + return -1; +} +#endif +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) +#else +#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) +#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT +static char* 
__PYX_DEFAULT_STRING_ENCODING; +static int __Pyx_init_sys_getdefaultencoding_params(void) { + PyObject* sys; + PyObject* default_encoding = NULL; + char* default_encoding_c; + sys = PyImport_ImportModule("sys"); + if (!sys) goto bad; + default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); + Py_DECREF(sys); + if (!default_encoding) goto bad; + default_encoding_c = PyBytes_AsString(default_encoding); + if (!default_encoding_c) goto bad; + __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); + if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; + strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); + Py_DECREF(default_encoding); + return 0; +bad: + Py_XDECREF(default_encoding); + return -1; +} +#endif +#endif + + +/* Test for GCC > 2.95 */ +#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) + #define likely(x) __builtin_expect(!!(x), 1) + #define unlikely(x) __builtin_expect(!!(x), 0) +#else /* !__GNUC__ or GCC < 2.95 */ + #define likely(x) (x) + #define unlikely(x) (x) +#endif /* __GNUC__ */ +static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } + +static PyObject *__pyx_m = NULL; +static PyObject *__pyx_d; +static PyObject *__pyx_b; +static PyObject *__pyx_cython_runtime = NULL; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static PyObject *__pyx_empty_unicode; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; + +/* Header.proto */ +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include <complex> + #else + #include <complex.h> + #endif +#endif +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define
_Complex_I 1.0fj +#endif + + +static const char *__pyx_f[] = { + "knn.pyx", + "__init__.pxd", + "type.pxd", +}; +/* BufferFormatStructs.proto */ +#define IS_UNSIGNED(type) (((type) -1) > 0) +struct __Pyx_StructField_; +#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) +typedef struct { + const char* name; + struct __Pyx_StructField_* fields; + size_t size; + size_t arraysize[8]; + int ndim; + char typegroup; + char is_unsigned; + int flags; +} __Pyx_TypeInfo; +typedef struct __Pyx_StructField_ { + __Pyx_TypeInfo* type; + const char* name; + size_t offset; +} __Pyx_StructField; +typedef struct { + __Pyx_StructField* field; + size_t parent_offset; +} __Pyx_BufFmt_StackElem; +typedef struct { + __Pyx_StructField root; + __Pyx_BufFmt_StackElem* head; + size_t fmt_offset; + size_t new_count, enc_count; + size_t struct_alignment; + int is_complex; + char enc_type; + char new_packmode; + char enc_packmode; + char is_valid_array; +} __Pyx_BufFmt_Context; + + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":689 + * # in Cython to enable them only on the right systems. 
+ * + * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<< + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + */ +typedef npy_int8 __pyx_t_5numpy_int8_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":690 + * + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<< + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t + */ +typedef npy_int16 __pyx_t_5numpy_int16_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":691 + * ctypedef npy_int8 int8_t + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<< + * ctypedef npy_int64 int64_t + * #ctypedef npy_int96 int96_t + */ +typedef npy_int32 __pyx_t_5numpy_int32_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":692 + * ctypedef npy_int16 int16_t + * ctypedef npy_int32 int32_t + * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<< + * #ctypedef npy_int96 int96_t + * #ctypedef npy_int128 int128_t + */ +typedef npy_int64 __pyx_t_5numpy_int64_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":696 + * #ctypedef npy_int128 int128_t + * + * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<< + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + */ +typedef npy_uint8 __pyx_t_5numpy_uint8_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":697 + * + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<< + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t + */ +typedef npy_uint16 __pyx_t_5numpy_uint16_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":698 + * ctypedef npy_uint8 uint8_t + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<< + * ctypedef npy_uint64 uint64_t + * #ctypedef npy_uint96 uint96_t + 
*/ +typedef npy_uint32 __pyx_t_5numpy_uint32_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":699 + * ctypedef npy_uint16 uint16_t + * ctypedef npy_uint32 uint32_t + * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<< + * #ctypedef npy_uint96 uint96_t + * #ctypedef npy_uint128 uint128_t + */ +typedef npy_uint64 __pyx_t_5numpy_uint64_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":703 + * #ctypedef npy_uint128 uint128_t + * + * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<< + * ctypedef npy_float64 float64_t + * #ctypedef npy_float80 float80_t + */ +typedef npy_float32 __pyx_t_5numpy_float32_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":704 + * + * ctypedef npy_float32 float32_t + * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<< + * #ctypedef npy_float80 float80_t + * #ctypedef npy_float128 float128_t + */ +typedef npy_float64 __pyx_t_5numpy_float64_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":713 + * # The int types are mapped a bit surprising -- + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t + */ +typedef npy_long __pyx_t_5numpy_int_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":714 + * # numpy.int corresponds to 'l' and numpy.long to 'q' + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t # <<<<<<<<<<<<<< + * ctypedef npy_longlong longlong_t + * + */ +typedef npy_longlong __pyx_t_5numpy_long_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":715 + * ctypedef npy_long int_t + * ctypedef npy_longlong long_t + * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_ulong uint_t + */ +typedef 
npy_longlong __pyx_t_5numpy_longlong_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":717 + * ctypedef npy_longlong longlong_t + * + * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t + */ +typedef npy_ulong __pyx_t_5numpy_uint_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":718 + * + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<< + * ctypedef npy_ulonglong ulonglong_t + * + */ +typedef npy_ulonglong __pyx_t_5numpy_ulong_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":719 + * ctypedef npy_ulong uint_t + * ctypedef npy_ulonglong ulong_t + * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<< + * + * ctypedef npy_intp intp_t + */ +typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":721 + * ctypedef npy_ulonglong ulonglong_t + * + * ctypedef npy_intp intp_t # <<<<<<<<<<<<<< + * ctypedef npy_uintp uintp_t + * + */ +typedef npy_intp __pyx_t_5numpy_intp_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":722 + * + * ctypedef npy_intp intp_t + * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<< + * + * ctypedef npy_double float_t + */ +typedef npy_uintp __pyx_t_5numpy_uintp_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":724 + * ctypedef npy_uintp uintp_t + * + * ctypedef npy_double float_t # <<<<<<<<<<<<<< + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t + */ +typedef npy_double __pyx_t_5numpy_float_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":725 + * + * ctypedef npy_double float_t + * ctypedef npy_double double_t # <<<<<<<<<<<<<< + * 
ctypedef npy_longdouble longdouble_t + * + */ +typedef npy_double __pyx_t_5numpy_double_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":726 + * ctypedef npy_double float_t + * ctypedef npy_double double_t + * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cfloat cfloat_t + */ +typedef npy_longdouble __pyx_t_5numpy_longdouble_t; +/* Declarations.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< float > __pyx_t_float_complex; + #else + typedef float _Complex __pyx_t_float_complex; + #endif +#else + typedef struct { float real, imag; } __pyx_t_float_complex; +#endif +static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float); + +/* Declarations.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + + +/*--- Type declarations ---*/ + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":728 + * ctypedef npy_longdouble longdouble_t + * + * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<< + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble clongdouble_t + */ +typedef npy_cfloat __pyx_t_5numpy_cfloat_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":729 + * + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<< + * ctypedef npy_clongdouble clongdouble_t + * + */ +typedef npy_cdouble __pyx_t_5numpy_cdouble_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":730 + * ctypedef npy_cfloat cfloat_t + * ctypedef npy_cdouble cdouble_t + * ctypedef npy_clongdouble 
clongdouble_t # <<<<<<<<<<<<<< + * + * ctypedef npy_cdouble complex_t + */ +typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t; + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":732 + * ctypedef npy_clongdouble clongdouble_t + * + * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew1(a): + */ +typedef npy_cdouble __pyx_t_5numpy_complex_t; + +/* --- Runtime support code (head) --- */ +/* Refnanny.proto */ +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); + #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; +#ifdef WITH_THREAD + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ + if (acquire_gil) {\ + PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + PyGILState_Release(__pyx_gilstate_save);\ + } else {\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ + } +#else + #define __Pyx_RefNannySetupContext(name, acquire_gil)\ + __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) +#endif + #define __Pyx_RefNannyFinishContext()\ + __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define 
__Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) + #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) + #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) +#else + #define __Pyx_RefNannyDeclarations + #define __Pyx_RefNannySetupContext(name, acquire_gil) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XINCREF(r) Py_XINCREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) + #define __Pyx_XGOTREF(r) + #define __Pyx_XGIVEREF(r) +#endif +#define __Pyx_XDECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_XDECREF(tmp);\ + } while (0) +#define __Pyx_DECREF_SET(r, v) do {\ + PyObject *tmp = (PyObject *) r;\ + r = v; __Pyx_DECREF(tmp);\ + } while (0) +#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) +#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) + +/* RaiseArgTupleInvalid.proto */ +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); + +/* RaiseDoubleKeywords.proto */ +static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); + +/* ParseKeywords.proto */ +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ + PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ + const char* function_name); + +/* PyObjectGetAttrStr.proto */ +#if CYTHON_USE_TYPE_SLOTS +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); +#else +#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) 
+#endif + +/* GetItemInt.proto */ +#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ + (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ + __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ + (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ + __Pyx_GetItemInt_Generic(o, to_py_func(i)))) +#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ + (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ + __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ + (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, + int wraparound, int boundscheck); +#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ + (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ + __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ + (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, + int wraparound, int boundscheck); +static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, + int is_list, int wraparound, int boundscheck); + +/* GetBuiltinName.proto */ +static PyObject *__Pyx_GetBuiltinName(PyObject *name); + +/* PyDictVersioning.proto */ +#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS +#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) +#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) +#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ + (version_var) = __PYX_GET_DICT_VERSION(dict);\ + (cache_var) = (value); +#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ + static PY_UINT64_T 
__pyx_dict_version = 0;\ + static PyObject *__pyx_dict_cached_value = NULL;\ + if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ + (VAR) = __pyx_dict_cached_value;\ + } else {\ + (VAR) = __pyx_dict_cached_value = (LOOKUP);\ + __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ + }\ +} +static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); +static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); +static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); +#else +#define __PYX_GET_DICT_VERSION(dict) (0) +#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) +#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); +#endif + +/* GetModuleGlobalName.proto */ +#if CYTHON_USE_DICT_VERSIONS +#define __Pyx_GetModuleGlobalName(var, name) {\ + static PY_UINT64_T __pyx_dict_version = 0;\ + static PyObject *__pyx_dict_cached_value = NULL;\ + (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ + (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ + __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ +} +#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ + PY_UINT64_T __pyx_dict_version;\ + PyObject *__pyx_dict_cached_value;\ + (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ +} +static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); +#else +#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) +#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) +static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); +#endif + +/* PyObjectCall.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); +#else +#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) +#endif + +/* ExtTypeTest.proto */ +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); + +/* IsLittleEndian.proto */ +static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); + +/* BufferFormatCheck.proto */ +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type); + +/* BufferGetAndValidate.proto */ +#define __Pyx_GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack)\ + ((obj == Py_None || obj == NULL) ?\ + (__Pyx_ZeroBuffer(buf), 0) :\ + __Pyx__GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack)) +static int __Pyx__GetBufferAndValidate(Py_buffer* buf, PyObject* obj, + __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack); +static void __Pyx_ZeroBuffer(Py_buffer* buf); +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info); +static Py_ssize_t 
__Pyx_minusones[] = { -1, -1, -1, -1, -1, -1, -1, -1 }; +static Py_ssize_t __Pyx_zeros[] = { 0, 0, 0, 0, 0, 0, 0, 0 }; + +/* BufferFallbackError.proto */ +static void __Pyx_RaiseBufferFallbackError(void); + +/* PyThreadStateGet.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; +#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; +#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type +#else +#define __Pyx_PyThreadState_declare +#define __Pyx_PyThreadState_assign +#define __Pyx_PyErr_Occurred() PyErr_Occurred() +#endif + +/* PyErrFetchRestore.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) +#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#if CYTHON_COMPILING_IN_CPYTHON +#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) +#else +#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) +#endif +#else +#define __Pyx_PyErr_Clear() PyErr_Clear() +#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) +#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) +#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetchInState(tstate, type, value, tb) 
PyErr_Fetch(type, value, tb) +#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) +#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) +#endif + +/* GetTopmostException.proto */ +#if CYTHON_USE_EXC_INFO_STACK +static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); +#endif + +/* SaveResetException.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) +static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); +#else +#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) +#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) +#endif + +/* PyErrExceptionMatches.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) +static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); +#else +#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) +#endif + +/* GetException.proto */ +#if CYTHON_FAST_THREAD_STATE +#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) +static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); +#else +static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); +#endif + +/* RaiseException.proto */ +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); + +/* TypeImport.proto */ +#ifndef __PYX_HAVE_RT_ImportType_proto +#define __PYX_HAVE_RT_ImportType_proto +enum __Pyx_ImportType_CheckSize { + 
__Pyx_ImportType_CheckSize_Error = 0, + __Pyx_ImportType_CheckSize_Warn = 1, + __Pyx_ImportType_CheckSize_Ignore = 2 +}; +static PyTypeObject *__Pyx_ImportType(PyObject* module, const char *module_name, const char *class_name, size_t size, enum __Pyx_ImportType_CheckSize check_size); +#endif + +/* Import.proto */ +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); + +/* CLineInTraceback.proto */ +#ifdef CYTHON_CLINE_IN_TRACEBACK +#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) +#else +static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); +#endif + +/* CodeObjectCache.proto */ +typedef struct { + PyCodeObject* code_object; + int code_line; +} __Pyx_CodeObjectCacheEntry; +struct __Pyx_CodeObjectCache { + int count; + int max_count; + __Pyx_CodeObjectCacheEntry* entries; +}; +static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; +static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); +static PyCodeObject *__pyx_find_code_object(int code_line); +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); + +/* AddTraceback.proto */ +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename); + +/* BufferStructDeclare.proto */ +typedef struct { + Py_ssize_t shape, strides, suboffsets; +} __Pyx_Buf_DimInfo; +typedef struct { + size_t refcount; + Py_buffer pybuffer; +} __Pyx_Buffer; +typedef struct { + __Pyx_Buffer *rcbuffer; + char *data; + __Pyx_Buf_DimInfo diminfo[8]; +} __Pyx_LocalBuf_ND; + +#if PY_MAJOR_VERSION < 3 + static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); + static void __Pyx_ReleaseBuffer(Py_buffer *view); +#else + #define __Pyx_GetBuffer PyObject_GetBuffer + #define __Pyx_ReleaseBuffer PyBuffer_Release +#endif + + +/* CIntToPy.proto */ +static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); + +/* CIntToPy.proto */ 
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); + +/* RealImag.proto */ +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif +#if defined(__cplusplus) && CYTHON_CCOMPLEX\ + && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +/* Arithmetic.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq_float(a, b) ((a)==(b)) + #define __Pyx_c_sum_float(a, b) ((a)+(b)) + #define __Pyx_c_diff_float(a, b) ((a)-(b)) + #define __Pyx_c_prod_float(a, b) ((a)*(b)) + #define __Pyx_c_quot_float(a, b) ((a)/(b)) + #define __Pyx_c_neg_float(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero_float(z) ((z)==(float)0) + #define __Pyx_c_conj_float(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs_float(z) (::std::abs(z)) + #define __Pyx_c_pow_float(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero_float(z) ((z)==0) + #define __Pyx_c_conj_float(z) (conjf(z)) + #if 1 + #define __Pyx_c_abs_float(z) (cabsf(z)) + #define __Pyx_c_pow_float(a, b) (cpowf(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex); + static 
CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex); + static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex); + #if 1 + static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex); + #endif +#endif + +/* Arithmetic.proto */ +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq_double(a, b) ((a)==(b)) + #define __Pyx_c_sum_double(a, b) ((a)+(b)) + #define __Pyx_c_diff_double(a, b) ((a)-(b)) + #define __Pyx_c_prod_double(a, b) ((a)*(b)) + #define __Pyx_c_quot_double(a, b) ((a)/(b)) + #define __Pyx_c_neg_double(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero_double(z) ((z)==(double)0) + #define __Pyx_c_conj_double(z) (::std::conj(z)) + #if 1 + #define __Pyx_c_abs_double(z) (::std::abs(z)) + #define __Pyx_c_pow_double(a, b) (::std::pow(a, b)) + #endif + #else + #define __Pyx_c_is_zero_double(z) ((z)==0) + #define __Pyx_c_conj_double(z) (conj(z)) + #if 1 + #define __Pyx_c_abs_double(z) (cabs(z)) + #define __Pyx_c_pow_double(a, b) (cpow(a, b)) + #endif + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex); + static CYTHON_INLINE 
int __Pyx_c_is_zero_double(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex); + #if 1 + static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex); + #endif +#endif + +/* CIntFromPy.proto */ +static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); + +/* CIntFromPy.proto */ +static CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *); + +/* CIntFromPy.proto */ +static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); + +/* FastTypeChecks.proto */ +#if CYTHON_COMPILING_IN_CPYTHON +#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) +static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); +static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); +static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); +#else +#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) +#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) +#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) +#endif +#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) + +/* CheckBinaryVersion.proto */ +static int __Pyx_check_binary_version(void); + +/* InitStrings.proto */ +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); + + +/* Module declarations from 'cpython.buffer' */ + +/* Module declarations from 'libc.string' */ + +/* Module declarations from 'libc.stdio' */ + +/* Module declarations from '__builtin__' */ + +/* Module declarations from 'cpython.type' */ +static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0; + +/* Module declarations from 'cpython' */ + +/* Module declarations 
from 'cpython.object' */ + +/* Module declarations from 'cpython.ref' */ + +/* Module declarations from 'cpython.mem' */ + +/* Module declarations from 'numpy' */ + +/* Module declarations from 'numpy' */ +static PyTypeObject *__pyx_ptype_5numpy_dtype = 0; +static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0; +static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0; +static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0; +static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0; + +/* Module declarations from 'cython' */ + +/* Module declarations from 'nearest_neighbors' */ +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t = { "float32_t", NULL, sizeof(__pyx_t_5numpy_float32_t), { 0 }, 0, 'R', 0, 0 }; +static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t = { "int64_t", NULL, sizeof(__pyx_t_5numpy_int64_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_int64_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_int64_t), 0 }; +#define __Pyx_MODULE_NAME "nearest_neighbors" +extern int __pyx_module_is_main_nearest_neighbors; +int __pyx_module_is_main_nearest_neighbors = 0; + +/* Implementation of 'nearest_neighbors' */ +static PyObject *__pyx_builtin_ImportError; +static const char __pyx_k_K[] = "K"; +static const char __pyx_k_np[] = "np"; +static const char __pyx_k_dim[] = "dim"; +static const char __pyx_k_knn[] = "knn"; +static const char __pyx_k_omp[] = "omp"; +static const char __pyx_k_pts[] = "pts"; +static const char __pyx_k_long[] = "long"; +static const char __pyx_k_main[] = "__main__"; +static const char __pyx_k_name[] = "__name__"; +static const char __pyx_k_npts[] = "npts"; +static const char __pyx_k_test[] = "__test__"; +static const char __pyx_k_K_cpp[] = "K_cpp"; +static const char __pyx_k_dtype[] = "dtype"; +static const char __pyx_k_int64[] = "int64"; +static const char __pyx_k_numpy[] = "numpy"; +static const char __pyx_k_shape[] = "shape"; +static const char __pyx_k_zeros[] = "zeros"; +static const char __pyx_k_import[] = "__import__"; +static const char 
__pyx_k_float32[] = "float32"; +static const char __pyx_k_indices[] = "indices"; +static const char __pyx_k_knn_pyx[] = "knn.pyx"; +static const char __pyx_k_pts_cpp[] = "pts_cpp"; +static const char __pyx_k_queries[] = "queries"; +static const char __pyx_k_nqueries[] = "nqueries"; +static const char __pyx_k_knn_batch[] = "knn_batch"; +static const char __pyx_k_batch_size[] = "batch_size"; +static const char __pyx_k_ImportError[] = "ImportError"; +static const char __pyx_k_indices_cpp[] = "indices_cpp"; +static const char __pyx_k_queries_cpp[] = "queries_cpp"; +static const char __pyx_k_nqueries_cpp[] = "nqueries_cpp"; +static const char __pyx_k_ascontiguousarray[] = "ascontiguousarray"; +static const char __pyx_k_nearest_neighbors[] = "nearest_neighbors"; +static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; +static const char __pyx_k_knn_batch_distance_pick[] = "knn_batch_distance_pick"; +static const char __pyx_k_numpy_core_multiarray_failed_to[] = "numpy.core.multiarray failed to import"; +static const char __pyx_k_numpy_core_umath_failed_to_impor[] = "numpy.core.umath failed to import"; +static PyObject *__pyx_n_s_ImportError; +static PyObject *__pyx_n_s_K; +static PyObject *__pyx_n_s_K_cpp; +static PyObject *__pyx_n_s_ascontiguousarray; +static PyObject *__pyx_n_s_batch_size; +static PyObject *__pyx_n_s_cline_in_traceback; +static PyObject *__pyx_n_s_dim; +static PyObject *__pyx_n_s_dtype; +static PyObject *__pyx_n_s_float32; +static PyObject *__pyx_n_s_import; +static PyObject *__pyx_n_s_indices; +static PyObject *__pyx_n_s_indices_cpp; +static PyObject *__pyx_n_s_int64; +static PyObject *__pyx_n_s_knn; +static PyObject *__pyx_n_s_knn_batch; +static PyObject *__pyx_n_s_knn_batch_distance_pick; +static PyObject *__pyx_kp_s_knn_pyx; +static PyObject *__pyx_n_s_long; +static PyObject *__pyx_n_s_main; +static PyObject *__pyx_n_s_name; +static PyObject *__pyx_n_s_nearest_neighbors; +static PyObject *__pyx_n_s_np; +static PyObject 
*__pyx_n_s_npts; +static PyObject *__pyx_n_s_nqueries; +static PyObject *__pyx_n_s_nqueries_cpp; +static PyObject *__pyx_n_s_numpy; +static PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to; +static PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor; +static PyObject *__pyx_n_s_omp; +static PyObject *__pyx_n_s_pts; +static PyObject *__pyx_n_s_pts_cpp; +static PyObject *__pyx_n_s_queries; +static PyObject *__pyx_n_s_queries_cpp; +static PyObject *__pyx_n_s_shape; +static PyObject *__pyx_n_s_test; +static PyObject *__pyx_n_s_zeros; +static PyObject *__pyx_pf_17nearest_neighbors_knn(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pts, PyObject *__pyx_v_queries, PyObject *__pyx_v_K, PyObject *__pyx_v_omp); /* proto */ +static PyObject *__pyx_pf_17nearest_neighbors_2knn_batch(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pts, PyObject *__pyx_v_queries, PyObject *__pyx_v_K, PyObject *__pyx_v_omp); /* proto */ +static PyObject *__pyx_pf_17nearest_neighbors_4knn_batch_distance_pick(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pts, PyObject *__pyx_v_nqueries, PyObject *__pyx_v_K, PyObject *__pyx_v_omp); /* proto */ +static PyObject *__pyx_tuple_; +static PyObject *__pyx_tuple__2; +static PyObject *__pyx_tuple__3; +static PyObject *__pyx_tuple__5; +static PyObject *__pyx_tuple__7; +static PyObject *__pyx_codeobj__4; +static PyObject *__pyx_codeobj__6; +static PyObject *__pyx_codeobj__8; +/* Late includes */ + +/* "knn.pyx":33 + * const size_t K, long* batch_indices) + * + * def knn(pts, queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + +/* Python wrapper */ +static PyObject *__pyx_pw_17nearest_neighbors_1knn(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyMethodDef __pyx_mdef_17nearest_neighbors_1knn = {"knn", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_17nearest_neighbors_1knn, METH_VARARGS|METH_KEYWORDS, 0}; +static PyObject 
*__pyx_pw_17nearest_neighbors_1knn(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_pts = 0; + PyObject *__pyx_v_queries = 0; + PyObject *__pyx_v_K = 0; + PyObject *__pyx_v_omp = 0; + PyObject *__pyx_r = 0; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("knn (wrapper)", 0); + { + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pts,&__pyx_n_s_queries,&__pyx_n_s_K,&__pyx_n_s_omp,0}; + PyObject* values[4] = {0,0,0,0}; + values[3] = ((PyObject *)Py_False); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args; + const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); + switch (pos_args) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + CYTHON_FALLTHROUGH; + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + CYTHON_FALLTHROUGH; + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + CYTHON_FALLTHROUGH; + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + CYTHON_FALLTHROUGH; + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + kw_args = PyDict_Size(__pyx_kwds); + switch (pos_args) { + case 0: + if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pts)) != 0)) kw_args--; + else goto __pyx_L5_argtuple_error; + CYTHON_FALLTHROUGH; + case 1: + if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_queries)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("knn", 0, 3, 4, 1); __PYX_ERR(0, 33, __pyx_L3_error) + } + CYTHON_FALLTHROUGH; + case 2: + if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_K)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("knn", 0, 3, 4, 2); __PYX_ERR(0, 33, __pyx_L3_error) + } + CYTHON_FALLTHROUGH; + case 3: + if (kw_args > 0) { + PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_omp); + if (value) { values[3] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "knn") < 0)) __PYX_ERR(0, 33, 
__pyx_L3_error) + } + } else { + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + CYTHON_FALLTHROUGH; + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + __pyx_v_pts = values[0]; + __pyx_v_queries = values[1]; + __pyx_v_K = values[2]; + __pyx_v_omp = values[3]; + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("knn", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 33, __pyx_L3_error) + __pyx_L3_error:; + __Pyx_AddTraceback("nearest_neighbors.knn", __pyx_clineno, __pyx_lineno, __pyx_filename); + __Pyx_RefNannyFinishContext(); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_r = __pyx_pf_17nearest_neighbors_knn(__pyx_self, __pyx_v_pts, __pyx_v_queries, __pyx_v_K, __pyx_v_omp); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_pf_17nearest_neighbors_knn(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pts, PyObject *__pyx_v_queries, PyObject *__pyx_v_K, PyObject *__pyx_v_omp) { + int __pyx_v_npts; + int __pyx_v_dim; + int __pyx_v_K_cpp; + int __pyx_v_nqueries; + PyArrayObject *__pyx_v_pts_cpp = 0; + PyArrayObject *__pyx_v_queries_cpp = 0; + PyArrayObject *__pyx_v_indices_cpp = 0; + PyObject *__pyx_v_indices = NULL; + __Pyx_LocalBuf_ND __pyx_pybuffernd_indices_cpp; + __Pyx_Buffer __pyx_pybuffer_indices_cpp; + __Pyx_LocalBuf_ND __pyx_pybuffernd_pts_cpp; + __Pyx_Buffer __pyx_pybuffer_pts_cpp; + __Pyx_LocalBuf_ND __pyx_pybuffernd_queries_cpp; + __Pyx_Buffer __pyx_pybuffer_queries_cpp; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyArrayObject *__pyx_t_7 = NULL; + 
PyObject *__pyx_t_8 = NULL; + PyObject *__pyx_t_9 = NULL; + PyObject *__pyx_t_10 = NULL; + PyArrayObject *__pyx_t_11 = NULL; + int __pyx_t_12; + __Pyx_RefNannySetupContext("knn", 0); + __pyx_pybuffer_pts_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_pts_cpp.refcount = 0; + __pyx_pybuffernd_pts_cpp.data = NULL; + __pyx_pybuffernd_pts_cpp.rcbuffer = &__pyx_pybuffer_pts_cpp; + __pyx_pybuffer_queries_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_queries_cpp.refcount = 0; + __pyx_pybuffernd_queries_cpp.data = NULL; + __pyx_pybuffernd_queries_cpp.rcbuffer = &__pyx_pybuffer_queries_cpp; + __pyx_pybuffer_indices_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_indices_cpp.refcount = 0; + __pyx_pybuffernd_indices_cpp.data = NULL; + __pyx_pybuffernd_indices_cpp.rcbuffer = &__pyx_pybuffer_indices_cpp; + + /* "knn.pyx":47 + * + * # set shape values + * npts = pts.shape[0] # <<<<<<<<<<<<<< + * nqueries = queries.shape[0] + * dim = pts.shape[1] + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 47, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 47, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 47, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_npts = __pyx_t_3; + + /* "knn.pyx":48 + * # set shape values + * npts = pts.shape[0] + * nqueries = queries.shape[0] # <<<<<<<<<<<<<< + * dim = pts.shape[1] + * K_cpp = K + */ + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_queries, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 48, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 48, __pyx_L1_error) + 
__Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 48, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_v_nqueries = __pyx_t_3; + + /* "knn.pyx":49 + * npts = pts.shape[0] + * nqueries = queries.shape[0] + * dim = pts.shape[1] # <<<<<<<<<<<<<< + * K_cpp = K + * + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 49, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 49, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 49, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_dim = __pyx_t_3; + + /* "knn.pyx":50 + * nqueries = queries.shape[0] + * dim = pts.shape[1] + * K_cpp = K # <<<<<<<<<<<<<< + * + * # create indices tensor + */ + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_v_K); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 50, __pyx_L1_error) + __pyx_v_K_cpp = __pyx_t_3; + + /* "knn.pyx":53 + * + * # create indices tensor + * indices = np.zeros((queries.shape[0], K), dtype=np.int64) # <<<<<<<<<<<<<< + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + */ + __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_zeros); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_queries, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + 
__pyx_t_4 = __Pyx_GetItemInt(__pyx_t_2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); + __Pyx_INCREF(__pyx_v_K); + __Pyx_GIVEREF(__pyx_v_K); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_K); + __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_GIVEREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_int64); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 53, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_indices = __pyx_t_6; + __pyx_t_6 = 0; + + /* "knn.pyx":55 + * indices = np.zeros((queries.shape[0], K), dtype=np.int64) + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) # <<<<<<<<<<<<<< + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) + * indices_cpp = indices + */ + __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) 
__PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_INCREF(__pyx_v_pts); + __Pyx_GIVEREF(__pyx_v_pts); + PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_v_pts); + __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float32); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 55, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 55, __pyx_L1_error) + __pyx_t_7 = ((PyArrayObject *)__pyx_t_5); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_8, &__pyx_t_9, &__pyx_t_10); + if 
(unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_pts_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_8, __pyx_t_9, __pyx_t_10); + } + __pyx_t_8 = __pyx_t_9 = __pyx_t_10 = 0; + } + __pyx_pybuffernd_pts_cpp.diminfo[0].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_pts_cpp.diminfo[0].shape = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_pts_cpp.diminfo[1].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_pts_cpp.diminfo[1].shape = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[1]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 55, __pyx_L1_error) + } + __pyx_t_7 = 0; + __pyx_v_pts_cpp = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; + + /* "knn.pyx":56 + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) # <<<<<<<<<<<<<< + * indices_cpp = indices + * + */ + __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(__pyx_v_queries); + __Pyx_GIVEREF(__pyx_v_queries); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_queries); + __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 56, 
__pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_float32); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_dtype, __pyx_t_1) < 0) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 56, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 56, __pyx_L1_error) + __pyx_t_11 = ((PyArrayObject *)__pyx_t_1); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_t_11, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_10, &__pyx_t_9, &__pyx_t_8); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_queries_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_8); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_10, __pyx_t_9, __pyx_t_8); + } + __pyx_t_10 = __pyx_t_9 = __pyx_t_8 = 0; + } + __pyx_pybuffernd_queries_cpp.diminfo[0].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_queries_cpp.diminfo[0].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_queries_cpp.diminfo[1].strides = 
__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_queries_cpp.diminfo[1].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[1]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 56, __pyx_L1_error) + } + __pyx_t_11 = 0; + __pyx_v_queries_cpp = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "knn.pyx":57 + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) + * indices_cpp = indices # <<<<<<<<<<<<<< + * + * # normal estimation + */ + if (!(likely(((__pyx_v_indices) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_indices, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 57, __pyx_L1_error) + __pyx_t_1 = __pyx_v_indices; + __Pyx_INCREF(__pyx_t_1); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer, (PyObject*)((PyArrayObject *)__pyx_t_1), &__Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_8, &__pyx_t_9, &__pyx_t_10); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_indices_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_8, __pyx_t_9, __pyx_t_10); + } + __pyx_t_8 = __pyx_t_9 = __pyx_t_10 = 0; + } + __pyx_pybuffernd_indices_cpp.diminfo[0].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_indices_cpp.diminfo[0].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_indices_cpp.diminfo[1].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[1]; 
__pyx_pybuffernd_indices_cpp.diminfo[1].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[1]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 57, __pyx_L1_error) + } + __pyx_v_indices_cpp = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "knn.pyx":60 + * + * # normal estimation + * if omp: # <<<<<<<<<<<<<< + * cpp_knn_omp( pts_cpp.data, npts, dim, + * queries_cpp.data, nqueries, + */ + __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_v_omp); if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 60, __pyx_L1_error) + if (__pyx_t_12) { + + /* "knn.pyx":61 + * # normal estimation + * if omp: + * cpp_knn_omp( pts_cpp.data, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + cpp_knn_omp(((float *)__pyx_v_pts_cpp->data), __pyx_v_npts, __pyx_v_dim, ((float *)__pyx_v_queries_cpp->data), __pyx_v_nqueries, __pyx_v_K_cpp, ((long long *)__pyx_v_indices_cpp->data)); + + /* "knn.pyx":60 + * + * # normal estimation + * if omp: # <<<<<<<<<<<<<< + * cpp_knn_omp( pts_cpp.data, npts, dim, + * queries_cpp.data, nqueries, + */ + goto __pyx_L3; + } + + /* "knn.pyx":65 + * K_cpp, indices_cpp.data) + * else: + * cpp_knn( pts_cpp.data, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + /*else*/ { + + /* "knn.pyx":67 + * cpp_knn( pts_cpp.data, npts, dim, + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) # <<<<<<<<<<<<<< + * + * return indices + */ + cpp_knn(((float *)__pyx_v_pts_cpp->data), __pyx_v_npts, __pyx_v_dim, ((float *)__pyx_v_queries_cpp->data), __pyx_v_nqueries, __pyx_v_K_cpp, ((long long *)__pyx_v_indices_cpp->data)); + } + __pyx_L3:; + + /* "knn.pyx":69 + * K_cpp, indices_cpp.data) + * + * return indices # <<<<<<<<<<<<<< + * + * def knn_batch(pts, queries, K, omp=False): + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_indices); + __pyx_r = __pyx_v_indices; + goto __pyx_L0; + + /* "knn.pyx":33 + * const size_t K, long* batch_indices) + * + * def knn(pts, queries, K, omp=False): # 
<<<<<<<<<<<<<< + * + * # define shape parameters + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("nearest_neighbors.knn", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_pts_cpp); + __Pyx_XDECREF((PyObject *)__pyx_v_queries_cpp); + __Pyx_XDECREF((PyObject *)__pyx_v_indices_cpp); + __Pyx_XDECREF(__pyx_v_indices); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "knn.pyx":71 + * return indices + * + * def knn_batch(pts, queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + +/* Python wrapper */ +static PyObject *__pyx_pw_17nearest_neighbors_3knn_batch(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyMethodDef __pyx_mdef_17nearest_neighbors_3knn_batch = {"knn_batch", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_17nearest_neighbors_3knn_batch, METH_VARARGS|METH_KEYWORDS, 0}; +static PyObject *__pyx_pw_17nearest_neighbors_3knn_batch(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_pts = 0; + PyObject 
*__pyx_v_queries = 0; + PyObject *__pyx_v_K = 0; + PyObject *__pyx_v_omp = 0; + PyObject *__pyx_r = 0; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("knn_batch (wrapper)", 0); + { + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pts,&__pyx_n_s_queries,&__pyx_n_s_K,&__pyx_n_s_omp,0}; + PyObject* values[4] = {0,0,0,0}; + values[3] = ((PyObject *)Py_False); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args; + const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); + switch (pos_args) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + CYTHON_FALLTHROUGH; + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + CYTHON_FALLTHROUGH; + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + CYTHON_FALLTHROUGH; + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + CYTHON_FALLTHROUGH; + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + kw_args = PyDict_Size(__pyx_kwds); + switch (pos_args) { + case 0: + if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pts)) != 0)) kw_args--; + else goto __pyx_L5_argtuple_error; + CYTHON_FALLTHROUGH; + case 1: + if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_queries)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("knn_batch", 0, 3, 4, 1); __PYX_ERR(0, 71, __pyx_L3_error) + } + CYTHON_FALLTHROUGH; + case 2: + if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_K)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("knn_batch", 0, 3, 4, 2); __PYX_ERR(0, 71, __pyx_L3_error) + } + CYTHON_FALLTHROUGH; + case 3: + if (kw_args > 0) { + PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_omp); + if (value) { values[3] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "knn_batch") < 0)) __PYX_ERR(0, 71, __pyx_L3_error) + } + } else { + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: values[3] = 
PyTuple_GET_ITEM(__pyx_args, 3); + CYTHON_FALLTHROUGH; + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + __pyx_v_pts = values[0]; + __pyx_v_queries = values[1]; + __pyx_v_K = values[2]; + __pyx_v_omp = values[3]; + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("knn_batch", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 71, __pyx_L3_error) + __pyx_L3_error:; + __Pyx_AddTraceback("nearest_neighbors.knn_batch", __pyx_clineno, __pyx_lineno, __pyx_filename); + __Pyx_RefNannyFinishContext(); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_r = __pyx_pf_17nearest_neighbors_2knn_batch(__pyx_self, __pyx_v_pts, __pyx_v_queries, __pyx_v_K, __pyx_v_omp); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_pf_17nearest_neighbors_2knn_batch(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pts, PyObject *__pyx_v_queries, PyObject *__pyx_v_K, PyObject *__pyx_v_omp) { + int __pyx_v_batch_size; + int __pyx_v_npts; + int __pyx_v_nqueries; + int __pyx_v_K_cpp; + int __pyx_v_dim; + PyArrayObject *__pyx_v_pts_cpp = 0; + PyArrayObject *__pyx_v_queries_cpp = 0; + PyArrayObject *__pyx_v_indices_cpp = 0; + PyObject *__pyx_v_indices = NULL; + __Pyx_LocalBuf_ND __pyx_pybuffernd_indices_cpp; + __Pyx_Buffer __pyx_pybuffer_indices_cpp; + __Pyx_LocalBuf_ND __pyx_pybuffernd_pts_cpp; + __Pyx_Buffer __pyx_pybuffer_pts_cpp; + __Pyx_LocalBuf_ND __pyx_pybuffernd_queries_cpp; + __Pyx_Buffer __pyx_pybuffer_queries_cpp; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyArrayObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + PyObject 
*__pyx_t_9 = NULL; + PyObject *__pyx_t_10 = NULL; + PyArrayObject *__pyx_t_11 = NULL; + int __pyx_t_12; + __Pyx_RefNannySetupContext("knn_batch", 0); + __pyx_pybuffer_pts_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_pts_cpp.refcount = 0; + __pyx_pybuffernd_pts_cpp.data = NULL; + __pyx_pybuffernd_pts_cpp.rcbuffer = &__pyx_pybuffer_pts_cpp; + __pyx_pybuffer_queries_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_queries_cpp.refcount = 0; + __pyx_pybuffernd_queries_cpp.data = NULL; + __pyx_pybuffernd_queries_cpp.rcbuffer = &__pyx_pybuffer_queries_cpp; + __pyx_pybuffer_indices_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_indices_cpp.refcount = 0; + __pyx_pybuffernd_indices_cpp.data = NULL; + __pyx_pybuffernd_indices_cpp.rcbuffer = &__pyx_pybuffer_indices_cpp; + + /* "knn.pyx":86 + * + * # set shape values + * batch_size = pts.shape[0] # <<<<<<<<<<<<<< + * npts = pts.shape[1] + * dim = pts.shape[2] + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 86, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 86, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 86, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_batch_size = __pyx_t_3; + + /* "knn.pyx":87 + * # set shape values + * batch_size = pts.shape[0] + * npts = pts.shape[1] # <<<<<<<<<<<<<< + * dim = pts.shape[2] + * nqueries = queries.shape[1] + */ + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 87, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 87, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_v_npts = __pyx_t_3; + + /* "knn.pyx":88 + * batch_size = pts.shape[0] + * npts = pts.shape[1] + * dim = pts.shape[2] # <<<<<<<<<<<<<< + * nqueries = queries.shape[1] + * K_cpp = K + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 88, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 88, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 88, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_dim = __pyx_t_3; + + /* "knn.pyx":89 + * npts = pts.shape[1] + * dim = pts.shape[2] + * nqueries = queries.shape[1] # <<<<<<<<<<<<<< + * K_cpp = K + * + */ + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_queries, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 89, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 89, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 89, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_v_nqueries = __pyx_t_3; + + /* "knn.pyx":90 + * dim = pts.shape[2] + * nqueries = queries.shape[1] + * K_cpp = K # <<<<<<<<<<<<<< + * + * # create indices tensor + */ + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_v_K); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 90, __pyx_L1_error) + 
__pyx_v_K_cpp = __pyx_t_3; + + /* "knn.pyx":93 + * + * # create indices tensor + * indices = np.zeros((pts.shape[0], queries.shape[1], K), dtype=np.int64) # <<<<<<<<<<<<<< + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + */ + __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_queries, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_5); + __Pyx_INCREF(__pyx_v_K); + __Pyx_GIVEREF(__pyx_v_K); + PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_K); + __pyx_t_4 = 0; + __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_int64); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, __pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 93, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_v_indices = __pyx_t_6; + __pyx_t_6 = 0; + + /* "knn.pyx":95 + * indices = np.zeros((pts.shape[0], queries.shape[1], K), dtype=np.int64) + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) # <<<<<<<<<<<<<< + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) + * indices_cpp = indices + */ + __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_INCREF(__pyx_v_pts); + __Pyx_GIVEREF(__pyx_v_pts); + PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_v_pts); + __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = 
__Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_float32); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, __pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 95, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 95, __pyx_L1_error) + __pyx_t_7 = ((PyArrayObject *)__pyx_t_4); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_8, &__pyx_t_9, &__pyx_t_10); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_pts_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_8, __pyx_t_9, __pyx_t_10); + } + __pyx_t_8 = __pyx_t_9 = __pyx_t_10 = 0; + } + __pyx_pybuffernd_pts_cpp.diminfo[0].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_pts_cpp.diminfo[0].shape = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_pts_cpp.diminfo[1].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_pts_cpp.diminfo[1].shape = 
__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_pts_cpp.diminfo[2].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_pts_cpp.diminfo[2].shape = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[2]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 95, __pyx_L1_error) + } + __pyx_t_7 = 0; + __pyx_v_pts_cpp = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "knn.pyx":96 + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) # <<<<<<<<<<<<<< + * indices_cpp = indices + * + */ + __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(__pyx_v_queries); + __Pyx_GIVEREF(__pyx_v_queries); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_queries); + __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float32); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_dtype, __pyx_t_2) < 0) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_4, __pyx_t_6); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 96, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + 
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 96, __pyx_L1_error) + __pyx_t_11 = ((PyArrayObject *)__pyx_t_2); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_t_11, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_10, &__pyx_t_9, &__pyx_t_8); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_queries_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_8); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_10, __pyx_t_9, __pyx_t_8); + } + __pyx_t_10 = __pyx_t_9 = __pyx_t_8 = 0; + } + __pyx_pybuffernd_queries_cpp.diminfo[0].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_queries_cpp.diminfo[0].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_queries_cpp.diminfo[1].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_queries_cpp.diminfo[1].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_queries_cpp.diminfo[2].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_queries_cpp.diminfo[2].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[2]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 96, __pyx_L1_error) + } + __pyx_t_11 = 0; + __pyx_v_queries_cpp = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "knn.pyx":97 + * pts_cpp = np.ascontiguousarray(pts, 
dtype=np.float32) + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) + * indices_cpp = indices # <<<<<<<<<<<<<< + * + * # normal estimation + */ + if (!(likely(((__pyx_v_indices) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_indices, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 97, __pyx_L1_error) + __pyx_t_2 = __pyx_v_indices; + __Pyx_INCREF(__pyx_t_2); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer, (PyObject*)((PyArrayObject *)__pyx_t_2), &__Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_8, &__pyx_t_9, &__pyx_t_10); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_indices_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_8, __pyx_t_9, __pyx_t_10); + } + __pyx_t_8 = __pyx_t_9 = __pyx_t_10 = 0; + } + __pyx_pybuffernd_indices_cpp.diminfo[0].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_indices_cpp.diminfo[0].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_indices_cpp.diminfo[1].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_indices_cpp.diminfo[1].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_indices_cpp.diminfo[2].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_indices_cpp.diminfo[2].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[2]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 97, __pyx_L1_error) + } + __pyx_v_indices_cpp = 
((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "knn.pyx":100 + * + * # normal estimation + * if omp: # <<<<<<<<<<<<<< + * cpp_knn_batch_omp( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, + */ + __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_v_omp); if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 100, __pyx_L1_error) + if (__pyx_t_12) { + + /* "knn.pyx":101 + * # normal estimation + * if omp: + * cpp_knn_batch_omp( pts_cpp.data, batch_size, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + cpp_knn_batch_omp(((float *)__pyx_v_pts_cpp->data), __pyx_v_batch_size, __pyx_v_npts, __pyx_v_dim, ((float *)__pyx_v_queries_cpp->data), __pyx_v_nqueries, __pyx_v_K_cpp, ((long long *)__pyx_v_indices_cpp->data)); + + /* "knn.pyx":100 + * + * # normal estimation + * if omp: # <<<<<<<<<<<<<< + * cpp_knn_batch_omp( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, + */ + goto __pyx_L3; + } + + /* "knn.pyx":105 + * K_cpp, indices_cpp.data) + * else: + * cpp_knn_batch( pts_cpp.data, batch_size, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + /*else*/ { + + /* "knn.pyx":107 + * cpp_knn_batch( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) # <<<<<<<<<<<<<< + * + * return indices + */ + cpp_knn_batch(((float *)__pyx_v_pts_cpp->data), __pyx_v_batch_size, __pyx_v_npts, __pyx_v_dim, ((float *)__pyx_v_queries_cpp->data), __pyx_v_nqueries, __pyx_v_K_cpp, ((long long *)__pyx_v_indices_cpp->data)); + } + __pyx_L3:; + + /* "knn.pyx":109 + * K_cpp, indices_cpp.data) + * + * return indices # <<<<<<<<<<<<<< + * + * def knn_batch_distance_pick(pts, nqueries, K, omp=False): + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_indices); + __pyx_r = __pyx_v_indices; + goto __pyx_L0; + + /* "knn.pyx":71 + * return indices + * + * def knn_batch(pts, queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + 
*/ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("nearest_neighbors.knn_batch", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_pts_cpp); + __Pyx_XDECREF((PyObject *)__pyx_v_queries_cpp); + __Pyx_XDECREF((PyObject *)__pyx_v_indices_cpp); + __Pyx_XDECREF(__pyx_v_indices); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "knn.pyx":111 + * return indices + * + * def knn_batch_distance_pick(pts, nqueries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + +/* Python wrapper */ +static PyObject *__pyx_pw_17nearest_neighbors_5knn_batch_distance_pick(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyMethodDef __pyx_mdef_17nearest_neighbors_5knn_batch_distance_pick = {"knn_batch_distance_pick", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_17nearest_neighbors_5knn_batch_distance_pick, METH_VARARGS|METH_KEYWORDS, 0}; +static PyObject *__pyx_pw_17nearest_neighbors_5knn_batch_distance_pick(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { 
+ PyObject *__pyx_v_pts = 0; + PyObject *__pyx_v_nqueries = 0; + PyObject *__pyx_v_K = 0; + PyObject *__pyx_v_omp = 0; + PyObject *__pyx_r = 0; + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("knn_batch_distance_pick (wrapper)", 0); + { + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pts,&__pyx_n_s_nqueries,&__pyx_n_s_K,&__pyx_n_s_omp,0}; + PyObject* values[4] = {0,0,0,0}; + values[3] = ((PyObject *)Py_False); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args; + const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); + switch (pos_args) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + CYTHON_FALLTHROUGH; + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + CYTHON_FALLTHROUGH; + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + CYTHON_FALLTHROUGH; + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + CYTHON_FALLTHROUGH; + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + kw_args = PyDict_Size(__pyx_kwds); + switch (pos_args) { + case 0: + if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pts)) != 0)) kw_args--; + else goto __pyx_L5_argtuple_error; + CYTHON_FALLTHROUGH; + case 1: + if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_nqueries)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("knn_batch_distance_pick", 0, 3, 4, 1); __PYX_ERR(0, 111, __pyx_L3_error) + } + CYTHON_FALLTHROUGH; + case 2: + if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_K)) != 0)) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("knn_batch_distance_pick", 0, 3, 4, 2); __PYX_ERR(0, 111, __pyx_L3_error) + } + CYTHON_FALLTHROUGH; + case 3: + if (kw_args > 0) { + PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_omp); + if (value) { values[3] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "knn_batch_distance_pick") < 0)) __PYX_ERR(0, 111, __pyx_L3_error) + 
} + } else { + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + CYTHON_FALLTHROUGH; + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + __pyx_v_pts = values[0]; + __pyx_v_nqueries = values[1]; + __pyx_v_K = values[2]; + __pyx_v_omp = values[3]; + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("knn_batch_distance_pick", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 111, __pyx_L3_error) + __pyx_L3_error:; + __Pyx_AddTraceback("nearest_neighbors.knn_batch_distance_pick", __pyx_clineno, __pyx_lineno, __pyx_filename); + __Pyx_RefNannyFinishContext(); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_r = __pyx_pf_17nearest_neighbors_4knn_batch_distance_pick(__pyx_self, __pyx_v_pts, __pyx_v_nqueries, __pyx_v_K, __pyx_v_omp); + + /* function exit code */ + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_pf_17nearest_neighbors_4knn_batch_distance_pick(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pts, PyObject *__pyx_v_nqueries, PyObject *__pyx_v_K, PyObject *__pyx_v_omp) { + int __pyx_v_batch_size; + int __pyx_v_npts; + CYTHON_UNUSED int __pyx_v_nqueries_cpp; + int __pyx_v_K_cpp; + int __pyx_v_dim; + PyArrayObject *__pyx_v_pts_cpp = 0; + PyArrayObject *__pyx_v_queries_cpp = 0; + PyArrayObject *__pyx_v_indices_cpp = 0; + PyObject *__pyx_v_indices = NULL; + PyObject *__pyx_v_queries = NULL; + __Pyx_LocalBuf_ND __pyx_pybuffernd_indices_cpp; + __Pyx_Buffer __pyx_pybuffer_indices_cpp; + __Pyx_LocalBuf_ND __pyx_pybuffernd_pts_cpp; + __Pyx_Buffer __pyx_pybuffer_pts_cpp; + __Pyx_LocalBuf_ND __pyx_pybuffernd_queries_cpp; + __Pyx_Buffer __pyx_pybuffer_queries_cpp; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = 
NULL; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyArrayObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + PyObject *__pyx_t_9 = NULL; + PyObject *__pyx_t_10 = NULL; + PyArrayObject *__pyx_t_11 = NULL; + int __pyx_t_12; + size_t __pyx_t_13; + __Pyx_RefNannySetupContext("knn_batch_distance_pick", 0); + __pyx_pybuffer_pts_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_pts_cpp.refcount = 0; + __pyx_pybuffernd_pts_cpp.data = NULL; + __pyx_pybuffernd_pts_cpp.rcbuffer = &__pyx_pybuffer_pts_cpp; + __pyx_pybuffer_queries_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_queries_cpp.refcount = 0; + __pyx_pybuffernd_queries_cpp.data = NULL; + __pyx_pybuffernd_queries_cpp.rcbuffer = &__pyx_pybuffer_queries_cpp; + __pyx_pybuffer_indices_cpp.pybuffer.buf = NULL; + __pyx_pybuffer_indices_cpp.refcount = 0; + __pyx_pybuffernd_indices_cpp.data = NULL; + __pyx_pybuffernd_indices_cpp.rcbuffer = &__pyx_pybuffer_indices_cpp; + + /* "knn.pyx":126 + * + * # set shape values + * batch_size = pts.shape[0] # <<<<<<<<<<<<<< + * npts = pts.shape[1] + * dim = pts.shape[2] + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 126, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 126, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 126, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_batch_size = __pyx_t_3; + + /* "knn.pyx":127 + * # set shape values + * batch_size = pts.shape[0] + * npts = pts.shape[1] # <<<<<<<<<<<<<< + * dim = pts.shape[2] + * nqueries_cpp = nqueries + */ + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 127, 
__pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_GetItemInt(__pyx_t_2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 127, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 127, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_v_npts = __pyx_t_3; + + /* "knn.pyx":128 + * batch_size = pts.shape[0] + * npts = pts.shape[1] + * dim = pts.shape[2] # <<<<<<<<<<<<<< + * nqueries_cpp = nqueries + * K_cpp = K + */ + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 128, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 128, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_t_2); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 128, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_dim = __pyx_t_3; + + /* "knn.pyx":129 + * npts = pts.shape[1] + * dim = pts.shape[2] + * nqueries_cpp = nqueries # <<<<<<<<<<<<<< + * K_cpp = K + * + */ + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_v_nqueries); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 129, __pyx_L1_error) + __pyx_v_nqueries_cpp = __pyx_t_3; + + /* "knn.pyx":130 + * dim = pts.shape[2] + * nqueries_cpp = nqueries + * K_cpp = K # <<<<<<<<<<<<<< + * + * # create indices tensor + */ + __pyx_t_3 = __Pyx_PyInt_As_int(__pyx_v_K); if (unlikely((__pyx_t_3 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 130, __pyx_L1_error) + __pyx_v_K_cpp = __pyx_t_3; + + /* "knn.pyx":133 + * + * # create indices tensor + * indices = np.zeros((pts.shape[0], nqueries, K), dtype=np.long) # <<<<<<<<<<<<<< + * queries = 
np.zeros((pts.shape[0], nqueries, dim), dtype=np.float32) + * + */ + __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_zeros); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_4); + __Pyx_INCREF(__pyx_v_nqueries); + __Pyx_GIVEREF(__pyx_v_nqueries); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_nqueries); + __Pyx_INCREF(__pyx_v_K); + __Pyx_GIVEREF(__pyx_v_K); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_K); + __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_GIVEREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_long); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_2, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 133, __pyx_L1_error) + 
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, __pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 133, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_v_indices = __pyx_t_6; + __pyx_t_6 = 0; + + /* "knn.pyx":134 + * # create indices tensor + * indices = np.zeros((pts.shape[0], nqueries, K), dtype=np.long) + * queries = np.zeros((pts.shape[0], nqueries, dim), dtype=np.float32) # <<<<<<<<<<<<<< + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + */ + __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_pts, __pyx_n_s_shape); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4); + __Pyx_INCREF(__pyx_v_nqueries); + __Pyx_GIVEREF(__pyx_v_nqueries); + PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_nqueries); + __Pyx_GIVEREF(__pyx_t_6); + PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_t_6); + __pyx_t_4 = 0; + __pyx_t_6 = 0; + __pyx_t_6 = PyTuple_New(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 134, __pyx_L1_error) + 
__Pyx_GOTREF(__pyx_t_6); + __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float32); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_6, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 134, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_v_queries = __pyx_t_5; + __pyx_t_5 = 0; + + /* "knn.pyx":136 + * queries = np.zeros((pts.shape[0], nqueries, dim), dtype=np.float32) + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) # <<<<<<<<<<<<<< + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) + * indices_cpp = indices + */ + __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(__pyx_v_pts); + __Pyx_GIVEREF(__pyx_v_pts); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_pts); + __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 136, 
__pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_float32); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_5, __pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 136, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 136, __pyx_L1_error) + __pyx_t_7 = ((PyArrayObject *)__pyx_t_4); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_8, &__pyx_t_9, &__pyx_t_10); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_pts_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_8, __pyx_t_9, __pyx_t_10); + } + __pyx_t_8 = __pyx_t_9 = __pyx_t_10 = 0; + } + __pyx_pybuffernd_pts_cpp.diminfo[0].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_pts_cpp.diminfo[0].shape = 
__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_pts_cpp.diminfo[1].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_pts_cpp.diminfo[1].shape = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_pts_cpp.diminfo[2].strides = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_pts_cpp.diminfo[2].shape = __pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer.shape[2]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 136, __pyx_L1_error) + } + __pyx_t_7 = 0; + __pyx_v_pts_cpp = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "knn.pyx":137 + * + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) # <<<<<<<<<<<<<< + * indices_cpp = indices + * + */ + __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(__pyx_v_queries); + __Pyx_GIVEREF(__pyx_v_queries); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_queries); + __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float32); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_2) < 0) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 137, __pyx_L1_error) + __pyx_t_11 = ((PyArrayObject *)__pyx_t_2); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_t_11, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_10, &__pyx_t_9, &__pyx_t_8); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_queries_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_float32_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_8); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_10, __pyx_t_9, __pyx_t_8); + } + __pyx_t_10 = __pyx_t_9 = __pyx_t_8 = 0; + } + __pyx_pybuffernd_queries_cpp.diminfo[0].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_queries_cpp.diminfo[0].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_queries_cpp.diminfo[1].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_queries_cpp.diminfo[1].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_queries_cpp.diminfo[2].strides = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_queries_cpp.diminfo[2].shape = __pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer.shape[2]; + if 
(unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 137, __pyx_L1_error) + } + __pyx_t_11 = 0; + __pyx_v_queries_cpp = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "knn.pyx":138 + * pts_cpp = np.ascontiguousarray(pts, dtype=np.float32) + * queries_cpp = np.ascontiguousarray(queries, dtype=np.float32) + * indices_cpp = indices # <<<<<<<<<<<<<< + * + * if omp: + */ + if (!(likely(((__pyx_v_indices) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_indices, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 138, __pyx_L1_error) + __pyx_t_2 = __pyx_v_indices; + __Pyx_INCREF(__pyx_t_2); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __pyx_t_3 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer, (PyObject*)((PyArrayObject *)__pyx_t_2), &__Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack); + if (unlikely(__pyx_t_3 < 0)) { + PyErr_Fetch(&__pyx_t_8, &__pyx_t_9, &__pyx_t_10); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer, (PyObject*)__pyx_v_indices_cpp, &__Pyx_TypeInfo_nn___pyx_t_5numpy_int64_t, PyBUF_FORMAT| PyBUF_STRIDES, 3, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_8, __pyx_t_9, __pyx_t_10); + } + __pyx_t_8 = __pyx_t_9 = __pyx_t_10 = 0; + } + __pyx_pybuffernd_indices_cpp.diminfo[0].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_indices_cpp.diminfo[0].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_indices_cpp.diminfo[1].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_indices_cpp.diminfo[1].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_indices_cpp.diminfo[2].strides = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.strides[2]; 
__pyx_pybuffernd_indices_cpp.diminfo[2].shape = __pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer.shape[2]; + if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 138, __pyx_L1_error) + } + __pyx_v_indices_cpp = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "knn.pyx":140 + * indices_cpp = indices + * + * if omp: # <<<<<<<<<<<<<< + * cpp_knn_batch_distance_pick_omp( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, + */ + __pyx_t_12 = __Pyx_PyObject_IsTrue(__pyx_v_omp); if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 140, __pyx_L1_error) + if (__pyx_t_12) { + + /* "knn.pyx":142 + * if omp: + * cpp_knn_batch_distance_pick_omp( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, # <<<<<<<<<<<<<< + * K_cpp, indices_cpp.data) + * else: + */ + __pyx_t_13 = __Pyx_PyInt_As_size_t(__pyx_v_nqueries); if (unlikely((__pyx_t_13 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 142, __pyx_L1_error) + + /* "knn.pyx":141 + * + * if omp: + * cpp_knn_batch_distance_pick_omp( pts_cpp.data, batch_size, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + cpp_knn_batch_distance_pick_omp(((float *)__pyx_v_pts_cpp->data), __pyx_v_batch_size, __pyx_v_npts, __pyx_v_dim, ((float *)__pyx_v_queries_cpp->data), __pyx_t_13, __pyx_v_K_cpp, ((long long *)__pyx_v_indices_cpp->data)); + + /* "knn.pyx":140 + * indices_cpp = indices + * + * if omp: # <<<<<<<<<<<<<< + * cpp_knn_batch_distance_pick_omp( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, + */ + goto __pyx_L3; + } + + /* "knn.pyx":145 + * K_cpp, indices_cpp.data) + * else: + * cpp_knn_batch_distance_pick( pts_cpp.data, batch_size, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + /*else*/ { + + /* "knn.pyx":146 + * else: + * cpp_knn_batch_distance_pick( pts_cpp.data, batch_size, npts, dim, + * queries_cpp.data, nqueries, # <<<<<<<<<<<<<< + * K_cpp, indices_cpp.data) + * + */ + __pyx_t_13 = 
__Pyx_PyInt_As_size_t(__pyx_v_nqueries); if (unlikely((__pyx_t_13 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 146, __pyx_L1_error) + + /* "knn.pyx":145 + * K_cpp, indices_cpp.data) + * else: + * cpp_knn_batch_distance_pick( pts_cpp.data, batch_size, npts, dim, # <<<<<<<<<<<<<< + * queries_cpp.data, nqueries, + * K_cpp, indices_cpp.data) + */ + cpp_knn_batch_distance_pick(((float *)__pyx_v_pts_cpp->data), __pyx_v_batch_size, __pyx_v_npts, __pyx_v_dim, ((float *)__pyx_v_queries_cpp->data), __pyx_t_13, __pyx_v_K_cpp, ((long long *)__pyx_v_indices_cpp->data)); + } + __pyx_L3:; + + /* "knn.pyx":149 + * K_cpp, indices_cpp.data) + * + * return indices, queries # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 149, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_indices); + __Pyx_GIVEREF(__pyx_v_indices); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_indices); + __Pyx_INCREF(__pyx_v_queries); + __Pyx_GIVEREF(__pyx_v_queries); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_queries); + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + /* "knn.pyx":111 + * return indices + * + * def knn_batch_distance_pick(pts, nqueries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("nearest_neighbors.knn_batch_distance_pick", 
__pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = NULL; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_indices_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_pts_cpp.rcbuffer->pybuffer); + __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_queries_cpp.rcbuffer->pybuffer); + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_pts_cpp); + __Pyx_XDECREF((PyObject *)__pyx_v_queries_cpp); + __Pyx_XDECREF((PyObject *)__pyx_v_indices_cpp); + __Pyx_XDECREF(__pyx_v_indices); + __Pyx_XDECREF(__pyx_v_queries); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":734 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew1", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":735 + * + * cdef inline object PyArray_MultiIterNew1(a): + * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew2(a, b): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 735, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":734 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + 
__Pyx_AddTraceback("numpy.PyArray_MultiIterNew1", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":737 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew2", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":738 + * + * cdef inline object PyArray_MultiIterNew2(a, b): + * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 738, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":737 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":740 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object 
PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew3", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":741 + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 741, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":740 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":743 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { + PyObject *__pyx_r = NULL; + 
__Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew4", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":744 + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 744, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":743 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":746 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew5", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":747 + * 
+ * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< + * + * cdef inline tuple PyDataType_SHAPE(dtype d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 747, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":746 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + + /* function exit code */ + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":749 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline tuple PyDataType_SHAPE(dtype d): # <<<<<<<<<<<<<< + * if PyDataType_HASSUBARRAY(d): + * return d.subarray.shape + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyDataType_SHAPE(PyArray_Descr *__pyx_v_d) { + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("PyDataType_SHAPE", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":750 + * + * cdef inline tuple PyDataType_SHAPE(dtype d): + * if PyDataType_HASSUBARRAY(d): # <<<<<<<<<<<<<< + * return d.subarray.shape + * else: + */ + __pyx_t_1 = (PyDataType_HASSUBARRAY(__pyx_v_d) != 0); + if (__pyx_t_1) { + + /* 
"../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":751 + * cdef inline tuple PyDataType_SHAPE(dtype d): + * if PyDataType_HASSUBARRAY(d): + * return d.subarray.shape # <<<<<<<<<<<<<< + * else: + * return () + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject*)__pyx_v_d->subarray->shape)); + __pyx_r = ((PyObject*)__pyx_v_d->subarray->shape); + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":750 + * + * cdef inline tuple PyDataType_SHAPE(dtype d): + * if PyDataType_HASSUBARRAY(d): # <<<<<<<<<<<<<< + * return d.subarray.shape + * else: + */ + } + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":753 + * return d.subarray.shape + * else: + * return () # <<<<<<<<<<<<<< + * + * + */ + /*else*/ { + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_empty_tuple); + __pyx_r = __pyx_empty_tuple; + goto __pyx_L0; + } + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":749 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline tuple PyDataType_SHAPE(dtype d): # <<<<<<<<<<<<<< + * if PyDataType_HASSUBARRAY(d): + * return d.subarray.shape + */ + + /* function exit code */ + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":868 + * int _import_umath() except -1 + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * Py_INCREF(base) # important to do this before stealing the reference below! 
+ * PyArray_SetBaseObject(arr, base) + */ + +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("set_array_base", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":869 + * + * cdef inline void set_array_base(ndarray arr, object base): + * Py_INCREF(base) # important to do this before stealing the reference below! # <<<<<<<<<<<<<< + * PyArray_SetBaseObject(arr, base) + * + */ + Py_INCREF(__pyx_v_base); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":870 + * cdef inline void set_array_base(ndarray arr, object base): + * Py_INCREF(base) # important to do this before stealing the reference below! + * PyArray_SetBaseObject(arr, base) # <<<<<<<<<<<<<< + * + * cdef inline object get_array_base(ndarray arr): + */ + (void)(PyArray_SetBaseObject(__pyx_v_arr, __pyx_v_base)); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":868 + * int _import_umath() except -1 + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * Py_INCREF(base) # important to do this before stealing the reference below! 
+ * PyArray_SetBaseObject(arr, base) + */ + + /* function exit code */ + __Pyx_RefNannyFinishContext(); +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":872 + * PyArray_SetBaseObject(arr, base) + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * base = PyArray_BASE(arr) + * if base is NULL: + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) { + PyObject *__pyx_v_base; + PyObject *__pyx_r = NULL; + __Pyx_RefNannyDeclarations + int __pyx_t_1; + __Pyx_RefNannySetupContext("get_array_base", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":873 + * + * cdef inline object get_array_base(ndarray arr): + * base = PyArray_BASE(arr) # <<<<<<<<<<<<<< + * if base is NULL: + * return None + */ + __pyx_v_base = PyArray_BASE(__pyx_v_arr); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":874 + * cdef inline object get_array_base(ndarray arr): + * base = PyArray_BASE(arr) + * if base is NULL: # <<<<<<<<<<<<<< + * return None + * return base + */ + __pyx_t_1 = ((__pyx_v_base == NULL) != 0); + if (__pyx_t_1) { + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":875 + * base = PyArray_BASE(arr) + * if base is NULL: + * return None # <<<<<<<<<<<<<< + * return base + * + */ + __Pyx_XDECREF(__pyx_r); + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":874 + * cdef inline object get_array_base(ndarray arr): + * base = PyArray_BASE(arr) + * if base is NULL: # <<<<<<<<<<<<<< + * return None + * return base + */ + } + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":876 + * if base is NULL: + * return None + * return base # <<<<<<<<<<<<<< + * + * # Versions 
of the import_* functions which are more suitable for + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_base)); + __pyx_r = ((PyObject *)__pyx_v_base); + goto __pyx_L0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":872 + * PyArray_SetBaseObject(arr, base) + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * base = PyArray_BASE(arr) + * if base is NULL: + */ + + /* function exit code */ + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":880 + * # Versions of the import_* functions which are more suitable for + * # Cython code. + * cdef inline int import_array() except -1: # <<<<<<<<<<<<<< + * try: + * __pyx_import_array() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_array(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_array", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":881 + * # Cython code. 
+ * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * __pyx_import_array() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":882 + * cdef inline int import_array() except -1: + * try: + * __pyx_import_array() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") + */ + __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 882, __pyx_L3_error) + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":881 + * # Cython code. + * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * __pyx_import_array() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L8_try_end; + __pyx_L3_error:; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":883 + * try: + * __pyx_import_array() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.multiarray failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_array", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 883, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":884 + * __pyx_import_array() + * except Exception: + * raise 
ImportError("numpy.core.multiarray failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_umath() except -1: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 884, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 884, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":881 + * # Cython code. + * cdef inline int import_array() except -1: + * try: # <<<<<<<<<<<<<< + * __pyx_import_array() + * except Exception: + */ + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L8_try_end:; + } + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":880 + * # Versions of the import_* functions which are more suitable for + * # Cython code. 
+ * cdef inline int import_array() except -1: # <<<<<<<<<<<<<< + * try: + * __pyx_import_array() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_array", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":886 + * raise ImportError("numpy.core.multiarray failed to import") + * + * cdef inline int import_umath() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_umath", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":887 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":888 + * cdef inline int import_umath() except -1: + * try: + * _import_umath() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 888, __pyx_L3_error) + + /* 
"../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":887 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L8_try_end; + __pyx_L3_error:; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":889 + * try: + * _import_umath() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.umath failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_umath", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 889, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":890 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_ufunc() except -1: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 890, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 890, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":887 + * + * cdef inline int import_umath() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + 
__Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L8_try_end:; + } + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":886 + * raise ImportError("numpy.core.multiarray failed to import") + * + * cdef inline int import_umath() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_umath", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":892 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + +static CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) { + int __pyx_r; + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + PyObject *__pyx_t_6 = NULL; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + __Pyx_RefNannySetupContext("import_ufunc", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":893 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + { + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); + __Pyx_XGOTREF(__pyx_t_1); + __Pyx_XGOTREF(__pyx_t_2); + __Pyx_XGOTREF(__pyx_t_3); + /*try:*/ { + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":894 + * cdef inline int import_ufunc() except -1: + * try: + * 
_import_umath() # <<<<<<<<<<<<<< + * except Exception: + * raise ImportError("numpy.core.umath failed to import") + */ + __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 894, __pyx_L3_error) + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":893 + * + * cdef inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + } + __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; + goto __pyx_L8_try_end; + __pyx_L3_error:; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":895 + * try: + * _import_umath() + * except Exception: # <<<<<<<<<<<<<< + * raise ImportError("numpy.core.umath failed to import") + * + */ + __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0]))); + if (__pyx_t_4) { + __Pyx_AddTraceback("numpy.import_ufunc", __pyx_clineno, __pyx_lineno, __pyx_filename); + if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 895, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_5); + __Pyx_GOTREF(__pyx_t_6); + __Pyx_GOTREF(__pyx_t_7); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":896 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef extern from *: + */ + __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 896, __pyx_L5_except_error) + __Pyx_GOTREF(__pyx_t_8); + __Pyx_Raise(__pyx_t_8, 0, 0, 0); + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __PYX_ERR(1, 896, __pyx_L5_except_error) + } + goto __pyx_L5_except_error; + __pyx_L5_except_error:; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":893 + * + * cdef 
inline int import_ufunc() except -1: + * try: # <<<<<<<<<<<<<< + * _import_umath() + * except Exception: + */ + __Pyx_XGIVEREF(__pyx_t_1); + __Pyx_XGIVEREF(__pyx_t_2); + __Pyx_XGIVEREF(__pyx_t_3); + __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); + goto __pyx_L1_error; + __pyx_L8_try_end:; + } + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":892 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /* function exit code */ + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_5); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("numpy.import_ufunc", __pyx_clineno, __pyx_lineno, __pyx_filename); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyMethodDef __pyx_methods[] = { + {0, 0, 0, 0} +}; + +#if PY_MAJOR_VERSION >= 3 +#if CYTHON_PEP489_MULTI_PHASE_INIT +static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ +static int __pyx_pymod_exec_nearest_neighbors(PyObject* module); /*proto*/ +static PyModuleDef_Slot __pyx_moduledef_slots[] = { + {Py_mod_create, (void*)__pyx_pymod_create}, + {Py_mod_exec, (void*)__pyx_pymod_exec_nearest_neighbors}, + {0, NULL} +}; +#endif + +static struct PyModuleDef __pyx_moduledef = { + PyModuleDef_HEAD_INIT, + "nearest_neighbors", + 0, /* m_doc */ + #if CYTHON_PEP489_MULTI_PHASE_INIT + 0, /* m_size */ + #else + -1, /* m_size */ + #endif + __pyx_methods /* m_methods */, + #if CYTHON_PEP489_MULTI_PHASE_INIT + __pyx_moduledef_slots, /* m_slots */ + #else + NULL, /* m_reload */ + #endif + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif +#ifndef CYTHON_SMALL_CODE +#if defined(__clang__) + #define CYTHON_SMALL_CODE +#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) + 
#define CYTHON_SMALL_CODE __attribute__((cold)) +#else + #define CYTHON_SMALL_CODE +#endif +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, + {&__pyx_n_s_K, __pyx_k_K, sizeof(__pyx_k_K), 0, 0, 1, 1}, + {&__pyx_n_s_K_cpp, __pyx_k_K_cpp, sizeof(__pyx_k_K_cpp), 0, 0, 1, 1}, + {&__pyx_n_s_ascontiguousarray, __pyx_k_ascontiguousarray, sizeof(__pyx_k_ascontiguousarray), 0, 0, 1, 1}, + {&__pyx_n_s_batch_size, __pyx_k_batch_size, sizeof(__pyx_k_batch_size), 0, 0, 1, 1}, + {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, + {&__pyx_n_s_dim, __pyx_k_dim, sizeof(__pyx_k_dim), 0, 0, 1, 1}, + {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1}, + {&__pyx_n_s_float32, __pyx_k_float32, sizeof(__pyx_k_float32), 0, 0, 1, 1}, + {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, + {&__pyx_n_s_indices, __pyx_k_indices, sizeof(__pyx_k_indices), 0, 0, 1, 1}, + {&__pyx_n_s_indices_cpp, __pyx_k_indices_cpp, sizeof(__pyx_k_indices_cpp), 0, 0, 1, 1}, + {&__pyx_n_s_int64, __pyx_k_int64, sizeof(__pyx_k_int64), 0, 0, 1, 1}, + {&__pyx_n_s_knn, __pyx_k_knn, sizeof(__pyx_k_knn), 0, 0, 1, 1}, + {&__pyx_n_s_knn_batch, __pyx_k_knn_batch, sizeof(__pyx_k_knn_batch), 0, 0, 1, 1}, + {&__pyx_n_s_knn_batch_distance_pick, __pyx_k_knn_batch_distance_pick, sizeof(__pyx_k_knn_batch_distance_pick), 0, 0, 1, 1}, + {&__pyx_kp_s_knn_pyx, __pyx_k_knn_pyx, sizeof(__pyx_k_knn_pyx), 0, 0, 1, 0}, + {&__pyx_n_s_long, __pyx_k_long, sizeof(__pyx_k_long), 0, 0, 1, 1}, + {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, + {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, + {&__pyx_n_s_nearest_neighbors, __pyx_k_nearest_neighbors, sizeof(__pyx_k_nearest_neighbors), 0, 0, 1, 1}, + {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1}, + {&__pyx_n_s_npts, __pyx_k_npts, 
sizeof(__pyx_k_npts), 0, 0, 1, 1}, + {&__pyx_n_s_nqueries, __pyx_k_nqueries, sizeof(__pyx_k_nqueries), 0, 0, 1, 1}, + {&__pyx_n_s_nqueries_cpp, __pyx_k_nqueries_cpp, sizeof(__pyx_k_nqueries_cpp), 0, 0, 1, 1}, + {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1}, + {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0}, + {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0}, + {&__pyx_n_s_omp, __pyx_k_omp, sizeof(__pyx_k_omp), 0, 0, 1, 1}, + {&__pyx_n_s_pts, __pyx_k_pts, sizeof(__pyx_k_pts), 0, 0, 1, 1}, + {&__pyx_n_s_pts_cpp, __pyx_k_pts_cpp, sizeof(__pyx_k_pts_cpp), 0, 0, 1, 1}, + {&__pyx_n_s_queries, __pyx_k_queries, sizeof(__pyx_k_queries), 0, 0, 1, 1}, + {&__pyx_n_s_queries_cpp, __pyx_k_queries_cpp, sizeof(__pyx_k_queries_cpp), 0, 0, 1, 1}, + {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, + {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, + {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(1, 884, __pyx_L1_error) + return 0; + __pyx_L1_error:; + return -1; +} + +static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":884 + * __pyx_import_array() + * except Exception: + * raise ImportError("numpy.core.multiarray failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_umath() except -1: + */ + __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if 
(unlikely(!__pyx_tuple_)) __PYX_ERR(1, 884, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple_); + __Pyx_GIVEREF(__pyx_tuple_); + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":890 + * _import_umath() + * except Exception: + * raise ImportError("numpy.core.umath failed to import") # <<<<<<<<<<<<<< + * + * cdef inline int import_ufunc() except -1: + */ + __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 890, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__2); + __Pyx_GIVEREF(__pyx_tuple__2); + + /* "knn.pyx":33 + * const size_t K, long* batch_indices) + * + * def knn(pts, queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + __pyx_tuple__3 = PyTuple_Pack(12, __pyx_n_s_pts, __pyx_n_s_queries, __pyx_n_s_K, __pyx_n_s_omp, __pyx_n_s_npts, __pyx_n_s_dim, __pyx_n_s_K_cpp, __pyx_n_s_nqueries, __pyx_n_s_pts_cpp, __pyx_n_s_queries_cpp, __pyx_n_s_indices_cpp, __pyx_n_s_indices); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 33, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__3); + __Pyx_GIVEREF(__pyx_tuple__3); + __pyx_codeobj__4 = (PyObject*)__Pyx_PyCode_New(4, 0, 12, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__3, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_knn_pyx, __pyx_n_s_knn, 33, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__4)) __PYX_ERR(0, 33, __pyx_L1_error) + + /* "knn.pyx":71 + * return indices + * + * def knn_batch(pts, queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + __pyx_tuple__5 = PyTuple_Pack(13, __pyx_n_s_pts, __pyx_n_s_queries, __pyx_n_s_K, __pyx_n_s_omp, __pyx_n_s_batch_size, __pyx_n_s_npts, __pyx_n_s_nqueries, __pyx_n_s_K_cpp, __pyx_n_s_dim, __pyx_n_s_pts_cpp, __pyx_n_s_queries_cpp, __pyx_n_s_indices_cpp, __pyx_n_s_indices); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 71, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__5); + 
__Pyx_GIVEREF(__pyx_tuple__5); + __pyx_codeobj__6 = (PyObject*)__Pyx_PyCode_New(4, 0, 13, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__5, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_knn_pyx, __pyx_n_s_knn_batch, 71, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__6)) __PYX_ERR(0, 71, __pyx_L1_error) + + /* "knn.pyx":111 + * return indices + * + * def knn_batch_distance_pick(pts, nqueries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + __pyx_tuple__7 = PyTuple_Pack(14, __pyx_n_s_pts, __pyx_n_s_nqueries, __pyx_n_s_K, __pyx_n_s_omp, __pyx_n_s_batch_size, __pyx_n_s_npts, __pyx_n_s_nqueries_cpp, __pyx_n_s_K_cpp, __pyx_n_s_dim, __pyx_n_s_pts_cpp, __pyx_n_s_queries_cpp, __pyx_n_s_indices_cpp, __pyx_n_s_indices, __pyx_n_s_queries); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(0, 111, __pyx_L1_error) + __Pyx_GOTREF(__pyx_tuple__7); + __Pyx_GIVEREF(__pyx_tuple__7); + __pyx_codeobj__8 = (PyObject*)__Pyx_PyCode_New(4, 0, 14, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__7, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_knn_pyx, __pyx_n_s_knn_batch_distance_pick, 111, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__8)) __PYX_ERR(0, 111, __pyx_L1_error) + __Pyx_RefNannyFinishContext(); + return 0; + __pyx_L1_error:; + __Pyx_RefNannyFinishContext(); + return -1; +} + +static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + return 0; + __pyx_L1_error:; + return -1; +} + +static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ +static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ +static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ +static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ +static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ 
+static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ +static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ + +static int __Pyx_modinit_global_init_code(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); + /*--- Global init code ---*/ + __Pyx_RefNannyFinishContext(); + return 0; +} + +static int __Pyx_modinit_variable_export_code(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); + /*--- Variable export code ---*/ + __Pyx_RefNannyFinishContext(); + return 0; +} + +static int __Pyx_modinit_function_export_code(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); + /*--- Function export code ---*/ + __Pyx_RefNannyFinishContext(); + return 0; +} + +static int __Pyx_modinit_type_init_code(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); + /*--- Type init code ---*/ + __Pyx_RefNannyFinishContext(); + return 0; +} + +static int __Pyx_modinit_type_import_code(void) { + __Pyx_RefNannyDeclarations + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); + /*--- Type import code ---*/ + __pyx_t_1 = PyImport_ImportModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 9, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__pyx_t_1, __Pyx_BUILTIN_MODULE_NAME, "type", + #if defined(PYPY_VERSION_NUM) && PYPY_VERSION_NUM < 0x050B0000 + sizeof(PyTypeObject), + #else + sizeof(PyHeapTypeObject), + #endif + __Pyx_ImportType_CheckSize_Warn); + if (!__pyx_ptype_7cpython_4type_type) __PYX_ERR(2, 9, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyImport_ImportModule("numpy"); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 199, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + __pyx_ptype_5numpy_dtype = 
__Pyx_ImportType(__pyx_t_1, "numpy", "dtype", sizeof(PyArray_Descr), __Pyx_ImportType_CheckSize_Ignore); + if (!__pyx_ptype_5numpy_dtype) __PYX_ERR(1, 199, __pyx_L1_error) + __pyx_ptype_5numpy_flatiter = __Pyx_ImportType(__pyx_t_1, "numpy", "flatiter", sizeof(PyArrayIterObject), __Pyx_ImportType_CheckSize_Ignore); + if (!__pyx_ptype_5numpy_flatiter) __PYX_ERR(1, 222, __pyx_L1_error) + __pyx_ptype_5numpy_broadcast = __Pyx_ImportType(__pyx_t_1, "numpy", "broadcast", sizeof(PyArrayMultiIterObject), __Pyx_ImportType_CheckSize_Ignore); + if (!__pyx_ptype_5numpy_broadcast) __PYX_ERR(1, 226, __pyx_L1_error) + __pyx_ptype_5numpy_ndarray = __Pyx_ImportType(__pyx_t_1, "numpy", "ndarray", sizeof(PyArrayObject), __Pyx_ImportType_CheckSize_Ignore); + if (!__pyx_ptype_5numpy_ndarray) __PYX_ERR(1, 238, __pyx_L1_error) + __pyx_ptype_5numpy_ufunc = __Pyx_ImportType(__pyx_t_1, "numpy", "ufunc", sizeof(PyUFuncObject), __Pyx_ImportType_CheckSize_Ignore); + if (!__pyx_ptype_5numpy_ufunc) __PYX_ERR(1, 764, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_RefNannyFinishContext(); + return 0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_RefNannyFinishContext(); + return -1; +} + +static int __Pyx_modinit_variable_import_code(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); + /*--- Variable import code ---*/ + __Pyx_RefNannyFinishContext(); + return 0; +} + +static int __Pyx_modinit_function_import_code(void) { + __Pyx_RefNannyDeclarations + __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); + /*--- Function import code ---*/ + __Pyx_RefNannyFinishContext(); + return 0; +} + + +#if PY_MAJOR_VERSION < 3 +#ifdef CYTHON_NO_PYINIT_EXPORT +#define __Pyx_PyMODINIT_FUNC void +#else +#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC +#endif +#else +#ifdef CYTHON_NO_PYINIT_EXPORT +#define __Pyx_PyMODINIT_FUNC PyObject * +#else +#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC +#endif +#endif + + +#if 
PY_MAJOR_VERSION < 3 +__Pyx_PyMODINIT_FUNC initnearest_neighbors(void) CYTHON_SMALL_CODE; /*proto*/ +__Pyx_PyMODINIT_FUNC initnearest_neighbors(void) +#else +__Pyx_PyMODINIT_FUNC PyInit_nearest_neighbors(void) CYTHON_SMALL_CODE; /*proto*/ +__Pyx_PyMODINIT_FUNC PyInit_nearest_neighbors(void) +#if CYTHON_PEP489_MULTI_PHASE_INIT +{ + return PyModuleDef_Init(&__pyx_moduledef); +} +static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { + #if PY_VERSION_HEX >= 0x030700A1 + static PY_INT64_T main_interpreter_id = -1; + PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); + if (main_interpreter_id == -1) { + main_interpreter_id = current_id; + return (unlikely(current_id == -1)) ? -1 : 0; + } else if (unlikely(main_interpreter_id != current_id)) + #else + static PyInterpreterState *main_interpreter = NULL; + PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; + if (!main_interpreter) { + main_interpreter = current_interpreter; + } else if (unlikely(main_interpreter != current_interpreter)) + #endif + { + PyErr_SetString( + PyExc_ImportError, + "Interpreter change detected - this module can only be loaded into one interpreter per process."); + return -1; + } + return 0; +} +static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { + PyObject *value = PyObject_GetAttrString(spec, from_name); + int result = 0; + if (likely(value)) { + if (allow_none || value != Py_None) { + result = PyDict_SetItemString(moddict, to_name, value); + } + Py_DECREF(value); + } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { + PyErr_Clear(); + } else { + result = -1; + } + return result; +} +static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { + PyObject *module = NULL, *moddict, *modname; + if (__Pyx_check_single_interpreter()) + return NULL; + if (__pyx_m) + return __Pyx_NewRef(__pyx_m); 
+ modname = PyObject_GetAttrString(spec, "name"); + if (unlikely(!modname)) goto bad; + module = PyModule_NewObject(modname); + Py_DECREF(modname); + if (unlikely(!module)) goto bad; + moddict = PyModule_GetDict(module); + if (unlikely(!moddict)) goto bad; + if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; + if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; + if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; + if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; + return module; +bad: + Py_XDECREF(module); + return NULL; +} + + +static CYTHON_SMALL_CODE int __pyx_pymod_exec_nearest_neighbors(PyObject *__pyx_pyinit_module) +#endif +#endif +{ + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannyDeclarations + #if CYTHON_PEP489_MULTI_PHASE_INIT + if (__pyx_m) { + if (__pyx_m == __pyx_pyinit_module) return 0; + PyErr_SetString(PyExc_RuntimeError, "Module 'nearest_neighbors' has already been imported. 
Re-initialisation is not supported."); + return -1; + } + #elif PY_MAJOR_VERSION >= 3 + if (__pyx_m) return __Pyx_NewRef(__pyx_m); + #endif + #if CYTHON_REFNANNY +__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); +if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); +} +#endif + __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_nearest_neighbors(void)", 0); + if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #ifdef __Pxy_PyFrame_Initialize_Offsets + __Pxy_PyFrame_Initialize_Offsets(); + #endif + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) + __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) + #ifdef __Pyx_CyFunction_USED + if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_FusedFunction_USED + if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Coroutine_USED + if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_Generator_USED + if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_AsyncGen_USED + if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + #ifdef __Pyx_StopAsyncIteration_USED + if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? 
*/ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if CYTHON_PEP489_MULTI_PHASE_INIT + __pyx_m = __pyx_pyinit_module; + Py_INCREF(__pyx_m); + #else + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4("nearest_neighbors", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) + Py_INCREF(__pyx_d); + __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) + Py_INCREF(__pyx_b); + __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) + Py_INCREF(__pyx_cython_runtime); + if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); + /*--- Initialize various global constants etc. 
---*/ + if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) + if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + if (__pyx_module_is_main_nearest_neighbors) { + if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + } + #if PY_MAJOR_VERSION >= 3 + { + PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) + if (!PyDict_GetItemString(modules, "nearest_neighbors")) { + if (unlikely(PyDict_SetItemString(modules, "nearest_neighbors", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) + } + } + #endif + /*--- Builtin init code ---*/ + if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error; + /*--- Constants init code ---*/ + if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error; + /*--- Global type/function init code ---*/ + (void)__Pyx_modinit_global_init_code(); + (void)__Pyx_modinit_variable_export_code(); + (void)__Pyx_modinit_function_export_code(); + (void)__Pyx_modinit_type_init_code(); + if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error; + (void)__Pyx_modinit_variable_import_code(); + (void)__Pyx_modinit_function_import_code(); + /*--- Execution code ---*/ + #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) + if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) + #endif + + /* "knn.pyx":4 + * # distutils: sources = knn.cxx + * + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as np + * import cython + */ + __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 4, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "knn.pyx":33 + * const size_t K, long* batch_indices) + * + * def knn(pts, 
queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17nearest_neighbors_1knn, NULL, __pyx_n_s_nearest_neighbors); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 33, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_knn, __pyx_t_1) < 0) __PYX_ERR(0, 33, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "knn.pyx":71 + * return indices + * + * def knn_batch(pts, queries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17nearest_neighbors_3knn_batch, NULL, __pyx_n_s_nearest_neighbors); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 71, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_knn_batch, __pyx_t_1) < 0) __PYX_ERR(0, 71, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "knn.pyx":111 + * return indices + * + * def knn_batch_distance_pick(pts, nqueries, K, omp=False): # <<<<<<<<<<<<<< + * + * # define shape parameters + */ + __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17nearest_neighbors_5knn_batch_distance_pick, NULL, __pyx_n_s_nearest_neighbors); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 111, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_knn_batch_distance_pick, __pyx_t_1) < 0) __PYX_ERR(0, 111, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "knn.pyx":1 + * # distutils: language = c++ # <<<<<<<<<<<<<< + * # distutils: sources = knn.cxx + * + */ + __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_GOTREF(__pyx_t_1); + if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "../../../../../../ProgramData/Anaconda3/envs/randlanet/lib/site-packages/numpy/__init__.pxd":892 + * raise ImportError("numpy.core.umath failed to import") + * + * cdef inline int 
import_ufunc() except -1: # <<<<<<<<<<<<<< + * try: + * _import_umath() + */ + + /*--- Wrapped vars code ---*/ + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + if (__pyx_m) { + if (__pyx_d) { + __Pyx_AddTraceback("init nearest_neighbors", __pyx_clineno, __pyx_lineno, __pyx_filename); + } + Py_CLEAR(__pyx_m); + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init nearest_neighbors"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if CYTHON_PEP489_MULTI_PHASE_INIT + return (__pyx_m != NULL) ? 0 : -1; + #elif PY_MAJOR_VERSION >= 3 + return __pyx_m; + #else + return; + #endif +} + +/* --- Runtime support code --- */ +/* Refnanny */ +#if CYTHON_REFNANNY +static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule(modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, "RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); +end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; +} +#endif + +/* RaiseArgTupleInvalid */ +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *more_or_less; + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + PyErr_Format(PyExc_TypeError, + "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", + func_name, more_or_less, num_expected, + (num_expected == 1) ? 
"" : "s", num_found); +} + +/* RaiseDoubleKeywords */ +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AsString(kw_name)); + #endif +} + +/* ParseKeywords */ +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + continue; + } + name = first_kw_arg; + #if PY_MAJOR_VERSION < 3 + if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) { + while (*name) { + if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) + && _PyString_Eq(**name, key)) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + if ((**argname == key) || ( + (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) + && _PyString_Eq(**argname, key))) { + goto arg_passed_twice; + } + argname++; + } + } + } else + #endif + if (likely(PyUnicode_Check(key))) { + while (*name) { + int cmp = (**name == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 : + #endif + PyUnicode_Compare(**name, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) { + values[name-argnames] = value; + break; + } + name++; + } + if (*name) continue; + else { + PyObject*** argname = argnames; + while (argname != first_kw_arg) { + int cmp = (**argname == key) ? 0 : + #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 + (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 : + #endif + PyUnicode_Compare(**argname, key); + if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; + if (cmp == 0) goto arg_passed_twice; + argname++; + } + } + } else + goto invalid_keyword_type; + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, key); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%.200s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%.200s() got an unexpected keyword argument '%.200s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +/* PyObjectGetAttrStr */ +#if CYTHON_USE_TYPE_SLOTS +static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { + PyTypeObject* tp = Py_TYPE(obj); + if (likely(tp->tp_getattro)) + return tp->tp_getattro(obj, attr_name); +#if PY_MAJOR_VERSION < 3 + if (likely(tp->tp_getattr)) + return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); +#endif + return PyObject_GetAttr(obj, attr_name); +} +#endif + +/* GetItemInt */ +static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { + PyObject *r; + if (!j) return NULL; + r = PyObject_GetItem(o, j); + Py_DECREF(j); + return r; +} +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, + 
CYTHON_NCP_UNUSED int wraparound, + CYTHON_NCP_UNUSED int boundscheck) { +#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS + Py_ssize_t wrapped_i = i; + if (wraparound & unlikely(i < 0)) { + wrapped_i += PyList_GET_SIZE(o); + } + if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { + PyObject *r = PyList_GET_ITEM(o, wrapped_i); + Py_INCREF(r); + return r; + } + return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); +#else + return PySequence_GetItem(o, i); +#endif +} +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, + CYTHON_NCP_UNUSED int wraparound, + CYTHON_NCP_UNUSED int boundscheck) { +#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS + Py_ssize_t wrapped_i = i; + if (wraparound & unlikely(i < 0)) { + wrapped_i += PyTuple_GET_SIZE(o); + } + if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { + PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); + Py_INCREF(r); + return r; + } + return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); +#else + return PySequence_GetItem(o, i); +#endif +} +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, + CYTHON_NCP_UNUSED int wraparound, + CYTHON_NCP_UNUSED int boundscheck) { +#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS + if (is_list || PyList_CheckExact(o)) { + Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); + if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { + PyObject *r = PyList_GET_ITEM(o, n); + Py_INCREF(r); + return r; + } + } + else if (PyTuple_CheckExact(o)) { + Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); + if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { + PyObject *r = PyTuple_GET_ITEM(o, n); + Py_INCREF(r); + return r; + } + } else { + PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; + if (likely(m && m->sq_item)) { + if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { + Py_ssize_t l = m->sq_length(o); + if (likely(l >= 0)) { + i += l; + } else { + if (!PyErr_ExceptionMatches(PyExc_OverflowError)) + return NULL; + PyErr_Clear(); + } + } + return m->sq_item(o, i); + } + } +#else + if (is_list || PySequence_Check(o)) { + return PySequence_GetItem(o, i); + } +#endif + return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); +} + +/* GetBuiltinName */ +static PyObject *__Pyx_GetBuiltinName(PyObject *name) { + PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); + if (unlikely(!result)) { + PyErr_Format(PyExc_NameError, +#if PY_MAJOR_VERSION >= 3 + "name '%U' is not defined", name); +#else + "name '%.200s' is not defined", PyString_AS_STRING(name)); +#endif + } + return result; +} + +/* PyDictVersioning */ +#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS +static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { + PyObject *dict = Py_TYPE(obj)->tp_dict; + return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; +} +static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { + PyObject **dictptr = NULL; + Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; + if (offset) { +#if CYTHON_COMPILING_IN_CPYTHON + dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); +#else + dictptr = _PyObject_GetDictPtr(obj); +#endif + } + return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; +} +static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { + PyObject *dict = Py_TYPE(obj)->tp_dict; + if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) + return 0; + return obj_dict_version == __Pyx_get_object_dict_version(obj); +} +#endif + +/* GetModuleGlobalName */ +#if CYTHON_USE_DICT_VERSIONS +static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) +#else +static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) +#endif +{ + PyObject *result; +#if !CYTHON_AVOID_BORROWED_REFS +#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 + result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); + __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) + if (likely(result)) { + return __Pyx_NewRef(result); + } else if (unlikely(PyErr_Occurred())) { + return NULL; + } +#else + result = PyDict_GetItem(__pyx_d, name); + __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) + if (likely(result)) { + return __Pyx_NewRef(result); + } +#endif +#else + result = PyObject_GetItem(__pyx_d, name); + __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) + if (likely(result)) { + return __Pyx_NewRef(result); + } + PyErr_Clear(); +#endif + return __Pyx_GetBuiltinName(name); +} + +/* PyObjectCall */ +#if CYTHON_COMPILING_IN_CPYTHON +static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { + PyObject *result; + ternaryfunc call = func->ob_type->tp_call; + if (unlikely(!call)) + return PyObject_Call(func, arg, kw); + if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) + return NULL; + result = (*call)(func, arg, kw); + Py_LeaveRecursiveCall(); + if (unlikely(!result) && unlikely(!PyErr_Occurred())) 
{ + PyErr_SetString( + PyExc_SystemError, + "NULL result without error in PyObject_Call"); + } + return result; +} +#endif + +/* ExtTypeTest */ +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { + if (unlikely(!type)) { + PyErr_SetString(PyExc_SystemError, "Missing type object"); + return 0; + } + if (likely(__Pyx_TypeCheck(obj, type))) + return 1; + PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", + Py_TYPE(obj)->tp_name, type->tp_name); + return 0; +} + +/* IsLittleEndian */ +static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) +{ + union { + uint32_t u32; + uint8_t u8[4]; + } S; + S.u32 = 0x01020304; + return S.u8[0] == 4; +} + +/* BufferFormatCheck */ +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type) { + stack[0].field = &ctx->root; + stack[0].parent_offset = 0; + ctx->root.type = type; + ctx->root.name = "buffer dtype"; + ctx->root.offset = 0; + ctx->head = stack; + ctx->head->field = &ctx->root; + ctx->fmt_offset = 0; + ctx->head->parent_offset = 0; + ctx->new_packmode = '@'; + ctx->enc_packmode = '@'; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->is_complex = 0; + ctx->is_valid_array = 0; + ctx->struct_alignment = 0; + while (type->typegroup == 'S') { + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = 0; + type = type->fields->type; + } +} +static int __Pyx_BufFmt_ParseNumber(const char** ts) { + int count; + const char* t = *ts; + if (*t < '0' || *t > '9') { + return -1; + } else { + count = *t++ - '0'; + while (*t >= '0' && *t <= '9') { + count *= 10; + count += *t++ - '0'; + } + } + *ts = t; + return count; +} +static int __Pyx_BufFmt_ExpectNumber(const char **ts) { + int number = __Pyx_BufFmt_ParseNumber(ts); + if (number == -1) + PyErr_Format(PyExc_ValueError,\ + "Does not understand character buffer dtype format string ('%c')", **ts); + return number; +} +static void 
__Pyx_BufFmt_RaiseUnexpectedChar(char ch) { + PyErr_Format(PyExc_ValueError, + "Unexpected format string character: '%c'", ch); +} +static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { + switch (ch) { + case '?': return "'bool'"; + case 'c': return "'char'"; + case 'b': return "'signed char'"; + case 'B': return "'unsigned char'"; + case 'h': return "'short'"; + case 'H': return "'unsigned short'"; + case 'i': return "'int'"; + case 'I': return "'unsigned int'"; + case 'l': return "'long'"; + case 'L': return "'unsigned long'"; + case 'q': return "'long long'"; + case 'Q': return "'unsigned long long'"; + case 'f': return (is_complex ? "'complex float'" : "'float'"); + case 'd': return (is_complex ? "'complex double'" : "'double'"); + case 'g': return (is_complex ? "'complex long double'" : "'long double'"); + case 'T': return "a struct"; + case 'O': return "Python object"; + case 'P': return "a pointer"; + case 's': case 'p': return "a string"; + case 0: return "end"; + default: return "unparseable format string"; + } +} +static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return 2; + case 'i': case 'I': case 'l': case 'L': return 4; + case 'q': case 'Q': return 8; + case 'f': return (is_complex ? 8 : 4); + case 'd': return (is_complex ? 
16 : 8); + case 'g': { + PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); + return 0; + } + case 'O': case 'P': return sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(short); + case 'i': case 'I': return sizeof(int); + case 'l': case 'L': return sizeof(long); + #ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(PY_LONG_LONG); + #endif + case 'f': return sizeof(float) * (is_complex ? 2 : 1); + case 'd': return sizeof(double) * (is_complex ? 2 : 1); + case 'g': return sizeof(long double) * (is_complex ? 2 : 1); + case 'O': case 'P': return sizeof(void*); + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +typedef struct { char c; short x; } __Pyx_st_short; +typedef struct { char c; int x; } __Pyx_st_int; +typedef struct { char c; long x; } __Pyx_st_long; +typedef struct { char c; float x; } __Pyx_st_float; +typedef struct { char c; double x; } __Pyx_st_double; +typedef struct { char c; long double x; } __Pyx_st_longdouble; +typedef struct { char c; void *x; } __Pyx_st_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_st_float) - sizeof(float); + case 'd': return 
sizeof(__Pyx_st_double) - sizeof(double); + case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +/* These are for computing the padding at the end of the struct to align + on the first member of the struct. This will probably the same as above, + but we don't have any guarantees. + */ +typedef struct { short x; char c; } __Pyx_pad_short; +typedef struct { int x; char c; } __Pyx_pad_int; +typedef struct { long x; char c; } __Pyx_pad_long; +typedef struct { float x; char c; } __Pyx_pad_float; +typedef struct { double x; char c; } __Pyx_pad_double; +typedef struct { long double x; char c; } __Pyx_pad_longdouble; +typedef struct { void *x; char c; } __Pyx_pad_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; +#endif +static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; + case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); + case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); + case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} +static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { + switch (ch) { + case 'c': + return 'H'; + case 'b': case 'h': case 'i': + case 'l': case 'q': case 's': case 'p': + return 'I'; + case '?': case 'B': case 'H': case 'I': case 'L': 
case 'Q': + return 'U'; + case 'f': case 'd': case 'g': + return (is_complex ? 'C' : 'R'); + case 'O': + return 'O'; + case 'P': + return 'P'; + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} +static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { + if (ctx->head == NULL || ctx->head->field == &ctx->root) { + const char* expected; + const char* quote; + if (ctx->head == NULL) { + expected = "end"; + quote = ""; + } else { + expected = ctx->head->field->type->name; + quote = "'"; + } + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected %s%s%s but got %s", + quote, expected, quote, + __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); + } else { + __Pyx_StructField* field = ctx->head->field; + __Pyx_StructField* parent = (ctx->head - 1)->field; + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", + field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), + parent->type->name, field->name); + } +} +static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { + char group; + size_t size, offset, arraysize = 1; + if (ctx->enc_type == 0) return 0; + if (ctx->head->field->type->arraysize[0]) { + int i, ndim = 0; + if (ctx->enc_type == 's' || ctx->enc_type == 'p') { + ctx->is_valid_array = ctx->head->field->type->ndim == 1; + ndim = 1; + if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { + PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %zu", + ctx->head->field->type->arraysize[0], ctx->enc_count); + return -1; + } + } + if (!ctx->is_valid_array) { + PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", + ctx->head->field->type->ndim, ndim); + return -1; + } + for (i = 0; i < ctx->head->field->type->ndim; i++) { + arraysize *= ctx->head->field->type->arraysize[i]; + } + ctx->is_valid_array = 0; + ctx->enc_count = 1; + } + group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, 
ctx->is_complex); + do { + __Pyx_StructField* field = ctx->head->field; + __Pyx_TypeInfo* type = field->type; + if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { + size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); + } else { + size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); + } + if (ctx->enc_packmode == '@') { + size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); + size_t align_mod_offset; + if (align_at == 0) return -1; + align_mod_offset = ctx->fmt_offset % align_at; + if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; + if (ctx->struct_alignment == 0) + ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, + ctx->is_complex); + } + if (type->size != size || type->typegroup != group) { + if (type->typegroup == 'C' && type->fields != NULL) { + size_t parent_offset = ctx->head->parent_offset + field->offset; + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = parent_offset; + continue; + } + if ((type->typegroup == 'H' || group == 'H') && type->size == size) { + } else { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + } + offset = ctx->head->parent_offset + field->offset; + if (ctx->fmt_offset != offset) { + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", + (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); + return -1; + } + ctx->fmt_offset += size; + if (arraysize) + ctx->fmt_offset += (arraysize - 1) * size; + --ctx->enc_count; + while (1) { + if (field == &ctx->root) { + ctx->head = NULL; + if (ctx->enc_count != 0) { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + break; + } + ctx->head->field = ++field; + if (field->type == NULL) { + --ctx->head; + field = ctx->head->field; + continue; + } else if (field->type->typegroup == 'S') { + size_t parent_offset = ctx->head->parent_offset + 
field->offset; + if (field->type->fields->type == NULL) continue; + field = field->type->fields; + ++ctx->head; + ctx->head->field = field; + ctx->head->parent_offset = parent_offset; + break; + } else { + break; + } + } + } while (ctx->enc_count); + ctx->enc_type = 0; + ctx->is_complex = 0; + return 0; +} +static PyObject * +__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) +{ + const char *ts = *tsp; + int i = 0, number; + int ndim = ctx->head->field->type->ndim; +; + ++ts; + if (ctx->new_count != 1) { + PyErr_SetString(PyExc_ValueError, + "Cannot handle repeated arrays in format string"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + while (*ts && *ts != ')') { + switch (*ts) { + case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; + default: break; + } + number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) + return PyErr_Format(PyExc_ValueError, + "Expected a dimension of size %zu, got %d", + ctx->head->field->type->arraysize[i], number); + if (*ts != ',' && *ts != ')') + return PyErr_Format(PyExc_ValueError, + "Expected a comma in format string, got '%c'", *ts); + if (*ts == ',') ts++; + i++; + } + if (i != ndim) + return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", + ctx->head->field->type->ndim, i); + if (!*ts) { + PyErr_SetString(PyExc_ValueError, + "Unexpected end of format string, expected ')'"); + return NULL; + } + ctx->is_valid_array = 1; + ctx->new_count = 1; + *tsp = ++ts; + return Py_None; +} +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { + int got_Z = 0; + while (1) { + switch(*ts) { + case 0: + if (ctx->enc_type != 0 && ctx->head == NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + if (ctx->head != NULL) { + 
__Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + return ts; + case ' ': + case '\r': + case '\n': + ++ts; + break; + case '<': + if (!__Pyx_Is_Little_Endian()) { + PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '>': + case '!': + if (__Pyx_Is_Little_Endian()) { + PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); + return NULL; + } + ctx->new_packmode = '='; + ++ts; + break; + case '=': + case '@': + case '^': + ctx->new_packmode = *ts++; + break; + case 'T': + { + const char* ts_after_sub; + size_t i, struct_count = ctx->new_count; + size_t struct_alignment = ctx->struct_alignment; + ctx->new_count = 1; + ++ts; + if (*ts != '{') { + PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + ctx->enc_count = 0; + ctx->struct_alignment = 0; + ++ts; + ts_after_sub = ts; + for (i = 0; i != struct_count; ++i) { + ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); + if (!ts_after_sub) return NULL; + } + ts = ts_after_sub; + if (struct_alignment) ctx->struct_alignment = struct_alignment; + } + break; + case '}': + { + size_t alignment = ctx->struct_alignment; + ++ts; + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_type = 0; + if (alignment && ctx->fmt_offset % alignment) { + ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); + } + } + return ts; + case 'x': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->fmt_offset += ctx->new_count; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->enc_packmode = ctx->new_packmode; + ++ts; + break; + case 'Z': + got_Z = 1; + ++ts; + if (*ts != 'f' && *ts != 'd' && *ts != 'g') { + __Pyx_BufFmt_RaiseUnexpectedChar('Z'); + return NULL; + } + CYTHON_FALLTHROUGH; + case '?': 
case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': + case 'l': case 'L': case 'q': case 'Q': + case 'f': case 'd': case 'g': + case 'O': case 'p': + if (ctx->enc_type == *ts && got_Z == ctx->is_complex && + ctx->enc_packmode == ctx->new_packmode) { + ctx->enc_count += ctx->new_count; + ctx->new_count = 1; + got_Z = 0; + ++ts; + break; + } + CYTHON_FALLTHROUGH; + case 's': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_count = ctx->new_count; + ctx->enc_packmode = ctx->new_packmode; + ctx->enc_type = *ts; + ctx->is_complex = got_Z; + ++ts; + ctx->new_count = 1; + got_Z = 0; + break; + case ':': + ++ts; + while(*ts != ':') ++ts; + ++ts; + break; + case '(': + if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; + break; + default: + { + int number = __Pyx_BufFmt_ExpectNumber(&ts); + if (number == -1) return NULL; + ctx->new_count = (size_t)number; + } + } + } +} + +/* BufferGetAndValidate */ + static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) { + if (unlikely(info->buf == NULL)) return; + if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL; + __Pyx_ReleaseBuffer(info); +} +static void __Pyx_ZeroBuffer(Py_buffer* buf) { + buf->buf = NULL; + buf->obj = NULL; + buf->strides = __Pyx_zeros; + buf->shape = __Pyx_zeros; + buf->suboffsets = __Pyx_minusones; +} +static int __Pyx__GetBufferAndValidate( + Py_buffer* buf, PyObject* obj, __Pyx_TypeInfo* dtype, int flags, + int nd, int cast, __Pyx_BufFmt_StackElem* stack) +{ + buf->buf = NULL; + if (unlikely(__Pyx_GetBuffer(obj, buf, flags) == -1)) { + __Pyx_ZeroBuffer(buf); + return -1; + } + if (unlikely(buf->ndim != nd)) { + PyErr_Format(PyExc_ValueError, + "Buffer has wrong number of dimensions (expected %d, got %d)", + nd, buf->ndim); + goto fail; + } + if (!cast) { + __Pyx_BufFmt_Context ctx; + __Pyx_BufFmt_Init(&ctx, stack, dtype); + if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail; + } + if (unlikely((size_t)buf->itemsize != dtype->size)) 
{ + PyErr_Format(PyExc_ValueError, + "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "d byte%s) does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "d byte%s)", + buf->itemsize, (buf->itemsize > 1) ? "s" : "", + dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? "s" : ""); + goto fail; + } + if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones; + return 0; +fail:; + __Pyx_SafeReleaseBuffer(buf); + return -1; +} + +/* BufferFallbackError */ + static void __Pyx_RaiseBufferFallbackError(void) { + PyErr_SetString(PyExc_ValueError, + "Buffer acquisition failed on assignment; and then reacquiring the old buffer failed too!"); +} + +/* PyErrFetchRestore */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} +#endif + +/* GetTopmostException */ + #if CYTHON_USE_EXC_INFO_STACK +static _PyErr_StackItem * +__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) +{ + _PyErr_StackItem *exc_info = tstate->exc_info; + while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && + exc_info->previous_item != NULL) + { + exc_info = exc_info->previous_item; + } + return exc_info; +} +#endif + +/* SaveResetException */ + #if CYTHON_FAST_THREAD_STATE +static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, 
PyObject **value, PyObject **tb) { + #if CYTHON_USE_EXC_INFO_STACK + _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); + *type = exc_info->exc_type; + *value = exc_info->exc_value; + *tb = exc_info->exc_traceback; + #else + *type = tstate->exc_type; + *value = tstate->exc_value; + *tb = tstate->exc_traceback; + #endif + Py_XINCREF(*type); + Py_XINCREF(*value); + Py_XINCREF(*tb); +} +static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + #if CYTHON_USE_EXC_INFO_STACK + _PyErr_StackItem *exc_info = tstate->exc_info; + tmp_type = exc_info->exc_type; + tmp_value = exc_info->exc_value; + tmp_tb = exc_info->exc_traceback; + exc_info->exc_type = type; + exc_info->exc_value = value; + exc_info->exc_traceback = tb; + #else + tmp_type = tstate->exc_type; + tmp_value = tstate->exc_value; + tmp_tb = tstate->exc_traceback; + tstate->exc_type = type; + tstate->exc_value = value; + tstate->exc_traceback = tb; + #endif + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} +#endif + +/* PyErrExceptionMatches */ + #if CYTHON_FAST_THREAD_STATE +static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { + Py_ssize_t i, n; + n = PyTuple_GET_SIZE(tuple); +#if PY_MAJOR_VERSION >= 3 + for (i=0; i<n; i++) { + if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; + } +#endif + for (i=0; i<n; i++) { + if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; + } + return 0; +} +static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { + PyObject *exc_type = tstate->curexc_type; + if (exc_type == err) return 1; + if (unlikely(!exc_type)) return 0; + if (unlikely(PyTuple_Check(err))) + return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); + return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); +} +#endif + +/* GetException */ + #if CYTHON_FAST_THREAD_STATE +static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) +#else +static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) +#endif +{ + PyObject *local_type, *local_value, *local_tb; +#if CYTHON_FAST_THREAD_STATE + PyObject *tmp_type, *tmp_value, *tmp_tb; + 
local_type = tstate->curexc_type; + local_value = tstate->curexc_value; + local_tb = tstate->curexc_traceback; + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +#else + PyErr_Fetch(&local_type, &local_value, &local_tb); +#endif + PyErr_NormalizeException(&local_type, &local_value, &local_tb); +#if CYTHON_FAST_THREAD_STATE + if (unlikely(tstate->curexc_type)) +#else + if (unlikely(PyErr_Occurred())) +#endif + goto bad; + #if PY_MAJOR_VERSION >= 3 + if (local_tb) { + if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) + goto bad; + } + #endif + Py_XINCREF(local_tb); + Py_XINCREF(local_type); + Py_XINCREF(local_value); + *type = local_type; + *value = local_value; + *tb = local_tb; +#if CYTHON_FAST_THREAD_STATE + #if CYTHON_USE_EXC_INFO_STACK + { + _PyErr_StackItem *exc_info = tstate->exc_info; + tmp_type = exc_info->exc_type; + tmp_value = exc_info->exc_value; + tmp_tb = exc_info->exc_traceback; + exc_info->exc_type = local_type; + exc_info->exc_value = local_value; + exc_info->exc_traceback = local_tb; + } + #else + tmp_type = tstate->exc_type; + tmp_value = tstate->exc_value; + tmp_tb = tstate->exc_traceback; + tstate->exc_type = local_type; + tstate->exc_value = local_value; + tstate->exc_traceback = local_tb; + #endif + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +#else + PyErr_SetExcInfo(local_type, local_value, local_tb); +#endif + return 0; +bad: + *type = 0; + *value = 0; + *tb = 0; + Py_XDECREF(local_type); + Py_XDECREF(local_value); + Py_XDECREF(local_tb); + return -1; +} + +/* RaiseException */ + #if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, + CYTHON_UNUSED PyObject *cause) { + __Pyx_PyThreadState_declare + Py_XINCREF(type); + if (!value || value == Py_None) + value = NULL; + else + Py_INCREF(value); + if (!tb || tb == Py_None) + tb = NULL; + else { + Py_INCREF(tb); + if (!PyTraceBack_Check(tb)) { + 
PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + } + if (PyType_Check(type)) { +#if CYTHON_COMPILING_IN_PYPY + if (!value) { + Py_INCREF(Py_None); + value = Py_None; + } +#endif + PyErr_NormalizeException(&type, &value, &tb); + } else { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + value = type; + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + } + __Pyx_PyThreadState_assign + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} +#else +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { + PyObject* owned_instance = NULL; + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (PyExceptionClass_Check(type)) { + PyObject *instance_class = NULL; + if (value && PyExceptionInstance_Check(value)) { + instance_class = (PyObject*) Py_TYPE(value); + if (instance_class != type) { + int is_subclass = PyObject_IsSubclass(instance_class, type); + if (!is_subclass) { + instance_class = NULL; + } else if (unlikely(is_subclass == -1)) { + goto bad; + } else { + type = instance_class; + } + } + } + if (!instance_class) { + PyObject *args; + if (!value) + args = PyTuple_New(0); + else if (PyTuple_Check(value)) { + Py_INCREF(value); + args 
= value; + } else + args = PyTuple_Pack(1, value); + if (!args) + goto bad; + owned_instance = PyObject_Call(type, args, NULL); + Py_DECREF(args); + if (!owned_instance) + goto bad; + value = owned_instance; + if (!PyExceptionInstance_Check(value)) { + PyErr_Format(PyExc_TypeError, + "calling %R should have returned an instance of " + "BaseException, not %R", + type, Py_TYPE(value)); + goto bad; + } + } + } else { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } + if (cause) { + PyObject *fixed_cause; + if (cause == Py_None) { + fixed_cause = NULL; + } else if (PyExceptionClass_Check(cause)) { + fixed_cause = PyObject_CallObject(cause, NULL); + if (fixed_cause == NULL) + goto bad; + } else if (PyExceptionInstance_Check(cause)) { + fixed_cause = cause; + Py_INCREF(fixed_cause); + } else { + PyErr_SetString(PyExc_TypeError, + "exception causes must derive from " + "BaseException"); + goto bad; + } + PyException_SetCause(value, fixed_cause); + } + PyErr_SetObject(type, value); + if (tb) { +#if CYTHON_COMPILING_IN_PYPY + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); + Py_INCREF(tb); + PyErr_Restore(tmp_type, tmp_value, tb); + Py_XDECREF(tmp_tb); +#else + PyThreadState *tstate = __Pyx_PyThreadState_Current; + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } +#endif + } +bad: + Py_XDECREF(owned_instance); + return; +} +#endif + +/* TypeImport */ + #ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(PyObject *module, const char *module_name, const char *class_name, + size_t size, enum __Pyx_ImportType_CheckSize check_size) +{ + PyObject *result = 0; + char warning[200]; + Py_ssize_t basicsize; +#ifdef Py_LIMITED_API + PyObject *py_basicsize; +#endif + result = PyObject_GetAttrString(module, class_name); + if (!result) + 
goto bad; + if (!PyType_Check(result)) { + PyErr_Format(PyExc_TypeError, + "%.200s.%.200s is not a type object", + module_name, class_name); + goto bad; + } +#ifndef Py_LIMITED_API + basicsize = ((PyTypeObject *)result)->tp_basicsize; +#else + py_basicsize = PyObject_GetAttrString(result, "__basicsize__"); + if (!py_basicsize) + goto bad; + basicsize = PyLong_AsSsize_t(py_basicsize); + Py_DECREF(py_basicsize); + py_basicsize = 0; + if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred()) + goto bad; +#endif + if ((size_t)basicsize < size) { + PyErr_Format(PyExc_ValueError, + "%.200s.%.200s size changed, may indicate binary incompatibility. " + "Expected %zd from C header, got %zd from PyObject", + module_name, class_name, size, basicsize); + goto bad; + } + if (check_size == __Pyx_ImportType_CheckSize_Error && (size_t)basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%.200s.%.200s size changed, may indicate binary incompatibility. " + "Expected %zd from C header, got %zd from PyObject", + module_name, class_name, size, basicsize); + goto bad; + } + else if (check_size == __Pyx_ImportType_CheckSize_Warn && (size_t)basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility. 
" + "Expected %zd from C header, got %zd from PyObject", + module_name, class_name, size, basicsize); + if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(result); + return NULL; +} +#endif + +/* Import */ + static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + #if PY_MAJOR_VERSION < 3 + PyObject *py_import; + py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); + if (!py_import) + goto bad; + #endif + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + { + #if PY_MAJOR_VERSION >= 3 + if (level == -1) { + if (strchr(__Pyx_MODULE_NAME, '.')) { + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, 1); + if (!module) { + if (!PyErr_ExceptionMatches(PyExc_ImportError)) + goto bad; + PyErr_Clear(); + } + } + level = 0; + } + #endif + if (!module) { + #if PY_MAJOR_VERSION < 3 + PyObject *py_level = PyInt_FromLong(level); + if (!py_level) + goto bad; + module = PyObject_CallFunctionObjArgs(py_import, + name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); + Py_DECREF(py_level); + #else + module = PyImport_ImportModuleLevelObject( + name, global_dict, empty_dict, list, level); + #endif + } + } +bad: + #if PY_MAJOR_VERSION < 3 + Py_XDECREF(py_import); + #endif + Py_XDECREF(empty_list); + Py_XDECREF(empty_dict); + return module; +} + +/* CLineInTraceback */ + #ifndef CYTHON_CLINE_IN_TRACEBACK +static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) { + PyObject *use_cline; + PyObject *ptype, *pvalue, *ptraceback; +#if CYTHON_COMPILING_IN_CPYTHON + PyObject **cython_runtime_dict; +#endif + 
if (unlikely(!__pyx_cython_runtime)) { + return c_line; + } + __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); +#if CYTHON_COMPILING_IN_CPYTHON + cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); + if (likely(cython_runtime_dict)) { + __PYX_PY_DICT_LOOKUP_IF_MODIFIED( + use_cline, *cython_runtime_dict, + __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) + } else +#endif + { + PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); + if (use_cline_obj) { + use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; + Py_DECREF(use_cline_obj); + } else { + PyErr_Clear(); + use_cline = NULL; + } + } + if (!use_cline) { + c_line = 0; + PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); + } + else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { + c_line = 0; + } + __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); + return c_line; +} +#endif + +/* CodeObjectCache */ + static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { + int start = 0, mid = 0, end = count - 1; + if (end >= 0 && code_line > entries[end].code_line) { + return count; + } + while (start < end) { + mid = start + (end - start) / 2; + if (code_line < entries[mid].code_line) { + end = mid; + } else if (code_line > entries[mid].code_line) { + start = mid + 1; + } else { + return mid; + } + } + if (code_line <= entries[mid].code_line) { + return mid; + } else { + return mid + 1; + } +} +static PyCodeObject *__pyx_find_code_object(int code_line) { + PyCodeObject* code_object; + int pos; + if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { + return NULL; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { + 
return NULL; + } + code_object = __pyx_code_cache.entries[pos].code_object; + Py_INCREF(code_object); + return code_object; +} +static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { + int pos, i; + __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; + if (unlikely(!code_line)) { + return; + } + if (unlikely(!entries)) { + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); + if (likely(entries)) { + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = 64; + __pyx_code_cache.count = 1; + entries[0].code_line = code_line; + entries[0].code_object = code_object; + Py_INCREF(code_object); + } + return; + } + pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); + if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { + PyCodeObject* tmp = entries[pos].code_object; + entries[pos].code_object = code_object; + Py_DECREF(tmp); + return; + } + if (__pyx_code_cache.count == __pyx_code_cache.max_count) { + int new_max = __pyx_code_cache.max_count + 64; + entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( + __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry)); + if (unlikely(!entries)) { + return; + } + __pyx_code_cache.entries = entries; + __pyx_code_cache.max_count = new_max; + } + for (i=__pyx_code_cache.count; i>pos; i--) { + entries[i] = entries[i-1]; + } + entries[pos].code_line = code_line; + entries[pos].code_object = code_object; + __pyx_code_cache.count++; + Py_INCREF(code_object); +} + +/* AddTraceback */ + #include "compile.h" +#include "frameobject.h" +#include "traceback.h" +static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( + const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + #if PY_MAJOR_VERSION < 3 + py_srcfile = 
PyString_FromString(filename); + #else + py_srcfile = PyUnicode_FromString(filename); + #endif + if (!py_srcfile) goto bad; + if (c_line) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_code = __Pyx_PyCode_New( + 0, + 0, + 0, + 0, + 0, + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + py_line, + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + Py_DECREF(py_srcfile); + Py_DECREF(py_funcname); + return py_code; +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + return NULL; +} +static void __Pyx_AddTraceback(const char *funcname, int c_line, + int py_line, const char *filename) { + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + PyThreadState *tstate = __Pyx_PyThreadState_Current; + if (c_line) { + c_line = __Pyx_CLineForTraceback(tstate, c_line); + } + py_code = __pyx_find_code_object(c_line ? -c_line : py_line); + if (!py_code) { + py_code = __Pyx_CreateCodeObjectForTraceback( + funcname, c_line, py_line, filename); + if (!py_code) goto bad; + __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); + } + py_frame = PyFrame_New( + tstate, /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + __pyx_d, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + __Pyx_PyFrame_SetLineNumber(py_frame, py_line); + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +#if PY_MAJOR_VERSION < 3 +static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { + if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); + PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); + return -1; +} +static void __Pyx_ReleaseBuffer(Py_buffer *view) { + PyObject *obj = view->obj; + if (!obj) return; + if (PyObject_CheckBuffer(obj)) { + PyBuffer_Release(view); + return; + } + if ((0)) {} + view->obj = NULL; + Py_DECREF(obj); +} +#endif + + + /* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { + const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(long) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(long) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(long) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(long), + little, !is_unsigned); + } +} + +/* CIntFromPyVerify */ + #define __PYX_VERIFY_RETURN_INT(target_type, func_type, 
func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) +#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ + __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) +#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ + {\ + func_type value = func_value;\ + if (sizeof(target_type) < sizeof(func_type)) {\ + if (unlikely(value != (func_type) (target_type) value)) {\ + func_type zero = 0;\ + if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ + return (target_type) -1;\ + if (is_unsigned && unlikely(value < zero))\ + goto raise_neg_overflow;\ + else\ + goto raise_overflow;\ + }\ + }\ + return (target_type) value;\ + } + +/* CIntToPy */ + static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { + const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; + if (is_unsigned) { + if (sizeof(int) < sizeof(long)) { + return PyInt_FromLong((long) value); + } else if (sizeof(int) <= sizeof(unsigned long)) { + return PyLong_FromUnsignedLong((unsigned long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); +#endif + } + } else { + if (sizeof(int) <= sizeof(long)) { + return PyInt_FromLong((long) value); +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + return PyLong_FromLongLong((PY_LONG_LONG) value); +#endif + } + } + { + int one = 1; int little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&value; + return _PyLong_FromByteArray(bytes, sizeof(int), + little, !is_unsigned); + } +} + +/* Declarations */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return ::std::complex< float >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_float_complex 
__pyx_t_float_complex_from_parts(float x, float y) { + return x + y*(__pyx_t_float_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + __pyx_t_float_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* Arithmetic */ + #if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + #if 1 + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + if (b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real); + } else if (fabsf(b.real) >= fabsf(b.imag)) { + if (b.real == 0 && b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.imag); + } else { + float r = b.imag / b.real; + float s = (float)(1.0) / (b.real + b.imag * r); + return __pyx_t_float_complex_from_parts( + (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); + } + } else { + float r = b.real / b.imag; + float s = (float)(1.0) / (b.imag + b.real * r); + return __pyx_t_float_complex_from_parts( + (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); + } + } + #else + static CYTHON_INLINE 
__pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + if (b.imag == 0) { + return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real); + } else { + float denom = b.real * b.real + b.imag * b.imag; + return __pyx_t_float_complex_from_parts( + (a.real * b.real + a.imag * b.imag) / denom, + (a.imag * b.real - a.real * b.imag) / denom); + } + } + #endif + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } + #if 1 + static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrtf(z.real*z.real + z.imag*z.imag); + #else + return hypotf(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + float denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + return __Pyx_c_prod_float(a, a); + case 3: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(z, a); + case 4: + z = __Pyx_c_prod_float(a, a); + return __Pyx_c_prod_float(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } else if (b.imag == 0) { + z.real = powf(a.real, b.real); + z.imag = 0; + return z; + } else if (a.real > 0) { + r = a.real; + theta = 0; + } else { + r = 
-a.real; + theta = atan2f(0.0, -1.0); + } + } else { + r = __Pyx_c_abs_float(a); + theta = atan2f(a.imag, a.real); + } + lnr = logf(r); + z_r = expf(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cosf(z_theta); + z.imag = z_r * sinf(z_theta); + return z; + } + #endif +#endif + +/* Declarations */ + #if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +/* Arithmetic */ + #if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + #if 1 + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + if (b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / 
b.real, a.imag / b.real); + } else if (fabs(b.real) >= fabs(b.imag)) { + if (b.real == 0 && b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag); + } else { + double r = b.imag / b.real; + double s = (double)(1.0) / (b.real + b.imag * r); + return __pyx_t_double_complex_from_parts( + (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); + } + } else { + double r = b.real / b.imag; + double s = (double)(1.0) / (b.imag + b.real * r); + return __pyx_t_double_complex_from_parts( + (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); + } + } + #else + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + if (b.imag == 0) { + return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); + } else { + double denom = b.real * b.real + b.imag * b.imag; + return __pyx_t_double_complex_from_parts( + (a.real * b.real + a.imag * b.imag) / denom, + (a.imag * b.real - a.real * b.imag) / denom); + } + } + #endif + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } + #if 1 + static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) { + #if !defined(HAVE_HYPOT) || defined(_MSC_VER) + return sqrt(z.real*z.real + z.imag*z.imag); + #else + return hypot(z.real, z.imag); + #endif + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double r, lnr, theta, z_r, z_theta; + if (b.imag == 0 && b.real == (int)b.real) { + if (b.real < 0) { + double 
denom = a.real * a.real + a.imag * a.imag; + a.real = a.real / denom; + a.imag = -a.imag / denom; + b.real = -b.real; + } + switch ((int)b.real) { + case 0: + z.real = 1; + z.imag = 0; + return z; + case 1: + return a; + case 2: + return __Pyx_c_prod_double(a, a); + case 3: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(z, a); + case 4: + z = __Pyx_c_prod_double(a, a); + return __Pyx_c_prod_double(z, z); + } + } + if (a.imag == 0) { + if (a.real == 0) { + return a; + } else if (b.imag == 0) { + z.real = pow(a.real, b.real); + z.imag = 0; + return z; + } else if (a.real > 0) { + r = a.real; + theta = 0; + } else { + r = -a.real; + theta = atan2(0.0, -1.0); + } + } else { + r = __Pyx_c_abs_double(a); + theta = atan2(a.imag, a.real); + } + lnr = log(r); + z_r = exp(lnr * b.real - theta * b.imag); + z_theta = theta * b.real + lnr * b.imag; + z.real = z_r * cos(z_theta); + z.imag = z_r * sin(z_theta); + return z; + } + #endif +#endif + +/* CIntFromPy */ + static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { + const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(int) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (int) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { + 
return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { + return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { + return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (int) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(int) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (int) 0; + case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) + case -2: + if (8 * 
sizeof(int) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(int) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(int) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned 
long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(int) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { + return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); + } + } + break; + } +#endif + if (sizeof(int) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + int val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (int) -1; + } + } else { + int val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (int) -1; + val = __Pyx_PyInt_As_int(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value 
too large to convert to int"); + return (int) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to int"); + return (int) -1; +} + +/* CIntFromPy */ + static CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *x) { + const size_t neg_one = (size_t) ((size_t) 0 - (size_t) 1), const_zero = (size_t) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(size_t) < sizeof(long)) { + __PYX_VERIFY_RETURN_INT(size_t, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (size_t) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (size_t) 0; + case 1: __PYX_VERIFY_RETURN_INT(size_t, digit, digits[0]) + case 2: + if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) >= 2 * PyLong_SHIFT) { + return (size_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) >= 3 * PyLong_SHIFT) { + return (size_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, unsigned long, 
(((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) >= 4 * PyLong_SHIFT) { + return (size_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (size_t) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(size_t) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(size_t) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (size_t) 0; + case -1: __PYX_VERIFY_RETURN_INT(size_t, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(size_t, digit, +digits[0]) + case -2: + if (8 * sizeof(size_t) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) { + return (size_t) (((size_t)-1)*(((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) - 
1 > 2 * PyLong_SHIFT) { + return (size_t) ((((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) { + return (size_t) (((size_t)-1)*(((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) { + return (size_t) ((((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) { + return (size_t) (((size_t)-1)*(((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | 
(unsigned long)digits[0]))) + } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) { + return (size_t) ((((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]))); + } + } + break; + } +#endif + if (sizeof(size_t) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(size_t, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(size_t) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(size_t, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + size_t val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (size_t) -1; + } + } else { + size_t val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (size_t) -1; + val = __Pyx_PyInt_As_size_t(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to size_t"); + return (size_t) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to size_t"); + return (size_t) -1; +} + +/* CIntFromPy */ + static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { + const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; + const int is_unsigned = neg_one > const_zero; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x))) { + if (sizeof(long) < sizeof(long)) { + 
__PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) + } else { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + goto raise_neg_overflow; + } + return (long) val; + } + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { + return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { + return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { + return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); + } + } + break; + } +#endif +#if CYTHON_COMPILING_IN_CPYTHON + if (unlikely(Py_SIZE(x) < 0)) { + goto raise_neg_overflow; + } +#else + { + int result = 
PyObject_RichCompareBool(x, Py_False, Py_LT); + if (unlikely(result < 0)) + return (long) -1; + if (unlikely(result == 1)) + goto raise_neg_overflow; + } +#endif + if (sizeof(long) <= sizeof(unsigned long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) +#endif + } + } else { +#if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)x)->ob_digit; + switch (Py_SIZE(x)) { + case 0: return (long) 0; + case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) + case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) + case -2: + if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 2: + if (8 * sizeof(long) > 1 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -3: + if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | 
(long)digits[0]))); + } + } + break; + case 3: + if (8 * sizeof(long) > 2 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case -4: + if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + case 4: + if (8 * sizeof(long) > 3 * PyLong_SHIFT) { + if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { + __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) + } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { + return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); + } + } + break; + } +#endif + if (sizeof(long) <= sizeof(long)) { + __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) +#ifdef HAVE_LONG_LONG + } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { + __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) +#endif + } + } + { +#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) + 
PyErr_SetString(PyExc_RuntimeError, + "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); +#else + long val; + PyObject *v = __Pyx_PyNumber_IntOrLong(x); + #if PY_MAJOR_VERSION < 3 + if (likely(v) && !PyLong_Check(v)) { + PyObject *tmp = v; + v = PyNumber_Long(tmp); + Py_DECREF(tmp); + } + #endif + if (likely(v)) { + int one = 1; int is_little = (int)*(unsigned char *)&one; + unsigned char *bytes = (unsigned char *)&val; + int ret = _PyLong_AsByteArray((PyLongObject *)v, + bytes, sizeof(val), + is_little, !is_unsigned); + Py_DECREF(v); + if (likely(!ret)) + return val; + } +#endif + return (long) -1; + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); + if (!tmp) return (long) -1; + val = __Pyx_PyInt_As_long(tmp); + Py_DECREF(tmp); + return val; + } +raise_overflow: + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to long"); + return (long) -1; +raise_neg_overflow: + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long) -1; +} + +/* FastTypeChecks */ + #if CYTHON_COMPILING_IN_CPYTHON +static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { + while (a) { + a = a->tp_base; + if (a == b) + return 1; + } + return b == &PyBaseObject_Type; +} +static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { + PyObject *mro; + if (a == b) return 1; + mro = a->tp_mro; + if (likely(mro)) { + Py_ssize_t i, n; + n = PyTuple_GET_SIZE(mro); + for (i = 0; i < n; i++) { + if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) + return 1; + } + return 0; + } + return __Pyx_InBases(a, b); +} +#if PY_MAJOR_VERSION == 2 +static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { + PyObject *exception, *value, *tb; + int res; + __Pyx_PyThreadState_declare + __Pyx_PyThreadState_assign + __Pyx_ErrFetch(&exception, &value, &tb); + res = exc_type1 ? 
PyObject_IsSubclass(err, exc_type1) : 0; + if (unlikely(res == -1)) { + PyErr_WriteUnraisable(err); + res = 0; + } + if (!res) { + res = PyObject_IsSubclass(err, exc_type2); + if (unlikely(res == -1)) { + PyErr_WriteUnraisable(err); + res = 0; + } + } + __Pyx_ErrRestore(exception, value, tb); + return res; +} +#else +static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { + int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; + if (!res) { + res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); + } + return res; +} +#endif +static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { + Py_ssize_t i, n; + assert(PyExceptionClass_Check(exc_type)); + n = PyTuple_GET_SIZE(tuple); +#if PY_MAJOR_VERSION >= 3 + for (i=0; ip) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + if (PyObject_Hash(*t->p) == -1) + return -1; + ++t; + } + return 0; +} + +static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { + return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); +} +static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { + Py_ssize_t ignore; + return __Pyx_PyObject_AsStringAndSize(o, &ignore); +} +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT +#if !CYTHON_PEP393_ENABLED 
+static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { + char* defenc_c; + PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); + if (!defenc) return NULL; + defenc_c = PyBytes_AS_STRING(defenc); +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + { + char* end = defenc_c + PyBytes_GET_SIZE(defenc); + char* c; + for (c = defenc_c; c < end; c++) { + if ((unsigned char) (*c) >= 128) { + PyUnicode_AsASCIIString(o); + return NULL; + } + } + } +#endif + *length = PyBytes_GET_SIZE(defenc); + return defenc_c; +} +#else +static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { + if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + if (likely(PyUnicode_IS_ASCII(o))) { + *length = PyUnicode_GET_LENGTH(o); + return PyUnicode_AsUTF8(o); + } else { + PyUnicode_AsASCIIString(o); + return NULL; + } +#else + return PyUnicode_AsUTF8AndSize(o, length); +#endif +} +#endif +#endif +static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { +#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT + if ( +#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII + __Pyx_sys_getdefaultencoding_not_ascii && +#endif + PyUnicode_Check(o)) { + return __Pyx_PyUnicode_AsStringAndSize(o, length); + } else +#endif +#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) + if (PyByteArray_Check(o)) { + *length = PyByteArray_GET_SIZE(o); + return PyByteArray_AS_STRING(o); + } else +#endif + { + char* result; + int r = PyBytes_AsStringAndSize(o, &result, length); + if (unlikely(r < 0)) { + return NULL; + } else { + return result; + } + } +} +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + int is_true = x == Py_True; + if (is_true | (x == Py_False) | (x == Py_None)) return is_true; + else return PyObject_IsTrue(x); +} +static 
CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { + int retval; + if (unlikely(!x)) return -1; + retval = __Pyx_PyObject_IsTrue(x); + Py_DECREF(x); + return retval; +} +static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { +#if PY_MAJOR_VERSION >= 3 + if (PyLong_Check(result)) { + if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, + "__int__ returned non-int (type %.200s). " + "The ability to return an instance of a strict subclass of int " + "is deprecated, and may be removed in a future version of Python.", + Py_TYPE(result)->tp_name)) { + Py_DECREF(result); + return NULL; + } + return result; + } +#endif + PyErr_Format(PyExc_TypeError, + "__%.4s__ returned non-%.4s (type %.200s)", + type_name, type_name, Py_TYPE(result)->tp_name); + Py_DECREF(result); + return NULL; +} +static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { +#if CYTHON_USE_TYPE_SLOTS + PyNumberMethods *m; +#endif + const char *name = NULL; + PyObject *res = NULL; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_Check(x) || PyLong_Check(x))) +#else + if (likely(PyLong_Check(x))) +#endif + return __Pyx_NewRef(x); +#if CYTHON_USE_TYPE_SLOTS + m = Py_TYPE(x)->tp_as_number; + #if PY_MAJOR_VERSION < 3 + if (m && m->nb_int) { + name = "int"; + res = m->nb_int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = m->nb_long(x); + } + #else + if (likely(m && m->nb_int)) { + name = "int"; + res = m->nb_int(x); + } + #endif +#else + if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { + res = PyNumber_Int(x); + } +#endif + if (likely(res)) { +#if PY_MAJOR_VERSION < 3 + if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { +#else + if (unlikely(!PyLong_CheckExact(res))) { +#endif + return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} +static CYTHON_INLINE Py_ssize_t 
__Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject *x; +#if PY_MAJOR_VERSION < 3 + if (likely(PyInt_CheckExact(b))) { + if (sizeof(Py_ssize_t) >= sizeof(long)) + return PyInt_AS_LONG(b); + else + return PyInt_AsSsize_t(b); + } +#endif + if (likely(PyLong_CheckExact(b))) { + #if CYTHON_USE_PYLONG_INTERNALS + const digit* digits = ((PyLongObject*)b)->ob_digit; + const Py_ssize_t size = Py_SIZE(b); + if (likely(__Pyx_sst_abs(size) <= 1)) { + ival = likely(size) ? digits[0] : 0; + if (size == -1) ival = -ival; + return ival; + } else { + switch (size) { + case 2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -2: + if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -3: + if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case 4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + case -4: + if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { + return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); + } + break; + } + } + #endif + return PyLong_AsSsize_t(b); + } + x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} +static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { 
+    return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);
+}
+static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {
+    return PyInt_FromSize_t(ival);
+}
+
+
+#endif /* Py_PYTHON_H */
diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/knn.pyx b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn.pyx
new file mode 100644
index 00000000..34b5547f
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn.pyx
@@ -0,0 +1,149 @@
+# distutils: language = c++
+# distutils: sources = knn.cxx
+
+import numpy as np
+cimport numpy as np
+import cython
+
+cdef extern from "knn_.h":
+    void cpp_knn(const float* points, const size_t npts, const size_t dim,
+                 const float* queries, const size_t nqueries,
+                 const size_t K, long long* indices)
+
+    void cpp_knn_omp(const float* points, const size_t npts, const size_t dim,
+                     const float* queries, const size_t nqueries,
+                     const size_t K, long long* indices)
+
+    void cpp_knn_batch(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+                       const float* queries, const size_t nqueries,
+                       const size_t K, long long* batch_indices)
+
+    void cpp_knn_batch_omp(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+                           const float* queries, const size_t nqueries,
+                           const size_t K, long long* batch_indices)
+
+    void cpp_knn_batch_distance_pick(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+                                     float* queries, const size_t nqueries,
+                                     const size_t K, long long* batch_indices)
+
+    void cpp_knn_batch_distance_pick_omp(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+                                         float* batch_queries, const size_t nqueries,
+                                         const size_t K, long long* batch_indices)
+
+def knn(pts, queries, K, omp=False):
+
+    # define shape parameters
+    cdef int npts
+    cdef int dim
+    cdef int K_cpp
+    cdef int nqueries
+
+    # define tables
+    cdef np.ndarray[np.float32_t, ndim=2] pts_cpp
+    cdef np.ndarray[np.float32_t, ndim=2] queries_cpp
+    cdef np.ndarray[np.int64_t, ndim=2] indices_cpp
+
+    # set shape values
+    npts = pts.shape[0]
+    nqueries = queries.shape[0]
+    dim = pts.shape[1]
+    K_cpp = K
+
+    # create indices tensor
+    indices = np.zeros((queries.shape[0], K), dtype=np.int64)
+
+    pts_cpp = np.ascontiguousarray(pts, dtype=np.float32)
+    queries_cpp = np.ascontiguousarray(queries, dtype=np.float32)
+    indices_cpp = indices
+
+    # nearest neighbor search
+    if omp:
+        cpp_knn_omp(<float*> pts_cpp.data, npts, dim,
+                    <float*> queries_cpp.data, nqueries,
+                    K_cpp, <long long*> indices_cpp.data)
+    else:
+        cpp_knn(<float*> pts_cpp.data, npts, dim,
+                <float*> queries_cpp.data, nqueries,
+                K_cpp, <long long*> indices_cpp.data)
+
+    return indices
+
+def knn_batch(pts, queries, K, omp=False):
+
+    # define shape parameters
+    cdef int batch_size
+    cdef int npts
+    cdef int nqueries
+    cdef int K_cpp
+    cdef int dim
+
+    # define tables
+    cdef np.ndarray[np.float32_t, ndim=3] pts_cpp
+    cdef np.ndarray[np.float32_t, ndim=3] queries_cpp
+    cdef np.ndarray[np.int64_t, ndim=3] indices_cpp
+
+    # set shape values
+    batch_size = pts.shape[0]
+    npts = pts.shape[1]
+    dim = pts.shape[2]
+    nqueries = queries.shape[1]
+    K_cpp = K
+
+    # create indices tensor
+    indices = np.zeros((pts.shape[0], queries.shape[1], K), dtype=np.int64)
+
+    pts_cpp = np.ascontiguousarray(pts, dtype=np.float32)
+    queries_cpp = np.ascontiguousarray(queries, dtype=np.float32)
+    indices_cpp = indices
+
+    # nearest neighbor search
+    if omp:
+        cpp_knn_batch_omp(<float*> pts_cpp.data, batch_size, npts, dim,
+                          <float*> queries_cpp.data, nqueries,
+                          K_cpp, <long long*> indices_cpp.data)
+    else:
+        cpp_knn_batch(<float*> pts_cpp.data, batch_size, npts, dim,
+                      <float*> queries_cpp.data, nqueries,
+                      K_cpp, <long long*> indices_cpp.data)
+
+    return indices
+
+def knn_batch_distance_pick(pts, nqueries, K, omp=False):
+
+    # define shape parameters
+    cdef int batch_size
+    cdef int npts
+    cdef int nqueries_cpp
+    cdef int K_cpp
+    cdef int dim
+
+    # define tables
+    cdef np.ndarray[np.float32_t, ndim=3] pts_cpp
+    cdef np.ndarray[np.float32_t, ndim=3] queries_cpp
+    cdef np.ndarray[np.int64_t, ndim=3] indices_cpp
+
+    # set shape values
+    batch_size = pts.shape[0]
+    npts = pts.shape[1]
+    dim = pts.shape[2]
+    nqueries_cpp = nqueries
+    K_cpp = K
+
+    # create indices tensor
+    indices = np.zeros((pts.shape[0], nqueries, K), dtype=np.longlong)
+    queries = np.zeros((pts.shape[0], nqueries, dim), dtype=np.float32)
+
+    pts_cpp = np.ascontiguousarray(pts, dtype=np.float32)
+    queries_cpp = np.ascontiguousarray(queries, dtype=np.float32)
+    indices_cpp = indices
+
+    if omp:
+        cpp_knn_batch_distance_pick_omp(<float*> pts_cpp.data, batch_size, npts, dim,
+                                        <float*> queries_cpp.data, nqueries,
+                                        K_cpp, <long long*> indices_cpp.data)
+    else:
+        cpp_knn_batch_distance_pick(<float*> pts_cpp.data, batch_size, npts, dim,
+                                    <float*> queries_cpp.data, nqueries,
+                                    K_cpp, <long long*> indices_cpp.data)
+
+    return indices, queries
\ No newline at end of file
diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/knn_.cxx b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn_.cxx
new file mode 100644
index 00000000..60cbd312
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn_.cxx
@@ -0,0 +1,271 @@
+
+#include "knn_.h"
+#include "nanoflann.hpp"
+using namespace nanoflann;
+
+#include "KDTreeTableAdaptor.h"
+
+#include <cstdio>
+#include <cstdlib>
+#include <ctime>
+#include <cmath>
+
+#include <iostream>
+#include <vector>
+#include <algorithm>
+#include <numeric>
+
+using namespace std;
+
+
+
+void cpp_knn(const float* points, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* indices){
+
+	// create the kdtree
+	typedef KDTreeTableAdaptor< float, float> KDTree;
+	KDTree mat_index(npts, dim, points, 10);
+	mat_index.index->buildIndex();
+
+	std::vector<float> out_dists_sqr(K);
+	std::vector<size_t> out_ids(K);
+
+	// iterate over the points
+	for(size_t i=0; i<nqueries; i++){
+		nanoflann::KNNResultSet<float> resultSet(K);
+		resultSet.init(&out_ids[0], &out_dists_sqr[0] );
+		mat_index.index->findNeighbors(resultSet, &queries[i*dim], nanoflann::SearchParams(10));
+		for(size_t j=0; j<K; j++){
+			indices[i*K+j] = (long long)out_ids[j];
+		}
+	}
+}
+
+void cpp_knn_omp(const float* points, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* indices){
+
+	// create the kdtree
+	typedef KDTreeTableAdaptor< float, float> KDTree;
+	KDTree mat_index(npts, dim, points,
10);
+	mat_index.index->buildIndex();
+
+
+	// iterate over the points
+# pragma omp parallel for
+	for(size_t i=0; i<nqueries; i++){
+		std::vector<size_t> out_ids(K);
+		std::vector<float> out_dists_sqr(K);
+
+		nanoflann::KNNResultSet<float> resultSet(K);
+		resultSet.init(&out_ids[0], &out_dists_sqr[0] );
+		mat_index.index->findNeighbors(resultSet, &queries[i*dim], nanoflann::SearchParams(10));
+		for(size_t j=0; j<K; j++){
+			indices[i*K+j] = (long long)out_ids[j];
+		}
+	}
+}
+
+void cpp_knn_batch(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* batch_indices){
+
+	// iterate over the batch elements
+	for(size_t bid=0; bid < batch_size; bid++){
+
+		// pointer to the current point cloud
+		const float* points = &batch_data[bid*npts*dim];
+
+		// create the kdtree
+		typedef KDTreeTableAdaptor< float, float> KDTree;
+		KDTree mat_index(npts, dim, points, 10);
+
+		mat_index.index->buildIndex();
+
+		std::vector<float> out_dists_sqr(K);
+		std::vector<size_t> out_ids(K);
+
+		// iterate over the points
+		for(size_t i=0; i<nqueries; i++){
+			nanoflann::KNNResultSet<float> resultSet(K);
+			resultSet.init(&out_ids[0], &out_dists_sqr[0] );
+			mat_index.index->findNeighbors(resultSet, &queries[bid*nqueries*dim + i*dim], nanoflann::SearchParams(10));
+			for(size_t j=0; j<K; j++){
+				batch_indices[bid*nqueries*K + i*K + j] = (long long)out_ids[j];
+			}
+		}
+	}
+}
+
+void cpp_knn_batch_omp(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* batch_indices){
+
+	// iterate over the batch elements
+# pragma omp parallel for
+	for(size_t bid=0; bid < batch_size; bid++){
+
+		// pointer to the current point cloud
+		const float* points = &batch_data[bid*npts*dim];
+
+		// create the kdtree
+		typedef KDTreeTableAdaptor< float, float> KDTree;
+		KDTree mat_index(npts, dim, points, 10);
+
+		mat_index.index->buildIndex();
+
+		std::vector<float> out_dists_sqr(K);
+		std::vector<size_t> out_ids(K);
+
+		// iterate over the points
+		for(size_t i=0; i<nqueries; i++){
+			nanoflann::KNNResultSet<float> resultSet(K);
+			resultSet.init(&out_ids[0], &out_dists_sqr[0] );
+			mat_index.index->findNeighbors(resultSet, &queries[bid*nqueries*dim + i*dim], nanoflann::SearchParams(10));
+			for(size_t j=0; j<K; j++){
+				batch_indices[bid*nqueries*K + i*K + j] = (long long)out_ids[j];
+			}
+		}
+	}
+}
+
+void cpp_knn_batch_distance_pick(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	float* queries, const size_t nqueries,
+	const size_t K, long long* batch_indices){
+
+	for(size_t bid=0; bid < batch_size; bid++){
+
+		// pointer to the current point cloud
+		const float* points = &batch_data[bid*npts*dim];
+
+		// create the kdtree
+		typedef KDTreeTableAdaptor< float, float> KDTree;
+		KDTree tree(npts, dim, points, 10);
+		tree.index->buildIndex();
+
+		vector<int> used(npts, 0);
+		int current_id = 0;
+		for(size_t ptid=0; ptid<nqueries; ptid++){
+
+			// pick one of the least-used points
+			vector<size_t> possible_ids;
+			while(possible_ids.size() == 0){
+				for(size_t i=0; i<npts; i++){
+					if(used[i] < current_id){
+						possible_ids.push_back(i);
+					}
+				}
+				current_id++;
+			}
+			size_t rand_id = possible_ids[rand() % possible_ids.size()];
+
+			// create the query point
+			std::vector<float> query(3);
+			for(size_t i=0; i<dim; i++){
+				query[i] = points[rand_id*dim + i];
+			}
+
+			// search the K nearest neighbors
+			std::vector<float> dists(K);
+			std::vector<size_t> ids(K);
+			nanoflann::KNNResultSet<float> resultSet(K);
+			resultSet.init(&ids[0], &dists[0] );
+			tree.index->findNeighbors(resultSet, &query[0], nanoflann::SearchParams(10));
+
+			// export the neighborhood and update the usage table
+			for(size_t i=0; i<K; i++){
+				used[ids[i]]++;
+				batch_indices[bid*nqueries*K + ptid*K + i] = (long long)ids[i];
+			}
+			for(size_t i=0; i<dim; i++){
+				queries[bid*nqueries*dim + ptid*dim + i] = query[i];
+			}
+		}
+	}
+}
+
+void cpp_knn_batch_distance_pick_omp(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	float* batch_queries, const size_t nqueries,
+	const size_t K, long long* batch_indices){
+
+# pragma omp parallel for
+	for(size_t bid=0; bid < batch_size; bid++){
+
+		// pointer to the current point cloud
+		const float* points = &batch_data[bid*npts*dim];
+
+		// create the kdtree
+		typedef KDTreeTableAdaptor< float, float> KDTree;
+		KDTree tree(npts, dim, points, 10);
+		tree.index->buildIndex();
+
+		vector<int> used(npts, 0);
+		int current_id = 0;
+		for(size_t ptid=0; ptid<nqueries; ptid++){
+
+			// pick one of the least-used points
+			vector<size_t> possible_ids;
+			while(possible_ids.size() == 0){
+				for(size_t i=0; i<npts; i++){
+					if(used[i] < current_id){
+						possible_ids.push_back(i);
+					}
+				}
+				current_id++;
+			}
+			size_t rand_id = possible_ids[rand() % possible_ids.size()];
+
+			// create the query point
+			std::vector<float> query(3);
+			for(size_t i=0; i<dim; i++){
+				query[i] = points[rand_id*dim + i];
+			}
+
+			// search the K nearest neighbors
+			std::vector<float> dists(K);
+			std::vector<size_t> ids(K);
+			nanoflann::KNNResultSet<float> resultSet(K);
+			resultSet.init(&ids[0], &dists[0] );
+			tree.index->findNeighbors(resultSet, &query[0], nanoflann::SearchParams(10));
+
+			// export the neighborhood and update the usage table
+			for(size_t i=0; i<K; i++){
+				used[ids[i]]++;
+				batch_indices[bid*nqueries*K + ptid*K + i] = (long long)ids[i];
+			}
+			for(size_t i=0; i<dim; i++){
+				batch_queries[bid*nqueries*dim + ptid*dim + i] = query[i];
+			}
+		}
+	}
+}
diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/knn_.h b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn_.h
new file mode 100644
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/knn_.h
+#include <cstddef>
+
+void cpp_knn(const float* points, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* indices);
+
+void cpp_knn_omp(const float* points, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* indices);
+
+
+void cpp_knn_batch(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* batch_indices);
+
+void cpp_knn_batch_omp(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	const float* queries, const size_t nqueries,
+	const size_t K, long long* batch_indices);
+
+void cpp_knn_batch_distance_pick(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	float* queries, const size_t nqueries,
+	const size_t K, long long* batch_indices);
+
+void cpp_knn_batch_distance_pick_omp(const float* batch_data, const size_t batch_size, const size_t npts, const size_t dim,
+	float* batch_queries, const size_t nqueries,
+	const size_t K, long long* batch_indices);
\ No newline at end of file
diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/KNN_NanoFLANN-0.0.0-py3.6.egg-info b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/KNN_NanoFLANN-0.0.0-py3.6.egg-info
new file mode 100644
index 00000000..3d8f9397
--- /dev/null
+++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/KNN_NanoFLANN-0.0.0-py3.6.egg-info
@@ -0,0 +1,10 @@
+Metadata-Version: 1.0
+Name: KNN NanoFLANN
+Version: 0.0.0
+Summary: UNKNOWN
+Home-page: UNKNOWN
+Author: UNKNOWN
+Author-email: UNKNOWN
+License: UNKNOWN
+Description: UNKNOWN
+Platform: UNKNOWN
diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/KNN_NanoFLANN-0.0.0-py3.7.egg-info
b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/KNN_NanoFLANN-0.0.0-py3.7.egg-info new file mode 100644 index 00000000..3d8f9397 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/KNN_NanoFLANN-0.0.0-py3.7.egg-info @@ -0,0 +1,10 @@ +Metadata-Version: 1.0 +Name: KNN NanoFLANN +Version: 0.0.0 +Summary: UNKNOWN +Home-page: UNKNOWN +Author: UNKNOWN +Author-email: UNKNOWN +License: UNKNOWN +Description: UNKNOWN +Platform: UNKNOWN diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cp36-win_amd64.pyd b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cp36-win_amd64.pyd new file mode 100644 index 00000000..b0b0d6cc Binary files /dev/null and b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cp36-win_amd64.pyd differ diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cp37-win_amd64.pyd b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cp37-win_amd64.pyd new file mode 100644 index 00000000..a747c33c Binary files /dev/null and b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cp37-win_amd64.pyd differ diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cpython-36m-x86_64-linux-gnu.so b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cpython-36m-x86_64-linux-gnu.so new file mode 100644 index 00000000..23be4ce5 Binary files /dev/null and b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cpython-36m-x86_64-linux-gnu.so differ diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cpython-37m-x86_64-linux-gnu.so b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cpython-37m-x86_64-linux-gnu.so new file 
mode 100644 index 00000000..1bc6540c Binary files /dev/null and b/competing_methods/my_RandLANet/utils/nearest_neighbors/lib/python/nearest_neighbors.cpython-37m-x86_64-linux-gnu.so differ diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/nanoflann.hpp b/competing_methods/my_RandLANet/utils/nearest_neighbors/nanoflann.hpp new file mode 100644 index 00000000..45c185bb --- /dev/null +++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/nanoflann.hpp @@ -0,0 +1,1990 @@ +/*********************************************************************** + * Software License Agreement (BSD License) + * + * Copyright 2008-2009 Marius Muja (mariusm@cs.ubc.ca). All rights reserved. + * Copyright 2008-2009 David G. Lowe (lowe@cs.ubc.ca). All rights reserved. + * Copyright 2011-2016 Jose Luis Blanco (joseluisblancoc@gmail.com). + * All rights reserved. + * + * THE BSD LICENSE + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR + * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *************************************************************************/
+
+/** \mainpage nanoflann C++ API documentation
+ *  nanoflann is a C++ header-only library for building KD-Trees, mostly
+ *  optimized for 2D or 3D point clouds.
+ *
+ *  nanoflann does not require compiling or installing, just an
+ *  #include <nanoflann.hpp> in your code.
+ *
+ *  See:
+ *   - <a href="modules.html">C++ API organized by modules</a>
+ *   - <a href="https://github.com/jlblancoc/nanoflann">Online README</a>
+ *   - <a href="http://jlblancoc.github.io/nanoflann/">Doxygen documentation</a>
+ */
+
+#ifndef  NANOFLANN_HPP_
+#define  NANOFLANN_HPP_
+
+#include <vector>
+#include <cassert>
+#include <algorithm>
+#include <stdexcept>
+#include <cstdio>  // for fwrite()
+#define _USE_MATH_DEFINES // Required by MSVC to define M_PI,etc. in <cmath>
+#include <cmath>   // for abs()
+#include <cstdlib> // for abs()
+#include <limits>
+
+// Avoid conflicting declaration of min/max macros in windows headers
+#if !defined(NOMINMAX) && (defined(_WIN32) || defined(_WIN32_) || defined(WIN32) || defined(_WIN64))
+# define NOMINMAX
+# ifdef max
+#  undef   max
+#  undef   min
+# endif
+#endif
+
+namespace nanoflann
+{
+/** @addtogroup nanoflann_grp nanoflann C++ library for ANN
+  *  @{ */
+
+	/** Library version: 0xMmP (M=Major,m=minor,P=patch) */
+	#define NANOFLANN_VERSION 0x123
+
+	/** @addtogroup result_sets_grp Result set classes
+	  *  @{ */
+	template <typename DistanceType, typename IndexType = size_t, typename CountType = size_t>
+	class KNNResultSet
+	{
+		IndexType * indices;
+		DistanceType* dists;
+		CountType capacity;
+		CountType count;
+
+	public:
+		inline KNNResultSet(CountType capacity_) : indices(0), dists(0), capacity(capacity_), count(0)
+		{
+		}
+
+		inline void init(IndexType* indices_, DistanceType* dists_)
+		{
+			indices = indices_;
+			dists = dists_;
+			count = 0;
+			if (capacity)
+				dists[capacity-1] = (std::numeric_limits<DistanceType>::max)();
+		}
+
+		inline CountType size() const
+		{
+			return count;
+		}
+
+		inline bool full() const
+		{
+			return count == capacity;
+		}
+
+
+		/**
+		 * Called during search to add an element matching the criteria.
+		 * @return true if the search should be continued, false if the results are sufficient
+		 */
+		inline bool addPoint(DistanceType dist, IndexType index)
+		{
+			CountType i;
+			for (i = count; i > 0; --i) {
+#ifdef NANOFLANN_FIRST_MATCH   // If defined and two points have the same distance, the one with the lowest-index will be returned first.
+				if ( (dists[i-1] > dist) || ((dist == dists[i-1]) && (indices[i-1] > index)) ) {
+#else
+				if (dists[i-1] > dist) {
+#endif
+					if (i < capacity) {
+						dists[i] = dists[i-1];
+						indices[i] = indices[i-1];
+					}
+				}
+				else break;
+			}
+			if (i < capacity) {
+				dists[i] = dist;
+				indices[i] = index;
+			}
+			if (count < capacity) count++;
+
+			// tell caller that the search shall continue
+			return true;
+		}
+
+		inline DistanceType worstDist() const
+		{
+			return dists[capacity-1];
+		}
+	};
+
+	/** operator "<" for std::sort() */
+	struct IndexDist_Sorter
+	{
+		/** PairType will be typically: std::pair<IndexType,DistanceType> */
+		template <typename PairType>
+		inline bool operator()(const PairType &p1, const PairType &p2) const {
+			return p1.second < p2.second;
+		}
+	};
+
+	/**
+	 * A result-set class used when performing a radius based search.
+	 */
+	template <typename DistanceType, typename IndexType = size_t>
+	class RadiusResultSet
+	{
+	public:
+		const DistanceType radius;
+
+		std::vector<std::pair<IndexType, DistanceType> > &m_indices_dists;
+
+		inline RadiusResultSet(DistanceType radius_, std::vector<std::pair<IndexType, DistanceType> > &indices_dists) : radius(radius_), m_indices_dists(indices_dists)
+		{
+			init();
+		}
+
+		inline void init() { clear(); }
+		inline void clear() { m_indices_dists.clear(); }
+
+		inline size_t size() const { return m_indices_dists.size(); }
+
+		inline bool full() const { return true; }
+
+		/**
+		 * Called during search to add an element matching the criteria.
+		 * @return true if the search should be continued, false if the results are sufficient
+		 */
+		inline bool addPoint(DistanceType dist, IndexType index)
+		{
+			if (dist < radius)
+				m_indices_dists.push_back(std::make_pair(index, dist));
+			return true;
+		}
+
+		inline DistanceType worstDist() const { return radius; }
+
+		/**
+		 * Find the worst result (furtherest neighbor) without copying or sorting
+		 * Pre-conditions: size() > 0
+		 */
+		std::pair<IndexType, DistanceType> worst_item() const
+		{
+			if (m_indices_dists.empty()) throw std::runtime_error("Cannot invoke RadiusResultSet::worst_item() on an empty list of results.");
+			typedef typename std::vector<std::pair<IndexType, DistanceType> >::const_iterator DistIt;
+			DistIt it = std::max_element(m_indices_dists.begin(), m_indices_dists.end(), IndexDist_Sorter());
+			return *it;
+		}
+	};
+
+
+	/** @} */
+
+
+	/** @addtogroup loadsave_grp Load/save auxiliary functions
+	  * @{ */
+	template <typename T>
+	void save_value(FILE* stream, const T& value, size_t count = 1)
+	{
+		fwrite(&value, sizeof(value), count, stream);
+	}
+
+	template <typename T>
+	void save_value(FILE* stream, const std::vector<T>& value)
+	{
+		size_t size = value.size();
+		fwrite(&size, sizeof(size_t), 1, stream);
+		fwrite(&value[0], sizeof(T), size, stream);
+	}
+
+	template <typename T>
+	void load_value(FILE* stream, T& value, size_t count = 1)
+	{
+		size_t read_cnt = fread(&value, sizeof(value), count, stream);
+		if (read_cnt != count) {
+			throw std::runtime_error("Cannot read from file");
+		}
+	}
+
+
+	template <typename T>
+	void load_value(FILE* stream, std::vector<T>& value)
+	{
+		size_t size;
+		size_t read_cnt = fread(&size, sizeof(size_t), 1, stream);
+		if (read_cnt != 1) {
+			throw std::runtime_error("Cannot read from file");
+		}
+		value.resize(size);
+		read_cnt = fread(&value[0], sizeof(T), size, stream);
+		if (read_cnt != size) {
+			throw std::runtime_error("Cannot read from file");
+		}
+	}
+	/** @} */
+
+
+	/** @addtogroup metric_grp Metric (distance) classes
+	  * @{ */
+
+	struct Metric
+	{
+	};
+
+	/** Manhattan distance functor (generic version, optimized
for high-dimensionality data sets).
+	  * Corresponding distance traits: nanoflann::metric_L1
+	  * \tparam T Type of the elements (e.g. double, float, uint8_t)
+	  * \tparam _DistanceType Type of distance variables (must be signed) (e.g. float, double, int64_t)
+	  */
+	template<class T, class DataSource, typename _DistanceType = T>
+	struct L1_Adaptor
+	{
+		typedef T ElementType;
+		typedef _DistanceType DistanceType;
+
+		const DataSource &data_source;
+
+		L1_Adaptor(const DataSource &_data_source) : data_source(_data_source) { }
+
+		inline DistanceType evalMetric(const T* a, const size_t b_idx, size_t size, DistanceType worst_dist = -1) const
+		{
+			DistanceType result = DistanceType();
+			const T* last = a + size;
+			const T* lastgroup = last - 3;
+			size_t d = 0;
+
+			/* Process 4 items with each loop for efficiency. */
+			while (a < lastgroup) {
+				const DistanceType diff0 = std::abs(a[0] - data_source.kdtree_get_pt(b_idx,d++));
+				const DistanceType diff1 = std::abs(a[1] - data_source.kdtree_get_pt(b_idx,d++));
+				const DistanceType diff2 = std::abs(a[2] - data_source.kdtree_get_pt(b_idx,d++));
+				const DistanceType diff3 = std::abs(a[3] - data_source.kdtree_get_pt(b_idx,d++));
+				result += diff0 + diff1 + diff2 + diff3;
+				a += 4;
+				if ((worst_dist > 0) && (result > worst_dist)) {
+					return result;
+				}
+			}
+			/* Process last 0-3 components.  Not needed for standard vector lengths. */
+			while (a < last) {
+				result += std::abs( *a++ - data_source.kdtree_get_pt(b_idx, d++) );
+			}
+			return result;
+		}
+
+		template <typename U, typename V>
+		inline DistanceType accum_dist(const U a, const V b, int ) const
+		{
+			return std::abs(a-b);
+		}
+	};
+
+	/** Squared Euclidean distance functor (generic version, optimized for high-dimensionality data sets).
+	  * Corresponding distance traits: nanoflann::metric_L2
+	  * \tparam T Type of the elements (e.g. double, float, uint8_t)
+	  * \tparam _DistanceType Type of distance variables (must be signed) (e.g. float, double, int64_t)
+	  */
+	template<class T, class DataSource, typename _DistanceType = T>
+	struct L2_Adaptor
+	{
+		typedef T ElementType;
+		typedef _DistanceType DistanceType;
+
+		const DataSource &data_source;
+
+		L2_Adaptor(const DataSource &_data_source) : data_source(_data_source) { }
+
+		inline DistanceType evalMetric(const T* a, const size_t b_idx, size_t size, DistanceType worst_dist = -1) const
+		{
+			DistanceType result = DistanceType();
+			const T* last = a + size;
+			const T* lastgroup = last - 3;
+			size_t d = 0;
+
+			/* Process 4 items with each loop for efficiency. */
+			while (a < lastgroup) {
+				const DistanceType diff0 = a[0] - data_source.kdtree_get_pt(b_idx,d++);
+				const DistanceType diff1 = a[1] - data_source.kdtree_get_pt(b_idx,d++);
+				const DistanceType diff2 = a[2] - data_source.kdtree_get_pt(b_idx,d++);
+				const DistanceType diff3 = a[3] - data_source.kdtree_get_pt(b_idx,d++);
+				result += diff0 * diff0 + diff1 * diff1 + diff2 * diff2 + diff3 * diff3;
+				a += 4;
+				if ((worst_dist > 0) && (result > worst_dist)) {
+					return result;
+				}
+			}
+			/* Process last 0-3 components.  Not needed for standard vector lengths. */
+			while (a < last) {
+				const DistanceType diff0 = *a++ - data_source.kdtree_get_pt(b_idx, d++);
+				result += diff0 * diff0;
+			}
+			return result;
+		}
+
+		template <typename U, typename V>
+		inline DistanceType accum_dist(const U a, const V b, int ) const
+		{
+			return (a - b) * (a - b);
+		}
+	};
+
+	/** Squared Euclidean (L2) distance functor (suitable for low-dimensionality datasets, like 2D or 3D point clouds)
+	  * Corresponding distance traits: nanoflann::metric_L2_Simple
+	  * \tparam T Type of the elements (e.g. double, float, uint8_t)
+	  * \tparam _DistanceType Type of distance variables (must be signed) (e.g. float, double, int64_t)
+	  */
+	template<class T, class DataSource, typename _DistanceType = T>
+	struct L2_Simple_Adaptor
+	{
+		typedef T ElementType;
+		typedef _DistanceType DistanceType;
+
+		const DataSource &data_source;
+
+		L2_Simple_Adaptor(const DataSource &_data_source) : data_source(_data_source) { }
+
+		inline DistanceType evalMetric(const T* a, const size_t b_idx, size_t size) const {
+			DistanceType result = DistanceType();
+			for (size_t i = 0; i < size; ++i) {
+				const DistanceType diff = a[i] - data_source.kdtree_get_pt(b_idx, i);
+				result += diff * diff;
+			}
+			return result;
+		}
+
+		template <typename U, typename V>
+		inline DistanceType accum_dist(const U a, const V b, int ) const
+		{
+			return (a - b) * (a - b);
+		}
+	};
+
+	/** SO2 distance functor
+	  *  Corresponding distance traits: nanoflann::metric_SO2
+	  * \tparam T Type of the elements (e.g. double, float)
+	  * \tparam _DistanceType Type of distance variables (must be signed) (e.g. float, double)
+	  * orientation is constrained to be in [-pi, pi]
+	  */
+	template<class T, class DataSource, typename _DistanceType = T>
+	struct SO2_Adaptor
+	{
+		typedef T ElementType;
+		typedef _DistanceType DistanceType;
+
+		const DataSource &data_source;
+
+		SO2_Adaptor(const DataSource &_data_source) : data_source(_data_source) { }
+
+		inline DistanceType evalMetric(const T* a, const size_t b_idx, size_t size) const {
+			return accum_dist(a[size-1], data_source.kdtree_get_pt(b_idx, size - 1) , size - 1);
+		}
+
+		template <typename U, typename V>
+		inline DistanceType accum_dist(const U a, const V b, int ) const
+		{
+			DistanceType result = DistanceType();
+			result = b - a;
+			if (result > M_PI)
+				result -= 2. * M_PI;
+			else if (result < -M_PI)
+				result += 2. * M_PI;
+			return result;
+		}
+	};
+
+	/** SO3 distance functor (Uses L2_Simple)
+	  *  Corresponding distance traits: nanoflann::metric_SO3
+	  * \tparam T Type of the elements (e.g. double, float)
+	  * \tparam _DistanceType Type of distance variables (must be signed) (e.g. float, double)
+	  */
+	template<class T, class DataSource, typename _DistanceType = T>
+	struct SO3_Adaptor
+	{
+		typedef T ElementType;
+		typedef _DistanceType DistanceType;
+
+		L2_Simple_Adaptor<T, DataSource> distance_L2_Simple;
+
+		SO3_Adaptor(const DataSource &_data_source) : distance_L2_Simple(_data_source) { }
+
+		inline DistanceType evalMetric(const T* a, const size_t b_idx, size_t size) const {
+			return distance_L2_Simple.evalMetric(a, b_idx, size);
+		}
+
+		template <typename U, typename V>
+		inline DistanceType accum_dist(const U a, const V b, int idx) const
+		{
+			return distance_L2_Simple.accum_dist(a, b, idx);
+		}
+	};
+
+	/** Metaprogramming helper traits class for the L1 (Manhattan) metric */
+	struct metric_L1 : public Metric
+	{
+		template<class T, class DataSource>
+		struct traits {
+			typedef L1_Adaptor<T, DataSource> distance_t;
+		};
+	};
+	/** Metaprogramming helper traits class for the L2 (Euclidean) metric */
+	struct metric_L2 : public Metric
+	{
+		template<class T, class DataSource>
+		struct traits {
+			typedef L2_Adaptor<T, DataSource> distance_t;
+		};
+	};
+	/** Metaprogramming helper traits class for the L2_simple (Euclidean) metric */
+	struct metric_L2_Simple : public Metric
+	{
+		template<class T, class DataSource>
+		struct traits {
+			typedef L2_Simple_Adaptor<T, DataSource> distance_t;
+		};
+	};
+	/** Metaprogramming helper traits class for the SO3_InnerProdQuat metric */
+	struct metric_SO2 : public Metric
+	{
+		template<class T, class DataSource>
+		struct traits {
+			typedef SO2_Adaptor<T, DataSource> distance_t;
+		};
+	};
+	/** Metaprogramming helper traits class for the SO3_InnerProdQuat metric */
+	struct metric_SO3 : public Metric
+	{
+		template<class T, class DataSource>
+		struct traits {
+			typedef SO3_Adaptor<T, DataSource> distance_t;
+		};
+	};
+
+	/** @} */
+
+	/** @addtogroup param_grp Parameter structs
+	  * @{ */
+
+	/**  Parameters (see README.md) */
+	struct KDTreeSingleIndexAdaptorParams
+	{
+		KDTreeSingleIndexAdaptorParams(size_t _leaf_max_size = 10) :
+			leaf_max_size(_leaf_max_size)
+		{}
+
+		size_t leaf_max_size;
+	};
+
+	/** Search options for KDTreeSingleIndexAdaptor::findNeighbors() */
+	struct SearchParams
+	{
+		/** Note: The first argument (checks_IGNORED_) is ignored, but kept for compatibility with the FLANN
interface */ + SearchParams(int checks_IGNORED_ = 32, float eps_ = 0, bool sorted_ = true ) : + checks(checks_IGNORED_), eps(eps_), sorted(sorted_) {} + + int checks; //!< Ignored parameter (Kept for compatibility with the FLANN interface). + float eps; //!< search for eps-approximate neighbours (default: 0) + bool sorted; //!< only for radius search, require neighbours sorted by distance (default: true) + }; + /** @} */ + + + /** @addtogroup memalloc_grp Memory allocation + * @{ */ + + /** + * Allocates (using C's malloc) a generic type T. + * + * Params: + * count = number of instances to allocate. + * Returns: pointer (of type T*) to memory buffer + */ + template + inline T* allocate(size_t count = 1) + { + T* mem = static_cast( ::malloc(sizeof(T)*count)); + return mem; + } + + + /** + * Pooled storage allocator + * + * The following routines allow for the efficient allocation of storage in + * small chunks from a specified pool. Rather than allowing each structure + * to be freed individually, an entire pool of storage is freed at once. + * This method has two advantages over just using malloc() and free(). First, + * it is far more efficient for allocating small objects, as there is + * no overhead for remembering all the information needed to free each + * object or consolidating fragmented memory. Second, the decision about + * how long to keep an object is made at the time of allocation, and there + * is no need to track down all the objects to free them. + * + */ + + const size_t WORDSIZE = 16; + const size_t BLOCKSIZE = 8192; + + class PooledAllocator + { + /* We maintain memory alignment to word boundaries by requiring that all + allocations be in multiples of the machine wordsize. */ + /* Size of machine word in bytes. Must be power of 2. */ + /* Minimum number of bytes requested at a time from the system. Must be multiple of WORDSIZE. */ + + + size_t remaining; /* Number of bytes left in current block of storage. 
*/ + void* base; /* Pointer to base of current block of storage. */ + void* loc; /* Current location in block to next allocate memory. */ + + void internal_init() + { + remaining = 0; + base = NULL; + usedMemory = 0; + wastedMemory = 0; + } + + public: + size_t usedMemory; + size_t wastedMemory; + + /** + Default constructor. Initializes a new pool. + */ + PooledAllocator() { + internal_init(); + } + + /** + * Destructor. Frees all the memory allocated in this pool. + */ + ~PooledAllocator() { + free_all(); + } + + /** Frees all allocated memory chunks */ + void free_all() + { + while (base != NULL) { + void *prev = *(static_cast( base)); /* Get pointer to prev block. */ + ::free(base); + base = prev; + } + internal_init(); + } + + /** + * Returns a pointer to a piece of new memory of the given size in bytes + * allocated from the pool. + */ + void* malloc(const size_t req_size) + { + /* Round size up to a multiple of wordsize. The following expression + only works for WORDSIZE that is a power of 2, by masking last bits of + incremented size to zero. + */ + const size_t size = (req_size + (WORDSIZE - 1)) & ~(WORDSIZE - 1); + + /* Check whether a new block must be allocated. Note that the first word + of a block is reserved for a pointer to the previous block. + */ + if (size > remaining) { + + wastedMemory += remaining; + + /* Allocate new storage. */ + const size_t blocksize = (size + sizeof(void*) + (WORDSIZE - 1) > BLOCKSIZE) ? + size + sizeof(void*) + (WORDSIZE - 1) : BLOCKSIZE; + + // use the standard C malloc to allocate memory + void* m = ::malloc(blocksize); + if (!m) { + fprintf(stderr, "Failed to allocate memory.\n"); + return NULL; + } + + /* Fill first word of new block with pointer to previous block. 
*/ + static_cast(m)[0] = base; + base = m; + + size_t shift = 0; + //int size_t = (WORDSIZE - ( (((size_t)m) + sizeof(void*)) & (WORDSIZE-1))) & (WORDSIZE-1); + + remaining = blocksize - sizeof(void*) - shift; + loc = (static_cast(m) + sizeof(void*) + shift); + } + void* rloc = loc; + loc = static_cast(loc) + size; + remaining -= size; + + usedMemory += size; + + return rloc; + } + + /** + * Allocates (using this pool) a generic type T. + * + * Params: + * count = number of instances to allocate. + * Returns: pointer (of type T*) to memory buffer + */ + template + T* allocate(const size_t count = 1) + { + T* mem = static_cast(this->malloc(sizeof(T)*count)); + return mem; + } + + }; + /** @} */ + + /** @addtogroup nanoflann_metaprog_grp Auxiliary metaprogramming stuff + * @{ */ + + // ---------------- CArray ------------------------- + /** A STL container (as wrapper) for arrays of constant size defined at compile time (class imported from the MRPT project) + * This code is an adapted version from Boost, modifed for its integration + * within MRPT (JLBC, Dec/2009) (Renamed array -> CArray to avoid possible potential conflicts). + * See + * http://www.josuttis.com/cppcode + * for details and the latest version. + * See + * http://www.boost.org/libs/array for Documentation. + * for documentation. + * + * (C) Copyright Nicolai M. Josuttis 2001. + * Permission to copy, use, modify, sell and distribute this software + * is granted provided this copyright notice appears in all copies. + * This software is provided "as is" without express or implied + * warranty, and with no claim as to its suitability for any purpose. + * + * 29 Jan 2004 - minor fixes (Nico Josuttis) + * 04 Dec 2003 - update to synch with library TR1 (Alisdair Meredith) + * 23 Aug 2002 - fix for Non-MSVC compilers combined with MSVC libraries. 
+ *  05 Aug 2001 - minor update (Nico Josuttis)
+ *  20 Jan 2001 - STLport fix (Beman Dawes)
+ *  29 Sep 2000 - Initial Revision (Nico Josuttis)
+ *
+ * Jan 30, 2004
+ */
+    template <typename T, std::size_t N>
+    class CArray {
+    public:
+      T elems[N];    // fixed-size array of elements of type T
+
+    public:
+      // type definitions
+      typedef T              value_type;
+      typedef T*             iterator;
+      typedef const T*       const_iterator;
+      typedef T&             reference;
+      typedef const T&       const_reference;
+      typedef std::size_t    size_type;
+      typedef std::ptrdiff_t difference_type;
+
+      // iterator support
+      inline iterator begin() { return elems; }
+      inline const_iterator begin() const { return elems; }
+      inline iterator end() { return elems+N; }
+      inline const_iterator end() const { return elems+N; }
+
+      // reverse iterator support
+#if !defined(BOOST_NO_TEMPLATE_PARTIAL_SPECIALIZATION) && !defined(BOOST_MSVC_STD_ITERATOR) && !defined(BOOST_NO_STD_ITERATOR_TRAITS)
+      typedef std::reverse_iterator<iterator> reverse_iterator;
+      typedef std::reverse_iterator<const_iterator> const_reverse_iterator;
+#elif defined(_MSC_VER) && (_MSC_VER == 1300) && defined(BOOST_DINKUMWARE_STDLIB) && (BOOST_DINKUMWARE_STDLIB == 310)
+      // workaround for broken reverse_iterator in VC7
+      typedef std::reverse_iterator<std::_Ptrit<value_type, difference_type, iterator, reference, iterator, reference> > reverse_iterator;
+      typedef std::reverse_iterator<std::_Ptrit<value_type, difference_type, const_iterator, const_reference, iterator, reference> > const_reverse_iterator;
+#else
+      // workaround for broken reverse_iterator implementations
+      typedef std::reverse_iterator<iterator, T> reverse_iterator;
+      typedef std::reverse_iterator<const_iterator, T> const_reverse_iterator;
+#endif
+
+      reverse_iterator rbegin() { return reverse_iterator(end()); }
+      const_reverse_iterator rbegin() const { return const_reverse_iterator(end()); }
+      reverse_iterator rend() { return reverse_iterator(begin()); }
+      const_reverse_iterator rend() const { return const_reverse_iterator(begin()); }
+      // operator[]
+      inline reference operator[](size_type i) { return elems[i]; }
+      inline const_reference operator[](size_type i) const { return elems[i]; }
+      // at() with range check
+      reference at(size_type i) { rangecheck(i); return elems[i]; }
+      const_reference at(size_type i) const { rangecheck(i); return elems[i]; }
+      // front() and back()
+      reference front() { return elems[0]; }
+      const_reference front() const { return elems[0]; }
+      reference back() { return elems[N-1]; }
+      const_reference back() const { return elems[N-1]; }
+      // size is constant
+      static inline size_type size() { return N; }
+      static bool empty() { return false; }
+      static size_type max_size() { return N; }
+      enum { static_size = N };
+      /** This method has no effects in this class, but raises an exception if the expected size does not match */
+      inline void resize(const size_t nElements) { if (nElements!=N) throw std::logic_error("Try to change the size of a CArray."); }
+      // swap (note: linear complexity in N, constant for given instantiation)
+      void swap (CArray<T,N>& y) { std::swap_ranges(begin(),end(),y.begin()); }
+      // direct access to data (read-only)
+      const T* data() const { return elems; }
+      // use array as C array (direct read/write access to data)
+      T* data() { return elems; }
+      // assignment with type conversion
+      template <typename T2> CArray<T,N>& operator= (const CArray<T2,N>& rhs) {
+        std::copy(rhs.begin(),rhs.end(), begin());
+        return *this;
+      }
+      // assign one value to all elements
+      inline void assign (const T& value) { for (size_t i=0;i<size();i++) elems[i]=value; }
+      // check range (may be private because it is static)
+      static void rangecheck (size_type i) { if (i >= size()) { throw std::out_of_range("CArray<>: index out of range"); } }
+    }; // end of CArray
+
+    /** Used to declare fixed-size arrays when DIM>0, dynamically-allocated vectors when DIM=-1.
+      * Fixed size version for a generic DIM:
+      */
+    template <int DIM, typename T>
+    struct array_or_vector_selector
+    {
+      typedef CArray<T, DIM> container_t;
+    };
+    /** Dynamic size version */
+    template <typename T>
+    struct array_or_vector_selector<-1, T> {
+      typedef std::vector<T> container_t;
+    };
+
+    /** @} */
+
+    /** kd-tree base-class
+     *
+     * Contains the member functions common to the classes KDTreeSingleIndexAdaptor and KDTreeSingleIndexDynamicAdaptor_.
+     *
+     * \tparam Derived The name of the class which inherits this class.
+     * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+     * \tparam Distance The distance metric to use, these are all classes derived from nanoflann::Metric
+     * \tparam DIM Dimensionality of data points (e.g. 3 for 3D points)
+     * \tparam IndexType Will be typically size_t or int
+     */
+
+    template <class Derived, typename Distance, class DatasetAdaptor, int DIM = -1, typename IndexType = size_t>
+    class KDTreeBaseClass
+    {
+
+    public:
+      /** Frees the previously-built index. Automatically called within buildIndex(). */
+      void freeIndex(Derived &obj)
+      {
+        obj.pool.free_all();
+        obj.root_node = NULL;
+        obj.m_size_at_index_build = 0;
+      }
+
+      typedef typename Distance::ElementType  ElementType;
+      typedef typename Distance::DistanceType DistanceType;
+
+      /*--------------------- Internal Data Structures --------------------------*/
+      struct Node
+      {
+        /** Union used because a node can be either a LEAF node or a non-leaf node, so both data fields are never used simultaneously */
+        union {
+          struct leaf
+          {
+            IndexType left, right;  //!< Indices of points in leaf node
+          } lr;
+          struct nonleaf
+          {
+            int divfeat; //!< Dimension used for subdivision.
+            DistanceType divlow, divhigh; //!< The values used for subdivision.
+          } sub;
+        } node_type;
+        Node *child1, *child2;  //!< Child nodes (both=NULL means it's a leaf node)
+      };
+
+      typedef Node* NodePtr;
+
+      struct Interval
+      {
+        ElementType low, high;
+      };
+
+      /**
+       *  Array of indices to vectors in the dataset.
+       */
+      std::vector<IndexType> vind;
+
+      NodePtr root_node;
+
+      size_t m_leaf_max_size;
+
+      size_t m_size; //!< Number of current points in the dataset
+      size_t m_size_at_index_build; //!< Number of points in the dataset when the index was built
+      int dim;  //!< Dimensionality of each data point
+
+      /** Define "BoundingBox" as a fixed-size or variable-size container depending on "DIM" */
+      typedef typename array_or_vector_selector<DIM, Interval>::container_t BoundingBox;
+
+      /** Define "distance_vector_t" as a fixed-size or variable-size container depending on "DIM" */
+      typedef typename array_or_vector_selector<DIM, DistanceType>::container_t distance_vector_t;
+
+      /** The KD-tree used to find neighbours */
+
+      BoundingBox root_bbox;
+
+      /**
+       * Pooled memory allocator.
+       *
+       * Using a pooled memory allocator is more efficient
+       * than allocating memory directly when there is a large
+       * number of small memory allocations.
+       */
+      PooledAllocator pool;
+
+      /** Returns number of points in dataset  */
+      size_t size(const Derived &obj) const { return obj.m_size; }
+
+      /** Returns the length of each point in the dataset */
+      size_t veclen(const Derived &obj) {
+        return static_cast<size_t>(DIM>0 ?
+ DIM : obj.dim);
+      }
+
+      /// Helper accessor to the dataset points:
+      inline ElementType dataset_get(const Derived &obj, size_t idx, int component) const{
+        return obj.dataset.kdtree_get_pt(idx, component);
+      }
+
+      /**
+       * Computes the index memory usage
+       * Returns: memory used by the index
+       */
+      size_t usedMemory(Derived &obj)
+      {
+        return obj.pool.usedMemory + obj.pool.wastedMemory + obj.dataset.kdtree_get_point_count() * sizeof(IndexType);  // pool memory and vind array memory
+      }
+
+      void computeMinMax(const Derived &obj, IndexType* ind, IndexType count, int element, ElementType& min_elem, ElementType& max_elem)
+      {
+        min_elem = dataset_get(obj, ind[0],element);
+        max_elem = dataset_get(obj, ind[0],element);
+        for (IndexType i = 1; i < count; ++i) {
+          ElementType val = dataset_get(obj, ind[i], element);
+          if (val < min_elem) min_elem = val;
+          if (val > max_elem) max_elem = val;
+        }
+      }
+
+      /**
+       * Create a tree node that subdivides the list of vecs from vind[first]
+       * to vind[last].  The routine is called recursively on each sublist.
+       *
+       * @param left index of the first vector
+       * @param right index of the last vector
+       */
+      NodePtr divideTree(Derived &obj, const IndexType left, const IndexType right, BoundingBox& bbox)
+      {
+        NodePtr node = obj.pool.template allocate<Node>(); // allocate memory
+
+        /* If too few exemplars remain, then make this a leaf node. */
+        if ( (right - left) <= static_cast<IndexType>(obj.m_leaf_max_size) ) {
+          node->child1 = node->child2 = NULL;    /* Mark as leaf node. */
+          node->node_type.lr.left = left;
+          node->node_type.lr.right = right;
+
+          // compute bounding-box of leaf points
+          for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+            bbox[i].low = dataset_get(obj, obj.vind[left], i);
+            bbox[i].high = dataset_get(obj, obj.vind[left], i);
+          }
+          for (IndexType k = left + 1; k < right; ++k) {
+            for (int i = 0; i < (DIM > 0 ?
+ DIM : obj.dim); ++i) {
+              if (bbox[i].low > dataset_get(obj, obj.vind[k], i)) bbox[i].low = dataset_get(obj, obj.vind[k], i);
+              if (bbox[i].high < dataset_get(obj, obj.vind[k], i)) bbox[i].high = dataset_get(obj, obj.vind[k], i);
+            }
+          }
+        }
+        else {
+          IndexType idx;
+          int cutfeat;
+          DistanceType cutval;
+          middleSplit_(obj, &obj.vind[0] + left, right - left, idx, cutfeat, cutval, bbox);
+
+          node->node_type.sub.divfeat = cutfeat;
+
+          BoundingBox left_bbox(bbox);
+          left_bbox[cutfeat].high = cutval;
+          node->child1 = divideTree(obj, left, left + idx, left_bbox);
+
+          BoundingBox right_bbox(bbox);
+          right_bbox[cutfeat].low = cutval;
+          node->child2 = divideTree(obj, left + idx, right, right_bbox);
+
+          node->node_type.sub.divlow = left_bbox[cutfeat].high;
+          node->node_type.sub.divhigh = right_bbox[cutfeat].low;
+
+          for (int i = 0; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+            bbox[i].low = std::min(left_bbox[i].low, right_bbox[i].low);
+            bbox[i].high = std::max(left_bbox[i].high, right_bbox[i].high);
+          }
+        }
+
+        return node;
+      }
+
+      void middleSplit_(Derived &obj, IndexType* ind, IndexType count, IndexType& index, int& cutfeat, DistanceType& cutval, const BoundingBox& bbox)
+      {
+        const DistanceType EPS = static_cast<DistanceType>(0.00001);
+        ElementType max_span = bbox[0].high-bbox[0].low;
+        for (int i = 1; i < (DIM > 0 ? DIM : obj.dim); ++i) {
+          ElementType span = bbox[i].high - bbox[i].low;
+          if (span > max_span) {
+            max_span = span;
+          }
+        }
+        ElementType max_spread = -1;
+        cutfeat = 0;
+        for (int i = 0; i < (DIM > 0 ?
+ DIM : obj.dim); ++i) {
+          ElementType span = bbox[i].high-bbox[i].low;
+          if (span > (1 - EPS) * max_span) {
+            ElementType min_elem, max_elem;
+            computeMinMax(obj, ind, count, i, min_elem, max_elem);
+            ElementType spread = max_elem - min_elem;
+            if (spread > max_spread) {
+              cutfeat = i;
+              max_spread = spread;
+            }
+          }
+        }
+        // split in the middle
+        DistanceType split_val = (bbox[cutfeat].low + bbox[cutfeat].high) / 2;
+        ElementType min_elem, max_elem;
+        computeMinMax(obj, ind, count, cutfeat, min_elem, max_elem);
+
+        if (split_val < min_elem) cutval = min_elem;
+        else if (split_val > max_elem) cutval = max_elem;
+        else cutval = split_val;
+
+        IndexType lim1, lim2;
+        planeSplit(obj, ind, count, cutfeat, cutval, lim1, lim2);
+
+        if (lim1 > count / 2) index = lim1;
+        else if (lim2 < count / 2) index = lim2;
+        else index = count/2;
+      }
+
+      /**
+       *  Subdivide the list of points by a plane perpendicular to the axis corresponding
+       *  to the 'cutfeat' dimension at 'cutval' position.
+       *
+       *  On return:
+       *  dataset[ind[0..lim1-1]][cutfeat] < cutval
+       *  dataset[ind[lim1..lim2-1]][cutfeat] == cutval
+       *  dataset[ind[lim2..count]][cutfeat] > cutval
+       */
+      void planeSplit(Derived &obj, IndexType* ind, const IndexType count, int cutfeat, DistanceType &cutval, IndexType& lim1, IndexType& lim2)
+      {
+        /* Move vector indices for left subtree to front of list. */
+        IndexType left = 0;
+        IndexType right = count-1;
+        for (;; ) {
+          while (left <= right && dataset_get(obj, ind[left], cutfeat) < cutval) ++left;
+          while (right && left <= right && dataset_get(obj, ind[right], cutfeat) >= cutval) --right;
+          if (left > right || !right) break;  // "!right" was added to support unsigned Index types
+          std::swap(ind[left], ind[right]);
+          ++left;
+          --right;
+        }
+        /* If either list is empty, it means that all remaining features
+         * are identical. Split in the middle to maintain a balanced tree.
+         */
+        lim1 = left;
+        right = count-1;
+        for (;; ) {
+          while (left <= right && dataset_get(obj, ind[left], cutfeat) <= cutval) ++left;
+          while (right && left <= right && dataset_get(obj, ind[right], cutfeat) > cutval) --right;
+          if (left > right || !right) break;  // "!right" was added to support unsigned Index types
+          std::swap(ind[left], ind[right]);
+          ++left;
+          --right;
+        }
+        lim2 = left;
+      }
+
+      DistanceType computeInitialDistances(const Derived &obj, const ElementType* vec, distance_vector_t& dists) const
+      {
+        assert(vec);
+        DistanceType distsq = DistanceType();
+
+        for (int i = 0; i < (DIM>0 ? DIM : obj.dim); ++i) {
+          if (vec[i] < obj.root_bbox[i].low) {
+            dists[i] = obj.distance.accum_dist(vec[i], obj.root_bbox[i].low, i);
+            distsq += dists[i];
+          }
+          if (vec[i] > obj.root_bbox[i].high) {
+            dists[i] = obj.distance.accum_dist(vec[i], obj.root_bbox[i].high, i);
+            distsq += dists[i];
+          }
+        }
+        return distsq;
+      }
+
+      void save_tree(Derived &obj, FILE* stream, NodePtr tree)
+      {
+        save_value(stream, *tree);
+        if (tree->child1 != NULL) {
+          save_tree(obj, stream, tree->child1);
+        }
+        if (tree->child2 != NULL) {
+          save_tree(obj, stream, tree->child2);
+        }
+      }
+
+
+      void load_tree(Derived &obj, FILE* stream, NodePtr& tree)
+      {
+        tree = obj.pool.template allocate<Node>();
+        load_value(stream, *tree);
+        if (tree->child1 != NULL) {
+          load_tree(obj, stream, tree->child1);
+        }
+        if (tree->child2 != NULL) {
+          load_tree(obj, stream, tree->child2);
+        }
+      }
+
+      /**  Stores the index in a binary file.
+       *   IMPORTANT NOTE: The set of data points is NOT stored in the file, so when loading the index object it must be constructed associated to the same source of data points used while building it.
+       *   See the example: examples/saveload_example.cpp
+       * \sa loadIndex  */
+      void saveIndex_(Derived &obj, FILE* stream)
+      {
+        save_value(stream, obj.m_size);
+        save_value(stream, obj.dim);
+        save_value(stream, obj.root_bbox);
+        save_value(stream, obj.m_leaf_max_size);
+        save_value(stream, obj.vind);
+        save_tree(obj, stream, obj.root_node);
+      }
+
+      /**  Loads a previous index from a binary file.
+       *   IMPORTANT NOTE: The set of data points is NOT stored in the file, so the index object must be constructed associated to the same source of data points used while building the index.
+       *   See the example: examples/saveload_example.cpp
+       * \sa loadIndex  */
+      void loadIndex_(Derived &obj, FILE* stream)
+      {
+        load_value(stream, obj.m_size);
+        load_value(stream, obj.dim);
+        load_value(stream, obj.root_bbox);
+        load_value(stream, obj.m_leaf_max_size);
+        load_value(stream, obj.vind);
+        load_tree(obj, stream, obj.root_node);
+      }
+
+    };
+
+
+    /** @addtogroup kdtrees_grp KD-tree classes and adaptors
+      * @{ */
+
+    /** kd-tree static index
+     *
+     * Contains the k-d trees and other information for indexing a set of points
+     * for nearest-neighbor matching.
+     *
+     *  The class "DatasetAdaptor" must provide the following interface (can be non-virtual, inlined methods):
+     *
+     *  \code
+     *   // Must return the number of data points
+     *   inline size_t kdtree_get_point_count() const { ... }
+     *
+     *
+     *   // Must return the dim'th component of the idx'th point in the class:
+     *   inline T kdtree_get_pt(const size_t idx, int dim) const { ... }
+     *
+     *   // Optional bounding-box computation: return false to default to a standard bbox computation loop.
+     *   //   Return true if the BBOX was already computed by the class and returned in "bb" so it can be avoided to redo it again.
+     *   //   Look at bb.size() to find out the expected dimensionality (e.g.
+ 2 or 3 for point clouds)
+     *   template <class BBOX>
+     *   bool kdtree_get_bbox(BBOX &bb) const
+     *   {
+     *      bb[0].low = ...; bb[0].high = ...;  // 0th dimension limits
+     *      bb[1].low = ...; bb[1].high = ...;  // 1st dimension limits
+     *      ...
+     *      return true;
+     *   }
+     *
+     *  \endcode
+     *
+     * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+     * \tparam Distance The distance metric to use: nanoflann::metric_L1, nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc.
+     * \tparam DIM Dimensionality of data points (e.g. 3 for 3D points)
+     * \tparam IndexType Will be typically size_t or int
+     */
+    template <typename Distance, class DatasetAdaptor, int DIM = -1, typename IndexType = size_t>
+    class KDTreeSingleIndexAdaptor : public KDTreeBaseClass<KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>, Distance, DatasetAdaptor, DIM, IndexType>
+    {
+    public:
+      /** Deleted copy constructor*/
+      KDTreeSingleIndexAdaptor(const KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>&) = delete;
+
+      /**
+       * The dataset used by this index
+       */
+      const DatasetAdaptor &dataset; //!< The source of our data
+
+      const KDTreeSingleIndexAdaptorParams index_params;
+
+      Distance distance;
+
+      typedef typename nanoflann::KDTreeBaseClass<nanoflann::KDTreeSingleIndexAdaptor<Distance, DatasetAdaptor, DIM, IndexType>, Distance, DatasetAdaptor, DIM, IndexType> BaseClassRef;
+
+      typedef typename BaseClassRef::ElementType ElementType;
+      typedef typename BaseClassRef::DistanceType DistanceType;
+
+      typedef typename BaseClassRef::Node Node;
+      typedef Node* NodePtr;
+
+      typedef typename BaseClassRef::Interval Interval;
+      /** Define "BoundingBox" as a fixed-size or variable-size container depending on "DIM" */
+      typedef typename BaseClassRef::BoundingBox BoundingBox;
+
+      /** Define "distance_vector_t" as a fixed-size or variable-size container depending on "DIM" */
+      typedef typename BaseClassRef::distance_vector_t distance_vector_t;
+
+      /**
+       * KDTree constructor
+       *
+       * Refer to docs in README.md or online in https://github.com/jlblancoc/nanoflann
+       *
+       * The KD-Tree point dimension (the length of each point in the dataset, e.g.
3 for 3D points) + * is determined by means of: + * - The \a DIM template parameter if >0 (highest priority) + * - Otherwise, the \a dimensionality parameter of this constructor. + * + * @param inputData Dataset with the input features + * @param params Basically, the maximum leaf node size + */ + KDTreeSingleIndexAdaptor(const int dimensionality, const DatasetAdaptor& inputData, const KDTreeSingleIndexAdaptorParams& params = KDTreeSingleIndexAdaptorParams() ) : + dataset(inputData), index_params(params), distance(inputData) + { + BaseClassRef::root_node = NULL; + BaseClassRef::m_size = dataset.kdtree_get_point_count(); + BaseClassRef::m_size_at_index_build = BaseClassRef::m_size; + BaseClassRef::dim = dimensionality; + if (DIM>0) BaseClassRef::dim = DIM; + BaseClassRef::m_leaf_max_size = params.leaf_max_size; + + // Create a permutable array of indices to the input vectors. + init_vind(); + } + + /** + * Builds the index + */ + void buildIndex() + { + BaseClassRef::m_size = dataset.kdtree_get_point_count(); + BaseClassRef::m_size_at_index_build = BaseClassRef::m_size; + init_vind(); + this->freeIndex(*this); + BaseClassRef::m_size_at_index_build = BaseClassRef::m_size; + if(BaseClassRef::m_size == 0) return; + computeBoundingBox(BaseClassRef::root_bbox); + BaseClassRef::root_node = this->divideTree(*this, 0, BaseClassRef::m_size, BaseClassRef::root_bbox ); // construct the tree + } + + /** \name Query methods + * @{ */ + + /** + * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored inside + * the result object. + * + * Params: + * result = the result object in which the indices of the nearest-neighbors are stored + * vec = the vector for which to search the nearest neighbors + * + * \tparam RESULTSET Should be any ResultSet + * \return True if the requested neighbors could be found. 
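+       *
+       * A minimal usage sketch; `PointCloudAdaptor`, `cloud` and `query_pt` are
+       * hypothetical names (a user-written 3-D dataset adaptor and a query point),
+       * not part of this file:
+       * \code
+       *   typedef KDTreeSingleIndexAdaptor<L2_Simple_Adaptor<float, PointCloudAdaptor>, PointCloudAdaptor, 3> my_kd_tree_t;
+       *   my_kd_tree_t index(3, cloud, KDTreeSingleIndexAdaptorParams(10));  // max leaf size = 10
+       *   index.buildIndex();
+       *   size_t ret_index; float out_dist_sqr;
+       *   KNNResultSet<float> resultSet(1);
+       *   resultSet.init(&ret_index, &out_dist_sqr);
+       *   index.findNeighbors(resultSet, &query_pt[0], SearchParams());
+       * \endcode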
+       * \sa knnSearch, radiusSearch
+       */
+      template <typename RESULTSET>
+      bool findNeighbors(RESULTSET& result, const ElementType* vec, const SearchParams& searchParams) const
+      {
+        assert(vec);
+        if (this->size(*this) == 0)
+          return false;
+        if (!BaseClassRef::root_node)
+          throw std::runtime_error("[nanoflann] findNeighbors() called before building the index.");
+        float epsError = 1 + searchParams.eps;
+
+        distance_vector_t dists; // fixed or variable-sized container (depending on DIM)
+        dists.assign((DIM > 0 ? DIM : BaseClassRef::dim), 0); // Fill it with zeros.
+        DistanceType distsq = this->computeInitialDistances(*this, vec, dists);
+        searchLevel(result, vec, BaseClassRef::root_node, distsq, dists, epsError);  // "count_leaf" parameter removed since was neither used nor returned to the user.
+        return result.full();
+      }
+
+      /**
+       * Find the "num_closest" nearest neighbors to the \a query_point[0:dim-1]. Their indices are stored inside
+       * the result object.
+       *  \sa radiusSearch, findNeighbors
+       * \note nChecks_IGNORED is ignored but kept for compatibility with the original FLANN interface.
+       * \return Number `N` of valid points in the result set. Only the first `N` entries in `out_indices` and `out_distances_sq` will be valid.
+       *         Return may be less than `num_closest` only if the number of elements in the tree is less than `num_closest`.
+       */
+      size_t knnSearch(const ElementType *query_point, const size_t num_closest, IndexType *out_indices, DistanceType *out_distances_sq, const int /* nChecks_IGNORED */ = 10) const
+      {
+        nanoflann::KNNResultSet<DistanceType, IndexType> resultSet(num_closest);
+        resultSet.init(out_indices, out_distances_sq);
+        this->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+        return resultSet.size();
+      }
+
+      /**
+       * Find all the neighbors to \a query_point[0:dim-1] within a maximum radius.
+       *  The output is given as a vector of pairs, of which the first element is a point index and the second the corresponding distance.
+       *  Previous contents of \a IndicesDists are cleared.
+       *
+       *  If searchParams.sorted==true, the output list is sorted by ascending distances.
+       *
+       *  For a better performance, it is advisable to do a .reserve() on the vector if you have any wild guess about the number of expected matches.
+       *
+       *  \sa knnSearch, findNeighbors, radiusSearchCustomCallback
+       * \return The number of points within the given radius (i.e. indices.size() or dists.size() )
+       */
+      size_t radiusSearch(const ElementType *query_point, const DistanceType &radius, std::vector<std::pair<IndexType, DistanceType> >& IndicesDists, const SearchParams& searchParams) const
+      {
+        RadiusResultSet<DistanceType, IndexType> resultSet(radius, IndicesDists);
+        const size_t nFound = radiusSearchCustomCallback(query_point, resultSet, searchParams);
+        if (searchParams.sorted)
+          std::sort(IndicesDists.begin(), IndicesDists.end(), IndexDist_Sorter());
+        return nFound;
+      }
+
+      /**
+       * Just like radiusSearch() but with a custom callback class for each point found in the radius of the query.
+       * See the source of RadiusResultSet<> as a start point for your own classes.
+       * \sa radiusSearch
+       */
+      template <class SEARCH_CALLBACK>
+      size_t radiusSearchCustomCallback(const ElementType *query_point, SEARCH_CALLBACK &resultSet, const SearchParams& searchParams = SearchParams()) const
+      {
+        this->findNeighbors(resultSet, query_point, searchParams);
+        return resultSet.size();
+      }
+
+      /** @} */
+
+    public:
+      /** Make sure the auxiliary list \a vind has the same size as the current dataset, and re-generate it if the size has changed. */
+      void init_vind()
+      {
+        // Create a permutable array of indices to the input vectors.
+        BaseClassRef::m_size = dataset.kdtree_get_point_count();
+        if (BaseClassRef::vind.size() != BaseClassRef::m_size) BaseClassRef::vind.resize(BaseClassRef::m_size);
+        for (size_t i = 0; i < BaseClassRef::m_size; i++) BaseClassRef::vind[i] = i;
+      }
+
+      void computeBoundingBox(BoundingBox& bbox)
+      {
+        bbox.resize((DIM > 0 ?
+ DIM : BaseClassRef::dim));
+        if (dataset.kdtree_get_bbox(bbox))
+        {
+          // Done! It was implemented in derived class
+        }
+        else
+        {
+          const size_t N = dataset.kdtree_get_point_count();
+          if (!N) throw std::runtime_error("[nanoflann] computeBoundingBox() called but no data points found.");
+          for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+            bbox[i].low =
+              bbox[i].high = this->dataset_get(*this, 0, i);
+          }
+          for (size_t k = 1; k < N; ++k) {
+            for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+              if (this->dataset_get(*this, k, i) < bbox[i].low) bbox[i].low = this->dataset_get(*this, k, i);
+              if (this->dataset_get(*this, k, i) > bbox[i].high) bbox[i].high = this->dataset_get(*this, k, i);
+            }
+          }
+        }
+      }
+
+      /**
+       * Performs an exact search in the tree starting from a node.
+       * \tparam RESULTSET Should be any ResultSet<DistanceType,IndexType>
+       * \return true if the search should be continued, false if the results are sufficient
+       */
+      template <class RESULTSET>
+      bool searchLevel(RESULTSET& result_set, const ElementType* vec, const NodePtr node, DistanceType mindistsq,
+                       distance_vector_t& dists, const float epsError) const
+      {
+        /* If this is a leaf node, then do check and return. */
+        if ((node->child1 == NULL) && (node->child2 == NULL)) {
+          //count_leaf += (node->lr.right-node->lr.left);  // Removed since was neither used nor returned to the user.
+          DistanceType worst_dist = result_set.worstDist();
+          for (IndexType i = node->node_type.lr.left; i < node->node_type.lr.right; ++i) {
+            const IndexType index = BaseClassRef::vind[i];// reorder... : i;
+            DistanceType dist = distance.evalMetric(vec, index, (DIM > 0 ? DIM : BaseClassRef::dim));
+            if (dist < worst_dist) {
+              if(!result_set.addPoint(dist, BaseClassRef::vind[i])) {
+                // the resultset doesn't want to receive any more points, we're done searching!
+                return false;
+              }
+            }
+          }
+          return true;
+        }
+
+        /* Which child branch should be taken first?
*/ + int idx = node->node_type.sub.divfeat; + ElementType val = vec[idx]; + DistanceType diff1 = val - node->node_type.sub.divlow; + DistanceType diff2 = val - node->node_type.sub.divhigh; + + NodePtr bestChild; + NodePtr otherChild; + DistanceType cut_dist; + if ((diff1 + diff2) < 0) { + bestChild = node->child1; + otherChild = node->child2; + cut_dist = distance.accum_dist(val, node->node_type.sub.divhigh, idx); + } + else { + bestChild = node->child2; + otherChild = node->child1; + cut_dist = distance.accum_dist( val, node->node_type.sub.divlow, idx); + } + + /* Call recursively to search next level down. */ + if(!searchLevel(result_set, vec, bestChild, mindistsq, dists, epsError)) { + // the resultset doesn't want to receive any more points, we're done searching! + return false; + } + + DistanceType dst = dists[idx]; + mindistsq = mindistsq + cut_dist - dst; + dists[idx] = cut_dist; + if (mindistsq*epsError <= result_set.worstDist()) { + if(!searchLevel(result_set, vec, otherChild, mindistsq, dists, epsError)) { + // the resultset doesn't want to receive any more points, we're done searching! + return false; + } + } + dists[idx] = dst; + return true; + } + + public: + /** Stores the index in a binary file. + * IMPORTANT NOTE: The set of data points is NOT stored in the file, so when loading the index object it must be constructed associated to the same source of data points used while building it. + * See the example: examples/saveload_example.cpp + * \sa loadIndex */ + void saveIndex(FILE* stream) + { + this->saveIndex_(*this, stream); + } + + /** Loads a previous index from a binary file. + * IMPORTANT NOTE: The set of data points is NOT stored in the file, so the index object must be constructed associated to the same source of data points used while building the index. 
+       *   See the example: examples/saveload_example.cpp
+       * \sa loadIndex  */
+      void loadIndex(FILE* stream)
+      {
+        this->loadIndex_(*this, stream);
+      }
+
+    };   // class KDTree
+
+
+    /** kd-tree dynamic index
+     *
+     * Contains the k-d trees and other information for indexing a set of points
+     * for nearest-neighbor matching.
+     *
+     *  The class "DatasetAdaptor" must provide the following interface (can be non-virtual, inlined methods):
+     *
+     *  \code
+     *   // Must return the number of data points
+     *   inline size_t kdtree_get_point_count() const { ... }
+     *
+     *   // Must return the dim'th component of the idx'th point in the class:
+     *   inline T kdtree_get_pt(const size_t idx, int dim) const { ... }
+     *
+     *   // Optional bounding-box computation: return false to default to a standard bbox computation loop.
+     *   //   Return true if the BBOX was already computed by the class and returned in "bb" so it can be avoided to redo it again.
+     *   //   Look at bb.size() to find out the expected dimensionality (e.g. 2 or 3 for point clouds)
+     *   template <class BBOX>
+     *   bool kdtree_get_bbox(BBOX &bb) const
+     *   {
+     *      bb[0].low = ...; bb[0].high = ...;  // 0th dimension limits
+     *      bb[1].low = ...; bb[1].high = ...;  // 1st dimension limits
+     *      ...
+     *      return true;
+     *   }
+     *
+     *  \endcode
+     *
+     * \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+     * \tparam Distance The distance metric to use: nanoflann::metric_L1, nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc.
+     * \tparam DIM Dimensionality of data points (e.g.
+ 3 for 3D points)
+     * \tparam IndexType Will be typically size_t or int
+     */
+    template <typename Distance, class DatasetAdaptor, int DIM = -1, typename IndexType = size_t>
+    class KDTreeSingleIndexDynamicAdaptor_ : public KDTreeBaseClass<KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM, IndexType>, Distance, DatasetAdaptor, DIM, IndexType>
+    {
+    public:
+
+      /**
+       * The dataset used by this index
+       */
+      const DatasetAdaptor &dataset; //!< The source of our data
+
+      KDTreeSingleIndexAdaptorParams index_params;
+
+      std::vector<int> &treeIndex;
+
+      Distance distance;
+
+      typedef typename nanoflann::KDTreeBaseClass<nanoflann::KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM, IndexType>, Distance, DatasetAdaptor, DIM, IndexType> BaseClassRef;
+
+      typedef typename BaseClassRef::ElementType ElementType;
+      typedef typename BaseClassRef::DistanceType DistanceType;
+
+      typedef typename BaseClassRef::Node Node;
+      typedef Node* NodePtr;
+
+      typedef typename BaseClassRef::Interval Interval;
+      /** Define "BoundingBox" as a fixed-size or variable-size container depending on "DIM" */
+      typedef typename BaseClassRef::BoundingBox BoundingBox;
+
+      /** Define "distance_vector_t" as a fixed-size or variable-size container depending on "DIM" */
+      typedef typename BaseClassRef::distance_vector_t distance_vector_t;
+
+      /**
+       * KDTree constructor
+       *
+       * Refer to docs in README.md or online in https://github.com/jlblancoc/nanoflann
+       *
+       * The KD-Tree point dimension (the length of each point in the dataset, e.g. 3 for 3D points)
+       * is determined by means of:
+       *  - The \a DIM template parameter if >0 (highest priority)
+       *  - Otherwise, the \a dimensionality parameter of this constructor.
+       *
+       * @param inputData Dataset with the input features
+       * @param params Basically, the maximum leaf node size
+       */
+      KDTreeSingleIndexDynamicAdaptor_(const int dimensionality, const DatasetAdaptor& inputData, std::vector<int>& treeIndex_, const KDTreeSingleIndexAdaptorParams& params = KDTreeSingleIndexAdaptorParams()) :
+        dataset(inputData), index_params(params), treeIndex(treeIndex_), distance(inputData)
+      {
+        BaseClassRef::root_node = NULL;
+        BaseClassRef::m_size = 0;
+        BaseClassRef::m_size_at_index_build = 0;
+        BaseClassRef::dim = dimensionality;
+        if (DIM>0) BaseClassRef::dim = DIM;
+        BaseClassRef::m_leaf_max_size = params.leaf_max_size;
+      }
+
+
+      /** Assignment operator definition */
+      KDTreeSingleIndexDynamicAdaptor_ operator=(const KDTreeSingleIndexDynamicAdaptor_& rhs) {
+        KDTreeSingleIndexDynamicAdaptor_ tmp(rhs);
+        std::swap(BaseClassRef::vind, tmp.BaseClassRef::vind);
+        std::swap(BaseClassRef::m_leaf_max_size, tmp.BaseClassRef::m_leaf_max_size);
+        std::swap(index_params, tmp.index_params);
+        std::swap(treeIndex, tmp.treeIndex);
+        std::swap(BaseClassRef::m_size, tmp.BaseClassRef::m_size);
+        std::swap(BaseClassRef::m_size_at_index_build, tmp.BaseClassRef::m_size_at_index_build);
+        std::swap(BaseClassRef::root_node, tmp.BaseClassRef::root_node);
+        std::swap(BaseClassRef::root_bbox, tmp.BaseClassRef::root_bbox);
+        std::swap(BaseClassRef::pool, tmp.BaseClassRef::pool);
+        return *this;
+      }
+
+      /**
+       * Builds the index
+       */
+      void buildIndex()
+      {
+        BaseClassRef::m_size = BaseClassRef::vind.size();
+        this->freeIndex(*this);
+        BaseClassRef::m_size_at_index_build = BaseClassRef::m_size;
+        if(BaseClassRef::m_size == 0) return;
+        computeBoundingBox(BaseClassRef::root_bbox);
+        BaseClassRef::root_node = this->divideTree(*this, 0, BaseClassRef::m_size, BaseClassRef::root_bbox);  // construct the tree
+      }
+
+      /** \name Query methods
+       * @{ */
+
+      /**
+       * Find set of nearest neighbors to vec[0:dim-1].
+ Their indices are stored inside
+       * the result object.
+       *
+       * Params:
+       *     result = the result object in which the indices of the nearest-neighbors are stored
+       *     vec = the vector for which to search the nearest neighbors
+       *
+       * \tparam RESULTSET Should be any ResultSet<DistanceType,IndexType>
+       * \return  True if the requested neighbors could be found.
+       * \sa knnSearch, radiusSearch
+       */
+      template <typename RESULTSET>
+      bool findNeighbors(RESULTSET& result, const ElementType* vec, const SearchParams& searchParams) const
+      {
+        assert(vec);
+        if (this->size(*this) == 0)
+          return false;
+        if (!BaseClassRef::root_node)
+          return false;
+        float epsError = 1 + searchParams.eps;
+
+        distance_vector_t dists; // fixed or variable-sized container (depending on DIM)
+        dists.assign((DIM > 0 ? DIM : BaseClassRef::dim), 0); // Fill it with zeros.
+        DistanceType distsq = this->computeInitialDistances(*this, vec, dists);
+        searchLevel(result, vec, BaseClassRef::root_node, distsq, dists, epsError);  // "count_leaf" parameter removed since was neither used nor returned to the user.
+        return result.full();
+      }
+
+      /**
+       * Find the "num_closest" nearest neighbors to the \a query_point[0:dim-1]. Their indices are stored inside
+       * the result object.
+       *  \sa radiusSearch, findNeighbors
+       * \note nChecks_IGNORED is ignored but kept for compatibility with the original FLANN interface.
+       * \return Number `N` of valid points in the result set. Only the first `N` entries in `out_indices` and `out_distances_sq` will be valid.
+       *         Return may be less than `num_closest` only if the number of elements in the tree is less than `num_closest`.
+       */
+      size_t knnSearch(const ElementType *query_point, const size_t num_closest, IndexType *out_indices, DistanceType *out_distances_sq, const int /* nChecks_IGNORED */ = 10) const
+      {
+        nanoflann::KNNResultSet<DistanceType, IndexType> resultSet(num_closest);
+        resultSet.init(out_indices, out_distances_sq);
+        this->findNeighbors(resultSet, query_point, nanoflann::SearchParams());
+        return resultSet.size();
+      }
+
+      /**
+       * Find all the neighbors to \a query_point[0:dim-1] within a maximum radius.
+       *  The output is given as a vector of pairs, of which the first element is a point index and the second the corresponding distance.
+       *  Previous contents of \a IndicesDists are cleared.
+       *
+       *  If searchParams.sorted==true, the output list is sorted by ascending distances.
+       *
+       *  For a better performance, it is advisable to do a .reserve() on the vector if you have any wild guess about the number of expected matches.
+       *
+       *  \sa knnSearch, findNeighbors, radiusSearchCustomCallback
+       * \return The number of points within the given radius (i.e. indices.size() or dists.size() )
+       */
+      size_t radiusSearch(const ElementType *query_point, const DistanceType &radius, std::vector<std::pair<IndexType, DistanceType> >& IndicesDists, const SearchParams& searchParams) const
+      {
+        RadiusResultSet<DistanceType, IndexType> resultSet(radius, IndicesDists);
+        const size_t nFound = radiusSearchCustomCallback(query_point, resultSet, searchParams);
+        if (searchParams.sorted)
+          std::sort(IndicesDists.begin(), IndicesDists.end(), IndexDist_Sorter());
+        return nFound;
+      }
+
+      /**
+       * Just like radiusSearch() but with a custom callback class for each point found in the radius of the query.
+       * See the source of RadiusResultSet<> as a start point for your own classes.
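+       *
+       * A callback only needs the members that findNeighbors() uses: full(),
+       * worstDist() and addPoint(). This sketch (names are illustrative, not part
+       * of this file) counts in-radius matches without storing them:
+       * \code
+       *   struct CountingCallback {
+       *     size_t count; DistanceType radius_sq;   // radius_sq: squared radius for L2 metrics
+       *     DistanceType worstDist() const { return radius_sq; }  // prune beyond this distance
+       *     bool full() const { return true; }
+       *     bool addPoint(DistanceType, IndexType) { ++count; return true; }  // keep searching
+       *     size_t size() const { return count; }
+       *   };
+       * \endcode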
+		 * \sa radiusSearch
+		 */
+		template <class SEARCH_CALLBACK>
+		size_t radiusSearchCustomCallback(const ElementType *query_point, SEARCH_CALLBACK &resultSet, const SearchParams& searchParams = SearchParams() ) const
+		{
+			this->findNeighbors(resultSet, query_point, searchParams);
+			return resultSet.size();
+		}
+
+		/** @} */
+
+	public:
+
+
+		void computeBoundingBox(BoundingBox& bbox)
+		{
+			bbox.resize((DIM > 0 ? DIM : BaseClassRef::dim));
+			if (dataset.kdtree_get_bbox(bbox))
+			{
+				// Done! It was implemented in derived class
+			}
+			else
+			{
+				const size_t N = BaseClassRef::m_size;
+				if (!N) throw std::runtime_error("[nanoflann] computeBoundingBox() called but no data points found.");
+				for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+					bbox[i].low =
+						bbox[i].high = this->dataset_get(*this, BaseClassRef::vind[0], i);
+				}
+				for (size_t k = 1; k < N; ++k) {
+					for (int i = 0; i < (DIM > 0 ? DIM : BaseClassRef::dim); ++i) {
+						if (this->dataset_get(*this, BaseClassRef::vind[k], i) < bbox[i].low) bbox[i].low = this->dataset_get(*this, BaseClassRef::vind[k], i);
+						if (this->dataset_get(*this, BaseClassRef::vind[k], i) > bbox[i].high) bbox[i].high = this->dataset_get(*this, BaseClassRef::vind[k], i);
+					}
+				}
+			}
+		}
+
+		/**
+		 * Performs an exact search in the tree starting from a node.
+		 * \tparam RESULTSET Should be any ResultSet<DistanceType>
+		 */
+		template <class RESULTSET>
+		void searchLevel(RESULTSET& result_set, const ElementType* vec, const NodePtr node, DistanceType mindistsq,
+			distance_vector_t& dists, const float epsError) const
+		{
+			/* If this is a leaf node, then do check and return. */
+			if ((node->child1 == NULL) && (node->child2 == NULL)) {
+				//count_leaf += (node->lr.right-node->lr.left);  // Removed since was neither used nor returned to the user.
+				DistanceType worst_dist = result_set.worstDist();
+				for (IndexType i = node->node_type.lr.left; i < node->node_type.lr.right; ++i) {
+					const IndexType index = BaseClassRef::vind[i];// reorder...
: i;
+					if(treeIndex[index] == -1)
+						continue;
+					DistanceType dist = distance.evalMetric(vec, index, (DIM > 0 ? DIM : BaseClassRef::dim));
+					if (dist < worst_dist) {
+						if (!result_set.addPoint(static_cast<DistanceType>(dist), static_cast<IndexType>(BaseClassRef::vind[i]))) {
+							// the resultset doesn't want to receive any more points, we're done searching!
+							return;
+						}
+					}
+				}
+				return;
+			}
+
+			/* Which child branch should be taken first? */
+			int idx = node->node_type.sub.divfeat;
+			ElementType val = vec[idx];
+			DistanceType diff1 = val - node->node_type.sub.divlow;
+			DistanceType diff2 = val - node->node_type.sub.divhigh;
+
+			NodePtr bestChild;
+			NodePtr otherChild;
+			DistanceType cut_dist;
+			if ((diff1 + diff2) < 0) {
+				bestChild = node->child1;
+				otherChild = node->child2;
+				cut_dist = distance.accum_dist(val, node->node_type.sub.divhigh, idx);
+			}
+			else {
+				bestChild = node->child2;
+				otherChild = node->child1;
+				cut_dist = distance.accum_dist( val, node->node_type.sub.divlow, idx);
+			}
+
+			/* Call recursively to search next level down. */
+			searchLevel(result_set, vec, bestChild, mindistsq, dists, epsError);
+
+			DistanceType dst = dists[idx];
+			mindistsq = mindistsq + cut_dist - dst;
+			dists[idx] = cut_dist;
+			if (mindistsq*epsError <= result_set.worstDist()) {
+				searchLevel(result_set, vec, otherChild, mindistsq, dists, epsError);
+			}
+			dists[idx] = dst;
+		}
+
+	public:
+		/** Stores the index in a binary file.
+		 *   IMPORTANT NOTE: The set of data points is NOT stored in the file, so when loading the index object it must be constructed associated to the same source of data points used while building it.
+		 * See the example: examples/saveload_example.cpp
+		 * \sa loadIndex */
+		void saveIndex(FILE* stream)
+		{
+			this->saveIndex_(*this, stream);
+		}
+
+		/** Loads a previous index from a binary file.
+		 *   IMPORTANT NOTE: The set of data points is NOT stored in the file, so the index object must be constructed associated to the same source of data points used while building the index.
+		 * See the example: examples/saveload_example.cpp
+		 * \sa loadIndex */
+		void loadIndex(FILE* stream)
+		{
+			this->loadIndex_(*this, stream);
+		}
+
+	};
+
+
+	/** kd-tree dynamic index
+	 *
+	 * Class that creates multiple static indices and merges their results so that they behave as a single dynamic index, as proposed in the Logarithmic Approach.
+	 *
+	 *  Example of usage:
+	 *   examples/dynamic_pointcloud_example.cpp
+	 *
+	 *  \tparam DatasetAdaptor The user-provided adaptor (see comments above).
+	 *  \tparam Distance The distance metric to use: nanoflann::metric_L1, nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc.
+	 *  \tparam DIM Dimensionality of data points (e.g. 3 for 3D points)
+	 *  \tparam IndexType Will be typically size_t or int
+	 */
+	template <typename Distance, class DatasetAdaptor, int DIM = -1, typename IndexType = size_t>
+	class KDTreeSingleIndexDynamicAdaptor
+	{
+	public:
+		typedef typename Distance::ElementType  ElementType;
+		typedef typename Distance::DistanceType DistanceType;
+	protected:
+
+		size_t m_leaf_max_size;
+		size_t treeCount;
+		size_t pointCount;
+
+		/**
+		 * The dataset used by this index
+		 */
+		const DatasetAdaptor &dataset; //!< The source of our data
+
+		std::vector<int> treeIndex; //!< treeIndex[idx] is the index of tree in which point at idx is stored. treeIndex[idx]=-1 means that point has been removed.
+
+		KDTreeSingleIndexAdaptorParams index_params;
+
+		int dim;  //!< Dimensionality of each data point
+
+		typedef KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM> index_container_t;
+		std::vector<index_container_t> index;
+
+	public:
+		/** Get a const ref to the internal list of indices; the number of indices is adapted dynamically as
+		 * the dataset grows in size.
*/
+		const std::vector<index_container_t>& getAllIndices() const {
+			return index;
+		}
+
+	private:
+		/** finds position of least significant unset bit */
+		int First0Bit(IndexType num)
+		{
+			int pos = 0;
+			while(num&1)
+			{
+				num = num>>1;
+				pos++;
+			}
+			return pos;
+		}
+
+		/** Creates multiple empty trees to handle dynamic support */
+		void init()
+		{
+			typedef KDTreeSingleIndexDynamicAdaptor_<Distance, DatasetAdaptor, DIM> my_kd_tree_t;
+			std::vector<my_kd_tree_t> index_(treeCount, my_kd_tree_t(dim /*dim*/, dataset, treeIndex, index_params));
+			index=index_;
+		}
+
+	public:
+
+		Distance distance;
+
+		/**
+		 * KDTree constructor
+		 *
+		 * Refer to docs in README.md or online in https://github.com/jlblancoc/nanoflann
+		 *
+		 * The KD-Tree point dimension (the length of each point in the dataset, e.g. 3 for 3D points)
+		 * is determined by means of:
+		 *  - The \a DIM template parameter if >0 (highest priority)
+		 *  - Otherwise, the \a dimensionality parameter of this constructor.
+		 *
+		 * @param inputData Dataset with the input features
+		 * @param params Basically, the maximum leaf node size
+		 */
+		KDTreeSingleIndexDynamicAdaptor(const int dimensionality, const DatasetAdaptor& inputData, const KDTreeSingleIndexAdaptorParams& params = KDTreeSingleIndexAdaptorParams() , const size_t maximumPointCount = 1000000000U) :
+			dataset(inputData), index_params(params), distance(inputData)
+		{
+			treeCount = std::log2(maximumPointCount);
+			pointCount = 0U;
+			dim = dimensionality;
+			treeIndex.clear();
+			if (DIM > 0) dim = DIM;
+			m_leaf_max_size = params.leaf_max_size;
+			init();
+			int num_initial_points = dataset.kdtree_get_point_count();
+			if (num_initial_points > 0) {
+				addPoints(0, num_initial_points - 1);
+			}
+		}
+
+		/** Deleted copy constructor*/
+		KDTreeSingleIndexDynamicAdaptor(const KDTreeSingleIndexDynamicAdaptor&) = delete;
+
+
+		/** Add points to the set, Inserts all points from [start, end] */
+		void addPoints(IndexType start, IndexType end)
+		{
+			int count = end - start + 1;
+			treeIndex.resize(treeIndex.size() + count);
+			for(IndexType idx
= start; idx <= end; idx++) { + int pos = First0Bit(pointCount); + index[pos].vind.clear(); + treeIndex[pointCount]=pos; + for(int i = 0; i < pos; i++) { + for(int j = 0; j < static_cast(index[i].vind.size()); j++) { + index[pos].vind.push_back(index[i].vind[j]); + treeIndex[index[i].vind[j]] = pos; + } + index[i].vind.clear(); + index[i].freeIndex(index[i]); + } + index[pos].vind.push_back(idx); + index[pos].buildIndex(); + pointCount++; + } + } + + /** Remove a point from the set (Lazy Deletion) */ + void removePoint(size_t idx) + { + if(idx >= pointCount) + return; + treeIndex[idx] = -1; + } + + /** + * Find set of nearest neighbors to vec[0:dim-1]. Their indices are stored inside + * the result object. + * + * Params: + * result = the result object in which the indices of the nearest-neighbors are stored + * vec = the vector for which to search the nearest neighbors + * + * \tparam RESULTSET Should be any ResultSet + * \return True if the requested neighbors could be found. + * \sa knnSearch, radiusSearch + */ + template + bool findNeighbors(RESULTSET& result, const ElementType* vec, const SearchParams& searchParams) const + { + for(size_t i = 0; i < treeCount; i++) + { + index[i].findNeighbors(result, &vec[0], searchParams); + } + return result.full(); + } + + }; + + /** An L2-metric KD-tree adaptor for working with data directly stored in an Eigen Matrix, without duplicating the data storage. + * Each row in the matrix represents a point in the state space. + * + * Example of usage: + * \code + * Eigen::Matrix mat; + * // Fill out "mat"... + * + * typedef KDTreeEigenMatrixAdaptor< Eigen::Matrix > my_kd_tree_t; + * const int max_leaf = 10; + * my_kd_tree_t mat_index(mat, max_leaf ); + * mat_index.index->buildIndex(); + * mat_index.index->... + * \endcode + * + * \tparam DIM If set to >0, it specifies a compile-time fixed dimensionality for the points in the data set, allowing more compiler optimizations. 
+ * \tparam Distance The distance metric to use: nanoflann::metric_L1, nanoflann::metric_L2, nanoflann::metric_L2_Simple, etc. + */ + template + struct KDTreeEigenMatrixAdaptor + { + typedef KDTreeEigenMatrixAdaptor self_t; + typedef typename MatrixType::Scalar num_t; + typedef typename MatrixType::Index IndexType; + typedef typename Distance::template traits::distance_t metric_t; + typedef KDTreeSingleIndexAdaptor< metric_t,self_t, MatrixType::ColsAtCompileTime,IndexType> index_t; + + index_t* index; //! The kd-tree index for the user to call its methods as usual with any other FLANN index. + + /// Constructor: takes a const ref to the matrix object with the data points + KDTreeEigenMatrixAdaptor(const MatrixType &mat, const int leaf_max_size = 10) : m_data_matrix(mat) + { + const IndexType dims = mat.cols(); + index = new index_t( dims, *this /* adaptor */, nanoflann::KDTreeSingleIndexAdaptorParams(leaf_max_size ) ); + index->buildIndex(); + } + public: + /** Deleted copy constructor */ + KDTreeEigenMatrixAdaptor(const self_t&) = delete; + + ~KDTreeEigenMatrixAdaptor() { + delete index; + } + + const MatrixType &m_data_matrix; + + /** Query for the \a num_closest closest points to a given point (entered as query_point[0:dim-1]). + * Note that this is a short-cut method for index->findNeighbors(). + * The user can also call index->... methods as desired. + * \note nChecks_IGNORED is ignored but kept for compatibility with the original FLANN interface. 
+ */ + inline void query(const num_t *query_point, const size_t num_closest, IndexType *out_indices, num_t *out_distances_sq, const int /* nChecks_IGNORED */ = 10) const + { + nanoflann::KNNResultSet resultSet(num_closest); + resultSet.init(out_indices, out_distances_sq); + index->findNeighbors(resultSet, query_point, nanoflann::SearchParams()); + } + + /** @name Interface expected by KDTreeSingleIndexAdaptor + * @{ */ + + const self_t & derived() const { + return *this; + } + self_t & derived() { + return *this; + } + + // Must return the number of data points + inline size_t kdtree_get_point_count() const { + return m_data_matrix.rows(); + } + + // Returns the dim'th component of the idx'th point in the class: + inline num_t kdtree_get_pt(const IndexType idx, int dim) const { + return m_data_matrix.coeff(idx, IndexType(dim)); + } + + // Optional bounding-box computation: return false to default to a standard bbox computation loop. + // Return true if the BBOX was already computed by the class and returned in "bb" so it can be avoided to redo it again. + // Look at bb.size() to find out the expected dimensionality (e.g. 
2 or 3 for point clouds) + template + bool kdtree_get_bbox(BBOX& /*bb*/) const { + return false; + } + + /** @} */ + + }; // end of KDTreeEigenMatrixAdaptor + /** @} */ + +/** @} */ // end of grouping +} // end of NS + + +#endif /* NANOFLANN_HPP_ */ diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/setup.py b/competing_methods/my_RandLANet/utils/nearest_neighbors/setup.py new file mode 100644 index 00000000..5dac0de3 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/setup.py @@ -0,0 +1,21 @@ +from distutils.core import setup +from distutils.extension import Extension +from Cython.Distutils import build_ext +import numpy + + + +ext_modules = [Extension( + "nearest_neighbors", + sources=["knn.pyx", "knn_.cxx",], # source file(s) + include_dirs=["./", numpy.get_include()], + language="c++", + extra_compile_args = [ "-std=c++11", "-fopenmp",], + extra_link_args=["-std=c++11", '-fopenmp'], + )] + +setup( + name = "KNN NanoFLANN", + ext_modules = ext_modules, + cmdclass = {'build_ext': build_ext}, +) diff --git a/competing_methods/my_RandLANet/utils/nearest_neighbors/test.py b/competing_methods/my_RandLANet/utils/nearest_neighbors/test.py new file mode 100644 index 00000000..b6c67fc9 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/nearest_neighbors/test.py @@ -0,0 +1,15 @@ +import numpy as np +import lib.python.nearest_neighbors as nearest_neighbors +import time + +batch_size = 16 +num_points = 81920 +K = 16 +pc = np.random.rand(batch_size, num_points, 3).astype(np.float32) + +# nearest neighbours +start = time.time() +neigh_idx = nearest_neighbors.knn_batch(pc, pc, K, omp=True) +print(time.time() - start) + + diff --git a/competing_methods/my_RandLANet/utils/semantic-kitti.yaml b/competing_methods/my_RandLANet/utils/semantic-kitti.yaml new file mode 100644 index 00000000..62810655 --- /dev/null +++ b/competing_methods/my_RandLANet/utils/semantic-kitti.yaml @@ -0,0 +1,211 @@ +# This file is covered by the LICENSE file 
in the root of this project. +labels: + 0 : "unlabeled" + 1 : "outlier" + 10: "car" + 11: "bicycle" + 13: "bus" + 15: "motorcycle" + 16: "on-rails" + 18: "truck" + 20: "other-vehicle" + 30: "person" + 31: "bicyclist" + 32: "motorcyclist" + 40: "road" + 44: "parking" + 48: "sidewalk" + 49: "other-ground" + 50: "building" + 51: "fence" + 52: "other-structure" + 60: "lane-marking" + 70: "vegetation" + 71: "trunk" + 72: "terrain" + 80: "pole" + 81: "traffic-sign" + 99: "other-object" + 252: "moving-car" + 253: "moving-bicyclist" + 254: "moving-person" + 255: "moving-motorcyclist" + 256: "moving-on-rails" + 257: "moving-bus" + 258: "moving-truck" + 259: "moving-other-vehicle" +color_map: # bgr + 0 : [0, 0, 0] + 1 : [0, 0, 255] + 10: [245, 150, 100] + 11: [245, 230, 100] + 13: [250, 80, 100] + 15: [150, 60, 30] + 16: [255, 0, 0] + 18: [180, 30, 80] + 20: [255, 0, 0] + 30: [30, 30, 255] + 31: [200, 40, 255] + 32: [90, 30, 150] + 40: [255, 0, 255] + 44: [255, 150, 255] + 48: [75, 0, 75] + 49: [75, 0, 175] + 50: [0, 200, 255] + 51: [50, 120, 255] + 52: [0, 150, 255] + 60: [170, 255, 150] + 70: [0, 175, 0] + 71: [0, 60, 135] + 72: [80, 240, 150] + 80: [150, 240, 255] + 81: [0, 0, 255] + 99: [255, 255, 50] + 252: [245, 150, 100] + 256: [255, 0, 0] + 253: [200, 40, 255] + 254: [30, 30, 255] + 255: [90, 30, 150] + 257: [250, 80, 100] + 258: [180, 30, 80] + 259: [255, 0, 0] +content: # as a ratio with the total number of points + 0: 0.018889854628292943 + 1: 0.0002937197336781505 + 10: 0.040818519255974316 + 11: 0.00016609538710764618 + 13: 2.7879693665067774e-05 + 15: 0.00039838616015114444 + 16: 0.0 + 18: 0.0020633612104619787 + 20: 0.0016218197275284021 + 30: 0.00017698551338515307 + 31: 1.1065903904919655e-08 + 32: 5.532951952459828e-09 + 40: 0.1987493871255525 + 44: 0.014717169549888214 + 48: 0.14392298360372 + 49: 0.0039048553037472045 + 50: 0.1326861944777486 + 51: 0.0723592229456223 + 52: 0.002395131480328884 + 60: 4.7084144280367186e-05 + 70: 0.26681502148037506 + 71: 
0.006035012012626033 + 72: 0.07814222006271769 + 80: 0.002855498193863172 + 81: 0.0006155958086189918 + 99: 0.009923127583046915 + 252: 0.001789309418528068 + 253: 0.00012709999297008662 + 254: 0.00016059776092534436 + 255: 3.745553104802113e-05 + 256: 0.0 + 257: 0.00011351574470342043 + 258: 0.00010157861367183268 + 259: 4.3840131989471124e-05 +# classes that are indistinguishable from single scan or inconsistent in +# ground truth are mapped to their closest equivalent +learning_map: + 0 : 0 # "unlabeled" + 1 : 0 # "outlier" mapped to "unlabeled" --------------------------mapped + 10: 1 # "car" + 11: 2 # "bicycle" + 13: 5 # "bus" mapped to "other-vehicle" --------------------------mapped + 15: 3 # "motorcycle" + 16: 5 # "on-rails" mapped to "other-vehicle" ---------------------mapped + 18: 4 # "truck" + 20: 5 # "other-vehicle" + 30: 6 # "person" + 31: 7 # "bicyclist" + 32: 8 # "motorcyclist" + 40: 9 # "road" + 44: 10 # "parking" + 48: 11 # "sidewalk" + 49: 12 # "other-ground" + 50: 13 # "building" + 51: 14 # "fence" + 52: 0 # "other-structure" mapped to "unlabeled" ------------------mapped + 60: 9 # "lane-marking" to "road" ---------------------------------mapped + 70: 15 # "vegetation" + 71: 16 # "trunk" + 72: 17 # "terrain" + 80: 18 # "pole" + 81: 19 # "traffic-sign" + 99: 0 # "other-object" to "unlabeled" ----------------------------mapped + 252: 1 # "moving-car" to "car" ------------------------------------mapped + 253: 7 # "moving-bicyclist" to "bicyclist" ------------------------mapped + 254: 6 # "moving-person" to "person" ------------------------------mapped + 255: 8 # "moving-motorcyclist" to "motorcyclist" ------------------mapped + 256: 5 # "moving-on-rails" mapped to "other-vehicle" --------------mapped + 257: 5 # "moving-bus" mapped to "other-vehicle" -------------------mapped + 258: 4 # "moving-truck" to "truck" --------------------------------mapped + 259: 5 # "moving-other"-vehicle to "other-vehicle" ----------------mapped +learning_map_inv: # 
inverse of previous map + 0: 0 # "unlabeled", and others ignored + 1: 10 # "car" + 2: 11 # "bicycle" + 3: 15 # "motorcycle" + 4: 18 # "truck" + 5: 20 # "other-vehicle" + 6: 30 # "person" + 7: 31 # "bicyclist" + 8: 32 # "motorcyclist" + 9: 40 # "road" + 10: 44 # "parking" + 11: 48 # "sidewalk" + 12: 49 # "other-ground" + 13: 50 # "building" + 14: 51 # "fence" + 15: 70 # "vegetation" + 16: 71 # "trunk" + 17: 72 # "terrain" + 18: 80 # "pole" + 19: 81 # "traffic-sign" +learning_ignore: # Ignore classes + 0: True # "unlabeled", and others ignored + 1: False # "car" + 2: False # "bicycle" + 3: False # "motorcycle" + 4: False # "truck" + 5: False # "other-vehicle" + 6: False # "person" + 7: False # "bicyclist" + 8: False # "motorcyclist" + 9: False # "road" + 10: False # "parking" + 11: False # "sidewalk" + 12: False # "other-ground" + 13: False # "building" + 14: False # "fence" + 15: False # "vegetation" + 16: False # "trunk" + 17: False # "terrain" + 18: False # "pole" + 19: False # "traffic-sign" +split: # sequence numbers + train: + - 0 + - 1 + - 2 + - 3 + - 4 + - 5 + - 6 + - 7 + - 9 + - 10 + valid: + - 8 + test: + - 11 + - 12 + - 13 + - 14 + - 15 + - 16 + - 17 + - 18 + - 19 + - 20 + - 21 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/.gitignore b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/.gitignore new file mode 100644 index 00000000..c5daaf83 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/.gitignore @@ -0,0 +1,39 @@ +/data/ +*.pyc +__pycache__ +.vscode +*outputs* +wandb +logs +results +site +*.DS_Store +kernels/dispositions/* +notebooks/*checkpoints +/notebooks/*.html +py_scripts +.ipynb_checkpoints +measurements/*.pickle +/forward_scripts/test_data +/forward_scripts/out +_build +docs_old +/test/kernels/dispositions +*.egg-info* +*.prof + +# Python egg metadata, regenerated from source files by setuptools. 
+/*.egg +build/* +dist/* +.venv +.mypy_cache +lightning_logs +/data +.tox +tp3d +eval +cv_s3dis_models/ +test/test_data +torch_points3d/applications/weights/* +.python-version diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/CHANGELOG.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/CHANGELOG.md new file mode 100644 index 00000000..928f00f9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/CHANGELOG.md @@ -0,0 +1,113 @@ +# Changelog + +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## Unreleased + +### Added + +- Support for the IRALab benchmark (https://arxiv.org/abs/2003.12841), with data from the ETH, Canadian Planetary, Kaist and TUM datasets. (thanks @simone-fontana) +- Added Kitti for semantic segmentation and registration (first outdoor dataset for semantic seg) +- Possibility to load pretrained models by adding the path in the confs for finetuning. 
+- Lottery transform to use randomly selected transforms for data augmentation
+- Batch size clamping function to ensure that batches don't get too large
+- [TorchSparse](https://github.com/mit-han-lab/torchsparse) backend for sparse convolutions
+- Possibility to build sparse convolution networks with Minkowski Engine or TorchSparse
+- [PVCNN](https://arxiv.org/abs/1907.03739) model for semantic segmentation (thanks @CCInc)
+
+### Bug fix
+
+- Dataset configurations are saved in the checkpoints so that models can be created without requiring the actual dataset
+- Trainer was giving a warning for models that could not be re-created when they actually could
+- BatchNorm1d fix (thanks @Wundersam)
+- Fix process hanging when processing scannet with multiprocessing (thanks @zetyquickly)
+- wandb does not log the weights when set in private mode (thanks @jamesjiro)
+
+### Changed
+
+- More general API for Minkowski with support for Bottleneck blocks and Squeeze and excite.
+- Docker image tags on dockerhub are now `latest-gpu` and `latest-cpu` for the latest CPU and GPU images.
+
+## 1.1.1
+
+### Added
+
+- Teaser support for registration
+- Examples for using pretrained registration models
+- Pointnet2 forward examples for classification, segmentation
+- S3DIS automatic download and panoptic support and cylinder sampling
+
+### Changed
+
+- Moved to PyTorch 1.6 as officially supported PyTorch version
+
+### Bug fix
+
+- Add `context = ssl._create_unverified_context()`, `data = urllib.request.urlopen(url, context=context)` within `download_url`, so ModelNet and ShapeNet can download.
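The bug-fix bullet above works around SSL certificate verification failures during dataset downloads by passing an unverified context to `urllib`. A minimal, self-contained sketch of that pattern follows; `fetch_unverified` is a hypothetical stand-in for the project's actual `download_url` helper, used here only to illustrate the stdlib calls involved.

```python
import ssl
import urllib.request

def fetch_unverified(url: str, timeout: float = 30.0) -> bytes:
    # Hypothetical stand-in for the project's `download_url` helper:
    # an unverified SSL context skips certificate validation, which is
    # the workaround that let the ModelNet/ShapeNet downloads succeed.
    context = ssl._create_unverified_context()
    with urllib.request.urlopen(url, context=context, timeout=timeout) as resp:
        return resp.read()

# The unverified context simply disables hostname and certificate checks:
context = ssl._create_unverified_context()
print(context.check_hostname)                # False
print(context.verify_mode == ssl.CERT_NONE)  # True
```

Note that this trades away transport security for availability, so it is only appropriate for fetching public datasets from hosts with broken certificate chains, not for anything sensitive.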
+
+## 1.1.0
+
+### Added
+
+- Support scannet test dataset and automatic generation of submission files using the eval.py script
+- Full res predictions on Scannet with voting
+- VoteNet model and losses
+- Tracker for object detection
+- Models can specify which attributes they need from the data in order to forward and train properly
+- Full res predictions on ShapeNet with voting
+- Trainer class to handle train / eval
+- Add testing for Trainer:
+  - Segmentation: PointNet2 on cap ShapeNet
+  - Segmentation: KPConv on scannetV2
+  - Object Detection: VoteNet on scannetV2
+- Add VoteNet Paper / Backbones within API
+- Windows support
+- Weights are uploaded to wandb at the end of the run
+- Added PointGroup https://arxiv.org/pdf/2007.01294.pdf
+- Added PretrainedRegistry allowing model weights to be downloaded directly from wandb and DatasetMocking
+- Added script for s3dis cross-validation [scripts/cv_s3dis.py]. 6 different pretrained models will be downloaded, evaluated on full resolution and confusion matrices will be summed to get all metrics.
+- mAP tracker for Panoptic segmentation
+
+### Changed
+
+- evaluation output folder is now a subfolder of the checkpoint it uses
+- saves model checkpoints to wandb
+- GridSampling3D now creates a new attribute `coords` that stores the non-quantized position when the transform is called in `quantize` mode
+- cuda parameter can be given in command line to select the GPU to use
+- Updated to pytorch geometric 1.6.0
+
+### Bugfix
+
+- LR scheduler resume is broken for update on batch number #328
+- ElasticDistortion transform is now fully functional
+
+### Removed
+
+## 1.0.1
+
+### Changed
+
+- We now support the latest PyTorch
+- Migration to the latest PyTorch Geometric and dependencies
+
+### Bugfixes
+
+- #273 (support python 3.7)
+
+## 0.2.2
+
+### Bugfix
+
+- Pre transform is being overridden by the inference transform
+
+## 0.2.1
+
+### Added
+
+- Customizable number of channels at the output of the API models
+- API models expose output number of channels as a property
+- Added Encoder to the API
+- Sampled ModelNet dataset for point clouds
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/LICENSE b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/LICENSE
new file mode 100644
index 00000000..f288702d
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/LICENSE
@@ -0,0 +1,674 @@
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.
By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. 
+ + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. 
+ + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. 
+ + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. 
The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. 
+ + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. 
+ + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. 
This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. 
For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. 
Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. 
+ + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. 
Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. 
+ + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. 
+ + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. 
You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. 
The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. 
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. 
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/LICENSE.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/LICENSE.md new file mode 100644 index 00000000..1fffad37 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/LICENSE.md @@ -0,0 +1,44 @@ +License +========== +Unless otherwise indicated, all files this repository are + + Copyright (c) 2020, Principia Labs Ltd + (nicolas.chaulet@gmail.com & thomas.chaton.ai@gmail.com) + +and are released under the terms of the BSD open source license. + +Overall license (BSD) +--------------------- + + Copyright (c) 2020, Principia Labs Ltd + (nicolas.chaulet@gmail.com & thomas.chaton.ai@gmail.com) + + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following + conditions are met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided + with the distribution. + * Neither the name of Principia Labs Ltd nor the + names of its contributors may be used to endorse or promote + products derived from this software without specific prior + written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, + BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS + OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED + AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, + OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT + OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY + OF SUCH DAMAGE. diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/Makefile b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/Makefile new file mode 100644 index 00000000..cb551563 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/Makefile @@ -0,0 +1,4 @@ +.PHONY: staticchecks +staticchecks: + flake8 . --count --select=E9,F402,F6,F7,F5,F8,F9 --show-source --statistics + mypy torch_points3d diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/README.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/README.md new file mode 100644 index 00000000..ee4903a7 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/README.md @@ -0,0 +1,489 @@ +

+ +

+
+[![PyPI version](https://badge.fury.io/py/torch-points3d.svg)](https://badge.fury.io/py/torch-points3d) [![codecov](https://codecov.io/gh/nicolas-chaulet/torch-points3d/branch/master/graph/badge.svg)](https://codecov.io/gh/nicolas-chaulet/torch-points3d) [![Actions Status](https://github.com/nicolas-chaulet/torch-points3d/workflows/unittest/badge.svg)](https://github.com/nicolas-chaulet/torch-points3d/actions) [![Documentation Status](https://readthedocs.org/projects/torch-points3d/badge/?version=latest)](https://torch-points3d.readthedocs.io/en/latest/?badge=latest) [![slack](https://img.shields.io/badge/slack-tp3d-brightgreen)](https://torchgeometricco.slack.com/join/shared_invite/zt-hn9vter8-EQn4L4wLfE7PZEYbLMlw~Q#/)
+
+This is a framework for running common deep learning models for point cloud analysis tasks against classic benchmarks. It heavily relies on [Pytorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/notes/resources.html) and [Facebook Hydra](https://hydra.cc/).
+
+The framework allows lean yet complex models to be built with minimal effort and great reproducibility.
+It also provides a high-level API to democratize deep learning on point clouds.
+See our [paper](https://arxiv.org/pdf/2010.04642.pdf) at 3DV for an overview of the framework's capabilities and benchmarks of state-of-the-art networks.
+
+# Table of Contents
+
+ * [Overview](#overview)
+ * [Requirements](#requirements)
+ * [Project structure](#project-structure)
+ * [Methods currently implemented](#methods-currently-implemented)
+ * [Available Tasks](#available-tasks)
+ * [Available datasets](#available-datasets)
+ * [Segmentation](#segmentation)
+ * [Object detection and panoptic](#object-detection-and-panoptic)
+ * [Registration](#registration)
+ * [Classification](#classification)
+ * [3D Sparse convolution support](#3d-sparse-convolution-support)
+ * [Adding your model to the PretrainedRegistry.](#adding-your-model-to-the-pretrainedregistry)
+ * [Developer guidelines](#developer-guidelines)
+ * [Setting repo](#setting-repo)
+ * [Getting started: Train pointnet on part segmentation task for dataset shapenet](#getting-started-train-pointnet-on-part-segmentation-task-for-dataset-shapenet)
+ * [Inference](#inference)
+ * [Inference script](#inference-script)
+ * [Containerizing your model with Docker](#containerizing-your-model-with-docker)
+ * [Profiling](#profiling)
+ * [Troubleshooting](#troubleshooting)
+ * [Exploring your experiments](#exploring-your-experiments)
+ * [Contributing](#contributing)
+ * [Citing](#citing)
+
+# Overview
+## Requirements
+
+- CUDA 10 or higher (if you want the GPU version)
+- Python 3.7 or higher + headers (python-dev)
+- PyTorch 1.5 or higher (1.4 and 1.3.1 should also work but are not actively supported moving forward)
+- A sparse convolution backend (optional); see [here](https://github.com/nicolas-chaulet/torch-points3d#3d-sparse-convolution-support) for installation instructions
+
+Install with
+
+```bash
+pip install torch
+pip install torch-points3d
+```
+
+## Project structure
+
+```bash
+├─ benchmark               # Output from various benchmark runs
+├─ conf                    # All configurations for training and evaluation live here
+├─ notebooks               # A collection of notebooks for result exploration and network debugging
+├─ docker                  # Docker image that can be used for inference
or training
+├─ docs                    # All the doc
+├─ eval.py                 # Eval script
+├─ find_neighbour_dist.py  # Script to find optimal #neighbours within neighbour search operations
+├─ forward_scripts         # Script that runs a forward pass on possibly non-annotated data
+├─ outputs                 # All outputs from your runs sorted by date
+├─ scripts                 # Some scripts to help manage the project
+├─ torch_points3d
+   ├─ core                 # Core components
+   ├─ datasets             # All code related to datasets
+   ├─ metrics              # All metrics and trackers
+   ├─ models               # All models
+   ├─ modules              # Basic modules that can be used in a modular way
+   ├─ utils                # Various utils
+   └─ visualization        # Visualization
+├─ test
+└─ train.py                # Main script to launch a training
+```
+
+As a general philosophy we have split datasets and models by task. For example, `datasets` has five subfolders:
+
+- segmentation
+- classification
+- registration
+- object_detection
+- panoptic
+
+where each folder contains the datasets related to that task.
+
+## Methods currently implemented
+
+- **[PointNet](https://github.com/nicolas-chaulet/torch-points3d/blob/master/torch_points3d/modules/PointNet/modules.py#L54)** from Charles R. Qi _et al._: [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https://arxiv.org/abs/1612.00593) (CVPR 2017)
+- **[PointNet++](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/modules/pointnet2)** from Charles R.
Qi _et al._: [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](https://arxiv.org/abs/1706.02413)
+- **[RSConv](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/modules/RSConv)** from Yongcheng Liu _et al._: [Relation-Shape Convolutional Neural Network for Point Cloud Analysis](https://arxiv.org/abs/1904.07601) (CVPR 2019)
+- **[RandLA-Net](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/modules/RandLANet)** from Qingyong Hu _et al._: [RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds](https://arxiv.org/abs/1911.11236)
+- **[PointCNN](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/modules/PointCNN)** from Yangyan Li _et al._: [PointCNN: Convolution On X-Transformed Points](https://arxiv.org/abs/1801.07791) (NIPS 2018)
+- **[KPConv](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/modules/KPConv)** from Hugues Thomas _et al._: [KPConv: Flexible and Deformable Convolution for Point Clouds](https://arxiv.org/abs/1904.08889) (ICCV 2019)
+- **[MinkowskiEngine](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/modules/MinkowskiEngine)** from Christopher Choy _et al._: [4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks](https://arxiv.org/abs/1904.08755) (CVPR 2019)
+- **[VoteNet](https://github.com/nicolas-chaulet/torch-points3d/tree/master/torch_points3d/models/object_detection/votenet.py)** from Charles R.
Qi _et al._: [Deep Hough Voting for 3D Object Detection in Point Clouds](https://arxiv.org/abs/1904.09664) (ICCV 2019)
+- **[FCGF](https://github.com/chrischoy/FCGF)** from Christopher Choy _et al._: [Fully Convolutional Geometric Features](https://node1.chrischoy.org/data/publications/fcgf/fcgf.pdf) (ICCV 2019)
+- **[PointGroup](https://github.com/Jia-Research-Lab/PointGroup)** from Li Jiang _et al._: [PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation](https://arxiv.org/abs/2004.01658)
+- **[PPNet (PosPool)](https://github.com/zeliu98/CloserLook3D)** from Ze Liu _et al._: [A Closer Look at Local Aggregation Operators in Point Cloud Analysis](https://arxiv.org/pdf/2007.01294.pdf) (ECCV 2020)
+- **[TorchSparse](https://github.com/mit-han-lab/torchsparse)** from Haotian Tang _et al._: [Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution](https://arxiv.org/abs/2007.16100)
+- **[PVCNN](https://github.com/mit-han-lab/pvcnn)** model for semantic segmentation from Zhijian Liu _et al._: [Point-Voxel CNN for Efficient 3D Deep Learning](https://arxiv.org/abs/1907.03739)
+
+Please refer to our [documentation](https://torch-points3d.readthedocs.io/en/latest/src/api/models.html) for accessing some of those models directly from the API, and see our example notebooks for [KPConv](https://colab.research.google.com/github/nicolas-chaulet/torch-points3d/blob/master/notebooks/PartSegmentationKPConv.ipynb) and [RSConv](https://colab.research.google.com/github/nicolas-chaulet/torch-points3d/blob/master/notebooks/ObjectClassificationRSConv.ipynb) for more details.
+
+# Available Tasks
+
+| Tasks | Examples |
+| :-------------------------------------------: | :-----------------------------------------------------------------------: |
+| Classification / Part Segmentation | |
+| Segmentation | |
+| Object Detection | |
+| Panoptic Segmentation | |
+| Registration | |
+
+# Available datasets
+
+## Segmentation
+
+- **[Scannet](https://github.com/ScanNet/ScanNet)** from Angela Dai _et al._: [ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes](https://arxiv.org/abs/1702.04405)
+
+- **[S3DIS](http://buildingparser.stanford.edu/dataset.html)** from Iro Armeni _et al._: [Joint 2D-3D-Semantic Data for Indoor Scene Understanding](https://arxiv.org/abs/1702.01105)
+
+```
+* S3DIS 1x1
+* S3DIS Room
+* S3DIS Fused - Sphere | Cylinder
+```
+
+- **[Shapenet](https://www.shapenet.org/)** from Angel X. Chang _et al._: [ShapeNet: An Information-Rich 3D Model Repository](https://arxiv.org/abs/1512.03012)
+
+## Object detection and panoptic
+
+- **[Scannet](https://github.com/ScanNet/ScanNet)** from Angela Dai _et al._: [ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes](https://arxiv.org/abs/1702.04405)
+- **[S3DIS](http://buildingparser.stanford.edu/dataset.html)** from Iro Armeni _et al._: [Joint 2D-3D-Semantic Data for Indoor Scene Understanding](https://arxiv.org/abs/1702.01105)
+
+```
+* S3DIS Fused - Sphere | Cylinder
+```
+
+- **[SemanticKitti](http://semantic-kitti.org/)** from J.
Behley _et al._: [SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences](https://arxiv.org/abs/1904.01416)
+
+## Registration
+
+- **[3DMatch](http://3dmatch.cs.princeton.edu)** from Andy Zeng _et al._: [3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions](https://arxiv.org/abs/1603.08182)
+
+- **[The IRALab Benchmark](https://github.com/iralabdisco/point_clouds_registration_benchmark)** from Simone Fontana _et al._: [A Benchmark for Point Clouds Registration Algorithms](https://arxiv.org/abs/2003.12841), which is composed of data from:
+  - [the ETH datasets](https://projects.asl.ethz.ch/datasets/doku.php?id=laserregistration:laserregistration);
+  - [the Canadian Planetary Emulation Terrain 3D Mapping datasets](http://asrl.utias.utoronto.ca/datasets/3dmap/index.html);
+  - [the TUM Vision Group RGBD datasets](https://vision.in.tum.de/data/datasets/rgbd-dataset);
+  - [the KAIST Urban datasets](https://irap.kaist.ac.kr/dataset/).
+
+- **[Kitti odometry](http://www.cvlibs.net/datasets/kitti/eval_odometry.php)** with corrected poses (thanks to @humanpose1) from A. Geiger _et al._: [Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite](http://www.cvlibs.net/publications/Geiger2012CVPR.pdf)
+
+## Classification
+
+- **[ModelNet](https://modelnet.cs.princeton.edu)** from Zhirong Wu _et al._: [3D ShapeNets: A Deep Representation for Volumetric Shapes](https://people.csail.mit.edu/khosla/papers/cvpr2015_wu.pdf)
+
+
+# 3D Sparse convolution support
+We currently support [Minkowski Engine](https://github.com/StanfordVL/MinkowskiEngine) and [torchsparse](https://github.com/mit-han-lab/torchsparse) as backends for sparse convolutions. Those packages need to be installed independently from Torch Points3d; please follow the installation instructions and troubleshooting notes on the respective repositories. At the moment `torchsparse` demonstrates faster training and inference on GPU but comes with limited functionalities.
For example, `MinkowskiEngine` supports CPU processing while `torchsparse` does not. **Please be aware that `torchsparse` is still in beta.**
+
+Once you have set up one of these two sparse convolution frameworks, you can use our high-level API to define a UNet backbone or simply an encoder:
+
+```python
+from torch_points3d.applications.sparseconv3d import SparseConv3d
+
+model = SparseConv3d("unet", input_nc=3, output_nc=5, num_layers=4, backbone="torchsparse") # minkowski by default
+```
+
+You can also assemble your own networks by using the modules provided in `torch_points3d/modules/SparseConv3d/nn`. For example, if you wish to use the `torchsparse` backend, you can do the following:
+```python
+import torch_points3d.modules.SparseConv3d as sp3d
+
+sp3d.nn.set_backend("torchsparse")
+conv = sp3d.nn.Conv3d(10, 10)
+bn = sp3d.nn.BatchNorm(10)
+```
+
+# Adding your model to the PretrainedRegistry.
+
+The `PretrainedRegistry` enables anyone to add their own pre-trained models and `re-create` them with only 2 lines of code for `finetuning` or `production` purposes.
+
+- `[You]` Launch your model training with [Wandb](https://www.wandb.com) activated (`wandb.log=True`)
+- `[TorchPoints3d]` Once training is finished, `TorchPoints3d` will upload your trained model within [our custom checkpoint](https://app.wandb.ai/nicolas/scannet/runs/1sd84bf1) to your wandb.
+- `[You]` Within the [`PretainedRegistry`](https://github.com/nicolas-chaulet/torch-points3d/blob/master/torch_points3d/applications/pretrained_api.py#L31) class, add a `key-value pair` within its attribute `MODELS`. The `key` should describe your model, dataset and training hyper-parameters (possibly the best model), and the `value` should be the `url` referencing the `.pt` file on your wandb.
+
+Example: Key: `pointnet2_largemsg-s3dis-1` and URL value: `https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt` for the `pointnet2_largemsg.pt` file.
+The key describes a `pointnet2 largemsg trained on s3dis fold 1`.
+
+- `[Anyone]` By using the `PretainedRegistry` class and by providing the `key`, the associated model weights will be `downloaded` and the pre-trained model will be `ready to use` with its transforms.
+
+```python
+[In]:
+from torch_points3d.applications.pretrained_api import PretainedRegistry
+
+model = PretainedRegistry.from_pretrained("pointnet2_largemsg-s3dis-1")
+
+print(model.wandb)
+print(model.print_transforms())
+
+[Out]:
+=================================================== WANDB URLS ======================================================
+WEIGHT_URL: https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt
+LOG_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/logs
+CHART_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk
+OVERVIEW_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/overview
+HYDRA_CONFIG_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/files/hydra-config.yaml
+OVERRIDES_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/files/overrides.yaml
+======================================================================================================================
+
+pre_transform = None
+test_transform = Compose([
+    FixedPoints(20000, replace=True),
+    XYZFeature(axis=['z']),
+    AddFeatsByKeys(rgb=True, pos_z=True),
+    Center(),
+    ScalePos(scale=0.5),
+])
+train_transform = Compose([
+    FixedPoints(20000, replace=True),
+    RandomNoise(sigma=0.001, clip=0.05),
+    RandomRotate((-180, 180), axis=2),
+    RandomScaleAnisotropic([0.8, 1.2]),
+    RandomAxesSymmetry(x=True, y=False, z=False),
+    DropFeature(proba=0.2, feature='rgb'),
+    XYZFeature(axis=['z']),
+    AddFeatsByKeys(rgb=True, pos_z=True),
+    Center(),
+    ScalePos(scale=0.5),
+])
+val_transform = Compose([
+    FixedPoints(20000, replace=True),
XYZFeature(axis=['z']),
+    AddFeatsByKeys(rgb=True, pos_z=True),
+    Center(),
+    ScalePos(scale=0.5),
+])
+inference_transform = Compose([
+    FixedPoints(20000, replace=True),
+    XYZFeature(axis=['z']),
+    AddFeatsByKeys(rgb=True, pos_z=True),
+    Center(),
+    ScalePos(scale=0.5),
+])
+pre_collate_transform = Compose([
+    PointCloudFusion(),
+    SaveOriginalPosId,
+    GridSampling3D(grid_size=0.04, quantize_coords=False, mode=mean),
+])
+```
+
+
+# Developer guidelines
+
+## Setting repo
+
+We use [Poetry](https://poetry.eustace.io/) for managing our packages.
+In order to get started, clone this repository and run the following command from the root of the repo:
+
+```
+poetry install --no-root
+```
+
+This will install all required dependencies in a new virtual environment.
+
+Activate the environment
+
+```bash
+poetry shell
+```
+
+You can check that the install has been successful by running
+
+```bash
+python -m unittest -v
+```
+
+For `pycuda` support (only needed for the registration tasks):
+
+```bash
+pip install pycuda
+```
+
+## Getting started: Train pointnet++ on part segmentation task for dataset shapenet
+
+```bash
+poetry run python train.py task=segmentation model_type=pointnet2 model_name=pointnet2_charlesssg dataset=shapenet-fixed
+```
+
+And you should see something like this
+
+![logging](https://raw.githubusercontent.com/nicolas-chaulet/torch-points3d/master/docs/imgs/logging.png)
+
+The [config](https://raw.githubusercontent.com/nicolas-chaulet/torch-points3d/master/conf/models/segmentation/pointnet2.yaml) for pointnet++ is a good example of how to define a model and is as follows:
+
+```yaml
+# PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space (https://arxiv.org/abs/1706.02413)
+# Credit Charles R.
Qi: https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_part_seg_msg_one_hot.py
+
+pointnet2_onehot:
+  architecture: pointnet2.PointNet2_D
+  conv_type: 'DENSE'
+  use_category: True
+  down_conv:
+    module_name: PointNetMSGDown
+    npoint: [1024, 256, 64, 16]
+    radii: [[0.05, 0.1], [0.1, 0.2], [0.2, 0.4], [0.4, 0.8]]
+    nsamples: [[16, 32], [16, 32], [16, 32], [16, 32]]
+    down_conv_nn:
+      [
+        [[FEAT, 16, 16, 32], [FEAT, 32, 32, 64]],
+        [[32 + 64, 64, 64, 128], [32 + 64, 64, 96, 128]],
+        [[128 + 128, 128, 196, 256], [128 + 128, 128, 196, 256]],
+        [[256 + 256, 256, 256, 512], [256 + 256, 256, 384, 512]],
+      ]
+  up_conv:
+    module_name: DenseFPModule
+    up_conv_nn:
+      [
+        [512 + 512 + 256 + 256, 512, 512],
+        [512 + 128 + 128, 512, 512],
+        [512 + 64 + 32, 256, 256],
+        [256 + FEAT, 128, 128],
+      ]
+    skip: True
+  mlp_cls:
+    nn: [128, 128]
+    dropout: 0.5
+```
+## Inference
+
+### Inference script
+
+We provide a script for running a given pre-trained model on custom data that may not be annotated. You will find an [example](https://github.com/nicolas-chaulet/torch-points3d/blob/master/forward_scripts/forward.py) of this for the part segmentation task on Shapenet. Just like for the rest of the codebase, most of the customization happens through config files, and the provided example can be extended to other datasets. You can also easily create your own from there.
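+As a purely illustrative sketch, the kind of entries you would edit in such a forward config could look like the following. The field names and paths below are hypothetical placeholders, loosely modeled on the overrides used elsewhere in this section; they are not the exact schema of the shipped `config.yaml`:
+
+```yaml
+# Hypothetical forward config sketch -- field names and paths are placeholders.
+checkpoint_dir: outputs/2020-07-01/12-00-00   # folder containing the trained checkpoint (.pt)
+model_name: pointnet2_charlesssg              # which model to load from that checkpoint
+input_path: /data/airplanes                   # folder of raw point clouds to run through the model
+output_path: /data/airplanes_out              # where the predictions will be written
+```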
Going back to the part segmentation task: say you have a folder full of point clouds that you know are Airplanes, and the checkpoint of a model trained on Airplanes and potentially other classes. Simply edit the [config.yaml](https://github.com/nicolas-chaulet/torch-points3d/blob/master/forward_scripts/conf/config.yaml) and [shapenet.yaml](https://github.com/nicolas-chaulet/torch-points3d/blob/master/forward_scripts/conf/dataset/shapenet.yaml) files and run the following command:
+
+```bash
+python forward_scripts/forward.py
+```
+
+The result of the forward run will be placed in the specified `output_folder`, and you can use the [notebook](https://github.com/nicolas-chaulet/torch-points3d/blob/master/forward_scripts/notebooks/viz_shapenet.ipynb) provided to explore the results. Below is an example of the outcome of using a model trained on caps only to find the parts of airplanes and caps.
+
+![resexplore](https://raw.githubusercontent.com/nicolas-chaulet/torch-points3d/master/docs/imgs/inference_demo.gif)
+
+### Containerizing your model with Docker
+
+Finally, for people interested in deploying their models to production environments, we provide a [Dockerfile](https://github.com/nicolas-chaulet/torch-points3d/blob/master/docker/Dockerfile) as well as a [build script](https://github.com/nicolas-chaulet/torch-points3d/blob/master/docker/build.sh). Say you have trained a network for semantic segmentation that gave the weight ``; the following command will build a docker image for you:
+
+```bash
+cd docker
+./build.sh outputfolder/weights.pt
+```
+
+You can then use it to run a forward pass on all the point clouds in `input_path` and generate the results in `output_path`:
+
+```bash
+docker run -v /test_data:/in -v /test_data/out:/out pointnet2_charlesssg:latest python3 forward_scripts/forward.py dataset=shapenet data.forward_category=Cap input_path="/in" output_path="/out"
+```
+
+The `-v` option mounts a local directory to the container's file system.
For example, in the command line above, `/test_data/out` will be mounted at the location `/out`. As a consequence, all files written in `/out` will be available in the folder `/test_data/out` on your machine.
+
+## Profiling
+
+We advise using [`snakeviz`](https://jiffyclub.github.io/snakeviz/) and [`cProfile`](https://docs.python.org/2/library/profile.html).
+
+Use cProfile to profile your code:
+
+```
+poetry run python -m cProfile -o {your_name}.prof train.py ... debugging.profiling=True
+```
+
+And visualize the results using snakeviz:
+
+```
+snakeviz {your_name}.prof
+```
+
+It is also possible to use [`torch.utils.bottleneck`](https://pytorch.org/docs/stable/bottleneck.html):
+
+```
+python -m torch.utils.bottleneck /path/to/source/script.py [args]
+```
+
+## Troubleshooting
+
+### Cannot compile certain CUDA Kernels or seg faults while running the tests
+
+Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, e.g.:
+
+```
+$ python -c "import torch; print(torch.__version__)"
+>>> 1.4.0
+
+$ echo $PATH
+>>> /usr/local/cuda/bin:...
+
+$ echo $CPATH
+>>> /usr/local/cuda/include:...
+```
+
+### Undefined symbol / Updating Pytorch
+
+When we update the version of Pytorch that is used, the compiled packages need to be reinstalled, otherwise you will run into an error that looks like this:
+
+```
+...
scatter_cpu.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c1012CUDATensorIdEv
+```
+
+This can happen for the following libraries:
+
+- torch-points-kernels
+- torch-scatter
+- torch-cluster
+- torch-sparse
+
+An easy way to fix this is to run the following command with the virtual env activated:
+
+```
+pip uninstall torch-scatter torch-sparse torch-cluster torch-points-kernels -y
+rm -rf ~/.cache/pip
+poetry install
+```
+
+### CUDA kernel failed: no kernel image is available for execution on the device
+
+This can happen when trying to run the code on a different GPU than the one used to compile the `torch-points-kernels` library. Uninstall `torch-points-kernels`, clear the cache, and reinstall after setting the `TORCH_CUDA_ARCH_LIST` environment variable. For example, for compiling with a Tesla T4 (Turing 7.5) and running the code on a Tesla V100 (Volta 7.0), use:
+
+```
+export TORCH_CUDA_ARCH_LIST="7.0;7.5"
+```
+
+See [this useful chart](http://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/) for more architecture compatibility.
+
+### Cannot use wandb on Windows
+
+Raises `OSError: [WinError 6] The handle is invalid` / `wandb: ERROR W&B process failed to launch`.
+Wandb is currently broken on Windows (see [this issue](https://github.com/wandb/client/issues/862)); a workaround is to use the command line argument `wandb.log=false`.
+
+
+# Exploring your experiments
+
+We provide a [notebook](https://github.com/nicolas-chaulet/torch-points3d/blob/master/notebooks/dashboard.ipynb) based on [pyvista](https://docs.pyvista.org/) and [panel](https://panel.holoviz.org/) that allows you to explore your past experiments visually.
When using jupyter lab you will have to install an extension:
+
+```
+jupyter labextension install @pyviz/jupyterlab_pyviz
+```
+
+Run through the notebook and you should see a dashboard starting that looks like the following:
+
+![dashboard](https://raw.githubusercontent.com/nicolas-chaulet/torch-points3d/master/docs/imgs/Dashboard_demo.gif)
+
+
+# Contributing
+
+Contributions are welcome! The only asks are that you stick to the styling and that you add tests as you add more features!
+
+For styling you can use [pre-commit hooks](https://ljvmiranda921.github.io/notebook/2018/06/21/precommits-using-black-and-flake8/) to help you:
+
+```
+pre-commit install
+```
+
+A sequence of checks will be run for you, and you may have to add the fixed files back to the staged files.
+
+When it comes to docstrings we use [numpy style](https://numpydoc.readthedocs.io/en/latest/format.html) docstrings; for those who use
+Visual Studio Code, there is a great [extension](https://github.com/NilsJPWerner/autoDocstring) that can help with that. Install it and set the format to numpy and you should be good to go!
+
+Finally, if you want to have a direct chat with us, feel free to join our slack; just shoot us an email and we'll add you.
+
+# Citing
+
+If you find our work useful, do not hesitate to cite it:
+
+```
+@inproceedings{
+  tp3d,
+  title={Torch-Points3D: A Modular Multi-Task Framework for Reproducible Deep Learning on 3D Point Clouds},
+  author={Chaton, Thomas and Chaulet, Nicolas and Horache, Sofiane and Landrieu, Loic},
+  booktitle={2020 International Conference on 3D Vision (3DV)},
+  year={2020},
+  organization={IEEE},
+  url = {\url{https://github.com/nicolas-chaulet/torch-points3d}}
+}
+```
+
+and please also include a citation to the
+[models](https://github.com/nicolas-chaulet/torch-points3d#methods-currently-implemented)
+or the [datasets](https://github.com/nicolas-chaulet/torch-points3d#available-datasets)
+you have used in your experiments!
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/RSCNN_MSN.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/RSCNN_MSN.md new file mode 100644 index 00000000..232de707 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/RSCNN_MSN.md @@ -0,0 +1,700 @@ +# https://github.com/Yochengliu/Relation-Shape-CNN/blob/master/models/rscnn_msn_seg.py +``` +RSCNN_MSN( + (SA_modules): ModuleList( + (0): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (xyz_raising): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + 
(mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (xyz_raising): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (xyz_raising): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) + ) + ) + ) + ) + (1): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + 
(bn_rsconv): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) + ) + ) + ) + ) + (2): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): 
ReLU(inplace) + (mapping_func1): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) + ) + ) + ) + ) + (3): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + 
(bn_channel_raising): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(771, 512, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(771, 512, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(771, 512, kernel_size=(1,), stride=(1,)) + ) + ) + ) + ) + ) + (4): 
PointnetSAModule( + (groupers): ModuleList( + (0): GroupAll() + ) + (mlps): ModuleList( + (0): GloAvgConv( + (conv_avg): Conv2d(1539, 128, kernel_size=(1, 1), stride=(1, 1)) + (bn_avg): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + (5): PointnetSAModule( + (groupers): ModuleList( + (0): GroupAll() + ) + (mlps): ModuleList( + (0): GloAvgConv( + (conv_avg): Conv2d(771, 128, kernel_size=(1, 1), stride=(1, 1)) + (bn_avg): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + ) + (FP_modules): ModuleList( + (0): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + ) + ) + (1): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(704, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + ) + ) + (2): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(896, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): 
ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + ) + ) + (3): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(2304, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + ) + ) + ) + (FC_layer): Sequential( + (0): Conv1d( + (conv): Conv1d(400, 128, kernel_size=(1,), stride=(1,), bias=False) + (bn): BatchNorm1d( + (bn): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (1): Dropout(p=0.5) + (2): Conv1d( + (conv): Conv1d(128, 50, kernel_size=(1,), stride=(1,)) + ) + ) +) +Model size = %i 3,488,705 + +########################################################################### + +-1 -3 -2 +torch.Size([12, 64, 3]) torch.Size([12, 16, 3]) torch.Size([12, 768, 64]) torch.Size([12, 1536, 16]) SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(2304, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=1.8, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=1.8, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) +) +torch.Size([12, 512, 64, 1]) + + + +-2 -4 -3 +torch.Size([12, 256, 3]) torch.Size([12, 64, 3]) 
torch.Size([12, 384, 256]) torch.Size([12, 512, 64]) SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(896, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=1.8, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=1.8, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) +) +torch.Size([12, 512, 256, 1]) + + + +-3 -5 -4 +torch.Size([12, 1024, 3]) torch.Size([12, 256, 3]) torch.Size([12, 192, 1024]) torch.Size([12, 512, 256]) SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(704, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=1.8, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=1.8, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) +) +torch.Size([12, 256, 1024, 1]) + + + +-4 -6 -5 + +################################################################################### + +PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): 
Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (xyz_raising): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (xyz_raising): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (xyz_raising): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) + ) + ) + ) +) size = 2800 +PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + 
(RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): 
Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) + ) + ) + ) +) size = 34101 +PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + 
(bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) + ) + ) + ) +) size = 129525 +PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + (2): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(771, 512, kernel_size=(1,), stride=(1,)) + ) + ) + ) + (1): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(771, 512, 
kernel_size=(1,), stride=(1,)) + ) + ) + ) + (2): SharedRSConv( + (RSConvLayer0): RSConvLayer( + (RS_Conv): RSConv( + (bn_rsconv): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_channel_raising): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_xyz_raising): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (bn_mapping): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + (mapping_func1): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (mapping_func2): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (cr_mapping): Conv1d(771, 512, kernel_size=(1,), stride=(1,)) + ) + ) + ) + ) +) size = 504693 +PointnetSAModule( + (groupers): ModuleList( + (0): GroupAll() + ) + (mlps): ModuleList( + (0): GloAvgConv( + (conv_avg): Conv2d(1539, 128, kernel_size=(1, 1), stride=(1, 1)) + (bn_avg): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) +) size = 197376 +PointnetSAModule( + (groupers): ModuleList( + (0): GroupAll() + ) + (mlps): ModuleList( + (0): GloAvgConv( + (conv_avg): Conv2d(771, 128, kernel_size=(1, 1), stride=(1, 1)) + (bn_avg): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) +) size = 99072 +PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (bn): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + ) +) size = 49664 +PointnetFPModule( + (mlp): 
SharedMLP(
+    (layer0): Conv2d(
+      (conv): Conv2d(704, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
+      (bn): BatchNorm2d(
+        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
+      )
+      (activation): ReLU(inplace)
+    )
+    (layer1): Conv2d(
+      (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
+      (bn): BatchNorm2d(
+        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
+      )
+      (activation): ReLU(inplace)
+    )
+  )
+) size = 246784
+PointnetFPModule(
+  (mlp): SharedMLP(
+    (layer0): Conv2d(
+      (conv): Conv2d(896, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
+      (bn): BatchNorm2d(
+        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
+      )
+      (activation): ReLU(inplace)
+    )
+    (layer1): Conv2d(
+      (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
+      (bn): BatchNorm2d(
+        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
+      )
+      (activation): ReLU(inplace)
+    )
+  )
+) size = 722944
+PointnetFPModule(
+  (mlp): SharedMLP(
+    (layer0): Conv2d(
+      (conv): Conv2d(2304, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
+      (bn): BatchNorm2d(
+        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
+      )
+      (activation): ReLU(inplace)
+    )
+    (layer1): Conv2d(
+      (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
+      (bn): BatchNorm2d(
+        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
+      )
+      (activation): ReLU(inplace)
+    )
+  )
+) size = 1443840
+Model size = %i 3488705
+```
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/Res16UNet34C b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/Res16UNet34C
new file mode 100644
index 00000000..b3894cbb
--- /dev/null
+++ 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/Res16UNet34C
@@ -0,0 +1,120 @@
+
+
+https://github.com/chrischoy/SpatioTemporalSegmentation/issues/8
+
+0-10k: https://pastebin.com/eej7Vg7D
+10k-20k: https://pastebin.com/X04fhLYW
+20k-30k: https://pastebin.com/qPJEpKU0
+30k-40k: https://pastebin.com/H20w7W5K
+40k-50k: https://pastebin.com/5fbCnQxp
+50k-60k: https://pastebin.com/kHeYE1d7
+
+0-10k: https://pastebin.com/eej7Vg7D
+
+napoli115 09/11 14:59:47 model: Res16UNet34C
+napoli115 09/11 14:59:47 conv1_kernel_size: 3
+napoli115 09/11 14:59:47 weights: None
+napoli115 09/11 14:59:47 weights_for_inner_model: False
+napoli115 09/11 14:59:47 dilations: [1, 1, 1, 1]
+napoli115 09/11 14:59:47 wrapper_type: None
+napoli115 09/11 14:59:47 wrapper_region_type: 1
+napoli115 09/11 14:59:47 wrapper_kernel_size: 3
+napoli115 09/11 14:59:47 wrapper_lr: 0.1
+napoli115 09/11 14:59:47 meanfield_iterations: 10
+napoli115 09/11 14:59:47 crf_spatial_sigma: 1
+napoli115 09/11 14:59:47 crf_chromatic_sigma: 12
+napoli115 09/11 14:59:47 nonlinearity: ReLU
+napoli115 09/11 14:59:47 optimizer: SGD
+napoli115 09/11 14:59:47 lr: 0.1
+napoli115 09/11 14:59:47 sgd_momentum: 0.9
+napoli115 09/11 14:59:47 sgd_dampening: 0.1
+napoli115 09/11 14:59:47 adam_beta1: 0.9
+napoli115 09/11 14:59:47 adam_beta2: 0.999
+napoli115 09/11 14:59:47 weight_decay: 0.0001
+napoli115 09/11 14:59:47 param_histogram_freq: 100
+napoli115 09/11 14:59:47 save_param_histogram: False
+napoli115 09/11 14:59:47 iter_size: 1
+napoli115 09/11 14:59:47 bn_momentum: 0.02
+napoli115 09/11 14:59:47 scheduler: PolyLR
+napoli115 09/11 14:59:47 max_iter: 60000
+napoli115 09/11 14:59:47 step_size: 20000.0
+napoli115 09/11 14:59:47 step_gamma: 0.1
+napoli115 09/11 14:59:47 poly_power: 0.9
+napoli115 09/11 14:59:47 exp_gamma: 0.95
+napoli115 09/11 14:59:47 exp_step_size: 445
+napoli115 09/11 14:59:47 log_dir: ./outputs/ScannetSparseVoxelization2cmDataset/Res16UNet34C/SGD-l1e-1-b8-PolyLR-i60000--test/2019-09-11_14-59-43
+napoli115 09/11 14:59:47 data_dir: data
+napoli115 09/11 14:59:47 dataset: ScannetSparseVoxelization2cmDataset
+napoli115 09/11 14:59:47 temporal_dilation: 30
+napoli115 09/11 14:59:47 temporal_numseq: 3
+napoli115 09/11 14:59:47 point_lim: -1
+napoli115 09/11 14:59:47 pre_point_lim: -1
+napoli115 09/11 14:59:47 batch_size: 8
+napoli115 09/11 14:59:47 val_batch_size: 1
+napoli115 09/11 14:59:47 test_batch_size: 1
+napoli115 09/11 14:59:47 cache_data: False
+napoli115 09/11 14:59:47 threads: 1
+napoli115 09/11 14:59:47 val_threads: 1
+napoli115 09/11 14:59:47 ignore_label: 255
+napoli115 09/11 14:59:47 use_loc_feat: False
+napoli115 09/11 14:59:47 train_elastic_distortion: True
+napoli115 09/11 14:59:47 test_elastic_distortion: False
+napoli115 09/11 14:59:47 return_transformation: True
+napoli115 09/11 14:59:47 ignore_duplicate_class: False
+napoli115 09/11 14:59:47 partial_crop: 0.0
+napoli115 09/11 14:59:47 train_limit_numpoints: 1200000
+napoli115 09/11 14:59:47 varcity3d_path: /cvgl2/u/jgwak/Datasets/varcity3d
+napoli115 09/11 14:59:47 synthia_path: /cvgl/group/Synthia/synthia-processed/raw-pc-upright
+napoli115 09/11 14:59:47 synthia_online_path: /cvgl2/u/jgwak/Datasets/synthia_subsampled
+napoli115 09/11 14:59:47 scannet_path: /cvgl2/u/jgwak/Datasets/scannet
+napoli115 09/11 14:59:47 synthia_camera_path: /cvgl/group/Synthia/%s/CameraParams/
+napoli115 09/11 14:59:47 synthia_camera_intrinsic_file: intrinsics.txt
+napoli115 09/11 14:59:47 synthia_camera_extrinsics_file: Stereo_Right/Omni_F/%s.txt
+napoli115 09/11 14:59:47 stanford3d_path: /cvgl/group/Stanford3dDataset_v1.2/Parsed
+napoli115 09/11 14:59:47 stanford3d_online_path: /cvgl2/u/jgwak/Datasets/stanford_subsampled
+napoli115 09/11 14:59:47 pc_type: voxel
+napoli115 09/11 14:59:47 is_train: True
+napoli115 09/11 14:59:47 stat_freq: 10
+napoli115 09/11 14:59:47 test_stat_freq: 100
+napoli115 09/11 14:59:47 save_freq: 1000
+napoli115 09/11 14:59:47 val_freq: 1000
+napoli115 09/11 14:59:47 empty_cache_freq: 10
+napoli115 09/11 14:59:47 train_phase: train
+napoli115 09/11 14:59:47 val_phase: val
+napoli115 09/11 14:59:47 overwrite_weights: True
+napoli115 09/11 14:59:47 resume: None
+napoli115 09/11 14:59:47 resume_optimizer: True
+napoli115 09/11 14:59:47 eval_upsample: False
+napoli115 09/11 14:59:47 use_feat_aug: True
+napoli115 09/11 14:59:47 data_aug_color_trans_ratio: 0.1
+napoli115 09/11 14:59:47 data_aug_color_jitter_std: 0.05
+napoli115 09/11 14:59:47 data_aug_height_trans_std: 1
+napoli115 09/11 14:59:47 data_aug_height_jitter_std: 0.1
+napoli115 09/11 14:59:47 data_aug_normal_jitter_std: 0.01
+napoli115 09/11 14:59:47 normalize_color: True
+napoli115 09/11 14:59:47 data_aug_scale_min: 0.8
+napoli115 09/11 14:59:47 data_aug_scale_max: 1.2
+napoli115 09/11 14:59:47 data_aug_hue_max: 0.5
+napoli115 09/11 14:59:47 data_aug_saturation_max: 0.2
+napoli115 09/11 14:59:47 temporal_rand_dilation: False
+napoli115 09/11 14:59:47 temporal_rand_numseq: False
+napoli115 09/11 14:59:47 test_config: None
+napoli115 09/11 14:59:47 visualize: False
+napoli115 09/11 14:59:47 test_temporal_average: False
+napoli115 09/11 14:59:47 visualize_path: outputs/visualize
+napoli115 09/11 14:59:47 test_rotation: 1
+napoli115 09/11 14:59:47 test_rotation_save: False
+napoli115 09/11 14:59:47 test_rotation_save_dir: outputs/rotation_fulleval
+napoli115 09/11 14:59:47 save_prediction: False
+napoli115 09/11 14:59:47 save_pred_dir: outputs/pred
+napoli115 09/11 14:59:47 test_phase: val
+napoli115 09/11 14:59:47 evaluate_original_pointcloud: True
+napoli115 09/11 14:59:47 test_original_pointcloud: False
+napoli115 09/11 14:59:47 is_cuda: True
+napoli115 09/11 14:59:47 load_path:
+napoli115 09/11 14:59:47 log_step: 50
+napoli115 09/11 14:59:47 log_level: INFO
+napoli115 09/11 14:59:47 num_gpu: 1
+napoli115 09/11 14:59:47 seed: 123
+napoli115 09/11 14:59:47 debug: True
+napoli115 09/11 14:59:47 lenient_weight_loading: False
\ No newline at end of file
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/scannet2cmRes16UNet34C.txt b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/scannet2cmRes16UNet34C.txt
new file mode 100644
index 00000000..aef080d7
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/scannet2cmRes16UNet34C.txt
@@ -0,0 +1,5309 @@
+```
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/05 08:44:01 ===> Configurations
+thomas 04/05 08:44:01 model: Res16UNet34C
+thomas 04/05 08:44:01 conv1_kernel_size: 3
+thomas 04/05 08:44:01 weights: None
+thomas 04/05 08:44:01 weights_for_inner_model: False
+thomas 04/05 08:44:01 dilations: [1, 1, 1, 1]
+thomas 04/05 08:44:01 wrapper_type: None
+thomas 04/05 08:44:01 wrapper_region_type: 1
+thomas 04/05 08:44:01 wrapper_kernel_size: 3
+thomas 04/05 08:44:01 wrapper_lr: 0.1
+thomas 04/05 08:44:01 meanfield_iterations: 10
+thomas 04/05 08:44:01 crf_spatial_sigma: 1
+thomas 04/05 08:44:01 crf_chromatic_sigma: 12
+thomas 04/05 08:44:01 optimizer: SGD
+thomas 04/05 08:44:01 lr: 0.1
+thomas 04/05 08:44:01 sgd_momentum: 0.9
+thomas 04/05 08:44:01 sgd_dampening: 0.1
+thomas 04/05 08:44:01 adam_beta1: 0.9
+thomas 04/05 08:44:01 adam_beta2: 0.999
+thomas 04/05 08:44:01 weight_decay: 0.0001
+thomas 04/05 08:44:01 param_histogram_freq: 100
+thomas 04/05 08:44:01 save_param_histogram: False
+thomas 04/05 08:44:01 iter_size: 1
+thomas 04/05 08:44:01 bn_momentum: 0.02
+thomas 04/05 08:44:01 scheduler: PolyLR
+thomas 04/05 08:44:01 max_iter: 120000
+thomas 04/05 08:44:01 step_size: 20000.0
+thomas 04/05 08:44:01 step_gamma: 0.1
+thomas 04/05 08:44:01 poly_power: 0.9
+thomas 04/05 08:44:01 exp_gamma: 0.95
+thomas 04/05 08:44:01 exp_step_size: 445
+thomas 04/05 08:44:01 log_dir: ./outputs/ScanNet-default/2020-04-05_08-43-59
+thomas 04/05 08:44:01 data_dir: data
+thomas 04/05 08:44:01 dataset: ScannetVoxelization2cmDataset
+thomas 04/05 08:44:01 temporal_dilation: 30
+thomas 04/05 08:44:01 temporal_numseq: 3
+thomas 04/05 08:44:01 point_lim: -1
+thomas 04/05 08:44:01 pre_point_lim: -1
+thomas 04/05 08:44:01 batch_size: 4
+thomas 04/05 08:44:01 val_batch_size: 1
+thomas 04/05 08:44:01 test_batch_size: 1
+thomas 04/05 08:44:01 cache_data: False
+thomas 04/05 08:44:01 num_workers: 0
+thomas 04/05 08:44:01 num_val_workers: 1
+thomas 04/05 08:44:01 ignore_label: 255
+thomas 04/05 08:44:01 return_transformation: False
+thomas 04/05 08:44:01 ignore_duplicate_class: False
+thomas 04/05 08:44:01 partial_crop: 0.0
+thomas 04/05 08:44:01 train_limit_numpoints: 120000000
+thomas 04/05 08:44:01 synthia_path: /home/chrischoy/datasets/Synthia/Synthia4D
+thomas 04/05 08:44:01 synthia_camera_path: /home/chrischoy/datasets/Synthia/%s/CameraParams/
+thomas 04/05 08:44:01 synthia_camera_intrinsic_file: intrinsics.txt
+thomas 04/05 08:44:01 synthia_camera_extrinsics_file: Stereo_Right/Omni_F/%s.txt
+thomas 04/05 08:44:01 temporal_rand_dilation: False
+thomas 04/05 08:44:01 temporal_rand_numseq: False
+thomas 04/05 08:44:01 scannet_path: /home/tcn02/SpatioTemporalSegmentation/data/scannet/processed/train
+thomas 04/05 08:44:01 stanford3d_path: /home/chrischoy/datasets/Stanford3D
+thomas 04/05 08:44:01 is_train: True
+thomas 04/05 08:44:01 stat_freq: 40
+thomas 04/05 08:44:01 test_stat_freq: 100
+thomas 04/05 08:44:01 save_freq: 1000
+thomas 04/05 08:44:01 val_freq: 1000
+thomas 04/05 08:44:01 empty_cache_freq: 1
+thomas 04/05 08:44:01 train_phase: train
+thomas 04/05 08:44:01 val_phase: val
+thomas 04/05 08:44:01 overwrite_weights: True
+thomas 04/05 08:44:01 resume: None
+thomas 04/05 08:44:01 resume_optimizer: True
+thomas 04/05 08:44:01 eval_upsample: False
+thomas 04/05 08:44:01 lenient_weight_loading: False
+thomas 04/05 08:44:01 use_feat_aug: True
+thomas 04/05 08:44:01 data_aug_color_trans_ratio: 0.1
+thomas 04/05 08:44:01 data_aug_color_jitter_std: 0.05
+thomas 04/05 08:44:01 normalize_color: True
+thomas 04/05 08:44:01 data_aug_scale_min: 0.9
+thomas 04/05 08:44:01 data_aug_scale_max: 1.1
+thomas 04/05 08:44:01 data_aug_hue_max: 0.5
+thomas 04/05 08:44:01 data_aug_saturation_max: 0.2
+thomas 04/05 08:44:01 visualize: False
+thomas 04/05 08:44:01 test_temporal_average: False
+thomas 04/05 08:44:01 visualize_path: outputs/visualize
+thomas 04/05 08:44:01 save_prediction: False
+thomas 04/05 08:44:01 save_pred_dir: outputs/pred
+thomas 04/05 08:44:01 test_phase: test
+thomas 04/05 08:44:01 evaluate_original_pointcloud: False
+thomas 04/05 08:44:01 test_original_pointcloud: False
+thomas 04/05 08:44:01 is_cuda: True
+thomas 04/05 08:44:01 load_path:
+thomas 04/05 08:44:01 log_step: 50
+thomas 04/05 08:44:01 log_level: INFO
+thomas 04/05 08:44:01 num_gpu: 1
+thomas 04/05 08:44:01 seed: 123
+thomas 04/05 08:44:01 ===> Initializing dataloader
+thomas 04/05 08:44:01 Loading ScannetVoxelization2cmDataset: scannetv2_train.txt
+thomas 04/05 08:44:01 Loading ScannetVoxelization2cmDataset: scannetv2_val.txt
+thomas 04/05 08:44:01 ===> Building model
+thomas 04/05 08:44:01 ===> Number of trainable parameters: Res16UNet34C: 37846644
+thomas 04/05 08:44:01 Res16UNet34C(
+  (conv0p1s1): MinkowskiConvolution(in=3, out=32, region_type=RegionType.HYPERCUBE, kernel_size=[3, 3, 3], stride=[1, 1, 1], dilation=[1, 1, 1])
+  (bn0): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True)
+  (conv1p1s2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1])
+  (bn1): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True)
+  (block1): Sequential(
+    (0): BasicBlock(
+      (conv1): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1])
+
(norm1): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (conv2p2s2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn2): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block2): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=32, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=32, out=64, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=64, out=64, 
region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (2): BasicBlock( + (conv1): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (conv3p4s2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn3): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block3): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=64, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=64, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, 
track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (2): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (3): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (conv4p8s2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn4): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block4): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], 
dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=128, out=256, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (2): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (3): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, 
region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (4): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (5): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr4p16s2): MinkowskiConvolutionTranspose(in=256, out=256, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr4): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block5): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=384, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, 
affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=384, out=256, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr5p8s2): MinkowskiConvolutionTranspose(in=256, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr5): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block6): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=192, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=192, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, 
region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr6p4s2): MinkowskiConvolutionTranspose(in=128, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr6): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block7): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 
+ (relu): MinkowskiReLU() + ) + ) + (convtr7p2s2): MinkowskiConvolutionTranspose(in=96, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr7): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block8): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (final): MinkowskiConvolution(in=96, out=20, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (relu): MinkowskiReLU() +) +thomas 04/05 08:44:04 ===> Start training +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last 
learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 08:44:09 ===> Epoch[1](1/301): Loss 3.2784 LR: 1.000e-01 Score 2.309 Data time: 1.8454, Total iter time: 4.9082 +thomas 04/05 08:47:53 ===> Epoch[1](40/301): Loss 1.8782 LR: 9.997e-02 Score 53.877 Data time: 2.2507, Total iter time: 5.6748 +thomas 04/05 08:51:58 ===> Epoch[1](80/301): Loss 1.6332 LR: 9.994e-02 Score 57.308 Data time: 2.3812, Total iter time: 6.0507 +thomas 04/05 08:55:50 ===> Epoch[1](120/301): Loss 1.5430 LR: 9.991e-02 Score 58.973 Data time: 2.1889, Total iter time: 5.7207 +thomas 04/05 08:59:52 ===> Epoch[1](160/301): Loss 1.4221 LR: 9.988e-02 Score 62.160 Data time: 2.3323, Total iter time: 5.9758 +thomas 04/05 09:03:41 ===> Epoch[1](200/301): Loss 1.3073 LR: 9.985e-02 Score 63.635 Data time: 2.2323, Total iter time: 5.6586 +thomas 04/05 09:07:29 ===> Epoch[1](240/301): Loss 1.3006 LR: 9.982e-02 Score 64.300 Data time: 2.2042, Total iter time: 5.6180 +thomas 04/05 09:11:40 ===> Epoch[1](280/301): Loss 1.2764 LR: 9.979e-02 Score 63.855 Data time: 2.4001, Total iter time: 6.1932 +thomas 04/05 09:15:29 ===> Epoch[2](320/301): Loss 1.2719 LR: 9.976e-02 Score 64.261 Data time: 2.2204, Total iter time: 5.6490 +thomas 04/05 09:19:15 ===> Epoch[2](360/301): Loss 1.3089 LR: 9.973e-02 Score 63.245 Data time: 2.2231, Total iter time: 5.5954 +thomas 04/05 09:23:13 ===> Epoch[2](400/301): Loss 1.2654 LR: 9.970e-02 Score 63.663 Data time: 2.2663, Total iter time: 5.8699 +thomas 04/05 09:27:23 ===> Epoch[2](440/301): Loss 1.2275 LR: 9.967e-02 Score 64.995 Data time: 2.3759, Total iter time: 6.1595 +thomas 04/05 09:31:32 ===> Epoch[2](480/301): Loss 1.2501 LR: 9.964e-02 Score 63.775 Data time: 2.3695, Total iter time: 6.1490 +thomas 04/05 09:35:47 ===> Epoch[2](520/301): Loss 1.2418 LR: 9.961e-02 Score 64.512 Data time: 2.4498, Total iter time: 6.3161 +thomas 04/05 09:39:31 ===> Epoch[2](560/301): Loss 
1.1755 LR: 9.958e-02 Score 66.491 Data time: 2.1174, Total iter time: 5.5105 +thomas 04/05 09:43:09 ===> Epoch[2](600/301): Loss 1.1675 LR: 9.955e-02 Score 66.994 Data time: 2.0973, Total iter time: 5.3893 +thomas 04/05 09:46:59 ===> Epoch[3](640/301): Loss 1.1624 LR: 9.952e-02 Score 67.238 Data time: 2.2449, Total iter time: 5.6792 +thomas 04/05 09:50:49 ===> Epoch[3](680/301): Loss 1.1450 LR: 9.949e-02 Score 66.192 Data time: 2.2493, Total iter time: 5.6659 +thomas 04/05 09:54:39 ===> Epoch[3](720/301): Loss 1.1002 LR: 9.946e-02 Score 68.062 Data time: 2.2216, Total iter time: 5.6741 +thomas 04/05 09:58:30 ===> Epoch[3](760/301): Loss 1.2009 LR: 9.943e-02 Score 64.716 Data time: 2.2236, Total iter time: 5.7189 +thomas 04/05 10:02:31 ===> Epoch[3](800/301): Loss 1.1467 LR: 9.940e-02 Score 66.367 Data time: 2.3583, Total iter time: 5.9433 +thomas 04/05 10:06:23 ===> Epoch[3](840/301): Loss 1.1495 LR: 9.937e-02 Score 65.246 Data time: 2.2826, Total iter time: 5.7345 +thomas 04/05 10:10:02 ===> Epoch[3](880/301): Loss 1.1395 LR: 9.934e-02 Score 66.991 Data time: 2.1160, Total iter time: 5.4080 +thomas 04/05 10:14:04 ===> Epoch[4](920/301): Loss 1.1564 LR: 9.931e-02 Score 65.284 Data time: 2.3252, Total iter time: 5.9589 +thomas 04/05 10:18:07 ===> Epoch[4](960/301): Loss 1.1264 LR: 9.928e-02 Score 65.845 Data time: 2.3876, Total iter time: 6.0161 +thomas 04/05 10:22:05 ===> Epoch[4](1000/301): Loss 1.0797 LR: 9.925e-02 Score 67.672 Data time: 2.3049, Total iter time: 5.8703 +thomas 04/05 10:22:05 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 10:22:05 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 10:24:04 101/312: Data time: 0.0031, Iter time: 1.2758 Loss 1.354 (AVG: 1.145) Score 53.142 (AVG: 67.739) mIOU 17.390 mAP 33.949 mAcc 25.391 +IOU: 66.719 94.161 0.007 12.908 55.595 16.099 42.702 1.765 0.004 24.490 0.000 0.000 0.002 17.925 0.000 0.000 0.000 0.000 0.000 
15.433 +mAP: 66.525 95.824 21.679 43.104 73.190 39.787 54.169 21.246 18.300 30.534 4.324 16.410 23.982 41.758 5.918 36.173 58.433 5.380 4.496 17.757 +mAcc: 93.163 98.367 0.007 23.677 59.620 57.971 73.278 1.993 0.004 39.896 0.000 0.000 0.002 24.904 0.000 0.000 0.000 0.000 0.000 34.940 + +thomas 04/05 10:25:57 201/312: Data time: 0.0027, Iter time: 0.7195 Loss 1.119 (AVG: 1.165) Score 75.195 (AVG: 67.514) mIOU 17.660 mAP 34.748 mAcc 25.739 +IOU: 66.877 94.544 0.007 11.978 56.304 22.864 39.868 2.012 0.006 25.626 0.000 0.000 0.003 17.076 0.000 0.000 0.000 0.000 0.000 16.044 +mAP: 67.258 94.921 23.302 43.175 74.240 45.727 53.397 21.855 21.439 36.263 4.198 19.823 23.681 43.051 5.653 33.833 53.418 4.043 5.562 20.127 +mAcc: 92.931 98.569 0.007 24.772 59.981 61.516 71.459 2.272 0.006 39.250 0.000 0.000 0.003 27.507 0.000 0.000 0.000 0.000 0.000 36.506 + +thomas 04/05 10:27:49 301/312: Data time: 0.0024, Iter time: 0.5095 Loss 1.129 (AVG: 1.158) Score 62.027 (AVG: 67.364) mIOU 17.858 mAP 35.157 mAcc 25.878 +IOU: 65.255 94.866 0.021 12.421 55.301 24.198 41.542 2.056 0.009 28.755 0.000 0.000 0.003 17.325 0.000 0.000 0.000 0.000 0.000 15.400 +mAP: 65.914 95.202 24.930 42.321 74.297 45.651 54.504 22.827 20.591 38.783 4.790 20.689 22.446 44.444 5.779 38.430 51.884 4.938 5.388 19.336 +mAcc: 92.565 98.590 0.021 25.639 58.821 61.685 72.078 2.321 0.009 42.492 0.000 0.000 0.003 28.888 0.000 0.000 0.000 0.000 0.000 34.458 + +thomas 04/05 10:28:01 312/312: Data time: 0.0028, Iter time: 0.7050 Loss 1.508 (AVG: 1.158) Score 55.619 (AVG: 67.310) mIOU 17.839 mAP 35.138 mAcc 25.810 +IOU: 65.264 94.912 0.022 12.788 54.971 24.097 41.017 2.060 0.009 28.730 0.000 0.000 0.004 17.496 0.000 0.000 0.000 0.000 0.000 15.415 +mAP: 65.919 95.245 24.866 42.783 73.448 45.396 54.369 22.838 20.514 38.696 4.714 20.379 22.334 45.057 5.754 38.430 52.483 4.892 5.407 19.245 +mAcc: 92.680 98.606 0.022 25.872 58.580 61.291 71.406 2.328 0.009 42.604 0.000 0.000 0.004 28.163 0.000 0.000 0.000 0.000 0.000 34.647 + 
+thomas 04/05 10:28:01 Finished test. Elapsed time: 356.0146 +thomas 04/05 10:28:01 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 10:28:01 Current best mIoU: 17.839 at iter 1000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 10:31:36 ===> Epoch[4](1040/301): Loss 1.0302 LR: 9.922e-02 Score 70.237 Data time: 2.1096, Total iter time: 5.2952 +thomas 04/05 10:35:32 ===> Epoch[4](1080/301): Loss 1.0577 LR: 9.919e-02 Score 68.692 Data time: 2.2888, Total iter time: 5.8224 +thomas 04/05 10:39:20 ===> Epoch[4](1120/301): Loss 1.1285 LR: 9.916e-02 Score 65.422 Data time: 2.2090, Total iter time: 5.6365 +thomas 04/05 10:43:07 ===> Epoch[4](1160/301): Loss 1.0705 LR: 9.913e-02 Score 68.311 Data time: 2.1980, Total iter time: 5.6002 +thomas 04/05 10:46:58 ===> Epoch[4](1200/301): Loss 1.0615 LR: 9.910e-02 Score 68.489 Data time: 2.2962, Total iter time: 5.6925 +thomas 04/05 10:50:56 ===> Epoch[5](1240/301): Loss 1.0239 LR: 9.907e-02 Score 69.178 Data time: 2.3320, Total iter time: 5.8643 +thomas 04/05 10:55:11 ===> Epoch[5](1280/301): Loss 1.0180 LR: 9.904e-02 Score 68.777 Data time: 2.4479, Total iter time: 6.2843 +thomas 04/05 10:58:44 ===> Epoch[5](1320/301): Loss 1.0504 LR: 9.901e-02 Score 68.547 Data time: 2.0683, Total iter time: 5.2556 +thomas 04/05 11:02:48 ===> Epoch[5](1360/301): Loss 1.0391 LR: 9.898e-02 Score 68.635 Data time: 2.3928, Total iter time: 6.0257 +thomas 04/05 11:06:29 ===> Epoch[5](1400/301): Loss 0.9856 LR: 9.895e-02 Score 70.176 Data time: 2.1567, Total iter time: 5.4286 +thomas 04/05 11:10:18 ===> Epoch[5](1440/301): Loss 1.0202 LR: 9.892e-02 Score 69.038 Data time: 2.2261, Total iter time: 5.6603 
+thomas 04/05 11:14:00 ===> Epoch[5](1480/301): Loss 1.0312 LR: 9.889e-02 Score 68.299 Data time: 2.1459, Total iter time: 5.4822 +thomas 04/05 11:17:56 ===> Epoch[6](1520/301): Loss 1.0369 LR: 9.886e-02 Score 69.138 Data time: 2.3332, Total iter time: 5.8216 +thomas 04/05 11:21:27 ===> Epoch[6](1560/301): Loss 1.0242 LR: 9.883e-02 Score 69.354 Data time: 2.0502, Total iter time: 5.2168 +thomas 04/05 11:25:17 ===> Epoch[6](1600/301): Loss 0.9737 LR: 9.880e-02 Score 70.474 Data time: 2.1978, Total iter time: 5.6739 +thomas 04/05 11:29:21 ===> Epoch[6](1640/301): Loss 0.9785 LR: 9.877e-02 Score 70.282 Data time: 2.3351, Total iter time: 6.0236 +thomas 04/05 11:33:14 ===> Epoch[6](1680/301): Loss 0.9162 LR: 9.874e-02 Score 72.186 Data time: 2.3001, Total iter time: 5.7507 +thomas 04/05 11:37:04 ===> Epoch[6](1720/301): Loss 0.9213 LR: 9.871e-02 Score 72.006 Data time: 2.2281, Total iter time: 5.6830 +thomas 04/05 11:40:53 ===> Epoch[6](1760/301): Loss 0.9532 LR: 9.868e-02 Score 70.798 Data time: 2.2049, Total iter time: 5.6531 +thomas 04/05 11:44:46 ===> Epoch[6](1800/301): Loss 0.9415 LR: 9.865e-02 Score 71.234 Data time: 2.2320, Total iter time: 5.7382 +thomas 04/05 11:48:43 ===> Epoch[7](1840/301): Loss 0.9355 LR: 9.862e-02 Score 71.428 Data time: 2.3254, Total iter time: 5.8398 +thomas 04/05 11:52:23 ===> Epoch[7](1880/301): Loss 0.9188 LR: 9.859e-02 Score 71.779 Data time: 2.1259, Total iter time: 5.4307 +thomas 04/05 11:56:23 ===> Epoch[7](1920/301): Loss 0.8686 LR: 9.856e-02 Score 73.071 Data time: 2.3113, Total iter time: 5.9153 +thomas 04/05 12:00:34 ===> Epoch[7](1960/301): Loss 0.9329 LR: 9.853e-02 Score 71.225 Data time: 2.3935, Total iter time: 6.1986 +thomas 04/05 12:04:48 ===> Epoch[7](2000/301): Loss 0.8948 LR: 9.850e-02 Score 72.969 Data time: 2.4696, Total iter time: 6.2816 +thomas 04/05 12:04:50 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 12:04:50 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 12:06:40 101/312: Data time: 0.0025, Iter time: 0.6128 Loss 1.411 (AVG: 1.031) Score 56.993 (AVG: 70.179) mIOU 22.850 mAP 47.791 mAcc 33.808 +IOU: 67.603 94.758 3.163 29.257 64.431 39.295 46.594 13.984 2.202 24.739 0.000 5.658 16.444 37.892 0.000 0.000 0.000 0.000 0.000 10.970 +mAP: 70.483 96.630 34.771 59.903 79.384 59.881 59.732 29.090 34.559 56.895 9.526 36.390 45.819 65.695 20.090 56.664 54.379 18.915 40.028 26.979 +mAcc: 86.321 98.386 3.198 61.145 70.074 81.806 59.728 18.683 2.217 88.200 0.000 6.697 18.906 66.902 0.000 0.000 0.000 0.000 0.000 13.901 + +thomas 04/05 12:08:39 201/312: Data time: 0.0031, Iter time: 0.4086 Loss 0.906 (AVG: 1.046) Score 71.180 (AVG: 69.552) mIOU 23.051 mAP 48.894 mAcc 34.174 +IOU: 66.382 95.046 4.282 27.005 65.198 38.000 52.310 16.201 2.089 25.548 0.000 9.352 14.361 33.701 0.000 0.000 0.000 0.000 0.000 11.553 +mAP: 69.344 96.575 36.951 61.010 79.227 61.647 60.761 31.153 32.654 60.150 8.656 42.928 41.164 65.342 21.674 62.812 58.212 18.512 43.379 25.735 +mAcc: 85.643 98.604 4.322 61.414 71.370 78.380 64.654 21.380 2.107 89.892 0.000 10.258 17.379 63.581 0.000 0.000 0.000 0.000 0.000 14.496 + +thomas 04/05 12:10:27 301/312: Data time: 0.0028, Iter time: 0.6258 Loss 0.343 (AVG: 1.038) Score 89.874 (AVG: 69.718) mIOU 23.225 mAP 48.913 mAcc 34.280 +IOU: 66.903 95.063 5.446 26.202 65.034 42.372 51.027 15.187 1.972 24.820 0.000 11.987 13.997 32.925 0.000 0.000 0.000 0.000 0.000 11.569 +mAP: 69.707 96.509 39.607 58.566 79.405 64.987 59.226 30.285 31.921 59.123 9.478 45.634 40.219 61.778 19.644 61.037 62.333 19.305 42.926 26.561 +mAcc: 85.806 98.566 5.497 60.155 71.881 80.224 62.983 20.588 1.993 91.451 0.000 12.880 17.177 61.550 0.000 0.000 0.000 0.000 0.000 14.845 + +thomas 04/05 12:10:41 312/312: Data time: 0.0026, Iter time: 0.5065 Loss 0.427 (AVG: 1.033) Score 88.206 (AVG: 69.794) mIOU 23.246 mAP 49.080 mAcc 34.341 +IOU: 66.800 95.051 5.489 25.609 66.023 42.717 50.847 
15.326 2.025 24.748 0.000 11.798 14.646 32.069 0.000 0.000 0.000 0.000 0.000 11.769 +mAP: 69.787 96.534 39.215 58.566 79.730 63.623 60.083 30.644 32.393 59.554 9.473 45.150 41.421 61.778 19.723 62.234 62.608 19.099 43.062 26.922 +mAcc: 85.580 98.557 5.541 60.155 72.773 80.344 63.240 20.760 2.047 90.489 0.000 12.736 17.807 61.550 0.000 0.000 0.000 0.000 0.000 15.236 + +thomas 04/05 12:10:41 Finished test. Elapsed time: 350.7428 +thomas 04/05 12:10:42 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 12:10:42 Current best mIoU: 23.246 at iter 2000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 12:14:13 ===> Epoch[7](2040/301): Loss 0.8868 LR: 9.847e-02 Score 72.891 Data time: 2.0151, Total iter time: 5.2057 +thomas 04/05 12:17:58 ===> Epoch[7](2080/301): Loss 0.9100 LR: 9.844e-02 Score 72.872 Data time: 2.2228, Total iter time: 5.5521 +thomas 04/05 12:21:40 ===> Epoch[8](2120/301): Loss 0.9031 LR: 9.841e-02 Score 72.457 Data time: 2.1932, Total iter time: 5.4719 +thomas 04/05 12:25:25 ===> Epoch[8](2160/301): Loss 0.8779 LR: 9.838e-02 Score 72.705 Data time: 2.1811, Total iter time: 5.5605 +thomas 04/05 12:29:35 ===> Epoch[8](2200/301): Loss 0.9011 LR: 9.835e-02 Score 71.792 Data time: 2.3894, Total iter time: 6.1790 +thomas 04/05 12:33:36 ===> Epoch[8](2240/301): Loss 0.8195 LR: 9.832e-02 Score 74.428 Data time: 2.3471, Total iter time: 5.9191 +thomas 04/05 12:37:32 ===> Epoch[8](2280/301): Loss 0.8848 LR: 9.829e-02 Score 73.645 Data time: 2.3018, Total iter time: 5.8341 +thomas 04/05 12:41:19 ===> Epoch[8](2320/301): Loss 0.8346 LR: 9.826e-02 Score 74.407 Data time: 2.1974, Total iter time: 5.6063 +thomas 04/05 
12:45:01 ===> Epoch[8](2360/301): Loss 0.8609 LR: 9.823e-02 Score 73.572 Data time: 2.1357, Total iter time: 5.4782 +thomas 04/05 12:48:48 ===> Epoch[8](2400/301): Loss 0.8417 LR: 9.820e-02 Score 73.889 Data time: 2.2503, Total iter time: 5.5936 +thomas 04/05 12:52:41 ===> Epoch[9](2440/301): Loss 0.8048 LR: 9.817e-02 Score 76.044 Data time: 2.2399, Total iter time: 5.7517 +thomas 04/05 12:56:24 ===> Epoch[9](2480/301): Loss 0.8349 LR: 9.814e-02 Score 74.090 Data time: 2.1211, Total iter time: 5.4912 +thomas 04/05 13:00:19 ===> Epoch[9](2520/301): Loss 0.8506 LR: 9.811e-02 Score 73.475 Data time: 2.2802, Total iter time: 5.8022 +thomas 04/05 13:04:15 ===> Epoch[9](2560/301): Loss 0.8125 LR: 9.808e-02 Score 74.794 Data time: 2.2975, Total iter time: 5.8371 +thomas 04/05 13:08:11 ===> Epoch[9](2600/301): Loss 0.8043 LR: 9.805e-02 Score 75.193 Data time: 2.2757, Total iter time: 5.8024 +thomas 04/05 13:12:05 ===> Epoch[9](2640/301): Loss 0.8294 LR: 9.802e-02 Score 74.282 Data time: 2.2622, Total iter time: 5.7729 +thomas 04/05 13:16:00 ===> Epoch[9](2680/301): Loss 0.8834 LR: 9.799e-02 Score 73.526 Data time: 2.3013, Total iter time: 5.8115 +thomas 04/05 13:19:59 ===> Epoch[10](2720/301): Loss 0.8249 LR: 9.796e-02 Score 74.267 Data time: 2.3919, Total iter time: 5.8785 +thomas 04/05 13:24:20 ===> Epoch[10](2760/301): Loss 0.7553 LR: 9.793e-02 Score 76.779 Data time: 2.4885, Total iter time: 6.4332 +thomas 04/05 13:28:37 ===> Epoch[10](2800/301): Loss 0.7699 LR: 9.790e-02 Score 75.932 Data time: 2.4883, Total iter time: 6.3600 +thomas 04/05 13:32:36 ===> Epoch[10](2840/301): Loss 0.8170 LR: 9.787e-02 Score 75.270 Data time: 2.3643, Total iter time: 5.8851 +thomas 04/05 13:36:32 ===> Epoch[10](2880/301): Loss 0.8597 LR: 9.784e-02 Score 73.666 Data time: 2.3023, Total iter time: 5.8290 +thomas 04/05 13:40:19 ===> Epoch[10](2920/301): Loss 0.8283 LR: 9.781e-02 Score 75.246 Data time: 2.2082, Total iter time: 5.6026 +thomas 04/05 13:44:13 ===> Epoch[10](2960/301): Loss 
0.7437 LR: 9.778e-02 Score 76.670 Data time: 2.2583, Total iter time: 5.7755 +thomas 04/05 13:48:20 ===> Epoch[10](3000/301): Loss 0.7766 LR: 9.775e-02 Score 75.811 Data time: 2.4358, Total iter time: 6.1053 +thomas 04/05 13:48:21 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 13:48:22 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 13:50:28 101/312: Data time: 0.0025, Iter time: 1.1494 Loss 1.404 (AVG: 0.870) Score 42.944 (AVG: 74.990) mIOU 32.036 mAP 56.946 mAcc 42.854 +IOU: 66.993 95.590 27.711 45.894 78.338 55.876 61.541 21.567 25.652 51.132 0.033 6.538 29.273 50.535 9.853 0.000 0.000 0.000 0.065 14.133 +mAP: 73.931 96.868 49.366 60.804 83.541 73.427 68.143 37.986 43.259 66.328 17.675 53.576 52.694 67.712 39.787 49.000 57.605 41.438 72.241 33.548 +mAcc: 81.056 98.702 65.879 63.118 88.613 90.085 66.585 33.293 32.269 71.549 0.033 6.606 41.708 67.246 28.663 0.000 0.000 0.000 0.065 21.612 + +thomas 04/05 13:52:24 201/312: Data time: 0.1199, Iter time: 0.5425 Loss 1.192 (AVG: 0.896) Score 55.255 (AVG: 73.487) mIOU 30.598 mAP 56.536 mAcc 41.813 +IOU: 67.226 95.585 28.387 38.416 72.066 56.051 55.701 22.762 24.911 50.804 0.024 5.634 28.429 39.279 13.446 0.000 0.050 0.000 0.233 12.962 +mAP: 73.818 96.447 49.799 55.365 80.592 73.428 65.353 38.652 40.809 59.033 16.991 52.666 51.914 68.023 40.676 59.883 64.959 38.622 71.541 32.141 +mAcc: 81.311 98.592 66.974 57.654 84.996 87.077 60.517 34.623 33.666 75.950 0.024 5.723 44.188 54.470 32.228 0.000 0.050 0.000 0.233 17.991 + +thomas 04/05 13:54:20 301/312: Data time: 0.0031, Iter time: 0.4169 Loss 0.626 (AVG: 0.884) Score 81.946 (AVG: 73.960) mIOU 30.894 mAP 56.305 mAcc 42.201 +IOU: 67.803 95.469 31.068 38.404 71.877 55.951 53.076 21.903 26.742 49.888 0.021 6.797 29.477 38.441 15.601 0.000 0.039 0.000 0.194 15.135 +mAP: 73.461 96.452 50.986 54.354 81.665 73.388 63.391 40.067 41.959 59.102 16.055 50.965 52.478 66.938 36.307 
62.197 65.242 37.892 70.647 32.547 +mAcc: 81.503 98.390 70.329 57.875 84.011 85.335 59.173 32.348 36.114 76.485 0.021 6.905 44.100 54.180 35.480 0.000 0.039 0.000 0.194 21.536 + +thomas 04/05 13:54:36 312/312: Data time: 0.0027, Iter time: 0.3268 Loss 0.350 (AVG: 0.879) Score 86.531 (AVG: 74.187) mIOU 31.014 mAP 56.378 mAcc 42.265 +IOU: 67.938 95.468 30.730 41.073 71.989 56.680 53.237 21.645 26.278 49.868 0.020 6.796 29.401 38.626 15.008 0.000 0.038 0.000 0.189 15.288 +mAP: 73.495 96.444 50.421 55.759 81.885 73.974 63.459 40.028 41.872 59.736 15.416 50.965 53.024 66.989 35.649 61.148 65.180 38.304 71.047 32.768 +mAcc: 81.673 98.430 70.008 60.046 84.108 86.036 59.473 32.062 35.307 76.576 0.020 6.905 43.919 53.884 34.879 0.000 0.038 0.000 0.189 21.739 + +thomas 04/05 13:54:36 Finished test. Elapsed time: 374.8163 +thomas 04/05 13:54:38 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 13:54:38 Current best mIoU: 31.014 at iter 3000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 13:58:25 ===> Epoch[11](3040/301): Loss 0.7508 LR: 9.772e-02 Score 77.179 Data time: 2.1989, Total iter time: 5.5901 +thomas 04/05 14:02:18 ===> Epoch[11](3080/301): Loss 0.7982 LR: 9.769e-02 Score 75.426 Data time: 2.3052, Total iter time: 5.7562 +thomas 04/05 14:06:28 ===> Epoch[11](3120/301): Loss 0.7886 LR: 9.766e-02 Score 75.699 Data time: 2.4725, Total iter time: 6.1855 +thomas 04/05 14:10:48 ===> Epoch[11](3160/301): Loss 0.7540 LR: 9.763e-02 Score 76.742 Data time: 2.5336, Total iter time: 6.4085 +thomas 04/05 14:14:48 ===> Epoch[11](3200/301): Loss 0.7700 LR: 9.760e-02 Score 76.573 Data time: 2.3761, Total iter time: 5.9260 +thomas 04/05 14:19:02 ===> Epoch[11](3240/301): Loss 0.7485 LR: 9.757e-02 Score 76.941 Data time: 2.5156, Total iter time: 6.2866 +thomas 04/05 14:23:01 ===> Epoch[11](3280/301): Loss 0.7128 LR: 9.754e-02 Score 78.740 Data time: 2.3508, Total iter time: 5.8939 +thomas 04/05 14:27:26 ===> Epoch[12](3320/301): Loss 0.6738 LR: 9.751e-02 Score 78.928 Data time: 2.5695, Total iter time: 6.5272 +thomas 04/05 14:31:24 ===> Epoch[12](3360/301): Loss 0.7630 LR: 9.748e-02 Score 76.479 Data time: 2.2803, Total iter time: 5.8754 +thomas 04/05 14:35:27 ===> Epoch[12](3400/301): Loss 0.7230 LR: 9.745e-02 Score 77.582 Data time: 2.3540, Total iter time: 5.9819 +thomas 04/05 14:39:24 ===> Epoch[12](3440/301): Loss 0.7707 LR: 9.742e-02 Score 76.493 Data time: 2.3504, Total iter time: 5.8637 +thomas 04/05 14:43:23 ===> Epoch[12](3480/301): Loss 0.7041 LR: 9.739e-02 Score 77.891 Data time: 2.3492, Total iter time: 5.9182 +thomas 04/05 14:47:41 ===> Epoch[12](3520/301): Loss 0.7616 LR: 9.736e-02 Score 76.404 Data time: 2.5281, Total iter time: 6.3723 +thomas 04/05 14:52:01 ===> Epoch[12](3560/301): Loss 0.7410 LR: 9.733e-02 Score 77.184 Data time: 2.5412, Total iter time: 6.4097 +thomas 04/05 14:55:53 ===> Epoch[12](3600/301): Loss 0.6844 LR: 9.730e-02 Score 79.420 
Data time: 2.2332, Total iter time: 5.7419 +thomas 04/05 14:59:53 ===> Epoch[13](3640/301): Loss 0.7611 LR: 9.727e-02 Score 76.774 Data time: 2.3349, Total iter time: 5.9383 +thomas 04/05 15:04:07 ===> Epoch[13](3680/301): Loss 0.7268 LR: 9.724e-02 Score 77.747 Data time: 2.4977, Total iter time: 6.2722 +thomas 04/05 15:08:19 ===> Epoch[13](3720/301): Loss 0.6489 LR: 9.721e-02 Score 79.872 Data time: 2.4864, Total iter time: 6.2043 +thomas 04/05 15:12:17 ===> Epoch[13](3760/301): Loss 0.7190 LR: 9.718e-02 Score 77.783 Data time: 2.3440, Total iter time: 5.8859 +thomas 04/05 15:16:16 ===> Epoch[13](3800/301): Loss 0.7493 LR: 9.715e-02 Score 77.099 Data time: 2.3541, Total iter time: 5.8907 +thomas 04/05 15:20:25 ===> Epoch[13](3840/301): Loss 0.7190 LR: 9.712e-02 Score 78.362 Data time: 2.3810, Total iter time: 6.1396 +thomas 04/05 15:24:32 ===> Epoch[13](3880/301): Loss 0.6971 LR: 9.709e-02 Score 78.151 Data time: 2.3966, Total iter time: 6.0896 +thomas 04/05 15:28:40 ===> Epoch[14](3920/301): Loss 0.7080 LR: 9.706e-02 Score 78.176 Data time: 2.4074, Total iter time: 6.1100 +thomas 04/05 15:32:47 ===> Epoch[14](3960/301): Loss 0.6787 LR: 9.703e-02 Score 79.375 Data time: 2.4338, Total iter time: 6.1050 +thomas 04/05 15:36:51 ===> Epoch[14](4000/301): Loss 0.7248 LR: 9.699e-02 Score 77.853 Data time: 2.4095, Total iter time: 6.0261 +thomas 04/05 15:36:53 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 15:36:53 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 15:38:55 101/312: Data time: 0.0031, Iter time: 0.4201 Loss 0.572 (AVG: 0.777) Score 79.162 (AVG: 76.637) mIOU 36.808 mAP 59.734 mAcc 49.295 +IOU: 68.475 93.982 41.589 45.125 76.632 65.266 60.993 22.078 26.225 45.920 0.051 32.730 34.428 47.796 22.196 0.094 20.036 7.764 2.837 21.943 +mAP: 77.727 95.408 48.605 62.306 87.317 74.567 62.811 47.139 40.551 60.666 17.538 53.222 52.017 69.380 43.525 67.680 75.958 58.288 
64.288 35.683 +mAcc: 82.992 98.675 52.967 70.166 94.044 75.635 70.347 30.390 39.497 81.462 0.053 79.222 51.881 49.071 45.341 0.094 20.049 7.899 2.837 33.276 + +thomas 04/05 15:40:52 201/312: Data time: 0.0029, Iter time: 0.5868 Loss 0.640 (AVG: 0.789) Score 81.470 (AVG: 76.538) mIOU 36.853 mAP 60.639 mAcc 48.772 +IOU: 68.329 94.211 40.577 43.741 76.100 64.228 61.124 21.382 24.264 51.173 0.025 35.661 36.169 41.839 21.821 0.055 19.181 11.698 3.102 22.375 +mAP: 75.544 95.795 51.162 61.967 85.490 72.525 66.971 43.652 40.225 62.468 17.874 55.062 53.895 69.890 41.134 68.114 82.073 62.688 68.733 37.509 +mAcc: 83.075 98.691 52.492 70.966 94.241 73.168 70.190 29.624 34.029 86.340 0.028 75.103 54.869 43.092 41.525 0.055 19.318 11.966 3.102 33.565 + +thomas 04/05 15:43:02 301/312: Data time: 0.0027, Iter time: 0.7757 Loss 0.137 (AVG: 0.792) Score 98.386 (AVG: 76.343) mIOU 36.748 mAP 60.211 mAcc 48.600 +IOU: 68.150 94.262 42.459 40.231 74.383 64.801 56.616 21.999 24.852 52.295 0.230 42.906 36.331 42.735 20.245 0.046 19.779 10.288 2.974 19.378 +mAP: 74.591 95.714 51.678 58.164 85.326 75.507 62.892 44.362 40.200 61.671 16.286 59.950 53.472 68.199 42.868 64.128 81.606 60.862 70.280 36.471 +mAcc: 83.104 98.704 55.956 70.026 94.096 71.977 67.299 30.129 35.392 87.452 0.244 78.586 56.743 45.005 37.905 0.046 20.013 10.474 2.974 25.873 + +thomas 04/05 15:43:14 312/312: Data time: 0.0025, Iter time: 0.7762 Loss 0.882 (AVG: 0.799) Score 69.847 (AVG: 76.071) mIOU 36.402 mAP 60.091 mAcc 48.396 +IOU: 67.897 94.274 42.163 39.278 74.327 64.684 55.802 21.649 24.242 51.639 0.223 42.638 35.651 42.374 20.099 0.047 19.705 10.007 3.100 18.251 +mAP: 74.599 95.804 51.719 58.746 85.481 75.613 62.606 44.307 39.956 61.671 16.554 58.231 53.299 68.376 41.947 64.331 81.656 59.832 70.942 36.152 +mAcc: 83.107 98.727 55.944 70.495 93.932 72.281 66.623 29.494 34.774 87.452 0.235 78.235 56.745 45.027 37.728 0.047 19.934 10.183 3.100 23.847 + +thomas 04/05 15:43:14 Finished test. 
Elapsed time: 381.3205 +thomas 04/05 15:43:16 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 15:43:16 Current best mIoU: 36.402 at iter 4000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 15:47:25 ===> Epoch[14](4040/301): Loss 0.6983 LR: 9.696e-02 Score 78.088 Data time: 2.4423, Total iter time: 6.1528 +thomas 04/05 15:51:30 ===> Epoch[14](4080/301): Loss 0.6620 LR: 9.693e-02 Score 79.688 Data time: 2.4172, Total iter time: 6.0481 +thomas 04/05 15:55:50 ===> Epoch[14](4120/301): Loss 0.6725 LR: 9.690e-02 Score 79.295 Data time: 2.5546, Total iter time: 6.4162 +thomas 04/05 15:59:59 ===> Epoch[14](4160/301): Loss 0.6669 LR: 9.687e-02 Score 79.489 Data time: 2.4201, Total iter time: 6.1510 +thomas 04/05 16:03:49 ===> Epoch[14](4200/301): Loss 0.7143 LR: 9.684e-02 Score 77.935 Data time: 2.2470, Total iter time: 5.6698 +thomas 04/05 16:07:45 ===> Epoch[15](4240/301): Loss 0.6652 LR: 9.681e-02 Score 79.431 Data time: 2.2886, Total iter time: 5.8106 +thomas 04/05 16:12:23 ===> Epoch[15](4280/301): Loss 0.6785 LR: 9.678e-02 Score 78.836 Data time: 2.6601, Total iter time: 6.8627 +thomas 04/05 16:16:22 ===> Epoch[15](4320/301): Loss 0.6726 LR: 9.675e-02 Score 79.537 Data time: 2.3667, Total iter time: 5.9045 +thomas 04/05 16:20:29 ===> Epoch[15](4360/301): Loss 0.7103 LR: 9.672e-02 Score 78.900 Data time: 2.4308, Total iter time: 6.0870 +thomas 04/05 16:24:33 ===> Epoch[15](4400/301): Loss 0.6381 LR: 9.669e-02 Score 79.647 Data time: 2.3993, Total iter time: 6.0468 +thomas 04/05 16:28:26 ===> Epoch[15](4440/301): Loss 0.6673 LR: 9.666e-02 Score 79.497 Data time: 2.2504, Total iter time: 5.7424 +thomas 04/05 16:32:25 
===> Epoch[15](4480/301): Loss 0.6767 LR: 9.663e-02 Score 79.418 Data time: 2.2872, Total iter time: 5.9104 +thomas 04/05 16:36:23 ===> Epoch[16](4520/301): Loss 0.7178 LR: 9.660e-02 Score 78.121 Data time: 2.3001, Total iter time: 5.8616 +thomas 04/05 16:40:46 ===> Epoch[16](4560/301): Loss 0.7039 LR: 9.657e-02 Score 78.236 Data time: 2.5534, Total iter time: 6.4941 +thomas 04/05 16:44:45 ===> Epoch[16](4600/301): Loss 0.6486 LR: 9.654e-02 Score 79.682 Data time: 2.3926, Total iter time: 5.8993 +thomas 04/05 16:49:01 ===> Epoch[16](4640/301): Loss 0.6607 LR: 9.651e-02 Score 79.564 Data time: 2.4909, Total iter time: 6.3186 +thomas 04/05 16:53:02 ===> Epoch[16](4680/301): Loss 0.7141 LR: 9.648e-02 Score 78.011 Data time: 2.3152, Total iter time: 5.9349 +thomas 04/05 16:57:08 ===> Epoch[16](4720/301): Loss 0.6153 LR: 9.645e-02 Score 81.263 Data time: 2.3739, Total iter time: 6.0680 +thomas 04/05 17:01:06 ===> Epoch[16](4760/301): Loss 0.6505 LR: 9.642e-02 Score 79.859 Data time: 2.3373, Total iter time: 5.8862 +thomas 04/05 17:05:30 ===> Epoch[16](4800/301): Loss 0.6464 LR: 9.639e-02 Score 80.268 Data time: 2.6246, Total iter time: 6.5288 +thomas 04/05 17:09:33 ===> Epoch[17](4840/301): Loss 0.6740 LR: 9.636e-02 Score 79.420 Data time: 2.3936, Total iter time: 5.9965 +thomas 04/05 17:13:22 ===> Epoch[17](4880/301): Loss 0.6136 LR: 9.633e-02 Score 80.982 Data time: 2.2445, Total iter time: 5.6469 +thomas 04/05 17:17:12 ===> Epoch[17](4920/301): Loss 0.6422 LR: 9.630e-02 Score 79.973 Data time: 2.1952, Total iter time: 5.6824 +thomas 04/05 17:21:05 ===> Epoch[17](4960/301): Loss 0.6639 LR: 9.627e-02 Score 79.204 Data time: 2.2254, Total iter time: 5.7446 +thomas 04/05 17:25:19 ===> Epoch[17](5000/301): Loss 0.6594 LR: 9.624e-02 Score 79.850 Data time: 2.4709, Total iter time: 6.2568 +thomas 04/05 17:25:20 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 17:25:20 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 17:27:24 101/312: Data time: 0.0032, Iter time: 0.4434 Loss 0.816 (AVG: 0.774) Score 70.343 (AVG: 76.263) mIOU 40.893 mAP 63.724 mAcc 51.264 +IOU: 67.358 95.011 41.037 54.841 76.440 67.413 52.551 28.370 31.130 53.251 7.325 17.996 40.959 59.834 21.520 2.654 48.693 3.439 30.262 17.781 +mAP: 75.694 95.100 47.370 73.149 85.715 78.776 59.198 42.142 55.170 64.999 21.055 49.001 56.663 75.637 47.061 75.551 86.296 67.021 80.586 38.292 +mAcc: 83.776 98.486 67.250 80.155 83.869 92.714 57.635 57.549 36.085 58.597 10.319 18.274 68.505 82.770 22.898 2.709 49.055 3.440 30.294 20.909 + +thomas 04/05 17:29:30 201/312: Data time: 0.0030, Iter time: 1.2400 Loss 0.344 (AVG: 0.777) Score 91.043 (AVG: 76.560) mIOU 39.439 mAP 61.485 mAcc 49.485 +IOU: 68.661 95.100 38.837 53.582 78.170 72.392 52.758 30.518 27.916 57.347 5.970 15.423 40.536 46.089 17.143 2.688 42.617 2.025 24.326 16.678 +mAP: 75.220 94.783 49.591 67.617 84.850 78.657 60.107 47.235 47.249 61.783 21.382 48.676 58.500 64.838 38.572 72.956 89.219 66.082 66.132 36.243 +mAcc: 84.441 98.384 66.222 79.015 86.473 91.270 59.652 60.008 34.153 63.041 8.767 15.759 62.594 70.202 17.962 2.706 42.898 2.025 24.346 19.782 + +thomas 04/05 17:31:40 301/312: Data time: 0.0027, Iter time: 0.3547 Loss 0.752 (AVG: 0.729) Score 80.771 (AVG: 78.105) mIOU 40.231 mAP 62.078 mAcc 50.240 +IOU: 70.092 95.447 39.418 53.666 80.821 72.446 55.791 31.415 27.490 62.851 6.735 16.675 38.715 49.926 13.838 2.230 37.799 1.783 30.189 17.294 +mAP: 75.529 95.209 50.020 66.239 85.868 79.314 63.740 48.044 46.392 60.448 23.202 52.040 59.661 69.959 38.673 68.129 85.500 65.262 71.353 36.986 +mAcc: 84.843 98.457 67.324 81.700 89.397 90.570 62.126 63.043 33.217 68.383 10.244 16.939 59.262 71.944 14.390 2.243 38.113 1.783 30.209 20.619 + +thomas 04/05 17:31:52 312/312: Data time: 0.0024, Iter time: 0.5505 Loss 0.517 (AVG: 0.728) Score 81.915 (AVG: 78.163) mIOU 40.306 mAP 62.318 mAcc 50.264 +IOU: 70.260 95.400 
39.818 54.033 81.006 72.541 55.979 31.188 26.897 62.452 6.748 17.030 38.980 50.156 14.287 2.230 37.799 1.808 30.189 17.316 +mAP: 75.679 95.248 50.929 66.641 86.116 79.383 64.370 48.211 46.059 60.979 23.431 52.723 59.740 69.312 41.006 68.129 85.500 64.849 71.353 36.705 +mAcc: 84.946 98.450 67.544 81.777 89.485 90.341 62.232 62.657 32.456 68.094 10.487 17.311 59.455 72.145 14.857 2.243 38.113 1.808 30.209 20.675 + +thomas 04/05 17:31:52 Finished test. Elapsed time: 391.7184 +thomas 04/05 17:31:53 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 17:31:53 Current best mIoU: 40.306 at iter 5000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 17:35:58 ===> Epoch[17](5040/301): Loss 0.6240 LR: 9.621e-02 Score 81.032 Data time: 2.3887, Total iter time: 6.0494 +thomas 04/05 17:39:58 ===> Epoch[17](5080/301): Loss 0.6308 LR: 9.618e-02 Score 80.468 Data time: 2.3056, Total iter time: 5.9232 +thomas 04/05 17:43:57 ===> Epoch[18](5120/301): Loss 0.6439 LR: 9.615e-02 Score 79.835 Data time: 2.3070, Total iter time: 5.8958 +thomas 04/05 17:48:06 ===> Epoch[18](5160/301): Loss 0.6338 LR: 9.612e-02 Score 80.523 Data time: 2.3958, Total iter time: 6.1540 +thomas 04/05 17:52:16 ===> Epoch[18](5200/301): Loss 0.6403 LR: 9.609e-02 Score 80.192 Data time: 2.4325, Total iter time: 6.1656 +thomas 04/05 17:56:38 ===> Epoch[18](5240/301): Loss 0.6605 LR: 9.606e-02 Score 79.368 Data time: 2.5909, Total iter time: 6.4676 +thomas 04/05 18:00:29 ===> Epoch[18](5280/301): Loss 0.5642 LR: 9.603e-02 Score 82.725 Data time: 2.2612, Total iter time: 5.7030 +thomas 04/05 18:04:28 ===> Epoch[18](5320/301): Loss 0.6463 LR: 9.600e-02 Score 80.035 Data 
time: 2.2952, Total iter time: 5.9121 +thomas 04/05 18:08:11 ===> Epoch[18](5360/301): Loss 0.6052 LR: 9.597e-02 Score 81.079 Data time: 2.1702, Total iter time: 5.5070 +thomas 04/05 18:11:58 ===> Epoch[18](5400/301): Loss 0.6461 LR: 9.594e-02 Score 80.130 Data time: 2.2121, Total iter time: 5.5896 +thomas 04/05 18:15:48 ===> Epoch[19](5440/301): Loss 0.6650 LR: 9.591e-02 Score 79.669 Data time: 2.2782, Total iter time: 5.6717 +thomas 04/05 18:19:55 ===> Epoch[19](5480/301): Loss 0.6111 LR: 9.588e-02 Score 81.402 Data time: 2.4448, Total iter time: 6.1017 +thomas 04/05 18:23:49 ===> Epoch[19](5520/301): Loss 0.6029 LR: 9.585e-02 Score 81.868 Data time: 2.2837, Total iter time: 5.7863 +thomas 04/05 18:27:57 ===> Epoch[19](5560/301): Loss 0.6098 LR: 9.582e-02 Score 80.896 Data time: 2.3858, Total iter time: 6.1115 +thomas 04/05 18:32:01 ===> Epoch[19](5600/301): Loss 0.6161 LR: 9.579e-02 Score 80.907 Data time: 2.3645, Total iter time: 6.0173 +thomas 04/05 18:36:09 ===> Epoch[19](5640/301): Loss 0.6020 LR: 9.576e-02 Score 81.324 Data time: 2.3927, Total iter time: 6.1423 +thomas 04/05 18:40:22 ===> Epoch[19](5680/301): Loss 0.6370 LR: 9.573e-02 Score 80.235 Data time: 2.5218, Total iter time: 6.2414 +thomas 04/05 18:44:32 ===> Epoch[20](5720/301): Loss 0.6711 LR: 9.570e-02 Score 79.468 Data time: 2.4845, Total iter time: 6.1630 +thomas 04/05 18:48:29 ===> Epoch[20](5760/301): Loss 0.6373 LR: 9.567e-02 Score 80.556 Data time: 2.3172, Total iter time: 5.8456 +thomas 04/05 18:52:51 ===> Epoch[20](5800/301): Loss 0.5620 LR: 9.564e-02 Score 82.487 Data time: 2.5298, Total iter time: 6.4846 +thomas 04/05 18:56:39 ===> Epoch[20](5840/301): Loss 0.6463 LR: 9.561e-02 Score 80.369 Data time: 2.1977, Total iter time: 5.6241 +thomas 04/05 19:00:55 ===> Epoch[20](5880/301): Loss 0.6602 LR: 9.558e-02 Score 79.595 Data time: 2.4518, Total iter time: 6.3125 +thomas 04/05 19:05:09 ===> Epoch[20](5920/301): Loss 0.6202 LR: 9.555e-02 Score 80.517 Data time: 2.5118, Total iter time: 
6.2766 +thomas 04/05 19:09:23 ===> Epoch[20](5960/301): Loss 0.6143 LR: 9.552e-02 Score 81.190 Data time: 2.5039, Total iter time: 6.2517 +thomas 04/05 19:13:17 ===> Epoch[20](6000/301): Loss 0.6073 LR: 9.549e-02 Score 81.182 Data time: 2.2607, Total iter time: 5.7669 +thomas 04/05 19:13:18 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 19:13:18 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 19:15:23 101/312: Data time: 0.1177, Iter time: 0.4619 Loss 0.705 (AVG: 0.736) Score 75.429 (AVG: 77.327) mIOU 44.289 mAP 64.057 mAcc 56.687 +IOU: 71.779 95.005 35.943 62.274 73.402 63.553 44.697 30.926 25.824 32.836 4.971 52.372 30.481 55.347 11.135 30.672 67.691 24.428 47.153 25.295 +mAP: 79.116 96.134 54.808 67.733 83.807 80.464 64.457 45.624 45.779 60.805 35.196 63.289 52.293 78.496 30.063 83.891 89.055 64.272 65.266 40.598 +mAcc: 83.180 98.714 73.592 76.042 87.312 92.209 47.697 54.091 27.323 46.654 5.594 80.275 40.926 79.897 31.931 31.892 69.145 25.140 48.288 33.826 + +thomas 04/05 19:17:19 201/312: Data time: 0.0023, Iter time: 0.4546 Loss 0.462 (AVG: 0.716) Score 87.490 (AVG: 78.192) mIOU 46.560 mAP 65.408 mAcc 59.227 +IOU: 71.688 95.392 37.671 60.601 76.908 61.885 51.891 31.166 22.924 54.410 7.236 46.770 35.680 48.574 20.265 35.442 66.631 30.434 50.005 25.620 +mAP: 78.328 96.619 55.059 64.966 85.580 79.866 65.321 48.742 46.651 57.023 33.917 59.805 56.345 71.208 41.347 83.827 91.027 73.941 77.070 41.511 +mAcc: 83.231 98.650 73.655 76.130 89.757 92.572 55.244 55.618 24.523 69.512 8.019 71.745 44.391 72.968 47.673 36.752 67.689 31.475 50.886 34.057 + +thomas 04/05 19:19:27 301/312: Data time: 0.0032, Iter time: 0.8994 Loss 0.399 (AVG: 0.723) Score 91.420 (AVG: 77.992) mIOU 46.132 mAP 65.276 mAcc 58.860 +IOU: 70.784 95.434 37.440 55.048 78.777 63.015 52.013 32.382 22.453 54.205 5.729 46.315 35.293 55.436 21.062 32.927 67.745 25.352 50.479 20.748 +mAP: 76.725 96.980 53.165 
64.764 86.079 82.018 65.609 49.906 46.712 55.230 29.837 60.466 57.370 74.292 42.173 83.447 90.628 71.078 77.428 41.608 +mAcc: 82.832 98.623 73.375 77.540 90.862 93.148 55.254 58.036 23.862 68.983 6.284 71.282 45.664 78.638 45.009 34.926 68.767 26.147 51.304 26.673 + +thomas 04/05 19:19:37 312/312: Data time: 0.0036, Iter time: 0.6733 Loss 0.550 (AVG: 0.723) Score 83.793 (AVG: 77.999) mIOU 45.986 mAP 65.206 mAcc 58.742 +IOU: 70.786 95.362 37.131 55.353 78.369 63.517 51.719 32.482 22.690 54.104 5.714 44.949 35.256 54.968 21.028 32.696 67.107 25.453 50.102 20.926 +mAP: 76.548 96.915 53.023 64.980 85.814 81.537 65.915 49.464 46.534 55.990 29.542 59.966 57.371 73.797 42.173 83.302 90.788 71.278 77.836 41.350 +mAcc: 82.941 98.547 72.372 77.847 90.337 93.314 54.954 57.952 24.073 68.552 6.268 71.075 45.714 79.051 45.009 34.602 68.103 26.251 50.898 26.975 + +thomas 04/05 19:19:37 Finished test. Elapsed time: 379.3508 +thomas 04/05 19:19:39 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 19:19:39 Current best mIoU: 45.986 at iter 6000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 19:23:41 ===> Epoch[21](6040/301): Loss 0.6262 LR: 9.546e-02 Score 80.855 Data time: 2.3486, Total iter time: 5.9588 +thomas 04/05 19:28:08 ===> Epoch[21](6080/301): Loss 0.6076 LR: 9.543e-02 Score 80.723 Data time: 2.6003, Total iter time: 6.6010 +thomas 04/05 19:32:23 ===> Epoch[21](6120/301): Loss 0.6231 LR: 9.540e-02 Score 80.672 Data time: 2.4982, Total iter time: 6.2906 +thomas 04/05 19:36:18 ===> Epoch[21](6160/301): Loss 0.6042 LR: 9.537e-02 Score 81.418 Data time: 2.3107, Total iter time: 5.7938 +thomas 04/05 19:40:14 ===> Epoch[21](6200/301): Loss 0.5839 LR: 9.534e-02 Score 81.751 Data time: 2.2804, Total iter time: 5.8155 +thomas 04/05 19:44:00 ===> Epoch[21](6240/301): Loss 0.5672 LR: 9.531e-02 Score 82.644 Data time: 2.2012, Total iter time: 5.5937 +thomas 04/05 19:48:16 ===> Epoch[21](6280/301): Loss 0.5500 LR: 9.528e-02 Score 82.800 Data time: 2.4672, Total iter time: 6.3104 +thomas 04/05 19:52:21 ===> Epoch[21](6320/301): Loss 0.5878 LR: 9.525e-02 Score 82.114 Data time: 2.4014, Total iter time: 6.0460 +thomas 04/05 19:56:35 ===> Epoch[22](6360/301): Loss 0.5720 LR: 9.522e-02 Score 82.649 Data time: 2.5076, Total iter time: 6.2817 +thomas 04/05 20:00:30 ===> Epoch[22](6400/301): Loss 0.6041 LR: 9.519e-02 Score 81.369 Data time: 2.2822, Total iter time: 5.8027 +thomas 04/05 20:04:29 ===> Epoch[22](6440/301): Loss 0.6050 LR: 9.516e-02 Score 81.071 Data time: 2.3087, Total iter time: 5.8852 +thomas 04/05 20:08:26 ===> Epoch[22](6480/301): Loss 0.6378 LR: 9.513e-02 Score 80.435 Data time: 2.2642, Total iter time: 5.8480 +thomas 04/05 20:12:15 ===> Epoch[22](6520/301): Loss 0.5842 LR: 9.510e-02 Score 81.921 Data time: 2.2112, Total iter time: 5.6455 +thomas 04/05 20:16:47 ===> Epoch[22](6560/301): Loss 0.5560 LR: 9.507e-02 Score 82.971 Data time: 2.6833, Total iter time: 6.7209 +thomas 04/05 20:21:13 ===> Epoch[22](6600/301): Loss 0.5955 LR: 9.504e-02 Score 81.206 
Data time: 2.6312, Total iter time: 6.5750 +thomas 04/05 20:25:14 ===> Epoch[23](6640/301): Loss 0.5938 LR: 9.501e-02 Score 81.447 Data time: 2.3393, Total iter time: 5.9598 +thomas 04/05 20:29:20 ===> Epoch[23](6680/301): Loss 0.6023 LR: 9.498e-02 Score 81.583 Data time: 2.3654, Total iter time: 6.0689 +thomas 04/05 20:33:27 ===> Epoch[23](6720/301): Loss 0.5752 LR: 9.495e-02 Score 82.477 Data time: 2.3800, Total iter time: 6.1119 +thomas 04/05 20:37:39 ===> Epoch[23](6760/301): Loss 0.6180 LR: 9.492e-02 Score 81.128 Data time: 2.4430, Total iter time: 6.2241 +thomas 04/05 20:41:48 ===> Epoch[23](6800/301): Loss 0.5496 LR: 9.489e-02 Score 82.928 Data time: 2.4642, Total iter time: 6.1249 +thomas 04/05 20:45:45 ===> Epoch[23](6840/301): Loss 0.6445 LR: 9.486e-02 Score 80.094 Data time: 2.3370, Total iter time: 5.8673 +thomas 04/05 20:49:46 ===> Epoch[23](6880/301): Loss 0.5941 LR: 9.482e-02 Score 81.799 Data time: 2.3053, Total iter time: 5.9387 +thomas 04/05 20:53:46 ===> Epoch[23](6920/301): Loss 0.5875 LR: 9.479e-02 Score 81.713 Data time: 2.2920, Total iter time: 5.9345 +thomas 04/05 20:57:50 ===> Epoch[24](6960/301): Loss 0.5970 LR: 9.476e-02 Score 81.391 Data time: 2.3450, Total iter time: 6.0088 +thomas 04/05 21:02:02 ===> Epoch[24](7000/301): Loss 0.6395 LR: 9.473e-02 Score 80.305 Data time: 2.4695, Total iter time: 6.2359 +thomas 04/05 21:02:04 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 21:02:04 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 21:04:20 101/312: Data time: 0.0028, Iter time: 0.7225 Loss 0.698 (AVG: 0.691) Score 78.447 (AVG: 79.350) mIOU 45.127 mAP 64.793 mAcc 56.510 +IOU: 71.527 95.320 43.564 58.117 77.355 67.613 56.919 30.953 31.509 64.047 1.593 50.739 42.934 51.101 31.200 4.148 64.364 19.741 19.535 20.268 +mAP: 76.913 97.178 49.204 63.634 83.568 83.078 66.373 51.441 48.921 61.322 12.374 61.312 66.276 79.523 49.298 75.398 92.939 69.910 
68.517 38.682 +mAcc: 85.463 97.623 61.734 70.514 82.758 93.620 68.575 64.328 34.361 78.269 2.055 68.875 79.079 67.516 38.794 4.346 65.445 19.921 19.742 27.179 + +thomas 04/05 21:06:28 201/312: Data time: 0.0066, Iter time: 1.4345 Loss 0.566 (AVG: 0.704) Score 82.111 (AVG: 79.004) mIOU 46.028 mAP 65.289 mAcc 57.583 +IOU: 71.543 95.324 44.420 56.528 75.326 60.455 56.822 33.215 29.833 66.584 2.910 49.105 42.055 50.468 30.599 10.959 65.035 23.571 34.397 21.412 +mAP: 76.333 97.358 52.781 61.891 82.897 83.483 62.266 52.466 44.798 67.881 21.960 59.732 65.826 74.191 44.972 81.102 91.155 71.339 71.142 42.206 +mAcc: 84.973 97.873 63.232 69.630 80.861 93.736 64.446 67.138 32.906 84.572 3.599 65.830 77.472 60.505 39.971 11.167 65.878 23.933 34.929 29.003 + +thomas 04/05 21:08:40 301/312: Data time: 0.0029, Iter time: 0.7644 Loss 0.796 (AVG: 0.685) Score 76.120 (AVG: 78.972) mIOU 47.006 mAP 65.811 mAcc 58.529 +IOU: 71.091 95.511 46.671 59.874 75.686 58.002 56.852 32.463 33.810 63.267 3.266 51.295 41.933 54.603 30.464 15.023 66.044 25.684 35.457 23.118 +mAP: 76.418 97.700 53.810 66.338 83.449 82.298 60.947 50.879 48.573 66.602 22.014 56.124 65.787 73.803 47.653 82.242 91.587 72.712 75.190 42.103 +mAcc: 84.480 97.902 64.537 73.191 80.950 94.196 63.603 66.112 38.361 79.716 4.002 68.452 77.198 61.301 40.528 15.547 66.861 26.141 35.847 31.663 + +thomas 04/05 21:08:53 312/312: Data time: 0.0021, Iter time: 0.4819 Loss 1.335 (AVG: 0.682) Score 68.538 (AVG: 79.098) mIOU 47.084 mAP 65.648 mAcc 58.637 +IOU: 71.374 95.537 46.422 59.675 75.344 56.694 57.202 32.792 33.156 65.938 3.262 51.206 41.780 55.975 30.319 14.492 66.681 25.757 35.133 22.947 +mAP: 76.540 97.610 53.973 66.188 83.507 81.172 61.510 51.147 47.742 66.045 22.120 56.124 65.297 74.331 47.653 82.042 91.812 73.058 72.713 42.369 +mAcc: 84.697 97.929 64.156 72.828 80.569 94.203 63.785 66.722 37.555 81.649 3.993 68.452 77.290 62.714 40.528 14.929 67.477 26.219 35.507 31.545 + +thomas 04/05 21:08:53 Finished test. 
Elapsed time: 408.8758 +thomas 04/05 21:08:54 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 21:08:54 Current best mIoU: 47.084 at iter 7000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 21:12:57 ===> Epoch[24](7040/301): Loss 0.5815 LR: 9.470e-02 Score 81.891 Data time: 2.3622, Total iter time: 5.9980 +thomas 04/05 21:16:48 ===> Epoch[24](7080/301): Loss 0.5716 LR: 9.467e-02 Score 82.436 Data time: 2.2321, Total iter time: 5.7054 +thomas 04/05 21:20:35 ===> Epoch[24](7120/301): Loss 0.5349 LR: 9.464e-02 Score 83.703 Data time: 2.1900, Total iter time: 5.6110 +thomas 04/05 21:24:44 ===> Epoch[24](7160/301): Loss 0.5521 LR: 9.461e-02 Score 82.851 Data time: 2.4347, Total iter time: 6.1626 +thomas 04/05 21:29:02 ===> Epoch[24](7200/301): Loss 0.5519 LR: 9.458e-02 Score 83.414 Data time: 2.6078, Total iter time: 6.3547 +thomas 04/05 21:33:13 ===> Epoch[25](7240/301): Loss 0.5766 LR: 9.455e-02 Score 82.539 Data time: 2.4576, Total iter time: 6.2152 +thomas 04/05 21:37:18 ===> Epoch[25](7280/301): Loss 0.5876 LR: 9.452e-02 Score 81.854 Data time: 2.3161, Total iter time: 6.0504 +thomas 04/05 21:41:22 ===> Epoch[25](7320/301): Loss 0.5453 LR: 9.449e-02 Score 83.340 Data time: 2.3428, Total iter time: 6.0118 +thomas 04/05 21:45:24 ===> Epoch[25](7360/301): Loss 0.5522 LR: 9.446e-02 Score 83.063 Data time: 2.3212, Total iter time: 5.9562 +thomas 04/05 21:49:39 ===> Epoch[25](7400/301): Loss 0.5128 LR: 9.443e-02 Score 84.079 Data time: 2.5028, Total iter time: 6.2890 +thomas 04/05 21:53:50 ===> Epoch[25](7440/301): Loss 0.5675 LR: 9.440e-02 Score 82.387 Data time: 2.5139, Total iter time: 6.1974 +thomas 04/05 21:58:01 
===> Epoch[25](7480/301): Loss 0.5615 LR: 9.437e-02 Score 82.936 Data time: 2.4644, Total iter time: 6.1815 +thomas 04/05 22:01:44 ===> Epoch[25](7520/301): Loss 0.5861 LR: 9.434e-02 Score 81.555 Data time: 2.1674, Total iter time: 5.4998 +thomas 04/05 22:05:57 ===> Epoch[26](7560/301): Loss 0.5805 LR: 9.431e-02 Score 81.864 Data time: 2.4664, Total iter time: 6.2483 +thomas 04/05 22:10:00 ===> Epoch[26](7600/301): Loss 0.5498 LR: 9.428e-02 Score 82.543 Data time: 2.3310, Total iter time: 6.0015 +thomas 04/05 22:14:12 ===> Epoch[26](7640/301): Loss 0.5177 LR: 9.425e-02 Score 83.819 Data time: 2.4340, Total iter time: 6.2186 +thomas 04/05 22:18:19 ===> Epoch[26](7680/301): Loss 0.5487 LR: 9.422e-02 Score 82.912 Data time: 2.4527, Total iter time: 6.1146 +thomas 04/05 22:22:25 ===> Epoch[26](7720/301): Loss 0.6059 LR: 9.419e-02 Score 81.238 Data time: 2.3562, Total iter time: 6.0764 +thomas 04/05 22:26:28 ===> Epoch[26](7760/301): Loss 0.5287 LR: 9.416e-02 Score 83.401 Data time: 2.3023, Total iter time: 5.9853 +thomas 04/05 22:29:59 ===> Epoch[26](7800/301): Loss 0.5588 LR: 9.413e-02 Score 82.739 Data time: 2.0374, Total iter time: 5.2007 +thomas 04/05 22:33:38 ===> Epoch[27](7840/301): Loss 0.5716 LR: 9.410e-02 Score 82.522 Data time: 2.0929, Total iter time: 5.4184 +thomas 04/05 22:37:53 ===> Epoch[27](7880/301): Loss 0.5466 LR: 9.407e-02 Score 83.038 Data time: 2.5301, Total iter time: 6.3005 +thomas 04/05 22:42:10 ===> Epoch[27](7920/301): Loss 0.5902 LR: 9.404e-02 Score 81.911 Data time: 2.5356, Total iter time: 6.3397 +thomas 04/05 22:46:14 ===> Epoch[27](7960/301): Loss 0.5351 LR: 9.401e-02 Score 83.539 Data time: 2.3676, Total iter time: 6.0307 +thomas 04/05 22:50:05 ===> Epoch[27](8000/301): Loss 0.4951 LR: 9.398e-02 Score 84.729 Data time: 2.2168, Total iter time: 5.7008 +thomas 04/05 22:50:07 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/05 22:50:07 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/05 22:52:18 101/312: Data time: 0.0025, Iter time: 0.8422 Loss 0.318 (AVG: 0.695) Score 91.367 (AVG: 80.547) mIOU 48.168 mAP 66.246 mAcc 58.903 +IOU: 72.256 96.233 48.908 47.560 81.705 58.621 66.623 40.080 21.060 76.341 3.503 33.908 34.935 53.402 37.658 27.854 78.785 18.678 42.175 23.067 +mAP: 75.029 97.115 58.226 76.392 84.120 85.160 68.555 52.676 47.445 72.227 34.657 49.141 53.471 60.773 46.376 85.547 92.049 63.182 75.125 47.653 +mAcc: 89.327 98.698 69.693 84.029 88.966 97.924 79.189 62.412 21.706 92.234 5.223 36.004 45.118 61.006 43.734 30.912 81.842 19.026 42.223 28.787 + +thomas 04/05 22:54:08 201/312: Data time: 0.0030, Iter time: 0.5346 Loss 0.658 (AVG: 0.692) Score 81.369 (AVG: 80.549) mIOU 48.124 mAP 64.819 mAcc 58.172 +IOU: 73.238 95.962 50.019 57.653 77.463 57.880 61.316 36.416 19.827 69.392 4.957 37.192 34.746 53.241 38.458 33.470 76.887 17.912 42.375 24.069 +mAP: 76.171 97.210 55.733 68.186 82.512 82.601 64.332 52.154 44.570 71.440 29.089 51.120 51.235 63.672 47.097 81.988 92.911 66.392 72.591 45.383 +mAcc: 88.693 98.630 68.170 79.272 86.346 98.425 74.815 58.905 20.435 88.556 6.217 39.901 42.738 60.795 43.548 36.028 79.626 18.205 42.403 31.730 + +thomas 04/05 22:56:02 301/312: Data time: 0.0028, Iter time: 0.5253 Loss 0.979 (AVG: 0.713) Score 72.725 (AVG: 79.917) mIOU 47.470 mAP 65.068 mAcc 57.630 +IOU: 72.468 95.556 47.721 57.726 77.528 54.283 60.238 35.407 22.293 67.010 4.986 37.317 33.725 60.252 33.287 36.111 74.917 15.715 40.391 22.471 +mAP: 75.546 97.046 54.242 68.337 83.753 82.729 64.384 51.521 44.801 72.051 28.522 52.312 51.750 69.506 44.507 82.630 90.738 66.512 75.866 44.598 +mAcc: 88.566 98.444 65.426 78.330 86.498 97.927 74.264 57.066 22.948 86.482 6.004 40.041 42.361 68.252 38.771 39.080 77.047 15.938 40.421 28.736 + +thomas 04/05 22:56:15 312/312: Data time: 0.0025, Iter time: 0.4474 Loss 1.023 (AVG: 0.708) Score 69.994 (AVG: 80.097) mIOU 47.478 mAP 65.052 mAcc 57.561 +IOU: 
72.634 95.526 47.660 57.395 78.449 54.930 60.828 35.341 21.964 67.318 4.889 37.317 33.465 59.752 33.286 35.091 75.142 15.476 40.391 22.707 +mAP: 75.358 97.062 54.144 67.045 84.160 82.911 65.376 51.673 44.989 71.755 28.214 52.312 51.653 69.177 44.507 82.946 90.900 65.963 75.866 45.031 +mAcc: 88.700 98.435 65.411 77.762 87.206 98.014 74.904 56.864 22.607 86.615 5.980 40.041 41.889 67.696 38.771 37.888 77.237 15.692 40.421 29.083 + +thomas 04/05 22:56:15 Finished test. Elapsed time: 368.3164 +thomas 04/05 22:56:16 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/05 22:56:17 Current best mIoU: 47.478 at iter 8000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/05 23:00:22 ===> Epoch[27](8040/301): Loss 0.5213 LR: 9.395e-02 Score 83.489 Data time: 2.3371, Total iter time: 6.0719 +thomas 04/05 23:04:35 ===> Epoch[27](8080/301): Loss 0.5576 LR: 9.392e-02 Score 82.828 Data time: 2.5240, Total iter time: 6.2516 +thomas 04/05 23:08:24 ===> Epoch[27](8120/301): Loss 0.5618 LR: 9.389e-02 Score 82.964 Data time: 2.2655, Total iter time: 5.6363 +thomas 04/05 23:12:22 ===> Epoch[28](8160/301): Loss 0.5280 LR: 9.386e-02 Score 83.589 Data time: 2.2819, Total iter time: 5.8825 +thomas 04/05 23:16:11 ===> Epoch[28](8200/301): Loss 0.5643 LR: 9.383e-02 Score 82.798 Data time: 2.1677, Total iter time: 5.6500 +thomas 04/05 23:20:08 ===> Epoch[28](8240/301): Loss 0.6202 LR: 9.380e-02 Score 80.790 Data time: 2.2665, Total iter time: 5.8565 +thomas 04/05 23:24:07 ===> Epoch[28](8280/301): Loss 0.5242 LR: 9.377e-02 Score 84.040 Data time: 2.2978, Total iter time: 5.8830 +thomas 04/05 23:28:33 ===> Epoch[28](8320/301): Loss 0.5880 LR: 9.374e-02 
Score 81.716 Data time: 2.6246, Total iter time: 6.5747 +thomas 04/05 23:32:40 ===> Epoch[28](8360/301): Loss 0.5031 LR: 9.371e-02 Score 84.559 Data time: 2.3995, Total iter time: 6.0909 +thomas 04/05 23:36:27 ===> Epoch[28](8400/301): Loss 0.5365 LR: 9.368e-02 Score 83.837 Data time: 2.1798, Total iter time: 5.6169 +thomas 04/05 23:40:25 ===> Epoch[29](8440/301): Loss 0.5398 LR: 9.365e-02 Score 83.034 Data time: 2.2633, Total iter time: 5.8691 +thomas 04/05 23:44:29 ===> Epoch[29](8480/301): Loss 0.5560 LR: 9.362e-02 Score 82.806 Data time: 2.3200, Total iter time: 6.0332 +thomas 04/05 23:48:29 ===> Epoch[29](8520/301): Loss 0.5168 LR: 9.359e-02 Score 83.786 Data time: 2.3465, Total iter time: 5.9151 +thomas 04/05 23:52:41 ===> Epoch[29](8560/301): Loss 0.5066 LR: 9.356e-02 Score 84.356 Data time: 2.5072, Total iter time: 6.2101 +thomas 04/05 23:57:02 ===> Epoch[29](8600/301): Loss 0.5772 LR: 9.353e-02 Score 82.107 Data time: 2.5343, Total iter time: 6.4353 +thomas 04/06 00:00:49 ===> Epoch[29](8640/301): Loss 0.5059 LR: 9.350e-02 Score 84.434 Data time: 2.1766, Total iter time: 5.5952 +thomas 04/06 00:04:29 ===> Epoch[29](8680/301): Loss 0.5209 LR: 9.347e-02 Score 83.791 Data time: 2.0980, Total iter time: 5.4306 +thomas 04/06 00:08:03 ===> Epoch[29](8720/301): Loss 0.5038 LR: 9.344e-02 Score 84.469 Data time: 2.0453, Total iter time: 5.2778 +thomas 04/06 00:12:05 ===> Epoch[30](8760/301): Loss 0.4929 LR: 9.341e-02 Score 85.051 Data time: 2.3487, Total iter time: 5.9739 +thomas 04/06 00:16:05 ===> Epoch[30](8800/301): Loss 0.5310 LR: 9.338e-02 Score 83.202 Data time: 2.4055, Total iter time: 5.9067 +thomas 04/06 00:20:09 ===> Epoch[30](8840/301): Loss 0.5091 LR: 9.334e-02 Score 83.854 Data time: 2.3862, Total iter time: 6.0308 +thomas 04/06 00:24:10 ===> Epoch[30](8880/301): Loss 0.5835 LR: 9.331e-02 Score 82.028 Data time: 2.2893, Total iter time: 5.9422 +thomas 04/06 00:28:05 ===> Epoch[30](8920/301): Loss 0.5200 LR: 9.328e-02 Score 83.919 Data time: 2.2380, 
Total iter time: 5.7836 +thomas 04/06 00:32:04 ===> Epoch[30](8960/301): Loss 0.5424 LR: 9.325e-02 Score 83.589 Data time: 2.2956, Total iter time: 5.9160 +thomas 04/06 00:36:07 ===> Epoch[30](9000/301): Loss 0.5214 LR: 9.322e-02 Score 84.029 Data time: 2.3540, Total iter time: 5.9859 +thomas 04/06 00:36:08 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 00:36:08 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 00:38:22 101/312: Data time: 0.0027, Iter time: 0.6976 Loss 0.376 (AVG: 0.660) Score 88.318 (AVG: 79.320) mIOU 51.783 mAP 67.426 mAcc 62.141 +IOU: 68.880 95.516 46.959 70.952 79.928 69.939 52.228 33.682 35.631 66.625 10.601 28.727 44.139 46.801 22.256 56.899 83.817 28.445 65.516 28.110 +mAP: 73.640 96.621 59.691 67.770 86.514 83.160 61.723 56.998 45.886 73.439 36.129 50.953 68.165 67.804 45.976 77.630 92.379 78.839 84.667 40.539 +mAcc: 80.152 98.626 60.914 77.987 83.778 93.146 58.412 82.659 40.471 81.357 13.524 30.855 83.566 49.414 27.146 59.173 87.382 28.998 66.188 39.077 + +thomas 04/06 00:40:32 201/312: Data time: 0.0034, Iter time: 0.6407 Loss 0.988 (AVG: 0.658) Score 72.062 (AVG: 79.664) mIOU 50.284 mAP 66.405 mAcc 60.361 +IOU: 69.614 95.563 47.997 66.269 83.205 73.373 53.874 33.442 29.715 66.315 8.854 28.727 40.218 45.802 22.245 41.527 81.584 25.817 63.333 28.211 +mAP: 75.593 96.332 56.047 66.560 86.547 82.627 64.584 57.462 46.796 63.874 30.910 55.203 63.429 68.912 44.196 78.468 92.935 78.658 77.680 41.294 +mAcc: 80.857 98.598 64.764 73.002 88.084 92.540 58.891 83.669 32.338 81.776 11.219 30.272 81.077 47.911 25.244 42.994 85.100 26.146 63.814 38.918 + +thomas 04/06 00:42:33 301/312: Data time: 0.0028, Iter time: 0.6246 Loss 0.430 (AVG: 0.670) Score 83.377 (AVG: 79.262) mIOU 50.590 mAP 66.886 mAcc 60.739 +IOU: 69.123 95.820 49.777 61.011 83.594 70.911 53.921 31.245 30.891 70.647 7.153 34.149 44.627 41.237 31.024 39.761 80.737 27.976 60.381 27.814 +mAP: 
76.140 96.646 56.214 64.784 87.488 82.673 63.354 55.447 46.055 68.109 27.910 59.257 65.913 66.196 49.218 79.659 92.064 77.309 80.809 42.480 +mAcc: 80.561 98.655 68.236 71.827 88.581 92.542 59.358 82.019 33.444 85.812 9.111 35.808 80.948 43.162 34.246 41.963 83.564 28.363 60.835 35.736 + +thomas 04/06 00:42:43 312/312: Data time: 0.0029, Iter time: 0.3681 Loss 0.507 (AVG: 0.670) Score 80.943 (AVG: 79.253) mIOU 50.690 mAP 67.137 mAcc 60.894 +IOU: 69.127 95.825 49.830 61.079 83.724 70.842 53.013 31.394 30.432 71.299 7.078 33.820 43.304 43.616 30.769 39.184 81.472 27.971 61.695 28.328 +mAP: 75.845 96.715 55.697 65.307 87.652 82.673 63.627 56.660 46.232 68.377 27.535 59.264 66.101 68.051 49.427 79.119 92.311 77.309 81.925 42.904 +mAcc: 80.505 98.657 68.345 72.020 88.649 92.542 58.196 82.496 32.861 86.175 9.001 35.412 80.781 45.550 33.942 41.649 84.241 28.363 62.129 36.366 + +thomas 04/06 00:42:43 Finished test. Elapsed time: 394.9586 +thomas 04/06 00:42:45 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/06 00:42:45 Current best mIoU: 50.690 at iter 9000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 00:46:33 ===> Epoch[31](9040/301): Loss 0.5478 LR: 9.319e-02 Score 82.949 Data time: 2.2160, Total iter time: 5.6471 +thomas 04/06 00:50:19 ===> Epoch[31](9080/301): Loss 0.5106 LR: 9.316e-02 Score 83.711 Data time: 2.1677, Total iter time: 5.5704 +thomas 04/06 00:54:34 ===> Epoch[31](9120/301): Loss 0.5332 LR: 9.313e-02 Score 83.319 Data time: 2.4326, Total iter time: 6.2995 +thomas 04/06 00:58:39 ===> Epoch[31](9160/301): Loss 0.5297 LR: 9.310e-02 Score 83.621 Data time: 2.3659, Total iter time: 6.0568 +thomas 04/06 01:02:29 ===> Epoch[31](9200/301): Loss 0.4933 LR: 9.307e-02 Score 84.709 Data time: 2.2577, Total iter time: 5.6741 +thomas 04/06 01:06:36 ===> Epoch[31](9240/301): Loss 0.5352 LR: 9.304e-02 Score 83.204 Data time: 2.4509, Total iter time: 6.0996 +thomas 04/06 01:10:32 ===> Epoch[31](9280/301): Loss 0.5537 LR: 9.301e-02 Score 83.272 Data time: 2.2977, Total iter time: 5.8260 +thomas 04/06 01:14:15 ===> Epoch[31](9320/301): Loss 0.5931 LR: 9.298e-02 Score 81.914 Data time: 2.1276, Total iter time: 5.5075 +thomas 04/06 01:18:06 ===> Epoch[32](9360/301): Loss 0.5703 LR: 9.295e-02 Score 82.311 Data time: 2.2388, Total iter time: 5.7051 +thomas 04/06 01:22:05 ===> Epoch[32](9400/301): Loss 0.5077 LR: 9.292e-02 Score 84.072 Data time: 2.3118, Total iter time: 5.8940 +thomas 04/06 01:26:08 ===> Epoch[32](9440/301): Loss 0.4512 LR: 9.289e-02 Score 86.153 Data time: 2.4164, Total iter time: 5.9822 +thomas 04/06 01:30:31 ===> Epoch[32](9480/301): Loss 0.5117 LR: 9.286e-02 Score 84.671 Data time: 2.6400, Total iter time: 6.5086 +thomas 04/06 01:34:27 ===> Epoch[32](9520/301): Loss 0.5281 LR: 9.283e-02 Score 83.457 Data time: 2.2515, Total iter time: 5.8235 +thomas 04/06 01:38:09 ===> Epoch[32](9560/301): Loss 0.4983 LR: 9.280e-02 Score 85.020 Data time: 2.1568, Total iter time: 5.4851 +thomas 04/06 01:42:04 ===> Epoch[32](9600/301): Loss 0.5157 LR: 9.277e-02 Score 83.851 
Data time: 2.2847, Total iter time: 5.8016 +thomas 04/06 01:46:17 ===> Epoch[33](9640/301): Loss 0.5150 LR: 9.274e-02 Score 83.760 Data time: 2.4935, Total iter time: 6.2574 +thomas 04/06 01:50:27 ===> Epoch[33](9680/301): Loss 0.4804 LR: 9.271e-02 Score 84.825 Data time: 2.4667, Total iter time: 6.1864 +thomas 04/06 01:54:23 ===> Epoch[33](9720/301): Loss 0.5424 LR: 9.268e-02 Score 83.002 Data time: 2.3081, Total iter time: 5.8238 +thomas 04/06 01:58:00 ===> Epoch[33](9760/301): Loss 0.4943 LR: 9.265e-02 Score 85.199 Data time: 2.0978, Total iter time: 5.3520 +thomas 04/06 02:01:54 ===> Epoch[33](9800/301): Loss 0.5191 LR: 9.262e-02 Score 83.974 Data time: 2.2546, Total iter time: 5.7535 +thomas 04/06 02:05:49 ===> Epoch[33](9840/301): Loss 0.5122 LR: 9.259e-02 Score 84.345 Data time: 2.2764, Total iter time: 5.7928 +thomas 04/06 02:09:58 ===> Epoch[33](9880/301): Loss 0.5295 LR: 9.256e-02 Score 82.846 Data time: 2.4003, Total iter time: 6.1471 +thomas 04/06 02:14:03 ===> Epoch[33](9920/301): Loss 0.5083 LR: 9.253e-02 Score 84.299 Data time: 2.4366, Total iter time: 6.0433 +thomas 04/06 02:18:09 ===> Epoch[34](9960/301): Loss 0.5201 LR: 9.250e-02 Score 83.714 Data time: 2.4167, Total iter time: 6.0834 +thomas 04/06 02:22:20 ===> Epoch[34](10000/301): Loss 0.4875 LR: 9.247e-02 Score 85.013 Data time: 2.3973, Total iter time: 6.1972 +thomas 04/06 02:22:22 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 02:22:22 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 02:24:27 101/312: Data time: 0.0033, Iter time: 0.5370 Loss 0.535 (AVG: 0.745) Score 89.568 (AVG: 77.660) mIOU 41.896 mAP 64.398 mAcc 54.346 +IOU: 66.375 95.491 53.630 45.263 82.496 77.622 58.899 32.631 35.704 76.316 5.680 46.971 40.052 10.826 28.076 6.660 9.933 17.228 18.400 29.662 +mAP: 75.709 97.257 57.616 67.779 86.799 84.152 63.727 54.339 48.989 69.611 25.841 52.896 60.556 71.946 50.902 56.263 77.832 64.571 
77.534 43.643 +mAcc: 74.103 98.884 73.299 81.346 91.360 90.325 73.930 58.329 48.161 87.869 6.864 53.157 51.143 77.521 28.157 6.796 9.933 17.439 18.406 39.897 + +thomas 04/06 02:26:20 201/312: Data time: 0.0029, Iter time: 0.7695 Loss 0.641 (AVG: 0.748) Score 78.967 (AVG: 77.386) mIOU 42.708 mAP 65.337 mAcc 54.920 +IOU: 65.383 95.357 51.121 52.569 82.344 77.647 59.157 30.901 39.515 68.123 6.097 46.962 40.106 20.711 30.721 10.373 9.917 20.550 19.704 26.902 +mAP: 76.070 97.375 56.395 70.274 88.598 84.286 67.186 51.052 51.154 69.453 29.220 55.953 59.866 72.291 49.919 68.274 78.540 68.437 71.346 41.045 +mAcc: 73.466 98.853 69.361 84.732 90.996 91.610 75.325 54.199 53.084 82.290 7.073 54.334 48.897 85.253 31.005 10.502 9.917 20.812 19.713 36.979 + +thomas 04/06 02:28:17 301/312: Data time: 0.0025, Iter time: 0.4187 Loss 0.384 (AVG: 0.739) Score 88.621 (AVG: 77.570) mIOU 42.635 mAP 65.741 mAcc 54.842 +IOU: 65.974 95.515 50.829 53.069 83.307 76.173 62.284 31.528 39.264 67.316 5.534 50.789 40.528 20.687 27.689 8.380 7.072 22.339 16.796 27.637 +mAP: 75.986 97.333 54.998 71.865 87.376 84.171 69.037 51.110 49.714 66.484 27.024 58.551 61.507 74.664 49.334 70.493 77.540 70.175 74.087 43.374 +mAcc: 73.538 98.850 69.269 86.408 91.996 91.085 76.757 54.249 51.583 79.965 6.428 58.204 48.980 87.621 28.072 8.631 7.072 22.641 16.803 38.695 + +thomas 04/06 02:28:30 312/312: Data time: 0.0030, Iter time: 0.4758 Loss 0.254 (AVG: 0.733) Score 94.144 (AVG: 77.705) mIOU 43.009 mAP 65.926 mAcc 55.230 +IOU: 66.069 95.534 51.485 52.910 83.159 76.630 62.423 31.823 40.021 67.240 5.400 50.602 41.458 21.760 29.634 9.764 6.793 22.622 17.001 27.853 +mAP: 76.015 97.392 55.474 71.657 87.345 83.325 68.350 51.574 50.037 66.499 27.048 58.260 61.640 75.428 51.085 71.599 78.026 70.326 73.934 43.499 +mAcc: 73.564 98.846 70.105 86.490 92.100 91.277 76.763 54.979 52.302 79.812 6.264 57.925 50.019 88.334 30.051 10.064 6.794 22.926 17.008 38.979 + +thomas 04/06 02:28:30 Finished test. 
Elapsed time: 368.3927 +thomas 04/06 02:28:30 Current best mIoU: 50.690 at iter 9000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 02:32:26 ===> Epoch[34](10040/301): Loss 0.5226 LR: 9.244e-02 Score 83.580 Data time: 2.2649, Total iter time: 5.8154 +thomas 04/06 02:36:31 ===> Epoch[34](10080/301): Loss 0.5469 LR: 9.241e-02 Score 84.078 Data time: 2.3994, Total iter time: 6.0639 +thomas 04/06 02:40:32 ===> Epoch[34](10120/301): Loss 0.4745 LR: 9.238e-02 Score 85.324 Data time: 2.3843, Total iter time: 5.9467 +thomas 04/06 02:44:25 ===> Epoch[34](10160/301): Loss 0.5178 LR: 9.235e-02 Score 83.820 Data time: 2.2464, Total iter time: 5.7374 +thomas 04/06 02:48:13 ===> Epoch[34](10200/301): Loss 0.5212 LR: 9.232e-02 Score 83.591 Data time: 2.1746, Total iter time: 5.6278 +thomas 04/06 02:51:59 ===> Epoch[35](10240/301): Loss 0.5001 LR: 9.229e-02 Score 84.580 Data time: 2.1706, Total iter time: 5.5815 +thomas 04/06 02:55:47 ===> Epoch[35](10280/301): Loss 0.4807 LR: 9.226e-02 Score 85.043 Data time: 2.2124, Total iter time: 5.6325 +thomas 04/06 02:59:45 ===> Epoch[35](10320/301): Loss 0.5204 LR: 9.223e-02 Score 83.741 Data time: 2.3331, Total iter time: 5.8806 +thomas 04/06 03:03:47 ===> Epoch[35](10360/301): Loss 0.4739 LR: 9.220e-02 Score 85.147 Data time: 2.3984, Total iter time: 5.9779 +thomas 04/06 03:07:42 ===> Epoch[35](10400/301): Loss 0.4894 LR: 9.217e-02 Score 84.648 Data time: 2.2768, Total iter time: 5.8004 +thomas 04/06 03:11:37 ===> Epoch[35](10440/301): Loss 0.4947 LR: 9.213e-02 Score 84.543 Data time: 2.2594, Total iter time: 5.7826 +thomas 04/06 03:15:36 ===> Epoch[35](10480/301): Loss 0.5315 LR: 9.210e-02 Score 83.475 Data time: 2.2895, Total iter time: 5.9042 +thomas 
04/06 03:19:54 ===> Epoch[35](10520/301): Loss 0.4971 LR: 9.207e-02 Score 84.510 Data time: 2.4831, Total iter time: 6.3647 +thomas 04/06 03:24:13 ===> Epoch[36](10560/301): Loss 0.5082 LR: 9.204e-02 Score 84.580 Data time: 2.5412, Total iter time: 6.4243 +thomas 04/06 03:28:17 ===> Epoch[36](10600/301): Loss 0.5213 LR: 9.201e-02 Score 84.169 Data time: 2.4242, Total iter time: 6.0313 +thomas 04/06 03:32:12 ===> Epoch[36](10640/301): Loss 0.4912 LR: 9.198e-02 Score 84.624 Data time: 2.3025, Total iter time: 5.8011 +thomas 04/06 03:36:16 ===> Epoch[36](10680/301): Loss 0.5015 LR: 9.195e-02 Score 84.146 Data time: 2.3473, Total iter time: 6.0298 +thomas 04/06 03:40:31 ===> Epoch[36](10720/301): Loss 0.5118 LR: 9.192e-02 Score 83.795 Data time: 2.4575, Total iter time: 6.3073 +thomas 04/06 03:44:59 ===> Epoch[36](10760/301): Loss 0.4894 LR: 9.189e-02 Score 83.976 Data time: 2.5569, Total iter time: 6.6047 +thomas 04/06 03:49:08 ===> Epoch[36](10800/301): Loss 0.4861 LR: 9.186e-02 Score 85.052 Data time: 2.4502, Total iter time: 6.1550 +thomas 04/06 03:53:37 ===> Epoch[37](10840/301): Loss 0.4827 LR: 9.183e-02 Score 85.290 Data time: 2.6260, Total iter time: 6.6362 +thomas 04/06 03:57:42 ===> Epoch[37](10880/301): Loss 0.5317 LR: 9.180e-02 Score 83.579 Data time: 2.3788, Total iter time: 6.0423 +thomas 04/06 04:02:05 ===> Epoch[37](10920/301): Loss 0.4681 LR: 9.177e-02 Score 85.488 Data time: 2.4868, Total iter time: 6.4717 +thomas 04/06 04:06:07 ===> Epoch[37](10960/301): Loss 0.4944 LR: 9.174e-02 Score 84.755 Data time: 2.3589, Total iter time: 5.9766 +thomas 04/06 04:10:11 ===> Epoch[37](11000/301): Loss 0.4800 LR: 9.171e-02 Score 85.007 Data time: 2.3934, Total iter time: 6.0263 +thomas 04/06 04:10:12 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 04:10:12 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 04:12:28 101/312: Data time: 0.0027, Iter time: 1.2622 Loss 
0.519 (AVG: 0.624) Score 85.223 (AVG: 81.543) mIOU 50.808 mAP 68.857 mAcc 61.337 +IOU: 75.083 95.790 49.808 68.107 80.541 61.081 58.036 40.973 27.072 71.129 7.878 19.785 44.547 42.883 39.023 39.633 67.722 43.560 51.039 32.477 +mAP: 77.197 97.526 56.900 71.351 86.797 75.144 66.597 59.616 47.951 63.252 40.036 59.610 62.483 66.548 47.691 91.633 92.290 81.734 83.771 49.014 +mAcc: 87.447 98.766 65.984 87.534 82.877 84.965 61.968 59.505 28.270 93.721 8.154 20.306 63.001 66.979 43.142 45.021 67.997 44.541 51.127 65.436 + +thomas 04/06 04:14:31 201/312: Data time: 0.0030, Iter time: 1.3384 Loss 1.118 (AVG: 0.616) Score 60.039 (AVG: 81.597) mIOU 52.097 mAP 68.964 mAcc 62.431 +IOU: 75.053 95.895 48.546 66.724 81.547 64.899 58.237 38.541 28.568 70.656 5.179 17.122 45.990 58.145 33.119 48.807 74.962 43.904 54.977 31.061 +mAP: 77.937 97.651 57.147 70.113 88.234 76.734 67.949 56.620 49.749 65.477 33.558 57.752 61.966 74.948 44.279 88.112 94.441 84.001 84.748 47.859 +mAcc: 86.932 98.856 67.647 85.719 83.943 84.416 62.554 62.223 30.241 86.731 5.313 17.572 62.906 80.886 39.477 55.896 77.482 45.117 55.162 59.543 + +thomas 04/06 04:16:30 301/312: Data time: 0.0032, Iter time: 0.8088 Loss 0.652 (AVG: 0.623) Score 77.415 (AVG: 81.476) mIOU 51.699 mAP 68.630 mAcc 62.274 +IOU: 75.054 95.805 50.445 63.785 81.551 69.749 54.775 39.744 31.471 70.586 3.684 22.930 47.216 55.299 34.946 37.838 73.801 38.524 55.787 30.993 +mAP: 77.772 97.580 59.283 67.252 87.605 78.878 66.072 57.125 50.655 69.343 33.704 57.416 61.507 77.696 45.947 83.170 94.180 79.397 80.353 47.671 +mAcc: 86.685 98.729 69.963 83.311 84.293 87.527 59.079 63.938 33.995 86.349 3.739 23.622 64.411 81.689 41.924 42.829 78.296 39.512 56.064 59.537 + +thomas 04/06 04:16:43 312/312: Data time: 0.0045, Iter time: 0.5253 Loss 0.484 (AVG: 0.626) Score 84.151 (AVG: 81.322) mIOU 51.475 mAP 68.679 mAcc 62.167 +IOU: 74.810 95.833 50.389 62.555 81.527 69.965 54.241 39.523 30.663 69.632 3.536 22.970 46.436 54.802 35.341 37.838 73.950 38.400 
55.787 31.312 +mAP: 77.552 97.620 59.564 67.657 87.422 79.130 66.035 57.389 49.840 69.964 34.024 57.863 61.507 76.330 46.810 83.170 94.217 79.597 80.353 47.525 +mAcc: 86.519 98.743 70.386 83.706 84.237 87.722 58.545 64.143 33.043 86.461 3.587 23.659 64.411 81.629 42.316 42.829 78.360 39.376 56.064 57.612 + +thomas 04/06 04:16:43 Finished test. Elapsed time: 390.8479 +thomas 04/06 04:16:45 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/06 04:16:45 Current best mIoU: 51.475 at iter 11000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 04:20:31 ===> Epoch[37](11040/301): Loss 0.4562 LR: 9.168e-02 Score 85.596 Data time: 2.1940, Total iter time: 5.5842 +thomas 04/06 04:24:18 ===> Epoch[37](11080/301): Loss 0.5101 LR: 9.165e-02 Score 84.323 Data time: 2.1921, Total iter time: 5.5846 +thomas 04/06 04:28:12 ===> Epoch[37](11120/301): Loss 0.4901 LR: 9.162e-02 Score 84.501 Data time: 2.2518, Total iter time: 5.7820 +thomas 04/06 04:32:17 ===> Epoch[38](11160/301): Loss 0.5112 LR: 9.159e-02 Score 83.799 Data time: 2.3834, Total iter time: 6.0403 +thomas 04/06 04:36:18 ===> Epoch[38](11200/301): Loss 0.5338 LR: 9.156e-02 Score 83.740 Data time: 2.3759, Total iter time: 5.9600 +thomas 04/06 04:40:20 ===> Epoch[38](11240/301): Loss 0.4662 LR: 9.153e-02 Score 85.376 Data time: 2.3739, Total iter time: 5.9762 +thomas 04/06 04:44:11 ===> Epoch[38](11280/301): Loss 0.4838 LR: 9.150e-02 Score 84.554 Data time: 2.2499, Total iter time: 5.6849 +thomas 04/06 04:47:59 ===> Epoch[38](11320/301): Loss 0.5025 LR: 9.147e-02 Score 84.224 Data time: 2.1695, Total iter time: 5.6432 +thomas 04/06 04:52:02 ===> Epoch[38](11360/301): Loss 0.4665 
LR: 9.144e-02 Score 85.349 Data time: 2.3477, Total iter time: 5.9901 +thomas 04/06 04:55:48 ===> Epoch[38](11400/301): Loss 0.4862 LR: 9.141e-02 Score 84.637 Data time: 2.1710, Total iter time: 5.5684 +thomas 04/06 04:59:33 ===> Epoch[39](11440/301): Loss 0.4533 LR: 9.138e-02 Score 85.745 Data time: 2.2350, Total iter time: 5.5707 +thomas 04/06 05:03:34 ===> Epoch[39](11480/301): Loss 0.5037 LR: 9.135e-02 Score 84.505 Data time: 2.3609, Total iter time: 5.9384 +thomas 04/06 05:07:31 ===> Epoch[39](11520/301): Loss 0.4884 LR: 9.132e-02 Score 84.858 Data time: 2.3157, Total iter time: 5.8556 +thomas 04/06 05:11:25 ===> Epoch[39](11560/301): Loss 0.4511 LR: 9.129e-02 Score 85.896 Data time: 2.2738, Total iter time: 5.7544 +thomas 04/06 05:15:17 ===> Epoch[39](11600/301): Loss 0.4778 LR: 9.126e-02 Score 84.983 Data time: 2.2572, Total iter time: 5.7394 +thomas 04/06 05:19:23 ===> Epoch[39](11640/301): Loss 0.4684 LR: 9.123e-02 Score 85.068 Data time: 2.3954, Total iter time: 6.0719 +thomas 04/06 05:23:38 ===> Epoch[39](11680/301): Loss 0.5110 LR: 9.120e-02 Score 84.139 Data time: 2.4917, Total iter time: 6.2939 +thomas 04/06 05:27:43 ===> Epoch[39](11720/301): Loss 0.4898 LR: 9.117e-02 Score 84.603 Data time: 2.3645, Total iter time: 6.0363 +thomas 04/06 05:31:44 ===> Epoch[40](11760/301): Loss 0.4862 LR: 9.114e-02 Score 84.980 Data time: 2.3453, Total iter time: 5.9609 +thomas 04/06 05:35:37 ===> Epoch[40](11800/301): Loss 0.4844 LR: 9.110e-02 Score 84.986 Data time: 2.2304, Total iter time: 5.7431 +thomas 04/06 05:39:35 ===> Epoch[40](11840/301): Loss 0.5307 LR: 9.107e-02 Score 83.732 Data time: 2.2766, Total iter time: 5.8876 +thomas 04/06 05:43:41 ===> Epoch[40](11880/301): Loss 0.4539 LR: 9.104e-02 Score 86.123 Data time: 2.3865, Total iter time: 6.0591 +thomas 04/06 05:47:28 ===> Epoch[40](11920/301): Loss 0.4804 LR: 9.101e-02 Score 84.845 Data time: 2.2238, Total iter time: 5.6131 +thomas 04/06 05:51:27 ===> Epoch[40](11960/301): Loss 0.4749 LR: 9.098e-02 Score 
85.008 Data time: 2.3260, Total iter time: 5.8983 +thomas 04/06 05:55:17 ===> Epoch[40](12000/301): Loss 0.4474 LR: 9.095e-02 Score 85.904 Data time: 2.2547, Total iter time: 5.6761 +thomas 04/06 05:55:19 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 05:55:19 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 05:57:24 101/312: Data time: 0.0036, Iter time: 0.3353 Loss 0.091 (AVG: 0.693) Score 98.581 (AVG: 79.226) mIOU 51.713 mAP 66.971 mAcc 65.775 +IOU: 68.699 96.106 45.073 64.617 84.818 76.250 62.233 36.934 31.924 64.623 11.742 38.699 48.210 32.273 24.312 50.280 78.011 28.445 63.214 27.789 +mAP: 75.586 96.876 51.871 69.668 85.289 73.363 62.105 58.879 48.616 60.831 27.579 64.071 60.303 83.846 50.047 80.856 92.831 74.515 84.482 37.806 +mAcc: 78.106 98.472 67.138 83.301 90.177 87.590 72.353 73.753 33.294 72.991 21.971 42.513 61.649 94.654 25.406 83.443 88.038 32.731 70.505 37.407 + +thomas 04/06 05:59:20 201/312: Data time: 0.0024, Iter time: 0.4265 Loss 1.305 (AVG: 0.745) Score 53.937 (AVG: 77.782) mIOU 50.188 mAP 66.180 mAcc 63.856 +IOU: 65.910 95.797 41.394 64.224 83.148 73.554 62.364 33.192 32.050 61.355 11.124 35.387 45.306 31.450 21.117 40.828 78.116 36.054 61.180 30.210 +mAP: 73.573 96.888 51.762 74.201 84.126 73.570 62.943 53.770 47.510 60.921 26.572 62.479 56.576 72.402 46.287 81.760 94.932 77.260 83.248 42.827 +mAcc: 76.035 98.401 67.025 81.071 89.061 84.211 72.884 71.510 34.215 68.279 18.762 37.676 57.949 87.732 22.982 70.292 88.905 40.441 70.691 38.994 + +thomas 04/06 06:01:15 301/312: Data time: 0.0025, Iter time: 0.5948 Loss 0.614 (AVG: 0.716) Score 81.513 (AVG: 78.682) mIOU 50.020 mAP 66.948 mAcc 63.418 +IOU: 67.427 95.906 43.331 61.937 84.255 76.557 65.043 33.390 30.182 61.815 10.392 34.493 42.961 29.078 21.544 43.853 76.505 31.763 58.989 30.987 +mAP: 74.809 97.323 54.819 74.595 86.041 77.471 67.907 55.432 48.166 61.572 27.898 63.980 55.761 73.608 
42.993 83.088 94.220 75.280 77.338 46.660 +mAcc: 77.611 98.355 68.694 81.753 89.745 86.689 74.519 68.998 32.103 68.824 18.500 36.232 57.540 87.602 23.315 67.557 87.051 35.080 68.528 39.656 + +thomas 04/06 06:01:29 312/312: Data time: 0.0022, Iter time: 0.6789 Loss 0.163 (AVG: 0.715) Score 96.914 (AVG: 78.692) mIOU 50.163 mAP 66.921 mAcc 63.632 +IOU: 67.569 95.945 43.859 61.236 84.060 76.973 64.968 33.770 31.081 62.130 10.364 36.035 42.877 29.982 21.686 43.459 75.983 31.482 58.849 30.946 +mAP: 74.788 97.376 54.444 73.403 85.410 78.184 67.368 55.860 47.730 61.291 27.720 63.996 55.848 74.162 44.234 83.088 94.220 75.353 77.338 46.609 +mAcc: 77.735 98.365 69.238 81.312 89.571 87.077 74.536 69.462 33.017 69.141 18.678 37.824 57.448 88.143 23.630 67.557 87.051 34.805 68.528 39.528 + +thomas 04/06 06:01:29 Finished test. Elapsed time: 369.8016 +thomas 04/06 06:01:29 Current best mIoU: 51.475 at iter 11000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 06:05:14 ===> Epoch[40](12040/301): Loss 0.4704 LR: 9.092e-02 Score 84.975 Data time: 2.1841, Total iter time: 5.5579 +thomas 04/06 06:09:19 ===> Epoch[41](12080/301): Loss 0.4338 LR: 9.089e-02 Score 86.541 Data time: 2.3694, Total iter time: 6.0289 +thomas 04/06 06:13:26 ===> Epoch[41](12120/301): Loss 0.5017 LR: 9.086e-02 Score 84.606 Data time: 2.4098, Total iter time: 6.1091 +thomas 04/06 06:17:28 ===> Epoch[41](12160/301): Loss 0.4854 LR: 9.083e-02 Score 84.610 Data time: 2.3571, Total iter time: 5.9722 +thomas 04/06 06:21:46 ===> Epoch[41](12200/301): Loss 0.4570 LR: 9.080e-02 Score 85.622 Data time: 2.4745, Total iter time: 6.3568 +thomas 04/06 06:25:32 ===> Epoch[41](12240/301): Loss 0.4609 LR: 9.077e-02 Score 85.501 Data time: 2.2167, Total iter time: 5.5792 +thomas 04/06 06:29:27 ===> Epoch[41](12280/301): Loss 0.4858 LR: 9.074e-02 Score 84.842 Data time: 2.3384, Total iter time: 5.8140 +thomas 04/06 06:33:27 ===> Epoch[41](12320/301): Loss 0.4883 LR: 9.071e-02 Score 84.854 Data time: 2.3430, Total iter time: 5.9231 +thomas 04/06 06:37:34 ===> Epoch[42](12360/301): Loss 0.4753 LR: 9.068e-02 Score 85.687 Data time: 2.4092, Total iter time: 6.1082 +thomas 04/06 06:41:32 ===> Epoch[42](12400/301): Loss 0.4428 LR: 9.065e-02 Score 86.107 Data time: 2.3434, Total iter time: 5.8713 +thomas 04/06 06:45:41 ===> Epoch[42](12440/301): Loss 0.4575 LR: 9.062e-02 Score 86.204 Data time: 2.4070, Total iter time: 6.1428 +thomas 04/06 06:49:37 ===> Epoch[42](12480/301): Loss 0.5381 LR: 9.059e-02 Score 83.375 Data time: 2.2939, Total iter time: 5.8353 +thomas 04/06 06:53:29 ===> Epoch[42](12520/301): Loss 0.4457 LR: 9.056e-02 Score 85.995 Data time: 2.2463, Total iter time: 5.7298 +thomas 04/06 06:57:35 ===> Epoch[42](12560/301): Loss 0.4675 LR: 9.053e-02 Score 85.441 Data time: 2.3867, Total iter time: 6.0504 +thomas 04/06 07:01:28 ===> Epoch[42](12600/301): Loss 0.4797 LR: 9.050e-02 
Score 85.164 Data time: 2.2813, Total iter time: 5.7518 +thomas 04/06 07:05:27 ===> Epoch[42](12640/301): Loss 0.4712 LR: 9.047e-02 Score 85.002 Data time: 2.3133, Total iter time: 5.8979 +thomas 04/06 07:09:30 ===> Epoch[43](12680/301): Loss 0.4760 LR: 9.044e-02 Score 84.824 Data time: 2.3830, Total iter time: 5.9829 +thomas 04/06 07:13:33 ===> Epoch[43](12720/301): Loss 0.4707 LR: 9.041e-02 Score 85.275 Data time: 2.3533, Total iter time: 5.9874 +thomas 04/06 07:17:23 ===> Epoch[43](12760/301): Loss 0.4751 LR: 9.038e-02 Score 85.603 Data time: 2.2451, Total iter time: 5.6767 +thomas 04/06 07:21:24 ===> Epoch[43](12800/301): Loss 0.4887 LR: 9.035e-02 Score 85.061 Data time: 2.3327, Total iter time: 5.9481 +thomas 04/06 07:25:17 ===> Epoch[43](12840/301): Loss 0.4556 LR: 9.032e-02 Score 85.972 Data time: 2.2882, Total iter time: 5.7510 +thomas 04/06 07:29:17 ===> Epoch[43](12880/301): Loss 0.4620 LR: 9.029e-02 Score 85.860 Data time: 2.3297, Total iter time: 5.9228 +thomas 04/06 07:33:10 ===> Epoch[43](12920/301): Loss 0.4462 LR: 9.026e-02 Score 86.039 Data time: 2.2646, Total iter time: 5.7496 +thomas 04/06 07:36:57 ===> Epoch[44](12960/301): Loss 0.4718 LR: 9.023e-02 Score 85.379 Data time: 2.2222, Total iter time: 5.5880 +thomas 04/06 07:41:01 ===> Epoch[44](13000/301): Loss 0.4628 LR: 9.020e-02 Score 85.490 Data time: 2.3893, Total iter time: 6.0366 +thomas 04/06 07:41:03 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 07:41:03 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 07:43:03 101/312: Data time: 0.0026, Iter time: 0.4280 Loss 0.220 (AVG: 0.623) Score 93.975 (AVG: 81.581) mIOU 51.884 mAP 66.483 mAcc 62.050 +IOU: 75.534 94.860 43.975 62.526 82.922 73.758 61.130 40.437 38.263 66.896 9.789 53.341 33.037 59.391 33.398 39.431 51.308 33.070 54.916 29.691 +mAP: 78.957 94.423 45.993 58.390 88.560 82.205 74.564 62.351 48.907 65.347 23.143 55.095 65.782 75.176 
36.870 87.578 81.366 79.941 82.474 42.529 +mAcc: 85.939 98.886 52.474 69.429 93.616 88.481 79.445 78.433 40.550 86.555 15.174 65.492 36.553 73.089 40.061 44.688 52.318 34.724 55.026 50.075 + +thomas 04/06 07:44:57 201/312: Data time: 0.0030, Iter time: 0.5242 Loss 0.438 (AVG: 0.631) Score 84.834 (AVG: 81.427) mIOU 51.050 mAP 67.557 mAcc 61.003 +IOU: 74.670 95.415 43.604 62.792 82.819 78.430 60.268 38.142 36.872 66.292 9.002 49.569 33.801 57.251 39.398 34.253 43.004 37.908 46.318 31.197 +mAP: 78.931 95.464 48.437 62.386 89.128 84.151 67.792 59.210 51.796 68.935 29.008 57.304 64.038 76.959 48.096 87.804 81.451 73.132 79.886 47.226 +mAcc: 85.301 99.014 53.563 73.890 94.071 91.735 76.375 73.571 39.414 81.010 12.004 60.065 38.531 79.155 45.710 36.381 43.531 39.545 46.385 50.804 + +thomas 04/06 07:46:57 301/312: Data time: 0.0030, Iter time: 0.9952 Loss 0.502 (AVG: 0.628) Score 86.983 (AVG: 81.425) mIOU 50.703 mAP 66.586 mAcc 60.941 +IOU: 74.473 95.294 44.495 64.215 83.253 75.048 59.782 38.967 34.084 68.067 9.270 53.664 34.234 54.521 36.587 31.955 46.455 38.609 40.282 30.802 +mAP: 78.049 95.805 47.985 65.793 89.631 80.516 67.321 57.777 50.605 66.224 28.753 58.288 61.755 76.245 44.101 86.171 83.925 70.471 76.686 45.614 +mAcc: 85.194 99.022 54.078 75.542 93.976 91.321 76.964 72.685 36.823 83.837 12.449 65.148 38.547 79.529 44.010 33.555 46.956 40.060 40.336 48.778 + +thomas 04/06 07:47:13 312/312: Data time: 0.0025, Iter time: 0.9889 Loss 0.279 (AVG: 0.629) Score 90.528 (AVG: 81.324) mIOU 50.704 mAP 66.413 mAcc 61.001 +IOU: 74.320 95.323 44.808 63.113 83.333 74.781 60.173 39.082 34.264 67.877 9.003 53.426 34.136 54.417 36.729 31.703 47.459 38.762 40.608 30.760 +mAP: 77.870 95.717 47.589 63.911 89.161 80.716 67.013 57.813 50.619 66.224 28.601 55.853 61.391 76.707 44.437 85.916 84.372 70.827 77.712 45.815 +mAcc: 84.863 99.032 54.246 74.283 93.953 91.219 77.296 72.736 37.003 83.837 12.011 65.426 38.464 79.979 44.252 33.275 47.976 40.305 40.661 49.203 + +thomas 04/06 07:47:13 
Finished test. Elapsed time: 369.7141
+thomas 04/06 07:47:13 Current best mIoU: 51.475 at iter 11000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 07:51:18 ===> Epoch[44](13040/301): Loss 0.4610 LR: 9.016e-02 Score 86.020 Data time: 2.4316, Total iter time: 6.0748
+thomas 04/06 07:55:17 ===> Epoch[44](13080/301): Loss 0.4948 LR: 9.013e-02 Score 84.559 Data time: 2.3046, Total iter time: 5.8834
+thomas 04/06 07:59:03 ===> Epoch[44](13120/301): Loss 0.4403 LR: 9.010e-02 Score 86.093 Data time: 2.1866, Total iter time: 5.5819
+thomas 04/06 08:03:06 ===> Epoch[44](13160/301): Loss 0.4303 LR: 9.007e-02 Score 86.485 Data time: 2.3766, Total iter time: 5.9895
+thomas 04/06 08:07:07 ===> Epoch[44](13200/301): Loss 0.4555 LR: 9.004e-02 Score 85.659 Data time: 2.3494, Total iter time: 5.9461
+thomas 04/06 08:10:59 ===> Epoch[44](13240/301): Loss 0.4520 LR: 9.001e-02 Score 85.989 Data time: 2.2408, Total iter time: 5.7378
+thomas 04/06 08:14:54 ===> Epoch[45](13280/301): Loss 0.4789 LR: 8.998e-02 Score 85.114 Data time: 2.3011, Total iter time: 5.7943
+thomas 04/06 08:18:52 ===> Epoch[45](13320/301): Loss 0.4587 LR: 8.995e-02 Score 85.805 Data time: 2.3199, Total iter time: 5.8676
+thomas 04/06 08:23:02 ===> Epoch[45](13360/301): Loss 0.4779 LR: 8.992e-02 Score 85.507 Data time: 2.4415, Total iter time: 6.1669
+thomas 04/06 08:26:55 ===> Epoch[45](13400/301): Loss 0.4506 LR: 8.989e-02 Score 85.905 Data time: 2.2579, Total iter time: 5.7489
+thomas 04/06 08:30:49 ===> Epoch[45](13440/301): Loss 0.4562 LR: 8.986e-02 Score 85.712 Data time: 2.2447, Total iter time: 5.7772
+thomas 04/06 08:34:39 ===> Epoch[45](13480/301): Loss 0.4351 LR: 8.983e-02 Score 86.355 Data time: 2.1970, Total iter time: 5.6763
+thomas 04/06 08:38:47 ===> Epoch[45](13520/301): Loss 0.4636 LR: 8.980e-02 Score 85.384 Data time: 2.3985, Total iter time: 6.1298
+thomas 04/06 08:42:52 ===> Epoch[46](13560/301): Loss 0.4680 LR: 8.977e-02 Score 85.881 Data time: 2.4135, Total iter time: 6.0494
+thomas 04/06 08:47:02 ===> Epoch[46](13600/301): Loss 0.4540 LR: 8.974e-02 Score 85.838 Data time: 2.4283, Total iter time: 6.1613
+thomas 04/06 08:50:57 ===> Epoch[46](13640/301): Loss 0.4673 LR: 8.971e-02 Score 84.832 Data time: 2.2915, Total iter time: 5.8215
+thomas 04/06 08:54:47 ===> Epoch[46](13680/301): Loss 0.5071 LR: 8.968e-02 Score 84.353 Data time: 2.2133, Total iter time: 5.6612
+thomas 04/06 08:58:35 ===> Epoch[46](13720/301): Loss 0.4392 LR: 8.965e-02 Score 86.486 Data time: 2.2116, Total iter time: 5.6478
+thomas 04/06 09:02:32 ===> Epoch[46](13760/301): Loss 0.4364 LR: 8.962e-02 Score 86.780 Data time: 2.2958, Total iter time: 5.8523
+thomas 04/06 09:06:35 ===> Epoch[46](13800/301): Loss 0.4982 LR: 8.959e-02 Score 84.191 Data time: 2.3488, Total iter time: 5.9780
+thomas 04/06 09:10:45 ===> Epoch[46](13840/301): Loss 0.4128 LR: 8.956e-02 Score 86.910 Data time: 2.4607, Total iter time: 6.1758
+thomas 04/06 09:14:26 ===> Epoch[47](13880/301): Loss 0.3991 LR: 8.953e-02 Score 87.226 Data time: 2.1811, Total iter time: 5.4806
+thomas 04/06 09:18:28 ===> Epoch[47](13920/301): Loss 0.5028 LR: 8.950e-02 Score 84.502 Data time: 2.3051, Total iter time: 5.9750
+thomas 04/06 09:22:22 ===> Epoch[47](13960/301): Loss 0.5247 LR: 8.947e-02 Score 83.695 Data time: 2.2356, Total iter time: 5.7708
+thomas 04/06 09:26:29 ===> Epoch[47](14000/301): Loss 0.4344 LR: 8.944e-02 Score 86.447 Data time: 2.4163, Total iter time: 6.0912
+thomas 04/06 09:26:30 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/06 09:26:30 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/06 09:28:37 101/312: Data time: 0.0029, Iter time: 0.5868 Loss 0.425 (AVG: 0.589) Score 89.176 (AVG: 83.242) mIOU 53.187 mAP 68.850 mAcc 67.204
+IOU: 75.746 96.091 57.296 62.559 83.796 65.011 60.627 32.835 18.881 69.141 8.038 58.094 38.170 41.554 33.331 23.342 73.218 64.664 67.560 33.791
+mAP: 76.020 95.461 60.047 54.628 85.051 86.431 69.877 55.486 39.655 67.870 23.992 52.996 54.949 73.895 76.449 87.489 95.052 88.298 89.145 44.212
+mAcc: 88.147 98.441 72.678 67.846 93.317 91.435 66.895 74.523 19.316 78.653 8.577 72.378 77.969 43.704 33.570 69.246 90.167 70.379 85.248 41.599
+
+thomas 04/06 09:30:41 201/312: Data time: 0.0039, Iter time: 1.3219 Loss 0.522 (AVG: 0.603) Score 82.424 (AVG: 82.649) mIOU 53.777 mAP 68.327 mAcc 65.894
+IOU: 74.880 96.179 52.046 64.564 83.230 66.932 59.216 38.331 24.341 74.654 6.545 48.780 47.466 43.507 16.793 44.614 75.566 50.581 73.633 33.672
+mAP: 77.394 96.217 56.247 65.290 87.899 85.117 67.358 61.652 46.273 67.743 29.163 57.196 56.403 72.541 51.300 86.985 91.619 79.184 85.476 45.482
+mAcc: 87.832 98.456 69.565 73.317 93.065 93.287 65.856 76.360 24.925 85.067 6.907 59.361 77.056 46.382 16.846 70.952 87.312 56.286 88.048 41.005
+
+thomas 04/06 09:32:46 301/312: Data time: 0.0041, Iter time: 0.8621 Loss 0.860 (AVG: 0.628) Score 73.312 (AVG: 82.176) mIOU 54.498 mAP 68.316 mAcc 65.798
+IOU: 73.730 95.882 52.840 66.284 83.484 71.323 61.543 37.931 25.639 72.059 6.266 47.733 50.379 42.752 20.757 48.261 75.198 50.965 71.996 34.944
+mAP: 77.053 96.344 57.743 67.242 88.126 83.571 66.975 60.542 47.631 68.536 31.923 58.776 56.751 69.256 51.680 85.862 92.373 79.356 80.632 45.949
+mAcc: 87.346 98.391 70.533 75.264 93.524 93.468 67.644 75.950 26.581 82.376 6.525 55.681 80.522 46.440 20.798 64.890 87.256 55.542 85.239 41.987
+
+thomas 04/06 09:32:57 312/312: Data time: 0.0025, Iter time: 0.2789 Loss 0.877 (AVG: 0.626) Score 74.502 (AVG: 82.111) mIOU 54.462 mAP 68.341 mAcc 65.731
+IOU: 73.764 95.886 52.899 65.977 83.467 71.112 60.777 37.836 25.478 71.971 6.256 47.307 49.674 42.750 21.336 48.428 75.995 50.525 72.601 35.211
+mAP: 77.383 96.450 57.539 67.381 88.083 83.267 66.923 60.813 47.388 68.536 31.923 58.578 56.751 69.256 51.653 84.715 92.813 79.565 81.458 46.338
+mAcc: 87.155 98.419 70.874 75.080 93.522 93.501 67.093 76.263 26.402 82.376 6.525 54.874 80.522 46.440 21.378 63.457 87.442 55.376 85.657 42.273
+
+thomas 04/06 09:32:57 Finished test. Elapsed time: 387.0200
+thomas 04/06 09:32:59 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth
+thomas 04/06 09:32:59 Current best mIoU: 54.462 at iter 14000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 09:37:08 ===> Epoch[47](14040/301): Loss 0.4680 LR: 8.941e-02 Score 85.336 Data time: 2.4344, Total iter time: 6.1409
+thomas 04/06 09:40:51 ===> Epoch[47](14080/301): Loss 0.4590 LR: 8.938e-02 Score 85.805 Data time: 2.1577, Total iter time: 5.5012
+thomas 04/06 09:44:49 ===> Epoch[47](14120/301): Loss 0.5219 LR: 8.934e-02 Score 84.071 Data time: 2.3153, Total iter time: 5.8783
+thomas 04/06 09:48:39 ===> Epoch[48](14160/301): Loss 0.4667 LR: 8.931e-02 Score 85.436 Data time: 2.2305, Total iter time: 5.6799
+thomas 04/06 09:52:41 ===> Epoch[48](14200/301): Loss 0.4862 LR: 8.928e-02 Score 85.270 Data time: 2.3569, Total iter time: 5.9733
+thomas 04/06 09:57:02 ===> Epoch[48](14240/301): Loss 0.4321 LR: 8.925e-02 Score 86.681 Data time: 2.5758, Total iter time: 6.4446
+thomas 04/06 10:00:57 ===> Epoch[48](14280/301): Loss 0.4214 LR: 8.922e-02 Score 86.920 Data time: 2.2833, Total iter time: 5.8023
+thomas 04/06 10:04:49 ===> Epoch[48](14320/301): Loss 0.4469 LR: 8.919e-02 Score 85.844 Data time: 2.2089, Total iter time: 5.7119
+thomas 04/06 10:08:35 ===> Epoch[48](14360/301): Loss 0.4862 LR: 8.916e-02 Score 84.954 Data time: 2.2043, Total iter time: 5.5866
+thomas 04/06 10:12:46 ===> Epoch[48](14400/301): Loss 0.4457 LR: 8.913e-02 Score 86.178 Data time: 2.3821, Total iter time: 6.1945
+thomas 04/06 10:16:38 ===> Epoch[48](14440/301): Loss 0.4380 LR: 8.910e-02 Score 86.612 Data time: 2.3194, Total iter time: 5.7347
+thomas 04/06 10:20:53 ===> Epoch[49](14480/301): Loss 0.4526 LR: 8.907e-02 Score 85.551 Data time: 2.4950, Total iter time: 6.2948
+thomas 04/06 10:24:40 ===> Epoch[49](14520/301): Loss 0.4178 LR: 8.904e-02 Score 87.157 Data time: 2.2628, Total iter time: 5.6106
+thomas 04/06 10:28:28 ===> Epoch[49](14560/301): Loss 0.4364 LR: 8.901e-02 Score 86.558 Data time: 2.2043, Total iter time: 5.6053
+thomas 04/06 10:32:42 ===> Epoch[49](14600/301): Loss 0.4492 LR: 8.898e-02 Score 85.800 Data time: 2.4395, Total iter time: 6.2907
+thomas 04/06 10:36:45 ===> Epoch[49](14640/301): Loss 0.4475 LR: 8.895e-02 Score 86.096 Data time: 2.3598, Total iter time: 5.9934
+thomas 04/06 10:40:51 ===> Epoch[49](14680/301): Loss 0.4648 LR: 8.892e-02 Score 85.823 Data time: 2.4336, Total iter time: 6.0654
+thomas 04/06 10:44:58 ===> Epoch[49](14720/301): Loss 0.4625 LR: 8.889e-02 Score 85.701 Data time: 2.4275, Total iter time: 6.1076
+thomas 04/06 10:49:02 ===> Epoch[50](14760/301): Loss 0.4294 LR: 8.886e-02 Score 86.348 Data time: 2.3785, Total iter time: 6.0156
+thomas 04/06 10:52:59 ===> Epoch[50](14800/301): Loss 0.4424 LR: 8.883e-02 Score 86.344 Data time: 2.2938, Total iter time: 5.8741
+thomas 04/06 10:56:51 ===> Epoch[50](14840/301): Loss 0.4301 LR: 8.880e-02 Score 86.819 Data time: 2.2139, Total iter time: 5.6990
+thomas 04/06 11:00:50 ===> Epoch[50](14880/301): Loss 0.4914 LR: 8.877e-02 Score 85.144 Data time: 2.2848, Total iter time: 5.9000
+thomas 04/06 11:04:49 ===> Epoch[50](14920/301): Loss 0.4280 LR: 8.874e-02 Score 86.336 Data time: 2.3511, Total iter time: 5.8916
+thomas 04/06 11:09:12 ===> Epoch[50](14960/301): Loss 0.4258 LR: 8.871e-02 Score 86.802 Data time: 2.5962, Total iter time: 6.5024
+thomas 04/06 11:12:54 ===> Epoch[50](15000/301): Loss 0.4816 LR: 8.868e-02 Score 84.738 Data time: 2.1578, Total iter time: 5.4633
+thomas 04/06 11:12:55 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/06 11:12:55 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/06 11:14:57 101/312: Data time: 0.0024, Iter time: 1.0011 Loss 0.386 (AVG: 0.596) Score 88.072 (AVG: 82.352) mIOU 55.671 mAP 67.610 mAcc 66.304
+IOU: 73.196 95.809 47.790 71.918 84.448 83.075 62.866 34.204 15.429 61.634 7.446 54.764 52.035 50.035 38.675 43.031 86.669 49.354 70.979 30.068
+mAP: 74.802 97.899 53.495 64.327 88.598 82.529 67.410 49.294 45.643 67.614 34.625 66.984 63.780 56.937 63.910 88.991 94.441 77.723 70.409 42.790
+mAcc: 85.771 98.385 64.244 79.284 93.874 93.458 70.732 72.997 15.756 72.268 8.547 63.289 62.709 60.662 55.000 60.237 94.698 53.813 79.696 40.663
+
+thomas 04/06 11:16:55 201/312: Data time: 0.0023, Iter time: 0.8035 Loss 0.505 (AVG: 0.616) Score 83.201 (AVG: 82.045) mIOU 55.386 mAP 69.335 mAcc 67.002
+IOU: 72.399 95.862 49.591 69.744 84.094 79.558 63.994 34.415 17.863 70.979 6.094 50.896 52.116 45.535 41.014 41.975 83.529 51.667 67.366 29.029
+mAP: 75.337 97.388 59.319 67.970 89.425 82.638 69.493 53.812 47.658 70.092 37.046 64.715 66.292 64.272 55.705 88.397 95.252 78.378 77.981 45.536
+mAcc: 86.482 98.383 68.447 78.292 94.143 91.459 71.344 68.921 18.520 79.054 8.093 56.720 65.061 60.757 53.600 65.624 94.217 57.337 85.930 37.654
+
+thomas 04/06 11:18:54 301/312: Data time: 0.0024, Iter time: 0.4870 Loss 0.708 (AVG: 0.628) Score 74.385 (AVG: 81.516) mIOU 55.498 mAP 69.632 mAcc 67.590
+IOU: 71.739 95.894 48.455 71.489 83.907 78.351 63.227 35.205 20.103 68.548 7.816 46.771 51.181 57.464 37.628 49.349 77.781 49.534 64.504 31.018
+mAP: 76.461 97.249 60.477 71.273 88.695 81.763 66.583 54.977 49.925 68.569 34.791 61.561 66.662 72.557 55.026 87.666 92.350 77.294 82.096 46.665
+mAcc: 85.970 98.434 69.390 81.988 93.270 91.922 70.179 70.544 20.790 78.315 9.366 53.258 64.529 72.846 49.242 70.209 90.407 54.672 88.565 37.904
+
+thomas 04/06 11:19:02 312/312: Data time: 0.0023, Iter time: 0.2266 Loss 0.541 (AVG: 0.629) Score 73.934 (AVG: 81.439) mIOU 55.595 mAP 69.488 mAcc 67.617
+IOU: 71.554 95.897 48.126 71.655 83.926 78.597 63.284 34.537 19.900 68.153 7.689 47.286 51.329 57.072 37.522 49.682 78.606 50.742 64.843 31.501
+mAP: 76.540 97.210 60.569 71.467 88.444 80.582 66.613 54.311 49.394 67.468 34.057 62.161 66.621 72.214 55.026 87.977 92.869 78.262 82.036 45.939
+mAcc: 85.685 98.428 68.911 82.186 93.287 91.997 70.266 70.230 20.572 77.635 9.300 53.724 64.670 72.595 49.242 70.733 90.601 55.911 87.872 38.486
+
+thomas 04/06 11:19:02 Finished test. Elapsed time: 366.8634
+thomas 04/06 11:19:03 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth
+thomas 04/06 11:19:03 Current best mIoU: 55.595 at iter 15000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 11:22:57 ===> Epoch[50](15040/301): Loss 0.4730 LR: 8.865e-02 Score 85.164 Data time: 2.2521, Total iter time: 5.7693
+thomas 04/06 11:26:45 ===> Epoch[51](15080/301): Loss 0.4661 LR: 8.862e-02 Score 85.500 Data time: 2.2258, Total iter time: 5.6209
+thomas 04/06 11:30:51 ===> Epoch[51](15120/301): Loss 0.4420 LR: 8.859e-02 Score 86.071 Data time: 2.4181, Total iter time: 6.0687
+thomas 04/06 11:35:08 ===> Epoch[51](15160/301): Loss 0.4500 LR: 8.855e-02 Score 85.789 Data time: 2.5398, Total iter time: 6.3472
+thomas 04/06 11:39:03 ===> Epoch[51](15200/301): Loss 0.4250 LR: 8.852e-02 Score 87.113 Data time: 2.2444, Total iter time: 5.8061
+thomas 04/06 11:43:06 ===> Epoch[51](15240/301): Loss 0.4366 LR: 8.849e-02 Score 86.478 Data time: 2.3169, Total iter time: 5.9906
+thomas 04/06 11:46:53 ===> Epoch[51](15280/301): Loss 0.4112 LR: 8.846e-02 Score 86.940 Data time: 2.1812, Total iter time: 5.6063
+thomas 04/06 11:50:52 ===> Epoch[51](15320/301): Loss 0.4302 LR: 8.843e-02 Score 86.776 Data time: 2.2879, Total iter time: 5.8866
+thomas 04/06 11:55:04 ===> Epoch[52](15360/301): Loss 0.4343 LR: 8.840e-02 Score 86.658 Data time: 2.5148, Total iter time: 6.2305
+thomas 04/06 11:58:49 ===> Epoch[52](15400/301): Loss 0.4497 LR: 8.837e-02 Score 86.358 Data time: 2.2062, Total iter time: 5.5670
+thomas 04/06 12:02:43 ===> Epoch[52](15440/301): Loss 0.4568 LR: 8.834e-02 Score 86.342 Data time: 2.2177, Total iter time: 5.7614
+thomas 04/06 12:06:37 ===> Epoch[52](15480/301): Loss 0.4621 LR: 8.831e-02 Score 85.565 Data time: 2.2584, Total iter time: 5.7861
+thomas 04/06 12:10:40 ===> Epoch[52](15520/301): Loss 0.4631 LR: 8.828e-02 Score 85.431 Data time: 2.3135, Total iter time: 5.9835
+thomas 04/06 12:14:37 ===> Epoch[52](15560/301): Loss 0.4303 LR: 8.825e-02 Score 87.072 Data time: 2.3363, Total iter time: 5.8469
+thomas 04/06 12:18:50 ===> Epoch[52](15600/301): Loss 0.4470 LR: 8.822e-02 Score 86.275 Data time: 2.5595, Total iter time: 6.2550
+thomas 04/06 12:22:58 ===> Epoch[52](15640/301): Loss 0.4013 LR: 8.819e-02 Score 87.517 Data time: 2.4701, Total iter time: 6.1292
+thomas 04/06 12:26:50 ===> Epoch[53](15680/301): Loss 0.4308 LR: 8.816e-02 Score 86.897 Data time: 2.2100, Total iter time: 5.7254
+thomas 04/06 12:30:49 ===> Epoch[53](15720/301): Loss 0.4236 LR: 8.813e-02 Score 86.663 Data time: 2.2943, Total iter time: 5.8918
+thomas 04/06 12:34:48 ===> Epoch[53](15760/301): Loss 0.4093 LR: 8.810e-02 Score 86.892 Data time: 2.2950, Total iter time: 5.8957
+thomas 04/06 12:39:07 ===> Epoch[53](15800/301): Loss 0.4095 LR: 8.807e-02 Score 86.979 Data time: 2.5162, Total iter time: 6.3913
+thomas 04/06 12:43:21 ===> Epoch[53](15840/301): Loss 0.4430 LR: 8.804e-02 Score 86.326 Data time: 2.5649, Total iter time: 6.2857
+thomas 04/06 12:47:11 ===> Epoch[53](15880/301): Loss 0.4169 LR: 8.801e-02 Score 87.033 Data time: 2.2766, Total iter time: 5.6819
+thomas 04/06 12:51:00 ===> Epoch[53](15920/301): Loss 0.4506 LR: 8.798e-02 Score 86.275 Data time: 2.2057, Total iter time: 5.6444
+thomas 04/06 12:55:04 ===> Epoch[54](15960/301): Loss 0.4403 LR: 8.795e-02 Score 86.328 Data time: 2.3514, Total iter time: 6.0183
+thomas 04/06 12:58:48 ===> Epoch[54](16000/301): Loss 0.4598 LR: 8.792e-02 Score 85.672 Data time: 2.1812, Total iter time: 5.5439
+thomas 04/06 12:58:50 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/06 12:58:50 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/06 13:00:58 101/312: Data time: 0.0849, Iter time: 0.3824 Loss 0.413 (AVG: 0.598) Score 86.584 (AVG: 82.177) mIOU 58.604 mAP 68.828 mAcc 70.826
+IOU: 73.914 95.732 52.883 64.171 82.626 69.876 63.558 38.110 38.017 71.427 7.748 55.078 52.747 60.790 48.755 58.902 75.089 49.963 79.965 32.737
+mAP: 77.944 95.867 54.907 64.526 87.200 83.597 62.989 59.865 42.007 67.031 35.123 63.654 63.722 79.254 52.557 82.902 90.271 81.863 84.813 46.467
+mAcc: 84.262 99.101 69.501 76.380 96.261 88.336 74.268 64.499 45.245 92.542 8.922 78.404 78.926 83.759 50.788 67.650 76.152 55.027 87.441 39.063
+
+thomas 04/06 13:03:03 201/312: Data time: 0.0030, Iter time: 0.8220 Loss 0.150 (AVG: 0.593) Score 96.949 (AVG: 82.489) mIOU 58.428 mAP 68.940 mAcc 70.479
+IOU: 74.278 95.764 50.951 64.611 81.954 73.649 63.831 40.389 37.002 70.288 7.574 56.116 51.033 55.568 50.140 56.323 76.648 49.179 80.615 32.645
+mAP: 78.314 95.505 52.410 64.698 87.476 84.375 66.431 62.198 44.863 67.844 32.263 61.859 59.986 80.029 55.944 82.005 89.544 81.157 84.095 47.809
+mAcc: 84.237 99.013 70.685 76.170 95.623 88.942 76.297 64.701 45.100 93.715 8.458 77.397 74.198 81.253 52.222 63.487 77.836 53.926 87.090 39.230
+
+thomas 04/06 13:05:07 301/312: Data time: 0.0025, Iter time: 0.7004 Loss 0.719 (AVG: 0.587) Score 77.979 (AVG: 82.720) mIOU 57.839 mAP 69.568 mAcc 69.618
+IOU: 74.304 95.656 48.509 63.301 82.622 74.574 65.421 39.367 38.882 71.203 7.132 57.204 49.831 57.511 41.862 55.825 75.140 46.968 78.659 32.802
+mAP: 78.698 95.325 50.955 67.442 87.965 84.502 69.450 59.364 47.961 71.307 33.354 63.904 60.851 79.076 56.509 83.468 88.521 81.132 83.209 48.378
+mAcc: 84.598 99.007 68.444 76.678 95.490 87.961 76.839 62.828 47.287 92.702 8.135 76.111 75.436 81.543 43.602 63.235 76.519 50.954 85.895 39.099
+
+thomas 04/06 13:05:20 312/312: Data time: 0.0033, Iter time: 0.2328 Loss 0.537 (AVG: 0.586) Score 73.839 (AVG: 82.722) mIOU 57.967 mAP 69.741 mAcc 69.739
+IOU: 74.200 95.583 48.481 63.366 82.489 74.531 65.266 39.145 39.777 70.255 7.269 56.511 51.114 57.719 42.080 56.286 75.792 48.241 78.110 33.117
+mAP: 78.393 95.353 50.300 67.522 88.099 84.225 68.938 58.832 48.638 72.233 33.744 62.924 61.632 79.484 57.329 83.845 89.219 81.718 83.748 48.651
+mAcc: 84.471 98.947 67.891 76.999 95.387 87.758 76.924 63.166 48.204 91.066 8.255 75.922 75.919 81.809 43.807 63.999 77.258 52.173 85.298 39.536
+
+thomas 04/06 13:05:20 Finished test. Elapsed time: 390.5237
+thomas 04/06 13:05:22 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth
+thomas 04/06 13:05:22 Current best mIoU: 57.967 at iter 16000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 13:09:41 ===> Epoch[54](16040/301): Loss 0.4226 LR: 8.789e-02 Score 86.491 Data time: 2.5841, Total iter time: 6.4003
+thomas 04/06 13:13:30 ===> Epoch[54](16080/301): Loss 0.4098 LR: 8.786e-02 Score 87.438 Data time: 2.2063, Total iter time: 5.6490
+thomas 04/06 13:17:29 ===> Epoch[54](16120/301): Loss 0.4661 LR: 8.782e-02 Score 85.550 Data time: 2.2990, Total iter time: 5.9027
+thomas 04/06 13:21:14 ===> Epoch[54](16160/301): Loss 0.4642 LR: 8.779e-02 Score 85.646 Data time: 2.1434, Total iter time: 5.5572
+thomas 04/06 13:25:14 ===> Epoch[54](16200/301): Loss 0.4087 LR: 8.776e-02 Score 87.529 Data time: 2.2942, Total iter time: 5.9226
+thomas 04/06 13:29:21 ===> Epoch[54](16240/301): Loss 0.4255 LR: 8.773e-02 Score 86.963 Data time: 2.4303, Total iter time: 6.0844
+thomas 04/06 13:33:41 ===> Epoch[55](16280/301): Loss 0.4304 LR: 8.770e-02 Score 87.009 Data time: 2.5644, Total iter time: 6.4339
+thomas 04/06 13:37:28 ===> Epoch[55](16320/301): Loss 0.4214 LR: 8.767e-02 Score 87.166 Data time: 2.1737, Total iter time: 5.5977
+thomas 04/06 13:41:27 ===> Epoch[55](16360/301): Loss 0.4386 LR: 8.764e-02 Score 86.412 Data time: 2.2728, Total iter time: 5.8894
+thomas 04/06 13:45:01 ===> Epoch[55](16400/301): Loss 0.4361 LR: 8.761e-02 Score 86.265 Data time: 2.0522, Total iter time: 5.2767
+thomas 04/06 13:48:50 ===> Epoch[55](16440/301): Loss 0.4215 LR: 8.758e-02 Score 86.894 Data time: 2.2207, Total iter time: 5.6468
+thomas 04/06 13:52:57 ===> Epoch[55](16480/301): Loss 0.4351 LR: 8.755e-02 Score 86.734 Data time: 2.4799, Total iter time: 6.1156
+thomas 04/06 13:57:23 ===> Epoch[55](16520/301): Loss 0.4291 LR: 8.752e-02 Score 86.735 Data time: 2.6339, Total iter time: 6.5565
+thomas 04/06 14:01:27 ===> Epoch[56](16560/301): Loss 0.4129 LR: 8.749e-02 Score 86.912 Data time: 2.3275, Total iter time: 6.0217
+thomas 04/06 14:05:16 ===> Epoch[56](16600/301): Loss 0.4370 LR: 8.746e-02 Score 86.132 Data time: 2.2080, Total iter time: 5.6670
+thomas 04/06 14:09:16 ===> Epoch[56](16640/301): Loss 0.4295 LR: 8.743e-02 Score 86.305 Data time: 2.3166, Total iter time: 5.9141
+thomas 04/06 14:13:23 ===> Epoch[56](16680/301): Loss 0.4534 LR: 8.740e-02 Score 86.367 Data time: 2.3464, Total iter time: 6.1070
+thomas 04/06 14:17:17 ===> Epoch[56](16720/301): Loss 0.4315 LR: 8.737e-02 Score 86.873 Data time: 2.3209, Total iter time: 5.7739
+thomas 04/06 14:21:36 ===> Epoch[56](16760/301): Loss 0.4152 LR: 8.734e-02 Score 87.145 Data time: 2.5574, Total iter time: 6.3993
+thomas 04/06 14:25:23 ===> Epoch[56](16800/301): Loss 0.3957 LR: 8.731e-02 Score 87.855 Data time: 2.1949, Total iter time: 5.5996
+thomas 04/06 14:28:59 ===> Epoch[56](16840/301): Loss 0.4446 LR: 8.728e-02 Score 86.178 Data time: 2.0849, Total iter time: 5.3321
+thomas 04/06 14:32:56 ===> Epoch[57](16880/301): Loss 0.4096 LR: 8.725e-02 Score 87.153 Data time: 2.2612, Total iter time: 5.8349
+thomas 04/06 14:37:03 ===> Epoch[57](16920/301): Loss 0.4886 LR: 8.722e-02 Score 85.191 Data time: 2.3537, Total iter time: 6.0883
+thomas 04/06 14:41:05 ===> Epoch[57](16960/301): Loss 0.4604 LR: 8.719e-02 Score 85.735 Data time: 2.3627, Total iter time: 5.9647
+thomas 04/06 14:45:31 ===> Epoch[57](17000/301): Loss 0.4219 LR: 8.715e-02 Score 86.739 Data time: 2.6289, Total iter time: 6.5719
+thomas 04/06 14:45:33 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/06 14:45:33 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/06 14:47:41 101/312: Data time: 0.0024, Iter time: 0.3892 Loss 0.489 (AVG: 0.736) Score 85.642 (AVG: 80.326) mIOU 50.902 mAP 65.924 mAcc 60.861
+IOU: 74.489 95.407 48.455 43.658 86.356 76.114 60.048 38.359 19.276 62.529 4.116 64.214 49.876 25.736 33.597 21.439 88.948 44.023 47.466 33.931
+mAP: 78.601 95.802 61.948 54.256 88.585 77.695 65.715 50.975 49.349 58.642 30.901 60.711 63.597 55.629 43.585 88.136 97.752 75.243 79.875 41.481
+mAcc: 90.091 98.192 80.245 45.711 92.986 91.351 63.241 57.068 19.630 92.168 4.316 77.301 74.695 26.476 47.369 21.447 90.473 47.927 47.914 48.625
+
+thomas 04/06 14:49:36 201/312: Data time: 0.0023, Iter time: 0.6941 Loss 0.739 (AVG: 0.678) Score 76.336 (AVG: 81.627) mIOU 51.799 mAP 67.805 mAcc 61.972
+IOU: 76.441 95.849 47.487 51.964 86.693 77.666 57.442 38.227 22.292 63.679 6.577 62.578 48.645 37.662 33.096 16.706 84.707 43.358 53.491 31.420
+mAP: 79.690 95.526 59.639 61.488 88.553 82.823 68.096 54.833 49.510 65.973 30.608 58.348 64.784 67.336 50.482 80.264 95.301 78.932 83.184 40.737
+mAcc: 90.012 98.285 80.001 54.118 93.875 93.563 60.978 55.687 22.979 88.436 6.847 76.108 74.899 38.466 52.605 17.335 85.714 46.315 53.709 49.514
+
+thomas 04/06 14:51:31 301/312: Data time: 0.0024, Iter time: 0.3560 Loss 0.465 (AVG: 0.677) Score 82.813 (AVG: 81.684) mIOU 52.250 mAP 68.881 mAcc 62.668
+IOU: 76.189 95.865 48.384 53.956 86.188 75.111 57.542 38.430 25.420 66.057 5.669 60.529 48.345 39.291 36.308 13.717 82.093 49.746 55.327 30.842
+mAP: 78.167 95.968 60.641 61.949 88.577 79.585 70.786 55.000 51.385 69.876 34.940 60.732 64.652 69.206 55.611 79.414 96.002 79.928 82.675 42.516
+mAcc: 89.315 98.214 79.604 56.491 93.440 93.000 60.326 56.912 26.237 88.417 5.829 75.040 76.618 39.916 56.608 14.068 83.084 53.472 55.517 51.247
+
+thomas 04/06 14:51:44 312/312: Data time: 0.0022, Iter time: 0.4455 Loss 0.365 (AVG: 0.675) Score 86.782 (AVG: 81.742) mIOU 52.227 mAP 69.023 mAcc 62.754
+IOU: 76.041 95.895 48.264 53.737 86.408 74.085 58.527 38.245 25.480 66.002 5.308 60.127 48.854 40.572 36.076 14.681 82.036 50.195 52.754 31.250
+mAP: 78.250 96.016 61.261 62.040 88.863 79.870 71.334 54.891 51.354 69.876 34.169 60.732 64.757 69.760 55.611 79.377 95.881 80.405 83.033 42.981
+mAcc: 89.184 98.221 79.501 56.311 93.573 93.125 61.273 56.960 26.378 88.417 5.447 75.040 77.058 41.243 56.608 15.044 82.986 54.051 52.919 51.744
+
+thomas 04/06 14:51:44 Finished test. Elapsed time: 371.2522
+thomas 04/06 14:51:44 Current best mIoU: 57.967 at iter 16000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 14:55:34 ===> Epoch[57](17040/301): Loss 0.3939 LR: 8.712e-02 Score 87.294 Data time: 2.2104, Total iter time: 5.6811
+thomas 04/06 14:59:17 ===> Epoch[57](17080/301): Loss 0.4361 LR: 8.709e-02 Score 86.285 Data time: 2.1320, Total iter time: 5.4961
+thomas 04/06 15:03:24 ===> Epoch[57](17120/301): Loss 0.3690 LR: 8.706e-02 Score 88.419 Data time: 2.4531, Total iter time: 6.1030
+thomas 04/06 15:07:45 ===> Epoch[58](17160/301): Loss 0.4455 LR: 8.703e-02 Score 86.298 Data time: 2.5828, Total iter time: 6.4179
+thomas 04/06 15:11:47 ===> Epoch[58](17200/301): Loss 0.4358 LR: 8.700e-02 Score 86.668 Data time: 2.3550, Total iter time: 5.9733
+thomas 04/06 15:15:54 ===> Epoch[58](17240/301): Loss 0.4158 LR: 8.697e-02 Score 86.801 Data time: 2.3693, Total iter time: 6.1047
+thomas 04/06 15:19:24 ===> Epoch[58](17280/301): Loss 0.4269 LR: 8.694e-02 Score 86.809 Data time: 2.0528, Total iter time: 5.1900
+thomas 04/06 15:23:17 ===> Epoch[58](17320/301): Loss 0.4079 LR: 8.691e-02 Score 87.277 Data time: 2.2203, Total iter time: 5.7501
+thomas 04/06 15:27:24 ===> Epoch[58](17360/301): Loss 0.4192 LR: 8.688e-02 Score 87.119 Data time: 2.3778, Total iter time: 6.0908
+thomas 04/06 15:31:30 ===> Epoch[58](17400/301): Loss 0.4385 LR: 8.685e-02 Score 86.181 Data time: 2.4510, Total iter time: 6.0717
+thomas 04/06 15:35:19 ===> Epoch[58](17440/301): Loss 0.4211 LR: 8.682e-02 Score 87.172 Data time: 2.2310, Total iter time: 5.6716
+thomas 04/06 15:39:00 ===> Epoch[59](17480/301): Loss 0.4201 LR: 8.679e-02 Score 86.916 Data time: 2.1100, Total iter time: 5.4433
+thomas 04/06 15:42:59 ===> Epoch[59](17520/301): Loss 0.4485 LR: 8.676e-02 Score 86.271 Data time: 2.2911, Total iter time: 5.8841
+thomas 04/06 15:46:46 ===> Epoch[59](17560/301): Loss 0.4139 LR: 8.673e-02 Score 87.190 Data time: 2.2158, Total iter time: 5.6123
+thomas 04/06 15:50:50 ===> Epoch[59](17600/301): Loss 0.4890 LR: 8.670e-02 Score 84.883 Data time: 2.3923, Total iter time: 6.0259
+thomas 04/06 15:55:05 ===> Epoch[59](17640/301): Loss 0.4238 LR: 8.667e-02 Score 86.607 Data time: 2.5146, Total iter time: 6.2820
+thomas 04/06 15:59:06 ===> Epoch[59](17680/301): Loss 0.4227 LR: 8.664e-02 Score 86.762 Data time: 2.3049, Total iter time: 5.9564
+thomas 04/06 16:03:02 ===> Epoch[59](17720/301): Loss 0.4018 LR: 8.661e-02 Score 87.444 Data time: 2.2581, Total iter time: 5.8199
+thomas 04/06 16:06:53 ===> Epoch[60](17760/301): Loss 0.3929 LR: 8.658e-02 Score 87.506 Data time: 2.2183, Total iter time: 5.7007
+thomas 04/06 16:10:39 ===> Epoch[60](17800/301): Loss 0.4358 LR: 8.655e-02 Score 86.694 Data time: 2.1785, Total iter time: 5.5994
+thomas 04/06 16:14:43 ===> Epoch[60](17840/301): Loss 0.4168 LR: 8.651e-02 Score 87.054 Data time: 2.4246, Total iter time: 6.0290
+thomas 04/06 16:18:59 ===> Epoch[60](17880/301): Loss 0.4310 LR: 8.648e-02 Score 86.434 Data time: 2.5389, Total iter time: 6.3190
+thomas 04/06 16:23:06 ===> Epoch[60](17920/301): Loss 0.3961 LR: 8.645e-02 Score 87.884 Data time: 2.3974, Total iter time: 6.0882
+thomas 04/06 16:26:44 ===> Epoch[60](17960/301): Loss 0.4543 LR: 8.642e-02 Score 85.536 Data time: 2.0705, Total iter time: 5.3753
+thomas 04/06 16:30:45 ===> Epoch[60](18000/301): Loss 0.4393 LR: 8.639e-02 Score 86.570 Data time: 2.2755, Total iter time: 5.9605
+thomas 04/06 16:30:46 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/06 16:30:47 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/06 16:32:46 101/312: Data time: 0.0023, Iter time: 0.3996 Loss 0.448 (AVG: 0.642) Score 85.585 (AVG: 80.338) mIOU 53.725 mAP 68.043 mAcc 64.249
+IOU: 73.956 95.957 51.611 56.394 85.720 61.853 60.252 33.527 32.399 50.460 9.248 44.646 53.968 56.915 40.141 32.894 71.715 57.742 79.785 25.323
+mAP: 77.811 95.485 61.440 56.691 88.671 72.349 68.614 52.446 48.837 67.417 38.526 68.222 67.492 70.406 45.099 90.190 89.996 79.013 80.801 41.348
+mAcc: 84.316 98.547 60.324 61.605 93.627 91.001 64.282 58.680 36.663 87.099 12.296 46.345 69.783 61.830 43.853 33.022 72.108 61.430 82.279 65.884
+
+thomas 04/06 16:34:51 201/312: Data time: 0.0029, Iter time: 0.3718 Loss 0.528 (AVG: 0.614) Score 85.355 (AVG: 81.291) mIOU 55.375 mAP 69.201 mAcc 65.015
+IOU: 74.370 96.016 46.573 55.424 86.636 71.498 57.966 36.423 37.510 63.188 10.358 50.687 51.638 58.220 39.078 27.739 78.182 56.877 78.867 30.257
+mAP: 77.873 95.790 57.781 59.010 87.650 81.403 70.002 56.700 49.237 66.808 36.778 66.685 67.778 72.926 50.278 88.598 92.738 78.921 81.434 45.619
+mAcc: 84.501 98.603 54.724 66.048 93.862 92.874 62.478 62.297 42.832 86.514 14.528 52.954 65.457 62.301 43.897 27.825 79.107 61.132 81.020 67.340
+
+thomas 04/06 16:36:48 301/312: Data time: 0.0030, Iter time: 1.2082 Loss 0.339 (AVG: 0.616) Score 90.123 (AVG: 81.404) mIOU 55.118 mAP 69.738 mAcc 64.603
+IOU: 74.671 95.918 46.141 55.393 86.365 71.785 58.460 39.268 38.736 65.616 12.783 46.389 49.119 60.326 39.633 23.554 78.699 50.271 79.827 29.410
+mAP: 78.605 96.011 56.591 63.012 88.763 83.136 71.724 60.325 50.067 68.142 39.166 59.653 65.880 74.398 53.674 88.429 92.355 78.794 81.450 44.594
+mAcc: 84.812 98.593 55.246 66.053 93.609 93.545 63.385 65.191 43.842 88.841 17.853 48.509 62.293 65.456 44.191 24.055 79.319 53.519 81.584 62.165
+
+thomas 04/06 16:36:59 312/312: Data time: 0.0030, Iter time: 0.6734 Loss 0.304 (AVG: 0.613) Score 91.460 (AVG: 81.506) mIOU 55.225 mAP 69.778 mAcc 64.691
+IOU: 74.714 95.941 46.496 55.809 86.337 72.260 59.042 39.260 39.923 65.319 12.605 46.194 49.737 60.133 38.966 23.554 79.027 50.075 79.827 29.284
+mAP: 78.636 96.096 56.503 64.014 89.007 83.369 72.296 59.597 50.851 67.013 38.376 59.695 65.814 74.398 54.255 88.429 92.461 78.425 81.450 44.875
+mAcc: 84.883 98.618 55.579 66.942 93.564 93.630 63.851 65.174 45.111 88.568 17.577 48.345 62.986 65.456 43.157 24.055 79.644 53.314 81.584 61.787
+
+thomas 04/06 16:36:59 Finished test. Elapsed time: 372.8820
+thomas 04/06 16:37:00 Current best mIoU: 57.967 at iter 16000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 16:41:11 ===> Epoch[60](18040/301): Loss 0.3978 LR: 8.636e-02 Score 87.713 Data time: 2.5207, Total iter time: 6.2112
+thomas 04/06 16:45:16 ===> Epoch[61](18080/301): Loss 0.4106 LR: 8.633e-02 Score 87.365 Data time: 2.3927, Total iter time: 6.0440
+thomas 04/06 16:49:10 ===> Epoch[61](18120/301): Loss 0.4204 LR: 8.630e-02 Score 86.993 Data time: 2.2457, Total iter time: 5.7845
+thomas 04/06 16:52:58 ===> Epoch[61](18160/301): Loss 0.4441 LR: 8.627e-02 Score 86.483 Data time: 2.1734, Total iter time: 5.6128
+thomas 04/06 16:56:45 ===> Epoch[61](18200/301): Loss 0.4167 LR: 8.624e-02 Score 87.106 Data time: 2.1812, Total iter time: 5.5989
+thomas 04/06 17:00:57 ===> Epoch[61](18240/301): Loss 0.4534 LR: 8.621e-02 Score 85.589 Data time: 2.4431, Total iter time: 6.2112
+thomas 04/06 17:05:03 ===> Epoch[61](18280/301): Loss 0.4172 LR: 8.618e-02 Score 86.970 Data time: 2.4372, Total iter time: 6.0919
+thomas 04/06 17:08:57 ===> Epoch[61](18320/301): Loss 0.3823 LR: 8.615e-02 Score 87.945 Data time: 2.2973, Total iter time: 5.7662
+thomas 04/06 17:12:53 ===> Epoch[61](18360/301): Loss 0.4112 LR: 8.612e-02 Score 86.832 Data time: 2.2750, Total iter time: 5.8243
+thomas 04/06 17:16:36 ===> Epoch[62](18400/301): Loss 0.4403 LR: 8.609e-02 Score 86.287 Data time: 2.1337, Total iter time: 5.5024
+thomas 04/06 17:20:28 ===> Epoch[62](18440/301): Loss 0.3765 LR: 8.606e-02 Score 88.205 Data time: 2.2343, Total iter time: 5.7313
+thomas 04/06 17:24:34 ===> Epoch[62](18480/301): Loss 0.3825 LR: 8.603e-02 Score 88.045 Data time: 2.3616, Total iter time: 6.0714
+thomas 04/06 17:28:32 ===> Epoch[62](18520/301): Loss 0.4287 LR: 8.600e-02 Score 86.532 Data time: 2.3522, Total iter time: 5.8622
+thomas 04/06 17:32:42 ===> Epoch[62](18560/301): Loss 0.4807 LR: 8.597e-02 Score 85.269 Data time: 2.4339, Total iter time: 6.1765
+thomas 04/06 17:36:42 ===> Epoch[62](18600/301): Loss 0.4005 LR: 8.594e-02 Score 87.624 Data time: 2.3056, Total iter time: 5.9000
+thomas 04/06 17:40:51 ===> Epoch[62](18640/301): Loss 0.4482 LR: 8.590e-02 Score 86.195 Data time: 2.3890, Total iter time: 6.1582
+thomas 04/06 17:44:33 ===> Epoch[63](18680/301): Loss 0.3924 LR: 8.587e-02 Score 88.205 Data time: 2.1252, Total iter time: 5.4577
+thomas 04/06 17:48:46 ===> Epoch[63](18720/301): Loss 0.4691 LR: 8.584e-02 Score 85.578 Data time: 2.4483, Total iter time: 6.2451
+thomas 04/06 17:52:53 ===> Epoch[63](18760/301): Loss 0.3929 LR: 8.581e-02 Score 87.659 Data time: 2.4941, Total iter time: 6.0981
+thomas 04/06 17:56:44 ===> Epoch[63](18800/301): Loss 0.4193 LR: 8.578e-02 Score 86.950 Data time: 2.2546, Total iter time: 5.7079
+thomas 04/06 18:00:30 ===> Epoch[63](18840/301): Loss 0.4017 LR: 8.575e-02 Score 87.406 Data time: 2.1934, Total iter time: 5.5952
+thomas 04/06 18:04:30 ===> Epoch[63](18880/301): Loss 0.4453 LR: 8.572e-02 Score 86.504 Data time: 2.2998, Total iter time: 5.9074
+thomas 04/06 18:08:24 ===> Epoch[63](18920/301): Loss 0.4111 LR: 8.569e-02 Score 87.093 Data time: 2.2444, Total iter time: 5.7902
+thomas 04/06 18:12:17 ===> Epoch[63](18960/301): Loss 0.4059 LR: 8.566e-02 Score 87.559 Data time: 2.2692, Total iter time: 5.7443
+thomas 04/06 18:16:24 ===> Epoch[64](19000/301): Loss 0.3929 LR: 8.563e-02 Score 87.537 Data time: 2.4025, Total iter time: 6.0983
+thomas 04/06 18:16:26 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/06 18:16:26 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/06 18:18:25 101/312: Data time: 0.0031, Iter time: 0.4156 Loss 0.598 (AVG: 0.549) Score 86.350 (AVG: 83.545) mIOU 57.309 mAP 70.343 mAcc 69.191
+IOU: 79.833 95.982 38.974 72.448 82.196 70.463 55.023 43.860 50.686 62.872 19.254 40.396 52.489 47.177 59.365 29.351 85.840 50.045 74.845 35.072
+mAP: 78.148 97.747 49.956 70.518 91.158 79.860 65.806 62.242 59.401 75.182 40.274 46.651 60.671 66.926 62.855 81.828 92.569 81.723 89.674 53.676
+mAcc: 89.142 98.393 44.515 76.138 91.909 96.364 59.793 66.870 59.956 90.242 25.519 58.815 85.505 54.283 65.045 30.486 89.783 52.182 90.722 58.152
+
+thomas 04/06 18:20:32 201/312: Data time: 0.0023, Iter time: 0.5837 Loss 0.693 (AVG: 0.604) Score 83.151 (AVG: 82.554) mIOU 56.243 mAP 69.524 mAcc 68.456
+IOU: 76.758 95.673 49.155 69.543 83.475 71.226 58.339 41.549 41.995 57.628 11.869 52.468 44.173 49.729 53.515 26.516 88.328 44.793 69.626 38.495
+mAP: 76.647 97.407 54.491 68.293 88.533 82.560 63.576 59.731 55.278 74.527 36.379 52.960 61.091 71.778 59.372 80.461 94.140 78.793 83.537 50.930
+mAcc: 87.734 98.393 55.926 75.461 92.289 95.377 62.245 66.373 51.369 89.459 13.699 67.144 85.818 56.532 60.794 27.698 91.496 46.728 89.351 55.225
+
+thomas 04/06 18:22:26 301/312: Data time: 0.0028, Iter time: 0.7586 Loss 0.171 (AVG: 0.584) Score 94.321 (AVG: 83.020) mIOU 56.507 mAP 70.058 mAcc 68.064
+IOU: 77.243 96.040 49.602 69.498 84.038 71.140 59.952 41.979 41.728 60.627 12.231 56.973 44.665 49.653 43.152 23.793 87.239 47.789 75.453 37.349
+mAP: 78.149 97.605 52.672 68.938 88.820 83.815 65.537 60.322 55.430 72.937 35.894 59.460 63.943 71.696 56.067 78.285 92.789 79.936 86.688 52.173
+mAcc: 87.704 98.482 56.134 75.946 92.695 95.674 63.537 68.953 50.266 91.156 14.010 69.576 82.749 55.375 48.372 24.964 89.797 49.736 90.981 55.180
+
+thomas 04/06 18:22:38 312/312: Data time: 0.0026, Iter time: 0.3140 Loss 0.162 (AVG: 0.588) Score 95.458 (AVG: 82.908) mIOU 56.388 mAP 69.891 mAcc 67.919
+IOU: 77.030 96.083 49.730 69.585 83.475 70.942 59.585 42.644 41.574 60.480 12.166 56.796 44.574 48.712 41.623 23.793 86.312 49.615 75.442 37.594
+mAP: 78.301 97.630 52.932 69.118 88.367 83.632 65.228 60.935 54.749 72.715 35.124 59.896 63.675 70.355 54.862 78.285 92.837 80.434 86.688 52.068
+mAcc: 87.532 98.500 56.481 76.206 92.189 95.243 63.148 68.885 50.145 91.297 13.925 69.098 82.546 54.216 46.871 24.964 88.740 51.569 90.981 55.841
+
+thomas 04/06 18:22:38 Finished test. Elapsed time: 372.1067
+thomas 04/06 18:22:38 Current best mIoU: 57.967 at iter 16000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/06 18:26:30 ===> Epoch[64](19040/301): Loss 0.4370 LR: 8.560e-02 Score 86.910 Data time: 2.2305, Total iter time: 5.7156
+thomas 04/06 18:30:17 ===> Epoch[64](19080/301): Loss 0.4027 LR: 8.557e-02 Score 87.336 Data time: 2.1686, Total iter time: 5.6066
+thomas 04/06 18:33:59 ===> Epoch[64](19120/301): Loss 0.3901 LR: 8.554e-02 Score 87.894 Data time: 2.1489, Total iter time: 5.4737
+thomas 04/06 18:38:11 ===> Epoch[64](19160/301): Loss 0.4073 LR: 8.551e-02 Score 87.232 Data time: 2.4493, Total iter time: 6.2158
+thomas 04/06 18:42:12 ===> Epoch[64](19200/301): Loss 0.4308 LR: 8.548e-02 Score 86.621 Data time: 2.3548, Total iter time: 5.9556
+thomas 04/06 18:45:52 ===> Epoch[64](19240/301): Loss 0.4057 LR: 8.545e-02 Score 87.551 Data time: 2.1562, Total iter time: 5.4393
+thomas 04/06 18:49:42 ===> Epoch[65](19280/301): Loss 0.3899 LR: 8.542e-02 Score 87.890 Data time: 2.2050, Total iter time: 5.6572
+thomas 04/06 18:53:44 ===> Epoch[65](19320/301): Loss 0.3933 LR: 8.539e-02 Score 87.559 Data time: 2.3186, Total iter time: 5.9748
+thomas 04/06 18:57:45 ===> Epoch[65](19360/301): Loss 0.4164 LR: 8.536e-02 Score 86.854 Data time: 2.3284, Total iter time: 5.9519
+thomas 04/06 19:01:28 ===> Epoch[65](19400/301): Loss 0.4035 LR: 8.532e-02 Score 87.174 Data time: 2.1402, Total iter time: 5.4981
+thomas 04/06 19:05:34 ===> Epoch[65](19440/301): Loss 0.3880 LR: 8.529e-02 Score 87.754 Data time: 2.3868, Total iter time: 6.0688
+thomas 04/06 19:09:23 ===> Epoch[65](19480/301): Loss 0.4170 LR: 8.526e-02 Score 87.576 Data time: 2.2525, Total iter
time: 5.6711 +thomas 04/06 19:13:11 ===> Epoch[65](19520/301): Loss 0.3664 LR: 8.523e-02 Score 88.533 Data time: 2.1841, Total iter time: 5.6147 +thomas 04/06 19:17:06 ===> Epoch[65](19560/301): Loss 0.4237 LR: 8.520e-02 Score 86.668 Data time: 2.2375, Total iter time: 5.8147 +thomas 04/06 19:20:57 ===> Epoch[66](19600/301): Loss 0.4176 LR: 8.517e-02 Score 87.031 Data time: 2.1754, Total iter time: 5.6832 +thomas 04/06 19:24:56 ===> Epoch[66](19640/301): Loss 0.4317 LR: 8.514e-02 Score 86.369 Data time: 2.3333, Total iter time: 5.8882 +thomas 04/06 19:28:59 ===> Epoch[66](19680/301): Loss 0.4243 LR: 8.511e-02 Score 86.832 Data time: 2.3864, Total iter time: 5.9963 +thomas 04/06 19:33:05 ===> Epoch[66](19720/301): Loss 0.4300 LR: 8.508e-02 Score 87.194 Data time: 2.4061, Total iter time: 6.0625 +thomas 04/06 19:36:53 ===> Epoch[66](19760/301): Loss 0.4129 LR: 8.505e-02 Score 87.160 Data time: 2.1887, Total iter time: 5.6147 +thomas 04/06 19:40:30 ===> Epoch[66](19800/301): Loss 0.4138 LR: 8.502e-02 Score 87.333 Data time: 2.0917, Total iter time: 5.3760 +thomas 04/06 19:44:35 ===> Epoch[66](19840/301): Loss 0.4031 LR: 8.499e-02 Score 87.623 Data time: 2.3883, Total iter time: 6.0439 +thomas 04/06 19:48:27 ===> Epoch[67](19880/301): Loss 0.4331 LR: 8.496e-02 Score 86.630 Data time: 2.2625, Total iter time: 5.7356 +thomas 04/06 19:52:17 ===> Epoch[67](19920/301): Loss 0.4358 LR: 8.493e-02 Score 86.427 Data time: 2.2246, Total iter time: 5.6624 +thomas 04/06 19:55:57 ===> Epoch[67](19960/301): Loss 0.4269 LR: 8.490e-02 Score 86.548 Data time: 2.1573, Total iter time: 5.4368 +thomas 04/06 20:00:06 ===> Epoch[67](20000/301): Loss 0.4068 LR: 8.487e-02 Score 87.219 Data time: 2.4071, Total iter time: 6.1521 +thomas 04/06 20:00:07 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 20:00:07 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 20:02:01 101/312: Data time: 0.0024, 
Iter time: 0.7069 Loss 0.987 (AVG: 0.708) Score 73.461 (AVG: 78.467) mIOU 54.526 mAP 70.920 mAcc 69.247 +IOU: 63.333 95.565 46.372 43.497 85.843 65.718 64.695 31.396 25.398 73.811 11.989 58.102 40.916 53.745 41.315 26.703 91.838 68.059 72.812 29.410 +mAP: 75.410 96.741 52.884 69.745 90.344 79.913 75.597 59.505 40.364 73.290 40.012 56.485 63.492 74.938 74.355 79.696 97.886 92.331 78.195 47.222 +mAcc: 70.401 97.447 60.619 81.406 92.933 94.293 70.843 81.228 28.172 94.340 16.288 70.540 88.279 74.342 60.804 27.930 93.564 73.176 72.966 35.373 + +thomas 04/06 20:03:52 201/312: Data time: 0.0020, Iter time: 0.6778 Loss 0.881 (AVG: 0.696) Score 69.332 (AVG: 78.652) mIOU 55.028 mAP 69.409 mAcc 69.358 +IOU: 64.940 95.376 50.515 45.350 85.972 67.887 63.519 30.412 27.476 67.339 14.241 59.793 47.386 57.798 39.719 28.327 88.013 60.697 71.145 34.651 +mAP: 75.921 96.731 56.385 68.280 89.002 79.087 68.564 57.781 41.515 71.420 39.004 63.710 64.336 75.001 57.959 74.567 95.598 88.105 76.333 48.875 +mAcc: 71.079 97.214 62.619 84.022 93.728 95.082 69.845 81.738 30.301 91.102 21.470 74.062 85.123 72.628 57.316 29.208 89.513 67.066 71.345 42.689 + +thomas 04/06 20:05:57 301/312: Data time: 0.0024, Iter time: 0.6647 Loss 0.780 (AVG: 0.689) Score 77.257 (AVG: 78.738) mIOU 54.787 mAP 69.502 mAcc 69.311 +IOU: 64.724 95.427 55.430 43.553 86.198 68.401 61.422 31.604 27.104 68.443 14.141 58.604 46.479 54.731 43.681 26.221 84.891 55.461 74.066 35.160 +mAP: 76.274 96.907 58.082 71.791 89.635 81.170 68.253 59.545 42.553 69.931 35.382 58.339 63.738 70.373 63.168 76.853 93.328 83.946 80.297 50.474 +mAcc: 71.286 97.024 68.558 86.840 94.668 94.733 67.444 82.843 29.719 92.101 22.059 71.078 85.545 68.987 62.294 27.990 86.359 59.865 74.470 42.351 + +thomas 04/06 20:06:11 312/312: Data time: 0.0024, Iter time: 0.4526 Loss 0.155 (AVG: 0.690) Score 96.958 (AVG: 78.730) mIOU 54.739 mAP 69.205 mAcc 69.162 +IOU: 64.678 95.431 55.161 43.786 86.220 68.408 61.855 32.010 27.548 68.384 13.639 58.412 47.050 54.014 
42.139 26.221 84.891 55.442 74.066 35.433 +mAP: 75.863 96.897 57.869 72.155 89.358 81.131 67.864 59.580 42.566 69.503 34.724 57.757 63.612 68.916 61.688 76.853 93.328 83.891 80.297 50.249 +mAcc: 71.206 97.013 68.306 87.300 94.632 94.845 67.868 83.126 30.148 92.103 21.167 70.877 85.346 68.239 59.262 27.990 86.359 59.943 74.470 43.043 + +thomas 04/06 20:06:11 Finished test. Elapsed time: 364.0085 +thomas 04/06 20:06:12 Current best mIoU: 57.967 at iter 16000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 20:10:12 ===> Epoch[67](20040/301): Loss 0.4048 LR: 8.484e-02 Score 87.504 Data time: 2.3222, Total iter time: 5.9393 +thomas 04/06 20:14:10 ===> Epoch[67](20080/301): Loss 0.4387 LR: 8.481e-02 Score 86.265 Data time: 2.2880, Total iter time: 5.8609 +thomas 04/06 20:18:00 ===> Epoch[67](20120/301): Loss 0.3868 LR: 8.478e-02 Score 87.934 Data time: 2.2525, Total iter time: 5.6804 +thomas 04/06 20:21:54 ===> Epoch[67](20160/301): Loss 0.3730 LR: 8.474e-02 Score 88.314 Data time: 2.2820, Total iter time: 5.7655 +thomas 04/06 20:25:52 ===> Epoch[68](20200/301): Loss 0.4085 LR: 8.471e-02 Score 87.335 Data time: 2.2743, Total iter time: 5.8755 +thomas 04/06 20:29:45 ===> Epoch[68](20240/301): Loss 0.3783 LR: 8.468e-02 Score 88.186 Data time: 2.2440, Total iter time: 5.7304 +thomas 04/06 20:33:34 ===> Epoch[68](20280/301): Loss 0.3975 LR: 8.465e-02 Score 87.491 Data time: 2.2271, Total iter time: 5.6662 +thomas 04/06 20:37:35 ===> Epoch[68](20320/301): Loss 0.3891 LR: 8.462e-02 Score 88.298 Data time: 2.3079, Total iter time: 5.9277 +thomas 04/06 20:41:30 ===> Epoch[68](20360/301): Loss 0.3820 LR: 8.459e-02 Score 88.050 Data time: 2.2812, Total iter time: 5.8146 +thomas 04/06 20:45:26 ===> 
Epoch[68](20400/301): Loss 0.4150 LR: 8.456e-02 Score 87.271 Data time: 2.3321, Total iter time: 5.8234 +thomas 04/06 20:49:31 ===> Epoch[68](20440/301): Loss 0.4353 LR: 8.453e-02 Score 86.408 Data time: 2.2981, Total iter time: 6.0564 +thomas 04/06 20:53:18 ===> Epoch[69](20480/301): Loss 0.4138 LR: 8.450e-02 Score 87.116 Data time: 2.2078, Total iter time: 5.6070 +thomas 04/06 20:57:29 ===> Epoch[69](20520/301): Loss 0.3955 LR: 8.447e-02 Score 87.472 Data time: 2.4380, Total iter time: 6.1713 +thomas 04/06 21:01:11 ===> Epoch[69](20560/301): Loss 0.3708 LR: 8.444e-02 Score 88.403 Data time: 2.1472, Total iter time: 5.4918 +thomas 04/06 21:05:10 ===> Epoch[69](20600/301): Loss 0.4069 LR: 8.441e-02 Score 87.525 Data time: 2.3299, Total iter time: 5.8898 +thomas 04/06 21:09:06 ===> Epoch[69](20640/301): Loss 0.4024 LR: 8.438e-02 Score 87.377 Data time: 2.2984, Total iter time: 5.8196 +thomas 04/06 21:12:55 ===> Epoch[69](20680/301): Loss 0.4224 LR: 8.435e-02 Score 86.497 Data time: 2.2186, Total iter time: 5.6493 +thomas 04/06 21:16:52 ===> Epoch[69](20720/301): Loss 0.4149 LR: 8.432e-02 Score 86.954 Data time: 2.2825, Total iter time: 5.8740 +thomas 04/06 21:20:54 ===> Epoch[69](20760/301): Loss 0.4032 LR: 8.429e-02 Score 87.310 Data time: 2.3429, Total iter time: 5.9808 +thomas 04/06 21:24:46 ===> Epoch[70](20800/301): Loss 0.3863 LR: 8.426e-02 Score 87.728 Data time: 2.2263, Total iter time: 5.7118 +thomas 04/06 21:28:34 ===> Epoch[70](20840/301): Loss 0.4204 LR: 8.422e-02 Score 87.415 Data time: 2.2508, Total iter time: 5.6472 +thomas 04/06 21:32:36 ===> Epoch[70](20880/301): Loss 0.4041 LR: 8.419e-02 Score 87.346 Data time: 2.3515, Total iter time: 5.9778 +thomas 04/06 21:36:11 ===> Epoch[70](20920/301): Loss 0.3989 LR: 8.416e-02 Score 87.846 Data time: 2.0700, Total iter time: 5.3015 +thomas 04/06 21:40:21 ===> Epoch[70](20960/301): Loss 0.4429 LR: 8.413e-02 Score 86.136 Data time: 2.4284, Total iter time: 6.1592 +thomas 04/06 21:44:25 ===> 
Epoch[70](21000/301): Loss 0.3931 LR: 8.410e-02 Score 87.731 Data time: 2.3922, Total iter time: 6.0299 +thomas 04/06 21:44:27 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 21:44:27 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 21:46:30 101/312: Data time: 0.0032, Iter time: 0.7731 Loss 0.797 (AVG: 0.751) Score 76.482 (AVG: 78.888) mIOU 47.211 mAP 66.530 mAcc 58.483 +IOU: 73.794 95.683 32.119 57.913 85.589 68.115 55.847 41.312 31.309 18.245 2.250 7.260 40.171 59.502 0.568 30.089 85.119 35.593 87.400 36.336 +mAP: 79.587 97.710 48.380 67.479 89.548 79.799 67.392 60.800 47.972 65.814 31.327 44.228 52.192 83.901 21.077 87.575 92.287 74.298 86.340 52.891 +mAcc: 84.458 98.004 43.434 84.602 90.651 95.424 63.512 66.789 34.592 89.258 2.295 7.375 54.111 64.290 0.568 31.274 85.601 36.438 88.532 48.460 + +thomas 04/06 21:48:26 201/312: Data time: 0.0024, Iter time: 0.5735 Loss 0.161 (AVG: 0.714) Score 97.379 (AVG: 79.717) mIOU 48.917 mAP 66.202 mAcc 59.929 +IOU: 73.371 95.907 36.418 61.299 86.118 72.419 62.188 40.638 26.941 26.854 3.769 11.884 42.032 58.263 16.100 30.219 82.239 37.396 80.999 33.280 +mAP: 77.002 97.217 47.629 68.653 90.787 81.373 69.580 59.670 44.519 70.140 31.546 44.549 55.068 75.718 32.987 83.272 89.290 72.258 85.324 47.464 +mAcc: 84.376 98.136 46.056 87.066 90.063 96.712 70.189 68.809 29.806 91.845 3.909 12.031 59.873 61.267 16.135 30.799 83.041 38.229 82.163 48.071 + +thomas 04/06 21:50:29 301/312: Data time: 0.0032, Iter time: 1.3474 Loss 0.728 (AVG: 0.727) Score 81.224 (AVG: 79.519) mIOU 49.111 mAP 65.939 mAcc 59.685 +IOU: 72.981 95.903 38.227 60.078 84.895 70.813 60.270 39.214 26.900 37.553 3.474 12.742 41.204 55.888 17.483 29.487 81.383 38.576 81.431 33.726 +mAP: 77.094 97.262 50.425 67.830 88.869 83.601 68.223 59.344 44.977 66.610 29.316 46.340 56.185 72.543 37.407 83.071 88.983 72.343 82.098 46.250 +mAcc: 84.185 98.250 49.235 81.939 88.571 96.617 
68.721 67.226 29.438 94.081 3.604 12.923 59.753 58.949 17.530 30.609 82.075 39.642 82.717 47.643 + +thomas 04/06 21:50:40 312/312: Data time: 0.0036, Iter time: 0.5362 Loss 0.479 (AVG: 0.729) Score 85.474 (AVG: 79.456) mIOU 49.163 mAP 66.003 mAcc 59.827 +IOU: 72.921 95.942 38.949 59.733 84.354 70.342 60.214 39.321 26.616 36.901 4.120 13.053 41.407 56.414 16.708 29.487 81.789 40.012 81.431 33.543 +mAP: 76.992 97.325 50.692 68.334 88.751 83.689 68.074 59.442 45.074 67.127 29.809 45.359 56.667 73.094 36.953 83.071 89.278 72.182 82.098 46.055 +mAcc: 84.093 98.251 50.252 82.093 87.940 96.605 68.626 67.212 29.068 94.129 4.273 13.247 60.153 59.537 16.750 30.609 82.456 41.100 82.717 47.426 + +thomas 04/06 21:50:40 Finished test. Elapsed time: 372.8104 +thomas 04/06 21:50:40 Current best mIoU: 57.967 at iter 16000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 21:54:25 ===> Epoch[70](21040/301): Loss 0.4212 LR: 8.407e-02 Score 86.989 Data time: 2.1863, Total iter time: 5.5511 +thomas 04/06 21:58:33 ===> Epoch[71](21080/301): Loss 0.4378 LR: 8.404e-02 Score 86.454 Data time: 2.3939, Total iter time: 6.1237 +thomas 04/06 22:02:20 ===> Epoch[71](21120/301): Loss 0.4106 LR: 8.401e-02 Score 86.996 Data time: 2.2101, Total iter time: 5.6015 +thomas 04/06 22:06:23 ===> Epoch[71](21160/301): Loss 0.4351 LR: 8.398e-02 Score 86.896 Data time: 2.3537, Total iter time: 5.9904 +thomas 04/06 22:10:15 ===> Epoch[71](21200/301): Loss 0.3699 LR: 8.395e-02 Score 88.766 Data time: 2.2215, Total iter time: 5.7117 +thomas 04/06 22:14:08 ===> Epoch[71](21240/301): Loss 0.3958 LR: 8.392e-02 Score 87.820 Data time: 2.2410, Total iter time: 5.7396 +thomas 04/06 22:18:12 ===> Epoch[71](21280/301): Loss 0.3917 LR: 8.389e-02 Score 87.796 Data time: 2.3840, Total iter time: 6.0201 +thomas 04/06 22:22:07 ===> Epoch[71](21320/301): Loss 0.3967 LR: 8.386e-02 Score 87.528 Data time: 2.3239, Total iter time: 5.8143 +thomas 04/06 22:26:06 ===> Epoch[71](21360/301): Loss 0.3731 LR: 8.383e-02 Score 88.160 Data time: 2.3173, Total iter time: 5.8880 +thomas 04/06 22:30:10 ===> Epoch[72](21400/301): Loss 0.4165 LR: 8.380e-02 Score 87.111 Data time: 2.3870, Total iter time: 6.0290 +thomas 04/06 22:34:19 ===> Epoch[72](21440/301): Loss 0.3999 LR: 8.377e-02 Score 87.533 Data time: 2.4231, Total iter time: 6.1518 +thomas 04/06 22:38:08 ===> Epoch[72](21480/301): Loss 0.3818 LR: 8.374e-02 Score 88.611 Data time: 2.2242, Total iter time: 5.6482 +thomas 04/06 22:42:01 ===> Epoch[72](21520/301): Loss 0.4023 LR: 8.370e-02 Score 87.491 Data time: 2.2837, Total iter time: 5.7475 +thomas 04/06 22:46:03 ===> Epoch[72](21560/301): Loss 0.4367 LR: 8.367e-02 Score 86.332 Data time: 2.3863, Total iter time: 5.9699 +thomas 04/06 22:50:08 ===> Epoch[72](21600/301): Loss 0.4046 LR: 8.364e-02 
Score 87.437 Data time: 2.3822, Total iter time: 6.0438 +thomas 04/06 22:53:49 ===> Epoch[72](21640/301): Loss 0.3922 LR: 8.361e-02 Score 87.979 Data time: 2.1636, Total iter time: 5.4697 +thomas 04/06 22:57:43 ===> Epoch[73](21680/301): Loss 0.3933 LR: 8.358e-02 Score 87.460 Data time: 2.2472, Total iter time: 5.7672 +thomas 04/06 23:01:35 ===> Epoch[73](21720/301): Loss 0.3992 LR: 8.355e-02 Score 87.297 Data time: 2.2188, Total iter time: 5.7220 +thomas 04/06 23:05:26 ===> Epoch[73](21760/301): Loss 0.3983 LR: 8.352e-02 Score 87.850 Data time: 2.2416, Total iter time: 5.7012 +thomas 04/06 23:09:17 ===> Epoch[73](21800/301): Loss 0.3784 LR: 8.349e-02 Score 87.975 Data time: 2.2919, Total iter time: 5.6942 +thomas 04/06 23:13:30 ===> Epoch[73](21840/301): Loss 0.3992 LR: 8.346e-02 Score 87.484 Data time: 2.4536, Total iter time: 6.2564 +thomas 04/06 23:17:31 ===> Epoch[73](21880/301): Loss 0.3937 LR: 8.343e-02 Score 88.094 Data time: 2.3207, Total iter time: 5.9323 +thomas 04/06 23:21:30 ===> Epoch[73](21920/301): Loss 0.3833 LR: 8.340e-02 Score 88.068 Data time: 2.3177, Total iter time: 5.9241 +thomas 04/06 23:25:30 ===> Epoch[73](21960/301): Loss 0.3822 LR: 8.337e-02 Score 88.003 Data time: 2.3241, Total iter time: 5.9135 +thomas 04/06 23:29:23 ===> Epoch[74](22000/301): Loss 0.3852 LR: 8.334e-02 Score 87.929 Data time: 2.2778, Total iter time: 5.7518 +thomas 04/06 23:29:25 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/06 23:29:25 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/06 23:31:26 101/312: Data time: 0.0031, Iter time: 0.9267 Loss 1.451 (AVG: 0.605) Score 64.522 (AVG: 82.463) mIOU 53.329 mAP 69.236 mAcc 61.839 +IOU: 76.554 96.203 41.069 68.667 85.790 74.595 60.441 37.600 36.531 55.085 11.039 43.945 49.993 46.827 42.180 20.840 69.656 35.242 68.905 45.412 +mAP: 78.079 97.685 57.297 70.351 86.937 77.304 64.867 58.026 48.524 63.209 37.505 67.140 64.716 71.163 
56.116 83.097 89.545 77.440 83.888 51.824 +mAcc: 90.102 98.564 78.399 74.180 92.806 91.396 76.255 58.413 43.245 59.687 12.461 47.150 71.859 48.655 45.280 20.855 70.114 35.771 69.337 52.242 + +thomas 04/06 23:33:29 201/312: Data time: 0.0027, Iter time: 0.4330 Loss 0.339 (AVG: 0.598) Score 91.117 (AVG: 82.830) mIOU 53.329 mAP 69.492 mAcc 61.903 +IOU: 77.091 95.916 45.898 70.214 84.879 74.878 64.926 40.307 39.377 52.891 12.695 45.374 48.455 50.078 39.980 23.399 58.981 31.889 70.534 38.818 +mAP: 78.147 97.401 58.612 69.591 88.510 82.118 66.216 59.180 48.821 62.876 36.192 65.521 66.437 72.998 61.901 76.304 82.738 78.374 89.770 48.140 +mAcc: 90.144 98.664 77.320 74.098 91.978 93.019 78.595 63.987 47.039 60.163 13.829 48.591 71.590 52.381 43.682 24.333 59.350 32.242 70.802 46.247 + +thomas 04/06 23:35:32 301/312: Data time: 0.0033, Iter time: 1.2964 Loss 0.801 (AVG: 0.609) Score 67.890 (AVG: 82.494) mIOU 52.756 mAP 68.936 mAcc 61.362 +IOU: 76.503 95.969 42.844 68.815 83.962 77.567 64.046 38.680 39.912 51.133 13.387 45.348 49.236 49.279 40.617 18.149 62.192 30.659 67.871 38.947 +mAP: 78.500 97.269 58.775 70.484 87.622 81.436 65.651 56.002 49.797 64.834 37.098 62.218 67.693 71.655 58.804 79.126 85.450 75.599 83.340 47.367 +mAcc: 89.901 98.629 75.477 73.637 90.732 92.695 77.999 62.147 47.456 57.373 14.722 48.541 73.853 51.426 45.351 18.755 62.637 30.967 68.149 46.800 + +thomas 04/06 23:35:46 312/312: Data time: 0.0024, Iter time: 0.7174 Loss 0.302 (AVG: 0.605) Score 91.106 (AVG: 82.620) mIOU 52.864 mAP 69.023 mAcc 61.489 +IOU: 76.337 96.025 43.472 68.188 84.183 77.047 64.901 38.753 39.884 51.095 13.290 45.472 49.468 49.249 41.281 19.259 61.424 29.372 68.177 40.407 +mAP: 78.522 97.336 58.880 70.509 87.901 81.613 66.386 56.216 49.974 64.834 37.026 61.734 67.729 71.655 59.561 79.627 84.200 75.112 83.716 47.937 +mAcc: 89.904 98.622 76.102 73.589 90.823 92.763 78.661 62.061 47.349 57.373 14.611 48.466 74.152 51.426 46.042 19.886 61.850 29.648 68.456 47.995 + +thomas 04/06 
23:35:46 Finished test. Elapsed time: 380.8988 +thomas 04/06 23:35:46 Current best mIoU: 57.967 at iter 16000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/06 23:39:34 ===> Epoch[74](22040/301): Loss 0.4000 LR: 8.331e-02 Score 87.367 Data time: 2.2283, Total iter time: 5.6283 +thomas 04/06 23:43:19 ===> Epoch[74](22080/301): Loss 0.4016 LR: 8.328e-02 Score 87.148 Data time: 2.1752, Total iter time: 5.5585 +thomas 04/06 23:47:17 ===> Epoch[74](22120/301): Loss 0.4216 LR: 8.325e-02 Score 86.721 Data time: 2.2920, Total iter time: 5.8739 +thomas 04/06 23:51:11 ===> Epoch[74](22160/301): Loss 0.3737 LR: 8.322e-02 Score 88.812 Data time: 2.2905, Total iter time: 5.7732 +thomas 04/06 23:55:26 ===> Epoch[74](22200/301): Loss 0.4072 LR: 8.318e-02 Score 87.723 Data time: 2.5049, Total iter time: 6.2881 +thomas 04/06 23:59:17 ===> Epoch[74](22240/301): Loss 0.3596 LR: 8.315e-02 Score 88.492 Data time: 2.3072, Total iter time: 5.7041 +thomas 04/07 00:03:19 ===> Epoch[75](22280/301): Loss 0.3952 LR: 8.312e-02 Score 87.511 Data time: 2.3427, Total iter time: 5.9669 +thomas 04/07 00:07:10 ===> Epoch[75](22320/301): Loss 0.3928 LR: 8.309e-02 Score 87.726 Data time: 2.2132, Total iter time: 5.7012 +thomas 04/07 00:11:18 ===> Epoch[75](22360/301): Loss 0.3658 LR: 8.306e-02 Score 88.417 Data time: 2.3521, Total iter time: 6.1040 +thomas 04/07 00:15:08 ===> Epoch[75](22400/301): Loss 0.3762 LR: 8.303e-02 Score 88.258 Data time: 2.2696, Total iter time: 5.6795 +thomas 04/07 00:19:09 ===> Epoch[75](22440/301): Loss 0.3914 LR: 8.300e-02 Score 87.685 Data time: 2.3432, Total iter time: 5.9624 +thomas 04/07 00:23:13 ===> Epoch[75](22480/301): Loss 0.3874 LR: 8.297e-02 Score 88.032 Data time: 2.4222, Total iter 
time: 6.0155 +thomas 04/07 00:27:15 ===> Epoch[75](22520/301): Loss 0.4569 LR: 8.294e-02 Score 86.214 Data time: 2.3606, Total iter time: 5.9689 +thomas 04/07 00:31:07 ===> Epoch[75](22560/301): Loss 0.3704 LR: 8.291e-02 Score 88.255 Data time: 2.2384, Total iter time: 5.7224 +thomas 04/07 00:35:02 ===> Epoch[76](22600/301): Loss 0.3524 LR: 8.288e-02 Score 88.760 Data time: 2.2457, Total iter time: 5.7879 +thomas 04/07 00:39:10 ===> Epoch[76](22640/301): Loss 0.3823 LR: 8.285e-02 Score 88.366 Data time: 2.4009, Total iter time: 6.1268 +thomas 04/07 00:43:06 ===> Epoch[76](22680/301): Loss 0.3970 LR: 8.282e-02 Score 87.163 Data time: 2.3304, Total iter time: 5.8249 +thomas 04/07 00:47:00 ===> Epoch[76](22720/301): Loss 0.4005 LR: 8.279e-02 Score 87.500 Data time: 2.3057, Total iter time: 5.7763 +thomas 04/07 00:50:54 ===> Epoch[76](22760/301): Loss 0.3925 LR: 8.276e-02 Score 88.159 Data time: 2.2862, Total iter time: 5.7544 +thomas 04/07 00:54:54 ===> Epoch[76](22800/301): Loss 0.3831 LR: 8.273e-02 Score 88.175 Data time: 2.3055, Total iter time: 5.9205 +thomas 04/07 00:58:57 ===> Epoch[76](22840/301): Loss 0.3770 LR: 8.269e-02 Score 88.411 Data time: 2.3024, Total iter time: 5.9851 +thomas 04/07 01:03:02 ===> Epoch[77](22880/301): Loss 0.3579 LR: 8.266e-02 Score 88.677 Data time: 2.4080, Total iter time: 6.0484 +thomas 04/07 01:07:12 ===> Epoch[77](22920/301): Loss 0.4133 LR: 8.263e-02 Score 86.999 Data time: 2.4731, Total iter time: 6.1806 +thomas 04/07 01:10:58 ===> Epoch[77](22960/301): Loss 0.3661 LR: 8.260e-02 Score 88.739 Data time: 2.2209, Total iter time: 5.5602 +thomas 04/07 01:15:06 ===> Epoch[77](23000/301): Loss 0.3548 LR: 8.257e-02 Score 89.230 Data time: 2.3998, Total iter time: 6.1191 +thomas 04/07 01:15:08 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 01:15:08 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 01:17:13 101/312: Data time: 0.0027, 
Iter time: 0.5238 Loss 0.735 (AVG: 0.571) Score 80.654 (AVG: 83.851) mIOU 58.974 mAP 71.823 mAcc 70.030 +IOU: 75.248 95.564 57.972 68.782 86.757 76.181 63.580 41.190 34.383 70.666 22.811 46.540 53.930 62.446 29.145 34.878 91.848 45.012 80.037 42.517 +mAP: 77.321 97.477 64.097 81.411 89.349 81.938 71.464 61.770 49.389 71.960 54.160 54.335 69.212 76.729 55.495 88.651 95.424 70.140 70.480 55.668 +mAcc: 84.483 97.585 79.149 87.828 92.925 94.346 67.594 67.822 36.500 96.287 27.273 55.036 68.585 84.391 32.436 36.631 93.331 47.184 80.849 70.356 + +thomas 04/07 01:19:11 201/312: Data time: 0.0028, Iter time: 0.6243 Loss 0.663 (AVG: 0.558) Score 82.117 (AVG: 84.169) mIOU 59.756 mAP 71.617 mAcc 70.428 +IOU: 75.186 95.954 59.599 68.953 87.805 77.307 66.075 43.869 36.860 70.593 17.317 47.750 53.156 67.371 36.257 38.722 83.251 48.147 79.069 41.881 +mAP: 78.500 97.828 66.974 76.691 90.320 84.234 70.905 62.636 48.672 73.905 45.596 57.938 63.940 76.792 51.637 86.156 92.834 76.386 77.826 52.564 +mAcc: 84.629 97.758 79.709 87.323 94.815 93.744 70.508 71.600 38.977 93.337 21.903 55.226 65.902 85.303 41.318 40.349 84.853 50.346 80.166 70.790 + +thomas 04/07 01:21:03 301/312: Data time: 0.0027, Iter time: 0.7028 Loss 1.225 (AVG: 0.570) Score 74.911 (AVG: 83.882) mIOU 59.431 mAP 71.409 mAcc 69.950 +IOU: 75.307 95.858 57.080 68.710 86.994 76.871 64.598 45.065 35.449 70.181 15.594 49.053 52.844 63.634 37.402 41.856 79.628 49.622 81.977 40.887 +mAP: 78.499 97.395 62.773 74.482 89.896 81.889 71.568 63.310 51.538 71.928 41.378 60.109 64.175 77.347 52.333 88.238 90.196 78.960 82.017 50.157 +mAcc: 85.352 97.734 76.933 85.174 95.161 93.902 68.929 69.995 37.416 92.300 20.477 55.545 65.668 80.490 44.381 45.208 80.976 52.716 83.414 67.219 + +thomas 04/07 01:21:14 312/312: Data time: 0.0025, Iter time: 0.6708 Loss 0.462 (AVG: 0.570) Score 84.346 (AVG: 83.880) mIOU 59.584 mAP 71.451 mAcc 70.158 +IOU: 75.557 95.880 57.658 68.563 87.015 75.976 64.064 44.667 35.168 69.269 16.581 49.502 52.486 64.363 
42.666 40.985 79.563 49.076 81.977 40.656 +mAP: 78.680 97.445 63.139 74.738 90.121 82.523 71.257 63.254 51.249 71.928 41.780 59.568 63.899 77.872 54.305 86.439 90.330 78.099 82.017 50.382 +mAcc: 85.541 97.737 77.572 84.940 95.235 94.072 68.838 68.942 37.133 92.300 21.894 55.694 65.507 81.128 50.058 44.194 80.892 52.161 83.414 65.899 + +thomas 04/07 01:21:14 Finished test. Elapsed time: 366.5519 +thomas 04/07 01:21:16 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/07 01:21:16 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 01:25:04 ===> Epoch[77](23040/301): Loss 0.3548 LR: 8.254e-02 Score 88.953 Data time: 2.2031, Total iter time: 5.6385 +thomas 04/07 01:29:10 ===> Epoch[77](23080/301): Loss 0.3880 LR: 8.251e-02 Score 87.743 Data time: 2.4008, Total iter time: 6.0689 +thomas 04/07 01:33:06 ===> Epoch[77](23120/301): Loss 0.4220 LR: 8.248e-02 Score 87.273 Data time: 2.3590, Total iter time: 5.8343 +thomas 04/07 01:37:02 ===> Epoch[77](23160/301): Loss 0.3971 LR: 8.245e-02 Score 87.659 Data time: 2.3069, Total iter time: 5.8124 +thomas 04/07 01:40:54 ===> Epoch[78](23200/301): Loss 0.4344 LR: 8.242e-02 Score 86.887 Data time: 2.2560, Total iter time: 5.7180 +thomas 04/07 01:45:06 ===> Epoch[78](23240/301): Loss 0.4296 LR: 8.239e-02 Score 86.411 Data time: 2.3920, Total iter time: 6.2224 +thomas 04/07 01:48:51 ===> Epoch[78](23280/301): Loss 0.4187 LR: 8.236e-02 Score 87.303 Data time: 2.1561, Total iter time: 5.5485 +thomas 04/07 01:52:23 ===> Epoch[78](23320/301): Loss 0.3667 LR: 8.233e-02 Score 88.842 Data time: 2.0826, Total iter time: 5.2305 +thomas 04/07 01:56:27 ===> 
Epoch[78](23360/301): Loss 0.3981 LR: 8.230e-02 Score 87.523 Data time: 2.4339, Total iter time: 6.0086 +thomas 04/07 02:00:36 ===> Epoch[78](23400/301): Loss 0.4223 LR: 8.227e-02 Score 86.909 Data time: 2.4509, Total iter time: 6.1461 +thomas 04/07 02:04:28 ===> Epoch[78](23440/301): Loss 0.3635 LR: 8.223e-02 Score 88.533 Data time: 2.2186, Total iter time: 5.7265 +thomas 04/07 02:08:03 ===> Epoch[79](23480/301): Loss 0.3788 LR: 8.220e-02 Score 88.174 Data time: 2.0572, Total iter time: 5.3258 +thomas 04/07 02:12:02 ===> Epoch[79](23520/301): Loss 0.3733 LR: 8.217e-02 Score 88.439 Data time: 2.2403, Total iter time: 5.8898 +thomas 04/07 02:15:52 ===> Epoch[79](23560/301): Loss 0.3846 LR: 8.214e-02 Score 87.864 Data time: 2.2429, Total iter time: 5.6961 +thomas 04/07 02:20:08 ===> Epoch[79](23600/301): Loss 0.3925 LR: 8.211e-02 Score 87.859 Data time: 2.5364, Total iter time: 6.3181 +thomas 04/07 02:24:05 ===> Epoch[79](23640/301): Loss 0.4042 LR: 8.208e-02 Score 87.582 Data time: 2.3887, Total iter time: 5.8599 +thomas 04/07 02:28:01 ===> Epoch[79](23680/301): Loss 0.3807 LR: 8.205e-02 Score 87.706 Data time: 2.2910, Total iter time: 5.8256 +thomas 04/07 02:32:10 ===> Epoch[79](23720/301): Loss 0.3516 LR: 8.202e-02 Score 89.393 Data time: 2.4068, Total iter time: 6.1764 +thomas 04/07 02:36:06 ===> Epoch[79](23760/301): Loss 0.3479 LR: 8.199e-02 Score 88.769 Data time: 2.2530, Total iter time: 5.8016 +thomas 04/07 02:39:56 ===> Epoch[80](23800/301): Loss 0.3541 LR: 8.196e-02 Score 88.933 Data time: 2.2757, Total iter time: 5.7039 +thomas 04/07 02:43:52 ===> Epoch[80](23840/301): Loss 0.3707 LR: 8.193e-02 Score 88.752 Data time: 2.3306, Total iter time: 5.8292 +thomas 04/07 02:48:11 ===> Epoch[80](23880/301): Loss 0.3684 LR: 8.190e-02 Score 88.789 Data time: 2.5240, Total iter time: 6.3832 +thomas 04/07 02:52:14 ===> Epoch[80](23920/301): Loss 0.3813 LR: 8.187e-02 Score 87.916 Data time: 2.3300, Total iter time: 5.9750 +thomas 04/07 02:56:01 ===> 
Epoch[80](23960/301): Loss 0.4328 LR: 8.184e-02 Score 86.695 Data time: 2.1793, Total iter time: 5.6062 +thomas 04/07 02:59:53 ===> Epoch[80](24000/301): Loss 0.4313 LR: 8.181e-02 Score 86.287 Data time: 2.2389, Total iter time: 5.7236 +thomas 04/07 02:59:54 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 02:59:54 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 03:02:05 101/312: Data time: 0.0031, Iter time: 0.3371 Loss 0.377 (AVG: 0.595) Score 81.091 (AVG: 83.304) mIOU 59.000 mAP 70.341 mAcc 70.243 +IOU: 76.561 95.411 54.078 64.605 83.704 79.605 63.079 41.510 42.034 61.610 9.733 62.024 57.458 64.185 34.422 61.705 87.536 30.761 71.154 38.836 +mAP: 79.746 95.473 58.273 66.959 87.217 81.839 70.575 59.918 49.903 73.094 40.764 58.333 71.216 75.019 52.260 89.993 96.266 80.073 70.337 49.556 +mAcc: 89.363 98.367 79.612 71.572 93.685 93.356 77.860 56.317 51.741 74.311 10.607 79.620 73.309 83.909 55.173 71.098 93.584 30.832 75.521 45.024 + +thomas 04/07 03:04:15 201/312: Data time: 0.0024, Iter time: 0.6240 Loss 0.547 (AVG: 0.599) Score 82.049 (AVG: 83.342) mIOU 57.706 mAP 69.732 mAcc 69.215 +IOU: 75.950 95.976 50.287 65.597 83.824 76.572 65.840 43.925 37.630 66.279 8.740 55.580 52.539 59.874 40.054 53.004 83.709 30.966 68.878 38.889 +mAP: 78.009 96.158 57.876 70.947 87.557 80.382 71.122 59.574 48.893 70.203 33.809 60.389 68.379 69.143 55.154 90.042 96.481 77.620 75.148 47.752 +mAcc: 89.424 98.610 79.826 73.214 94.303 91.129 75.988 58.651 44.803 77.811 9.566 68.979 71.460 73.240 60.190 74.687 89.545 31.128 77.102 44.634 + +thomas 04/07 03:06:09 301/312: Data time: 0.0024, Iter time: 0.4296 Loss 0.556 (AVG: 0.574) Score 74.675 (AVG: 83.970) mIOU 58.482 mAP 70.429 mAcc 70.282 +IOU: 76.594 95.926 49.637 66.261 85.367 75.449 67.948 42.902 37.430 68.433 9.626 56.286 53.064 58.002 39.442 53.472 85.293 31.691 78.178 38.630 +mAP: 77.484 96.305 57.611 69.282 88.186 81.657 71.358 
59.266 50.529 70.774 33.865 58.712 70.176 71.182 56.684 89.113 96.108 78.678 82.472 49.127 +mAcc: 89.104 98.583 78.617 74.015 94.753 92.932 76.967 59.134 43.349 78.693 10.462 67.310 72.945 76.690 62.960 76.353 91.355 31.808 85.050 44.550 + +thomas 04/07 03:06:19 312/312: Data time: 0.0024, Iter time: 0.2547 Loss 0.344 (AVG: 0.573) Score 89.201 (AVG: 84.031) mIOU 58.658 mAP 70.650 mAcc 70.487 +IOU: 76.849 95.891 49.470 66.106 85.518 74.478 68.246 42.789 38.057 68.731 9.776 56.226 53.863 59.606 39.386 53.336 85.356 31.715 79.004 38.756 +mAP: 77.795 96.404 57.915 69.282 88.217 81.657 71.453 59.137 50.423 70.419 34.801 58.645 70.359 72.143 56.684 89.204 96.220 79.085 83.033 50.122 +mAcc: 89.263 98.584 78.737 74.015 94.812 92.932 77.354 58.889 43.710 78.990 10.614 67.136 73.129 78.214 62.960 76.667 91.609 31.829 85.692 44.602 + +thomas 04/07 03:06:19 Finished test. Elapsed time: 384.2776 +thomas 04/07 03:06:19 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 03:10:21 ===> Epoch[80](24040/301): Loss 0.3964 LR: 8.177e-02 Score 87.665 Data time: 2.4030, Total iter time: 5.9853 +thomas 04/07 03:14:25 ===> Epoch[80](24080/301): Loss 0.3840 LR: 8.174e-02 Score 87.871 Data time: 2.3665, Total iter time: 6.0016 +thomas 04/07 03:18:08 ===> Epoch[81](24120/301): Loss 0.3702 LR: 8.171e-02 Score 88.634 Data time: 2.1464, Total iter time: 5.5081 +thomas 04/07 03:22:01 ===> Epoch[81](24160/301): Loss 0.3809 LR: 8.168e-02 Score 88.300 Data time: 2.2464, Total iter time: 5.7615 +thomas 04/07 03:25:45 ===> Epoch[81](24200/301): Loss 0.4226 LR: 8.165e-02 Score 87.307 Data time: 2.1647, Total iter time: 5.5086 +thomas 04/07 03:29:44 ===> Epoch[81](24240/301): Loss 0.3995 LR: 8.162e-02 Score 87.793 Data time: 2.3431, Total iter time: 5.9059 +thomas 04/07 03:33:57 ===> Epoch[81](24280/301): Loss 0.3941 LR: 8.159e-02 Score 87.902 Data time: 2.5577, Total iter time: 6.2639 +thomas 04/07 03:37:55 ===> Epoch[81](24320/301): Loss 0.3787 LR: 8.156e-02 Score 87.897 Data time: 2.2628, Total iter time: 5.8492 +thomas 04/07 03:41:39 ===> Epoch[81](24360/301): Loss 0.3867 LR: 8.153e-02 Score 88.198 Data time: 2.1620, Total iter time: 5.5355 +thomas 04/07 03:45:32 ===> Epoch[82](24400/301): Loss 0.3957 LR: 8.150e-02 Score 87.835 Data time: 2.2258, Total iter time: 5.7519 +thomas 04/07 03:49:33 ===> Epoch[82](24440/301): Loss 0.4083 LR: 8.147e-02 Score 87.419 Data time: 2.2974, Total iter time: 5.9494 +thomas 04/07 03:53:29 ===> Epoch[82](24480/301): Loss 0.3836 LR: 8.144e-02 Score 87.941 Data time: 2.3137, Total iter time: 5.8333 +thomas 04/07 03:57:45 ===> Epoch[82](24520/301): Loss 0.3842 LR: 8.141e-02 Score 88.152 Data time: 2.5242, Total iter time: 6.3233 +thomas 04/07 04:01:37 ===> Epoch[82](24560/301): Loss 0.3650 LR: 8.138e-02 Score 88.345 Data time: 2.2272, Total iter time: 5.7379 +thomas 04/07 04:05:37 ===> Epoch[82](24600/301): Loss 0.3824 LR: 8.135e-02 
Score 88.051 Data time: 2.2971, Total iter time: 5.9025 +thomas 04/07 04:09:20 ===> Epoch[82](24640/301): Loss 0.4314 LR: 8.131e-02 Score 86.986 Data time: 2.1463, Total iter time: 5.5028 +thomas 04/07 04:13:07 ===> Epoch[82](24680/301): Loss 0.3658 LR: 8.128e-02 Score 88.394 Data time: 2.2236, Total iter time: 5.6174 +thomas 04/07 04:17:20 ===> Epoch[83](24720/301): Loss 0.3818 LR: 8.125e-02 Score 88.290 Data time: 2.4905, Total iter time: 6.2268 +thomas 04/07 04:21:25 ===> Epoch[83](24760/301): Loss 0.4216 LR: 8.122e-02 Score 87.045 Data time: 2.4149, Total iter time: 6.0393 +thomas 04/07 04:25:20 ===> Epoch[83](24800/301): Loss 0.3665 LR: 8.119e-02 Score 88.900 Data time: 2.2874, Total iter time: 5.8196 +thomas 04/07 04:29:13 ===> Epoch[83](24840/301): Loss 0.3475 LR: 8.116e-02 Score 89.247 Data time: 2.2428, Total iter time: 5.7509 +thomas 04/07 04:33:06 ===> Epoch[83](24880/301): Loss 0.3842 LR: 8.113e-02 Score 87.974 Data time: 2.2371, Total iter time: 5.7446 +thomas 04/07 04:37:03 ===> Epoch[83](24920/301): Loss 0.3671 LR: 8.110e-02 Score 88.177 Data time: 2.2946, Total iter time: 5.8587 +thomas 04/07 04:41:06 ===> Epoch[83](24960/301): Loss 0.3670 LR: 8.107e-02 Score 88.323 Data time: 2.4259, Total iter time: 6.0031 +thomas 04/07 04:45:03 ===> Epoch[84](25000/301): Loss 0.3533 LR: 8.104e-02 Score 88.729 Data time: 2.3604, Total iter time: 5.8354 +thomas 04/07 04:45:04 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 04:45:05 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 04:47:10 101/312: Data time: 0.0029, Iter time: 0.4896 Loss 0.340 (AVG: 0.770) Score 91.180 (AVG: 75.644) mIOU 48.988 mAP 69.262 mAcc 64.457 +IOU: 63.943 95.852 43.202 37.764 77.103 49.128 56.454 34.627 36.758 66.275 11.026 47.694 45.525 30.128 36.076 40.988 59.616 33.959 85.410 28.226 +mAP: 76.456 97.725 55.398 75.605 88.973 81.700 64.491 64.836 49.182 68.804 34.156 56.619 64.840 75.694 
47.724 84.859 96.229 74.619 81.571 45.764 +mAcc: 70.382 98.173 46.470 92.363 81.119 97.921 69.634 71.880 38.786 93.192 11.729 49.801 62.886 81.385 39.932 46.862 59.806 34.161 88.076 54.590 + +thomas 04/07 04:49:08 201/312: Data time: 0.0027, Iter time: 0.6250 Loss 0.731 (AVG: 0.746) Score 77.573 (AVG: 76.888) mIOU 49.114 mAP 67.538 mAcc 63.741 +IOU: 63.615 96.101 40.970 42.169 81.484 50.194 62.346 33.959 34.464 67.773 10.701 44.021 50.171 41.769 22.363 38.840 61.835 34.796 76.718 27.990 +mAP: 75.988 97.615 52.430 75.032 89.195 78.738 66.593 61.539 45.118 72.581 37.766 55.487 66.520 71.573 38.702 85.272 93.672 68.171 73.722 45.053 +mAcc: 69.400 98.331 44.214 90.910 85.166 93.987 73.163 75.946 36.429 94.595 11.341 45.866 70.854 83.542 24.214 43.484 62.165 35.024 79.063 57.131 + +thomas 04/07 04:50:59 301/312: Data time: 0.0025, Iter time: 0.3899 Loss 0.328 (AVG: 0.739) Score 90.543 (AVG: 77.038) mIOU 50.289 mAP 68.058 mAcc 64.282 +IOU: 63.694 96.061 40.616 45.107 82.232 53.077 64.122 34.630 35.038 71.144 10.361 43.982 52.020 47.557 22.665 43.158 63.374 35.581 74.149 27.209 +mAP: 76.712 97.390 53.175 71.533 88.694 79.684 68.361 60.666 46.667 71.723 34.638 54.195 66.409 75.900 39.740 85.919 93.311 72.324 79.697 44.423 +mAcc: 69.278 98.370 43.457 88.171 85.803 94.951 75.447 76.179 37.310 94.437 11.130 45.901 71.127 84.094 24.487 49.537 63.662 35.849 79.145 57.308 + +thomas 04/07 04:51:12 312/312: Data time: 0.0033, Iter time: 0.3169 Loss 0.680 (AVG: 0.739) Score 77.117 (AVG: 77.057) mIOU 50.155 mAP 68.113 mAcc 64.253 +IOU: 63.528 96.091 40.638 44.433 82.397 52.356 64.762 34.245 34.937 71.080 9.756 42.863 51.417 47.202 22.492 43.156 63.879 35.659 74.149 28.070 +mAP: 76.647 97.445 53.577 71.087 88.348 79.927 69.085 60.461 46.841 71.489 33.976 54.828 65.948 75.560 39.929 85.919 93.400 72.547 79.697 45.555 +mAcc: 69.077 98.382 43.549 86.939 85.917 95.045 75.933 75.878 37.224 94.525 10.461 45.546 70.903 83.761 24.285 49.537 64.166 35.926 79.145 58.851 + +thomas 04/07 
04:51:12 Finished test. Elapsed time: 367.6927 +thomas 04/07 04:51:12 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 04:55:14 ===> Epoch[84](25040/301): Loss 0.3255 LR: 8.101e-02 Score 89.849 Data time: 2.3389, Total iter time: 5.9640 +thomas 04/07 04:58:59 ===> Epoch[84](25080/301): Loss 0.3473 LR: 8.098e-02 Score 89.119 Data time: 2.1562, Total iter time: 5.5631 +thomas 04/07 05:02:52 ===> Epoch[84](25120/301): Loss 0.3870 LR: 8.095e-02 Score 87.861 Data time: 2.2710, Total iter time: 5.7402 +thomas 04/07 05:07:06 ===> Epoch[84](25160/301): Loss 0.3769 LR: 8.092e-02 Score 88.385 Data time: 2.5283, Total iter time: 6.2566 +thomas 04/07 05:11:16 ===> Epoch[84](25200/301): Loss 0.3749 LR: 8.088e-02 Score 88.241 Data time: 2.4526, Total iter time: 6.1759 +thomas 04/07 05:15:00 ===> Epoch[84](25240/301): Loss 0.3711 LR: 8.085e-02 Score 88.318 Data time: 2.1688, Total iter time: 5.5341 +thomas 04/07 05:19:00 ===> Epoch[84](25280/301): Loss 0.3817 LR: 8.082e-02 Score 88.101 Data time: 2.3118, Total iter time: 5.9302 +thomas 04/07 05:22:45 ===> Epoch[85](25320/301): Loss 0.4271 LR: 8.079e-02 Score 86.900 Data time: 2.1950, Total iter time: 5.5567 +thomas 04/07 05:26:51 ===> Epoch[85](25360/301): Loss 0.3914 LR: 8.076e-02 Score 87.528 Data time: 2.3601, Total iter time: 6.0598 +thomas 04/07 05:31:17 ===> Epoch[85](25400/301): Loss 0.3476 LR: 8.073e-02 Score 89.296 Data time: 2.6296, Total iter time: 6.5431 +thomas 04/07 05:35:10 ===> Epoch[85](25440/301): Loss 0.3877 LR: 8.070e-02 Score 88.157 Data time: 2.3189, Total iter time: 5.7561 +thomas 04/07 05:39:02 ===> Epoch[85](25480/301): Loss 0.4004 LR: 8.067e-02 Score 87.326 Data time: 2.2188, Total iter 
time: 5.7217 +thomas 04/07 05:42:44 ===> Epoch[85](25520/301): Loss 0.3755 LR: 8.064e-02 Score 88.305 Data time: 2.1590, Total iter time: 5.4796 +thomas 04/07 05:46:31 ===> Epoch[85](25560/301): Loss 0.3682 LR: 8.061e-02 Score 88.604 Data time: 2.1962, Total iter time: 5.5955 +thomas 04/07 05:50:39 ===> Epoch[86](25600/301): Loss 0.3866 LR: 8.058e-02 Score 87.906 Data time: 2.3891, Total iter time: 6.1154 +thomas 04/07 05:55:00 ===> Epoch[86](25640/301): Loss 0.3683 LR: 8.055e-02 Score 88.657 Data time: 2.6298, Total iter time: 6.4544 +thomas 04/07 05:58:51 ===> Epoch[86](25680/301): Loss 0.4031 LR: 8.052e-02 Score 87.523 Data time: 2.2929, Total iter time: 5.6862 +thomas 04/07 06:02:42 ===> Epoch[86](25720/301): Loss 0.3680 LR: 8.049e-02 Score 88.605 Data time: 2.2402, Total iter time: 5.7034 +thomas 04/07 06:06:41 ===> Epoch[86](25760/301): Loss 0.3730 LR: 8.045e-02 Score 88.138 Data time: 2.2778, Total iter time: 5.9135 +thomas 04/07 06:10:25 ===> Epoch[86](25800/301): Loss 0.4245 LR: 8.042e-02 Score 86.924 Data time: 2.1761, Total iter time: 5.5295 +thomas 04/07 06:14:01 ===> Epoch[86](25840/301): Loss 0.3852 LR: 8.039e-02 Score 87.977 Data time: 2.0923, Total iter time: 5.3243 +thomas 04/07 06:18:07 ===> Epoch[86](25880/301): Loss 0.3920 LR: 8.036e-02 Score 87.639 Data time: 2.4492, Total iter time: 6.0804 +thomas 04/07 06:22:13 ===> Epoch[87](25920/301): Loss 0.3705 LR: 8.033e-02 Score 88.634 Data time: 2.4322, Total iter time: 6.0645 +thomas 04/07 06:26:01 ===> Epoch[87](25960/301): Loss 0.3835 LR: 8.030e-02 Score 88.105 Data time: 2.1862, Total iter time: 5.6368 +thomas 04/07 06:29:57 ===> Epoch[87](26000/301): Loss 0.3726 LR: 8.027e-02 Score 88.261 Data time: 2.2629, Total iter time: 5.8304 +thomas 04/07 06:29:59 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 06:29:59 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 06:31:59 101/312: Data time: 0.0040, 
Iter time: 1.0384 Loss 0.434 (AVG: 0.614) Score 85.492 (AVG: 82.028) mIOU 59.288 mAP 72.468 mAcc 70.775 +IOU: 71.831 95.889 49.180 76.234 81.801 70.602 71.613 39.847 21.763 71.755 19.954 47.059 61.395 66.031 57.472 40.537 81.613 56.663 77.212 27.313 +mAP: 76.029 96.562 61.631 79.959 87.709 90.237 73.251 60.259 44.770 58.437 41.071 62.351 68.714 81.852 68.713 86.287 94.036 84.242 82.286 50.963 +mAcc: 82.489 98.292 85.846 87.768 94.181 88.643 82.454 66.391 22.879 93.110 23.250 50.302 84.071 84.581 66.145 41.712 82.705 58.760 90.227 31.705 + +thomas 04/07 06:34:05 201/312: Data time: 0.0033, Iter time: 0.4275 Loss 0.482 (AVG: 0.603) Score 85.575 (AVG: 82.542) mIOU 58.880 mAP 70.739 mAcc 70.379 +IOU: 73.192 95.935 49.061 72.660 83.348 75.832 64.735 43.211 32.327 67.714 13.669 48.269 53.812 64.597 50.137 40.039 83.012 56.259 80.604 29.181 +mAP: 76.351 96.584 59.985 72.716 89.193 83.095 70.779 61.334 52.411 65.936 37.907 58.271 67.559 77.773 60.959 80.010 93.309 81.366 80.707 48.536 +mAcc: 83.390 98.207 84.198 80.627 94.553 88.783 73.890 70.105 33.503 91.101 16.672 52.535 84.311 81.118 61.123 43.415 83.905 60.158 90.324 35.659 + +thomas 04/07 06:35:54 301/312: Data time: 0.0025, Iter time: 0.6510 Loss 1.028 (AVG: 0.613) Score 67.047 (AVG: 82.384) mIOU 58.082 mAP 71.176 mAcc 69.493 +IOU: 73.041 96.081 45.213 72.688 83.756 79.296 65.668 43.009 30.287 66.492 13.191 46.912 53.035 62.861 47.839 34.908 85.416 50.967 81.536 29.450 +mAP: 76.285 96.713 57.931 73.654 89.226 83.658 72.681 60.509 51.356 69.822 38.810 59.314 66.926 77.252 62.263 82.510 94.604 80.817 79.999 49.190 +mAcc: 83.635 98.231 81.994 82.053 94.964 90.153 74.081 68.462 31.189 89.721 16.780 50.771 85.342 80.484 58.786 37.100 86.482 53.691 89.883 36.060 + +thomas 04/07 06:36:05 312/312: Data time: 0.0030, Iter time: 0.4784 Loss 0.367 (AVG: 0.613) Score 90.242 (AVG: 82.327) mIOU 58.034 mAP 71.293 mAcc 69.470 +IOU: 72.973 96.095 45.028 73.660 83.900 79.219 65.285 42.139 30.262 66.282 13.520 47.500 52.616 62.873 
47.698 34.908 85.416 50.867 81.536 28.913 +mAP: 76.441 96.732 58.170 74.180 89.460 83.804 72.839 60.743 51.501 70.054 38.629 59.655 67.022 76.808 62.917 82.510 94.604 80.817 79.999 48.984 +mAcc: 83.542 98.242 82.120 82.882 95.056 90.260 73.647 66.537 31.281 90.388 17.288 51.279 85.370 80.556 58.391 37.100 86.482 53.691 89.883 35.407 + +thomas 04/07 06:36:05 Finished test. Elapsed time: 365.9681 +thomas 04/07 06:36:05 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 06:40:04 ===> Epoch[87](26040/301): Loss 0.4095 LR: 8.024e-02 Score 87.745 Data time: 2.3666, Total iter time: 5.9121 +thomas 04/07 06:44:23 ===> Epoch[87](26080/301): Loss 0.3952 LR: 8.021e-02 Score 87.762 Data time: 2.5637, Total iter time: 6.3961 +thomas 04/07 06:48:28 ===> Epoch[87](26120/301): Loss 0.3679 LR: 8.018e-02 Score 88.377 Data time: 2.3847, Total iter time: 6.0308 +thomas 04/07 06:52:17 ===> Epoch[87](26160/301): Loss 0.3728 LR: 8.015e-02 Score 88.520 Data time: 2.1939, Total iter time: 5.6677 +thomas 04/07 06:56:13 ===> Epoch[88](26200/301): Loss 0.3689 LR: 8.012e-02 Score 88.372 Data time: 2.2699, Total iter time: 5.8089 +thomas 04/07 07:00:01 ===> Epoch[88](26240/301): Loss 0.3635 LR: 8.009e-02 Score 88.523 Data time: 2.1816, Total iter time: 5.6352 +thomas 04/07 07:04:02 ===> Epoch[88](26280/301): Loss 0.3922 LR: 8.005e-02 Score 87.773 Data time: 2.4066, Total iter time: 5.9528 +thomas 04/07 07:08:04 ===> Epoch[88](26320/301): Loss 0.3504 LR: 8.002e-02 Score 88.833 Data time: 2.4340, Total iter time: 5.9612 +thomas 04/07 07:12:19 ===> Epoch[88](26360/301): Loss 0.3588 LR: 7.999e-02 Score 88.615 Data time: 2.4854, Total iter time: 6.3128 +thomas 04/07 07:16:15 ===> 
Epoch[88](26400/301): Loss 0.3943 LR: 7.996e-02 Score 87.974 Data time: 2.2801, Total iter time: 5.8193 +thomas 04/07 07:20:10 ===> Epoch[88](26440/301): Loss 0.3643 LR: 7.993e-02 Score 88.539 Data time: 2.2682, Total iter time: 5.7952 +thomas 04/07 07:24:05 ===> Epoch[88](26480/301): Loss 0.3626 LR: 7.990e-02 Score 88.697 Data time: 2.2843, Total iter time: 5.8148 +thomas 04/07 07:28:20 ===> Epoch[89](26520/301): Loss 0.3609 LR: 7.987e-02 Score 88.529 Data time: 2.5619, Total iter time: 6.3066 +thomas 04/07 07:32:24 ===> Epoch[89](26560/301): Loss 0.3803 LR: 7.984e-02 Score 88.103 Data time: 2.4439, Total iter time: 6.0000 +thomas 04/07 07:36:24 ===> Epoch[89](26600/301): Loss 0.4052 LR: 7.981e-02 Score 87.958 Data time: 2.3512, Total iter time: 5.9216 +thomas 04/07 07:40:07 ===> Epoch[89](26640/301): Loss 0.3888 LR: 7.978e-02 Score 87.540 Data time: 2.1641, Total iter time: 5.4978 +thomas 04/07 07:44:06 ===> Epoch[89](26680/301): Loss 0.3756 LR: 7.975e-02 Score 88.397 Data time: 2.2921, Total iter time: 5.8974 +thomas 04/07 07:47:58 ===> Epoch[89](26720/301): Loss 0.3849 LR: 7.972e-02 Score 88.191 Data time: 2.2699, Total iter time: 5.7314 +thomas 04/07 07:51:47 ===> Epoch[89](26760/301): Loss 0.3429 LR: 7.969e-02 Score 89.262 Data time: 2.2941, Total iter time: 5.6604 +thomas 04/07 07:56:03 ===> Epoch[90](26800/301): Loss 0.3935 LR: 7.965e-02 Score 87.662 Data time: 2.5328, Total iter time: 6.3171 +thomas 04/07 08:00:05 ===> Epoch[90](26840/301): Loss 0.3394 LR: 7.962e-02 Score 89.464 Data time: 2.3605, Total iter time: 5.9922 +thomas 04/07 08:03:53 ===> Epoch[90](26880/301): Loss 0.3721 LR: 7.959e-02 Score 88.115 Data time: 2.1949, Total iter time: 5.6277 +thomas 04/07 08:07:42 ===> Epoch[90](26920/301): Loss 0.3696 LR: 7.956e-02 Score 88.510 Data time: 2.2002, Total iter time: 5.6568 +thomas 04/07 08:11:31 ===> Epoch[90](26960/301): Loss 0.3911 LR: 7.953e-02 Score 88.074 Data time: 2.2056, Total iter time: 5.6506 +thomas 04/07 08:15:45 ===> 
Epoch[90](27000/301): Loss 0.3980 LR: 7.950e-02 Score 87.371 Data time: 2.5149, Total iter time: 6.2727 +thomas 04/07 08:15:47 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 08:15:47 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 08:17:54 101/312: Data time: 0.0025, Iter time: 0.6575 Loss 0.854 (AVG: 0.709) Score 72.109 (AVG: 79.067) mIOU 55.371 mAP 68.794 mAcc 69.049 +IOU: 65.934 95.609 58.938 51.701 88.700 65.386 66.486 33.081 38.962 49.172 17.565 60.917 57.149 34.899 53.691 40.677 80.252 52.039 62.911 33.347 +mAP: 72.314 97.564 54.540 52.038 91.512 84.710 75.079 57.754 41.034 74.017 36.554 61.220 62.003 74.794 72.392 72.138 93.910 81.895 71.079 49.333 +mAcc: 74.575 98.307 64.181 77.388 93.440 94.920 74.304 75.309 45.692 76.413 20.276 77.008 72.073 84.635 60.430 49.380 80.963 53.298 63.259 45.127 + +thomas 04/07 08:20:00 201/312: Data time: 0.0031, Iter time: 0.7743 Loss 0.564 (AVG: 0.672) Score 78.746 (AVG: 79.382) mIOU 55.818 mAP 68.799 mAcc 68.974 +IOU: 64.870 95.803 56.837 58.559 87.535 69.781 67.870 33.262 36.982 61.875 12.138 61.563 55.421 25.929 35.582 43.706 82.152 56.517 73.253 36.717 +mAP: 73.605 97.814 53.979 65.226 91.088 85.212 74.988 56.551 42.967 73.227 30.497 60.446 61.750 69.287 52.610 76.339 94.941 84.447 78.117 52.894 +mAcc: 72.439 98.222 61.598 82.884 92.193 95.871 76.571 72.269 42.773 88.289 14.298 74.849 68.747 79.607 38.908 51.911 82.783 57.754 74.098 53.411 + +thomas 04/07 08:21:58 301/312: Data time: 0.0041, Iter time: 0.9790 Loss 0.984 (AVG: 0.669) Score 75.220 (AVG: 79.508) mIOU 55.522 mAP 68.981 mAcc 68.541 +IOU: 65.521 95.945 55.552 59.231 86.711 70.191 64.676 33.112 39.203 65.806 14.249 57.541 56.681 27.444 31.929 44.443 84.645 52.437 69.720 35.396 +mAP: 75.086 97.724 55.123 67.288 89.739 82.372 73.109 58.129 44.068 71.845 33.376 59.120 62.875 69.807 54.100 75.356 94.878 83.294 80.396 51.937 +mAcc: 72.646 98.313 61.950 82.617 
91.126 95.227 73.606 72.559 45.348 91.044 17.063 69.235 72.703 79.799 34.530 50.040 85.582 53.560 70.470 53.402 + +thomas 04/07 08:22:16 312/312: Data time: 0.0034, Iter time: 0.3878 Loss 0.154 (AVG: 0.676) Score 94.404 (AVG: 79.123) mIOU 55.376 mAP 69.011 mAcc 68.526 +IOU: 64.849 95.971 54.715 58.382 86.492 70.361 64.140 32.897 39.389 65.219 13.912 56.993 56.577 26.321 32.559 44.528 84.796 52.638 71.400 35.386 +mAP: 74.522 97.758 55.284 67.715 89.761 83.007 72.690 57.892 43.614 72.456 33.774 58.401 62.852 70.414 53.982 73.859 95.089 83.332 81.428 52.392 +mAcc: 72.005 98.291 60.988 82.758 91.014 95.490 73.012 72.243 45.525 91.092 16.645 68.857 73.013 80.481 35.194 49.957 85.714 53.750 72.158 52.331 + +thomas 04/07 08:22:16 Finished test. Elapsed time: 389.2360 +thomas 04/07 08:22:16 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 08:26:12 ===> Epoch[90](27040/301): Loss 0.3476 LR: 7.947e-02 Score 89.082 Data time: 2.2855, Total iter time: 5.8327 +thomas 04/07 08:30:19 ===> Epoch[90](27080/301): Loss 0.3993 LR: 7.944e-02 Score 87.543 Data time: 2.3904, Total iter time: 6.1060 +thomas 04/07 08:34:25 ===> Epoch[91](27120/301): Loss 0.3721 LR: 7.941e-02 Score 88.454 Data time: 2.3517, Total iter time: 6.0564 +thomas 04/07 08:38:42 ===> Epoch[91](27160/301): Loss 0.3707 LR: 7.938e-02 Score 88.252 Data time: 2.5287, Total iter time: 6.3521 +thomas 04/07 08:42:42 ===> Epoch[91](27200/301): Loss 0.3535 LR: 7.935e-02 Score 89.062 Data time: 2.4095, Total iter time: 5.9455 +thomas 04/07 08:46:26 ===> Epoch[91](27240/301): Loss 0.3796 LR: 7.932e-02 Score 87.980 Data time: 2.1871, Total iter time: 5.5273 +thomas 04/07 08:50:17 ===> Epoch[91](27280/301): Loss 0.3707 LR: 7.929e-02 Score 88.651 Data time: 2.2115, Total iter time: 5.6855 +thomas 04/07 08:54:02 ===> Epoch[91](27320/301): Loss 0.3697 LR: 7.925e-02 Score 88.640 Data time: 2.1921, Total iter time: 5.5826 +thomas 04/07 08:58:03 ===> Epoch[91](27360/301): Loss 0.3796 LR: 7.922e-02 Score 88.300 Data time: 2.3226, Total iter time: 5.9400 +thomas 04/07 09:01:41 ===> Epoch[92](27400/301): Loss 0.3719 LR: 7.919e-02 Score 88.646 Data time: 2.1585, Total iter time: 5.3903 +thomas 04/07 09:06:06 ===> Epoch[92](27440/301): Loss 0.3385 LR: 7.916e-02 Score 89.514 Data time: 2.5988, Total iter time: 6.5498 +thomas 04/07 09:10:06 ===> Epoch[92](27480/301): Loss 0.3694 LR: 7.913e-02 Score 88.778 Data time: 2.3431, Total iter time: 5.9247 +thomas 04/07 09:14:31 ===> Epoch[92](27520/301): Loss 0.3640 LR: 7.910e-02 Score 88.667 Data time: 2.5349, Total iter time: 6.5480 +thomas 04/07 09:18:26 ===> Epoch[92](27560/301): Loss 0.3398 LR: 7.907e-02 Score 89.262 Data time: 2.2688, Total iter time: 5.8097 +thomas 04/07 09:22:41 ===> Epoch[92](27600/301): Loss 0.3531 LR: 7.904e-02 
Score 88.927 Data time: 2.4849, Total iter time: 6.3042 +thomas 04/07 09:27:07 ===> Epoch[92](27640/301): Loss 0.3199 LR: 7.901e-02 Score 89.590 Data time: 2.6541, Total iter time: 6.5763 +thomas 04/07 09:31:14 ===> Epoch[92](27680/301): Loss 0.3259 LR: 7.898e-02 Score 89.600 Data time: 2.4456, Total iter time: 6.0909 +thomas 04/07 09:35:32 ===> Epoch[93](27720/301): Loss 0.3506 LR: 7.895e-02 Score 89.030 Data time: 2.4790, Total iter time: 6.3451 +thomas 04/07 09:39:51 ===> Epoch[93](27760/301): Loss 0.3924 LR: 7.892e-02 Score 87.915 Data time: 2.4797, Total iter time: 6.4112 +thomas 04/07 09:44:01 ===> Epoch[93](27800/301): Loss 0.3510 LR: 7.889e-02 Score 88.898 Data time: 2.4240, Total iter time: 6.1654 +thomas 04/07 09:48:06 ===> Epoch[93](27840/301): Loss 0.3773 LR: 7.885e-02 Score 88.315 Data time: 2.3650, Total iter time: 6.0726 +thomas 04/07 09:52:28 ===> Epoch[93](27880/301): Loss 0.3634 LR: 7.882e-02 Score 88.673 Data time: 2.6222, Total iter time: 6.4655 +thomas 04/07 09:56:40 ===> Epoch[93](27920/301): Loss 0.4084 LR: 7.879e-02 Score 87.222 Data time: 2.4757, Total iter time: 6.2314 +thomas 04/07 10:01:00 ===> Epoch[93](27960/301): Loss 0.3490 LR: 7.876e-02 Score 88.765 Data time: 2.5277, Total iter time: 6.4234 +thomas 04/07 10:05:27 ===> Epoch[94](28000/301): Loss 0.3556 LR: 7.873e-02 Score 89.206 Data time: 2.5356, Total iter time: 6.6011 +thomas 04/07 10:05:28 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 10:05:28 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 10:07:27 101/312: Data time: 0.0029, Iter time: 0.3105 Loss 0.240 (AVG: 0.548) Score 92.048 (AVG: 83.180) mIOU 60.769 mAP 72.802 mAcc 73.107 +IOU: 73.755 96.686 51.608 77.277 86.851 85.940 67.308 42.774 43.961 58.081 8.563 56.372 53.249 60.866 57.176 39.278 93.000 53.222 73.092 36.326 +mAP: 75.204 97.647 67.940 84.162 88.697 84.873 71.990 62.362 49.850 65.951 45.367 60.626 65.817 76.712 
74.097 81.793 94.667 80.881 76.625 50.786 +mAcc: 84.203 98.498 79.185 94.518 92.011 94.920 76.348 60.310 51.611 91.788 8.941 74.548 90.001 85.589 63.831 47.051 94.073 56.329 73.810 44.577 + +thomas 04/07 10:09:24 201/312: Data time: 0.0027, Iter time: 0.6599 Loss 0.390 (AVG: 0.552) Score 86.137 (AVG: 83.505) mIOU 59.411 mAP 72.191 mAcc 71.549 +IOU: 74.990 96.064 51.499 69.945 87.485 75.715 65.394 43.682 45.237 60.775 8.213 58.847 51.564 63.992 47.741 44.294 84.587 49.502 74.448 34.243 +mAP: 77.362 97.315 63.990 75.409 90.428 81.359 71.184 61.974 52.979 69.160 43.136 55.551 64.958 78.814 67.300 83.509 93.183 78.806 86.152 51.248 +mAcc: 84.946 98.332 79.082 85.117 93.921 93.998 73.228 62.282 53.682 95.162 8.588 71.777 84.606 82.808 53.775 55.099 85.667 51.483 75.213 42.209 + +thomas 04/07 10:11:24 301/312: Data time: 0.0026, Iter time: 0.7414 Loss 0.915 (AVG: 0.555) Score 75.676 (AVG: 83.527) mIOU 59.100 mAP 71.376 mAcc 70.565 +IOU: 75.630 96.004 51.985 67.198 87.303 75.310 64.775 43.739 46.584 62.198 9.793 58.879 51.151 63.722 42.697 40.977 83.182 49.981 76.568 34.323 +mAP: 77.149 97.300 61.714 72.020 89.403 81.482 70.610 61.809 53.220 70.758 41.397 56.614 65.125 77.381 64.012 84.087 92.216 79.116 82.789 49.323 +mAcc: 85.624 98.315 77.392 80.835 94.117 91.644 73.284 62.495 54.932 93.519 10.201 70.926 83.919 80.675 48.062 48.127 84.284 51.747 77.468 43.734 + +thomas 04/07 10:11:36 312/312: Data time: 0.0031, Iter time: 0.5222 Loss 0.509 (AVG: 0.553) Score 82.630 (AVG: 83.608) mIOU 59.080 mAP 71.327 mAcc 70.488 +IOU: 75.638 95.996 51.849 67.141 87.264 75.184 64.811 43.775 45.416 63.295 9.133 59.311 51.346 63.664 42.697 40.977 83.182 49.981 76.568 34.382 +mAP: 76.730 97.368 61.748 72.020 89.516 80.426 70.520 61.636 52.836 71.581 40.927 57.141 65.599 77.381 64.012 84.087 92.216 79.116 82.789 48.886 +mAcc: 85.757 98.291 77.217 80.835 94.072 91.409 73.386 62.755 53.622 93.325 9.518 71.170 84.299 80.675 48.062 48.127 84.284 51.747 77.468 43.748 + +thomas 04/07 10:11:36 
Finished test. Elapsed time: 367.7899 +thomas 04/07 10:11:36 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 10:15:36 ===> Epoch[94](28040/301): Loss 0.3302 LR: 7.870e-02 Score 89.289 Data time: 2.3584, Total iter time: 5.9362 +thomas 04/07 10:19:31 ===> Epoch[94](28080/301): Loss 0.3475 LR: 7.867e-02 Score 89.078 Data time: 2.3165, Total iter time: 5.8059 +thomas 04/07 10:23:33 ===> Epoch[94](28120/301): Loss 0.3535 LR: 7.864e-02 Score 88.740 Data time: 2.3458, Total iter time: 5.9824 +thomas 04/07 10:27:16 ===> Epoch[94](28160/301): Loss 0.3604 LR: 7.861e-02 Score 88.987 Data time: 2.1316, Total iter time: 5.5081 +thomas 04/07 10:30:54 ===> Epoch[94](28200/301): Loss 0.3791 LR: 7.858e-02 Score 88.609 Data time: 2.0862, Total iter time: 5.3721 +thomas 04/07 10:34:52 ===> Epoch[94](28240/301): Loss 0.3619 LR: 7.855e-02 Score 89.144 Data time: 2.3203, Total iter time: 5.8751 +thomas 04/07 10:38:48 ===> Epoch[94](28280/301): Loss 0.3772 LR: 7.852e-02 Score 88.000 Data time: 2.3116, Total iter time: 5.8294 +thomas 04/07 10:42:57 ===> Epoch[95](28320/301): Loss 0.3751 LR: 7.848e-02 Score 88.263 Data time: 2.4781, Total iter time: 6.1532 +thomas 04/07 10:47:18 ===> Epoch[95](28360/301): Loss 0.3600 LR: 7.845e-02 Score 88.752 Data time: 2.5375, Total iter time: 6.4329 +thomas 04/07 10:51:11 ===> Epoch[95](28400/301): Loss 0.3450 LR: 7.842e-02 Score 89.190 Data time: 2.2520, Total iter time: 5.7571 +thomas 04/07 10:55:12 ===> Epoch[95](28440/301): Loss 0.3371 LR: 7.839e-02 Score 89.573 Data time: 2.3255, Total iter time: 5.9637 +thomas 04/07 10:59:28 ===> Epoch[95](28480/301): Loss 0.3893 LR: 7.836e-02 Score 88.381 Data time: 2.4999, Total iter time: 
6.3110 +thomas 04/07 11:03:17 ===> Epoch[95](28520/301): Loss 0.3448 LR: 7.833e-02 Score 89.231 Data time: 2.2712, Total iter time: 5.6647 +thomas 04/07 11:07:38 ===> Epoch[95](28560/301): Loss 0.3463 LR: 7.830e-02 Score 89.290 Data time: 2.5621, Total iter time: 6.4370 +thomas 04/07 11:12:01 ===> Epoch[96](28600/301): Loss 0.3705 LR: 7.827e-02 Score 88.276 Data time: 2.5575, Total iter time: 6.5109 +thomas 04/07 11:16:09 ===> Epoch[96](28640/301): Loss 0.3252 LR: 7.824e-02 Score 89.618 Data time: 2.3915, Total iter time: 6.1223 +thomas 04/07 11:20:13 ===> Epoch[96](28680/301): Loss 0.3709 LR: 7.821e-02 Score 88.762 Data time: 2.3186, Total iter time: 6.0307 +thomas 04/07 11:24:00 ===> Epoch[96](28720/301): Loss 0.3744 LR: 7.818e-02 Score 88.213 Data time: 2.2130, Total iter time: 5.5957 +thomas 04/07 11:27:56 ===> Epoch[96](28760/301): Loss 0.4019 LR: 7.815e-02 Score 87.789 Data time: 2.2967, Total iter time: 5.8119 +thomas 04/07 11:31:44 ===> Epoch[96](28800/301): Loss 0.3624 LR: 7.811e-02 Score 88.930 Data time: 2.2000, Total iter time: 5.6182 +thomas 04/07 11:35:54 ===> Epoch[96](28840/301): Loss 0.3398 LR: 7.808e-02 Score 89.113 Data time: 2.4081, Total iter time: 6.1729 +thomas 04/07 11:39:51 ===> Epoch[96](28880/301): Loss 0.3111 LR: 7.805e-02 Score 90.118 Data time: 2.2785, Total iter time: 5.8445 +thomas 04/07 11:43:49 ===> Epoch[97](28920/301): Loss 0.3434 LR: 7.802e-02 Score 89.135 Data time: 2.3168, Total iter time: 5.8580 +thomas 04/07 11:47:35 ===> Epoch[97](28960/301): Loss 0.3813 LR: 7.799e-02 Score 88.145 Data time: 2.2339, Total iter time: 5.5915 +thomas 04/07 11:51:30 ===> Epoch[97](29000/301): Loss 0.3692 LR: 7.796e-02 Score 88.204 Data time: 2.2709, Total iter time: 5.7862 +thomas 04/07 11:51:31 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 11:51:31 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 11:53:37 101/312: Data time: 0.0032, Iter 
time: 0.7855 Loss 0.688 (AVG: 0.569) Score 80.998 (AVG: 84.024) mIOU 58.749 mAP 71.186 mAcc 69.867 +IOU: 76.825 96.003 41.494 68.355 83.076 80.741 62.335 39.918 35.423 73.984 11.856 58.047 49.501 61.439 46.259 62.468 84.290 43.218 66.875 32.869 +mAP: 78.560 98.349 48.034 75.266 86.091 89.931 64.229 58.420 50.824 67.638 31.159 58.380 58.533 74.776 78.527 94.281 94.881 70.176 93.777 51.883 +mAcc: 90.491 99.015 71.883 84.056 87.097 88.529 78.922 55.006 39.281 79.286 12.771 75.831 80.825 82.880 71.162 68.042 84.743 43.494 67.011 37.010 + +thomas 04/07 11:55:30 201/312: Data time: 0.0026, Iter time: 0.6692 Loss 0.468 (AVG: 0.591) Score 87.037 (AVG: 83.560) mIOU 58.647 mAP 70.156 mAcc 68.986 +IOU: 76.646 95.618 43.011 69.292 84.477 79.888 64.052 40.053 35.976 73.517 12.201 54.648 48.596 66.859 42.559 63.182 82.914 47.039 60.266 32.138 +mAP: 78.765 98.083 50.061 71.174 87.481 83.299 66.311 56.093 48.586 72.129 32.617 59.132 60.300 78.841 70.092 92.760 91.517 77.290 82.579 46.019 +mAcc: 91.147 98.922 71.585 79.931 89.162 87.440 79.318 56.150 38.491 78.787 13.767 72.923 79.184 78.890 68.559 67.048 83.389 47.629 60.675 36.730 + +thomas 04/07 11:57:36 301/312: Data time: 0.0026, Iter time: 0.6426 Loss 1.121 (AVG: 0.621) Score 77.866 (AVG: 82.955) mIOU 57.389 mAP 69.397 mAcc 67.676 +IOU: 76.106 95.539 44.771 69.471 85.049 79.719 63.536 40.085 33.047 72.019 12.752 56.811 48.953 60.616 41.555 51.169 78.903 41.487 64.556 31.644 +mAP: 78.709 97.752 51.858 72.616 87.990 83.177 67.198 55.507 46.787 70.841 36.273 57.955 60.878 75.194 62.340 89.021 90.184 77.914 80.591 45.159 +mAcc: 90.168 98.947 75.243 78.665 89.769 86.707 79.881 55.302 36.278 79.155 14.797 71.690 79.502 74.032 63.825 55.615 80.349 41.968 65.007 36.615 + +thomas 04/07 11:57:48 312/312: Data time: 0.0023, Iter time: 0.3766 Loss 0.136 (AVG: 0.622) Score 96.081 (AVG: 83.009) mIOU 57.405 mAP 69.336 mAcc 67.646 +IOU: 76.116 95.625 44.566 69.475 85.226 79.151 64.015 39.587 32.449 72.131 12.657 56.864 49.144 59.560 41.672 
51.610 79.301 41.692 65.507 31.745 +mAP: 78.638 97.779 51.134 72.828 87.779 83.086 67.768 54.967 46.520 71.277 35.895 57.404 60.984 73.729 63.017 89.426 90.330 77.872 81.131 45.149 +mAcc: 90.228 98.959 74.844 78.860 90.010 85.956 80.336 54.594 35.567 78.968 14.766 71.828 79.268 72.483 64.465 56.051 80.734 42.197 65.954 36.846 + +thomas 04/07 11:57:48 Finished test. Elapsed time: 377.0770 +thomas 04/07 11:57:48 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 12:01:51 ===> Epoch[97](29040/301): Loss 0.3722 LR: 7.793e-02 Score 88.493 Data time: 2.3334, Total iter time: 5.9809 +thomas 04/07 12:05:35 ===> Epoch[97](29080/301): Loss 0.3716 LR: 7.790e-02 Score 88.587 Data time: 2.1587, Total iter time: 5.5456 +thomas 04/07 12:09:52 ===> Epoch[97](29120/301): Loss 0.3681 LR: 7.787e-02 Score 88.776 Data time: 2.5036, Total iter time: 6.3245 +thomas 04/07 12:13:51 ===> Epoch[97](29160/301): Loss 0.3132 LR: 7.784e-02 Score 90.208 Data time: 2.3333, Total iter time: 5.9087 +thomas 04/07 12:18:02 ===> Epoch[98](29200/301): Loss 0.3613 LR: 7.781e-02 Score 88.620 Data time: 2.4433, Total iter time: 6.1936 +thomas 04/07 12:22:12 ===> Epoch[98](29240/301): Loss 0.4034 LR: 7.778e-02 Score 87.649 Data time: 2.4186, Total iter time: 6.1601 +thomas 04/07 12:26:19 ===> Epoch[98](29280/301): Loss 0.3242 LR: 7.774e-02 Score 90.027 Data time: 2.4314, Total iter time: 6.1109 +thomas 04/07 12:30:17 ===> Epoch[98](29320/301): Loss 0.3703 LR: 7.771e-02 Score 88.545 Data time: 2.2755, Total iter time: 5.8763 +thomas 04/07 12:34:43 ===> Epoch[98](29360/301): Loss 0.3800 LR: 7.768e-02 Score 88.555 Data time: 2.5494, Total iter time: 6.5648 +thomas 04/07 12:38:45 ===> 
Epoch[98](29400/301): Loss 0.3702 LR: 7.765e-02 Score 88.745 Data time: 2.3912, Total iter time: 5.9686 +thomas 04/07 12:42:43 ===> Epoch[98](29440/301): Loss 0.3525 LR: 7.762e-02 Score 89.061 Data time: 2.3191, Total iter time: 5.8683 +thomas 04/07 12:46:55 ===> Epoch[98](29480/301): Loss 0.3522 LR: 7.759e-02 Score 88.848 Data time: 2.4515, Total iter time: 6.2229 +thomas 04/07 12:50:49 ===> Epoch[99](29520/301): Loss 0.3798 LR: 7.756e-02 Score 88.227 Data time: 2.2778, Total iter time: 5.7703 +thomas 04/07 12:54:51 ===> Epoch[99](29560/301): Loss 0.3787 LR: 7.753e-02 Score 88.187 Data time: 2.3470, Total iter time: 5.9720 +thomas 04/07 12:58:58 ===> Epoch[99](29600/301): Loss 0.3729 LR: 7.750e-02 Score 88.534 Data time: 2.3674, Total iter time: 6.1049 +thomas 04/07 13:02:55 ===> Epoch[99](29640/301): Loss 0.3509 LR: 7.747e-02 Score 89.315 Data time: 2.2774, Total iter time: 5.8479 +thomas 04/07 13:07:08 ===> Epoch[99](29680/301): Loss 0.3489 LR: 7.744e-02 Score 89.033 Data time: 2.4624, Total iter time: 6.2518 +thomas 04/07 13:11:02 ===> Epoch[99](29720/301): Loss 0.3733 LR: 7.741e-02 Score 88.408 Data time: 2.2777, Total iter time: 5.7851 +thomas 04/07 13:15:01 ===> Epoch[99](29760/301): Loss 0.3471 LR: 7.737e-02 Score 88.890 Data time: 2.3062, Total iter time: 5.8907 +thomas 04/07 13:19:20 ===> Epoch[100](29800/301): Loss 0.3379 LR: 7.734e-02 Score 89.578 Data time: 2.4810, Total iter time: 6.3829 +thomas 04/07 13:23:27 ===> Epoch[100](29840/301): Loss 0.3524 LR: 7.731e-02 Score 89.056 Data time: 2.3932, Total iter time: 6.0997 +thomas 04/07 13:27:16 ===> Epoch[100](29880/301): Loss 0.3659 LR: 7.728e-02 Score 88.319 Data time: 2.2102, Total iter time: 5.6676 +thomas 04/07 13:31:13 ===> Epoch[100](29920/301): Loss 0.3784 LR: 7.725e-02 Score 88.784 Data time: 2.3010, Total iter time: 5.8505 +thomas 04/07 13:35:19 ===> Epoch[100](29960/301): Loss 0.3467 LR: 7.722e-02 Score 89.352 Data time: 2.4096, Total iter time: 6.0731 +thomas 04/07 13:39:10 ===> 
Epoch[100](30000/301): Loss 0.4000 LR: 7.719e-02 Score 88.080 Data time: 2.2292, Total iter time: 5.6907 +thomas 04/07 13:39:11 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 13:39:11 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 13:41:23 101/312: Data time: 0.0027, Iter time: 0.6742 Loss 0.621 (AVG: 0.570) Score 78.667 (AVG: 84.405) mIOU 58.030 mAP 69.773 mAcc 68.986 +IOU: 77.758 96.668 54.174 62.205 86.375 69.474 64.820 45.091 28.035 65.025 9.809 65.736 48.608 43.674 43.777 39.408 80.477 56.264 82.488 40.741 +mAP: 79.668 98.529 63.313 72.087 89.153 91.787 70.632 60.362 43.912 65.064 16.505 64.442 57.376 61.671 62.400 89.345 91.553 79.259 82.542 55.862 +mAcc: 90.587 98.318 64.772 85.581 92.901 98.686 67.626 54.585 29.927 92.062 12.054 76.735 78.014 45.240 50.430 45.613 83.338 59.846 86.968 66.444 + +thomas 04/07 13:43:25 201/312: Data time: 0.0027, Iter time: 0.9139 Loss 0.482 (AVG: 0.589) Score 85.330 (AVG: 83.575) mIOU 57.743 mAP 70.098 mAcc 69.176 +IOU: 76.823 96.443 54.714 63.977 85.145 62.154 63.428 41.358 34.517 59.864 8.840 64.677 53.889 49.903 43.670 36.757 84.769 52.454 79.391 42.094 +mAP: 78.560 98.001 64.881 71.052 88.997 87.716 69.969 59.140 49.816 66.601 31.668 60.387 60.319 72.474 57.111 81.458 92.483 76.019 80.791 54.514 +mAcc: 89.394 98.203 66.582 84.710 91.031 97.231 67.952 53.562 37.208 92.966 10.269 74.663 75.626 52.097 49.941 45.894 86.625 57.514 84.492 67.560 + +thomas 04/07 13:45:28 301/312: Data time: 0.0026, Iter time: 0.2896 Loss 0.256 (AVG: 0.611) Score 92.268 (AVG: 83.006) mIOU 57.765 mAP 69.765 mAcc 68.904 +IOU: 76.491 96.079 51.227 63.179 85.600 63.628 63.444 40.123 34.911 59.244 10.720 59.911 56.046 52.694 40.757 40.506 85.710 52.700 82.364 39.972 +mAP: 79.075 97.779 60.911 67.645 88.689 84.055 70.152 58.606 49.409 69.775 35.140 60.591 60.195 70.658 50.537 84.858 93.024 77.571 83.297 53.330 +mAcc: 89.176 98.053 63.157 79.953 91.068 
97.124 69.684 50.223 37.489 92.963 12.446 71.500 76.653 55.319 47.463 49.131 87.264 57.242 87.116 65.054 + +thomas 04/07 13:45:40 312/312: Data time: 0.0038, Iter time: 1.1417 Loss 0.320 (AVG: 0.605) Score 91.594 (AVG: 83.192) mIOU 57.936 mAP 69.936 mAcc 69.119 +IOU: 76.702 96.144 50.916 64.119 85.844 63.175 64.153 39.860 34.696 58.927 11.099 59.911 55.667 55.344 40.053 40.506 86.099 53.360 82.364 39.785 +mAP: 79.326 97.839 60.572 68.418 88.847 84.055 70.960 58.504 49.287 69.775 35.413 60.591 60.647 72.492 49.546 84.858 93.070 77.849 83.297 53.381 +mAcc: 89.170 98.080 62.981 80.726 91.315 97.124 70.209 49.994 37.241 92.963 12.887 71.500 76.424 57.797 46.772 49.131 87.607 57.914 87.116 65.426 + +thomas 04/07 13:45:40 Finished test. Elapsed time: 389.1093 +thomas 04/07 13:45:41 Current best mIoU: 59.584 at iter 23000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 13:49:50 ===> Epoch[100](30040/301): Loss 0.3280 LR: 7.716e-02 Score 89.493 Data time: 2.4624, Total iter time: 6.1588 +thomas 04/07 13:53:46 ===> Epoch[100](30080/301): Loss 0.3421 LR: 7.713e-02 Score 88.986 Data time: 2.2917, Total iter time: 5.8183 +thomas 04/07 13:57:57 ===> Epoch[101](30120/301): Loss 0.3633 LR: 7.710e-02 Score 88.891 Data time: 2.4947, Total iter time: 6.2216 +thomas 04/07 14:01:49 ===> Epoch[101](30160/301): Loss 0.3381 LR: 7.707e-02 Score 89.754 Data time: 2.2493, Total iter time: 5.7160 +thomas 04/07 14:05:58 ===> Epoch[101](30200/301): Loss 0.3390 LR: 7.703e-02 Score 89.307 Data time: 2.4279, Total iter time: 6.1555 +thomas 04/07 14:10:07 ===> Epoch[101](30240/301): Loss 0.3101 LR: 7.700e-02 Score 90.346 Data time: 2.4311, Total iter time: 6.1451 +thomas 04/07 14:13:59 ===> Epoch[101](30280/301): Loss 0.3750 LR: 7.697e-02 Score 88.460 Data time: 2.2444, Total iter time: 5.7424 +thomas 04/07 14:18:03 ===> Epoch[101](30320/301): Loss 0.3710 LR: 7.694e-02 Score 88.246 Data time: 2.3452, Total iter time: 6.0058 +thomas 04/07 14:22:12 ===> Epoch[101](30360/301): Loss 0.3293 LR: 7.691e-02 Score 89.630 Data time: 2.4227, Total iter time: 6.1228 +thomas 04/07 14:26:16 ===> Epoch[101](30400/301): Loss 0.3239 LR: 7.688e-02 Score 89.680 Data time: 2.3740, Total iter time: 6.0337 +thomas 04/07 14:30:26 ===> Epoch[102](30440/301): Loss 0.3625 LR: 7.685e-02 Score 88.627 Data time: 2.4425, Total iter time: 6.1825 +thomas 04/07 14:34:30 ===> Epoch[102](30480/301): Loss 0.3522 LR: 7.682e-02 Score 88.881 Data time: 2.4182, Total iter time: 6.0494 +thomas 04/07 14:38:22 ===> Epoch[102](30520/301): Loss 0.3224 LR: 7.679e-02 Score 89.976 Data time: 2.2507, Total iter time: 5.7234 +thomas 04/07 14:42:32 ===> Epoch[102](30560/301): Loss 0.3550 LR: 7.676e-02 Score 89.127 Data time: 2.4565, Total iter time: 6.1786 +thomas 04/07 14:46:19 ===> Epoch[102](30600/301): Loss 
0.3266 LR: 7.673e-02 Score 89.670 Data time: 2.2184, Total iter time: 5.5846 +thomas 04/07 14:50:27 ===> Epoch[102](30640/301): Loss 0.3616 LR: 7.669e-02 Score 88.482 Data time: 2.3994, Total iter time: 6.1159 +thomas 04/07 14:54:44 ===> Epoch[102](30680/301): Loss 0.3367 LR: 7.666e-02 Score 89.528 Data time: 2.5028, Total iter time: 6.3606 +thomas 04/07 14:58:56 ===> Epoch[103](30720/301): Loss 0.3399 LR: 7.663e-02 Score 89.485 Data time: 2.4415, Total iter time: 6.2057 +thomas 04/07 15:03:08 ===> Epoch[103](30760/301): Loss 0.3581 LR: 7.660e-02 Score 88.671 Data time: 2.4009, Total iter time: 6.2160 +thomas 04/07 15:07:10 ===> Epoch[103](30800/301): Loss 0.3296 LR: 7.657e-02 Score 89.523 Data time: 2.3873, Total iter time: 5.9909 +thomas 04/07 15:10:58 ===> Epoch[103](30840/301): Loss 0.4059 LR: 7.654e-02 Score 87.324 Data time: 2.2566, Total iter time: 5.6177 +thomas 04/07 15:14:48 ===> Epoch[103](30880/301): Loss 0.3677 LR: 7.651e-02 Score 88.739 Data time: 2.2529, Total iter time: 5.6705 +thomas 04/07 15:18:46 ===> Epoch[103](30920/301): Loss 0.3733 LR: 7.648e-02 Score 88.446 Data time: 2.3113, Total iter time: 5.8787 +thomas 04/07 15:23:03 ===> Epoch[103](30960/301): Loss 0.3475 LR: 7.645e-02 Score 89.072 Data time: 2.4863, Total iter time: 6.3516 +thomas 04/07 15:26:55 ===> Epoch[103](31000/301): Loss 0.3403 LR: 7.642e-02 Score 89.202 Data time: 2.1909, Total iter time: 5.7200 +thomas 04/07 15:26:57 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 15:26:57 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 15:29:02 101/312: Data time: 0.0028, Iter time: 0.3632 Loss 0.083 (AVG: 0.540) Score 98.121 (AVG: 85.135) mIOU 60.848 mAP 70.312 mAcc 69.993 +IOU: 76.998 96.138 53.882 57.405 86.278 86.209 70.846 45.117 42.780 71.308 6.025 55.533 50.789 67.451 42.076 65.308 76.310 32.606 81.823 52.070 +mAP: 77.641 97.282 64.118 58.899 88.035 82.150 68.943 62.725 50.246 59.887 
45.786 65.148 69.231 75.119 58.497 82.384 89.062 74.571 81.565 54.948 +mAcc: 88.895 98.651 64.070 72.754 90.042 94.957 77.880 71.885 50.256 93.116 6.196 61.068 69.855 76.393 54.931 67.390 76.856 33.207 82.666 68.788 + +thomas 04/07 15:31:08 201/312: Data time: 0.0028, Iter time: 0.6599 Loss 0.539 (AVG: 0.548) Score 84.550 (AVG: 84.736) mIOU 59.939 mAP 71.151 mAcc 69.316 +IOU: 77.315 96.005 53.303 67.988 88.222 80.184 64.446 45.031 44.332 72.865 6.875 51.451 50.803 59.052 41.644 55.259 82.558 40.264 77.325 43.853 +mAP: 77.312 97.658 63.786 71.021 89.298 82.629 69.387 62.747 47.938 66.948 43.407 59.712 67.622 73.179 66.118 82.920 93.545 75.096 77.062 55.637 +mAcc: 89.478 98.538 63.119 83.197 92.792 94.330 70.690 70.829 50.935 91.171 7.086 56.272 72.426 68.194 50.506 62.175 83.170 41.051 78.373 61.993 + +thomas 04/07 15:33:11 301/312: Data time: 0.0039, Iter time: 0.3224 Loss 0.374 (AVG: 0.548) Score 85.680 (AVG: 84.537) mIOU 59.783 mAP 71.272 mAcc 69.440 +IOU: 77.248 96.038 50.459 68.292 88.678 79.398 64.419 43.755 42.430 69.887 7.860 51.439 50.372 65.371 41.005 55.559 81.549 39.339 77.886 44.677 +mAP: 78.252 97.604 59.325 73.358 89.624 82.613 68.770 62.591 48.918 70.216 41.349 58.561 67.366 76.816 61.049 85.440 91.910 75.913 81.891 53.885 +mAcc: 88.943 98.477 61.736 84.643 93.089 93.632 72.186 68.599 49.664 91.323 8.077 55.818 70.307 75.524 51.465 62.666 82.141 40.014 78.724 61.780 + +thomas 04/07 15:33:23 312/312: Data time: 0.0022, Iter time: 0.5038 Loss 0.177 (AVG: 0.545) Score 94.871 (AVG: 84.582) mIOU 59.826 mAP 71.374 mAcc 69.673 +IOU: 77.323 96.048 51.004 68.493 88.745 79.390 64.798 43.822 43.725 69.668 7.774 52.334 50.647 65.098 41.938 53.011 81.549 39.542 76.966 44.645 +mAP: 78.288 97.552 59.221 74.033 89.621 82.613 69.220 62.766 50.099 70.230 40.909 58.868 67.423 76.915 60.612 85.440 91.910 75.978 81.891 53.901 +mAcc: 88.793 98.488 62.381 85.387 93.165 93.632 72.434 68.882 51.131 91.344 7.987 56.737 69.857 75.105 52.549 62.666 82.141 40.261 78.724 61.787 + 
+thomas 04/07 15:33:23 Finished test. Elapsed time: 386.4798 +thomas 04/07 15:33:25 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/07 15:33:25 Current best mIoU: 59.826 at iter 31000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 15:37:40 ===> Epoch[104](31040/301): Loss 0.3149 LR: 7.639e-02 Score 90.192 Data time: 2.5203, Total iter time: 6.3034 +thomas 04/07 15:41:46 ===> Epoch[104](31080/301): Loss 0.3314 LR: 7.636e-02 Score 89.366 Data time: 2.3960, Total iter time: 6.0635 +thomas 04/07 15:45:40 ===> Epoch[104](31120/301): Loss 0.3339 LR: 7.632e-02 Score 89.466 Data time: 2.2769, Total iter time: 5.7695 +thomas 04/07 15:49:32 ===> Epoch[104](31160/301): Loss 0.3111 LR: 7.629e-02 Score 90.249 Data time: 2.2298, Total iter time: 5.7290 +thomas 04/07 15:53:20 ===> Epoch[104](31200/301): Loss 0.3376 LR: 7.626e-02 Score 89.541 Data time: 2.2155, Total iter time: 5.6161 +thomas 04/07 15:57:30 ===> Epoch[104](31240/301): Loss 0.4025 LR: 7.623e-02 Score 88.016 Data time: 2.4236, Total iter time: 6.1647 +thomas 04/07 16:01:35 ===> Epoch[104](31280/301): Loss 0.3325 LR: 7.620e-02 Score 89.667 Data time: 2.3880, Total iter time: 6.0615 +thomas 04/07 16:05:31 ===> Epoch[105](31320/301): Loss 0.3459 LR: 7.617e-02 Score 89.013 Data time: 2.3208, Total iter time: 5.8330 +thomas 04/07 16:09:32 ===> Epoch[105](31360/301): Loss 0.3238 LR: 7.614e-02 Score 89.706 Data time: 2.3101, Total iter time: 5.9553 +thomas 04/07 16:13:14 ===> Epoch[105](31400/301): Loss 0.3262 LR: 7.611e-02 Score 89.768 Data time: 2.1377, Total iter time: 5.4567 +thomas 04/07 16:17:01 ===> Epoch[105](31440/301): Loss 0.3060 LR: 7.608e-02 Score 90.545 Data 
time: 2.2218, Total iter time: 5.6208 +thomas 04/07 16:20:52 ===> Epoch[105](31480/301): Loss 0.3869 LR: 7.605e-02 Score 88.136 Data time: 2.2408, Total iter time: 5.6753 +thomas 04/07 16:25:09 ===> Epoch[105](31520/301): Loss 0.3725 LR: 7.601e-02 Score 88.703 Data time: 2.5186, Total iter time: 6.3530 +thomas 04/07 16:29:11 ===> Epoch[105](31560/301): Loss 0.3716 LR: 7.598e-02 Score 88.255 Data time: 2.3761, Total iter time: 5.9676 +thomas 04/07 16:33:13 ===> Epoch[105](31600/301): Loss 0.3674 LR: 7.595e-02 Score 88.210 Data time: 2.3733, Total iter time: 5.9715 +thomas 04/07 16:37:09 ===> Epoch[106](31640/301): Loss 0.3127 LR: 7.592e-02 Score 90.319 Data time: 2.2524, Total iter time: 5.8067 +thomas 04/07 16:41:03 ===> Epoch[106](31680/301): Loss 0.3785 LR: 7.589e-02 Score 88.625 Data time: 2.2718, Total iter time: 5.7704 +thomas 04/07 16:44:47 ===> Epoch[106](31720/301): Loss 0.3845 LR: 7.586e-02 Score 88.236 Data time: 2.1968, Total iter time: 5.5377 +thomas 04/07 16:48:56 ===> Epoch[106](31760/301): Loss 0.3725 LR: 7.583e-02 Score 88.239 Data time: 2.4274, Total iter time: 6.1457 +thomas 04/07 16:52:46 ===> Epoch[106](31800/301): Loss 0.3216 LR: 7.580e-02 Score 89.896 Data time: 2.2091, Total iter time: 5.6750 +thomas 04/07 16:56:32 ===> Epoch[106](31840/301): Loss 0.3169 LR: 7.577e-02 Score 90.298 Data time: 2.2122, Total iter time: 5.5918 +thomas 04/07 17:00:33 ===> Epoch[106](31880/301): Loss 0.3461 LR: 7.574e-02 Score 88.676 Data time: 2.2835, Total iter time: 5.9558 +thomas 04/07 17:04:27 ===> Epoch[107](31920/301): Loss 0.3085 LR: 7.571e-02 Score 90.732 Data time: 2.2800, Total iter time: 5.7782 +thomas 04/07 17:08:13 ===> Epoch[107](31960/301): Loss 0.3611 LR: 7.567e-02 Score 88.493 Data time: 2.2157, Total iter time: 5.5764 +thomas 04/07 17:12:21 ===> Epoch[107](32000/301): Loss 0.3694 LR: 7.564e-02 Score 88.835 Data time: 2.4379, Total iter time: 6.1070 +thomas 04/07 17:12:22 Checkpoint saved to 
./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 17:12:22 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 17:14:15 101/312: Data time: 0.0023, Iter time: 1.3420 Loss 0.961 (AVG: 0.627) Score 68.336 (AVG: 83.095) mIOU 58.986 mAP 70.083 mAcc 69.859 +IOU: 76.047 95.438 54.863 66.672 84.820 83.853 66.132 39.463 27.165 63.262 14.826 57.614 47.998 65.076 42.252 52.695 82.239 58.299 68.211 32.786 +mAP: 79.086 94.930 58.256 68.463 87.316 83.319 72.279 55.818 46.982 75.369 41.427 60.135 70.760 66.578 45.820 88.200 95.330 82.002 79.888 49.695 +mAcc: 86.581 97.561 71.676 75.832 93.479 91.522 71.536 54.433 28.642 92.764 16.163 83.747 79.272 70.364 50.746 53.483 83.811 63.431 68.543 63.592 + +thomas 04/07 17:16:20 201/312: Data time: 0.0027, Iter time: 0.4372 Loss 0.281 (AVG: 0.595) Score 92.005 (AVG: 83.506) mIOU 60.793 mAP 71.343 mAcc 71.160 +IOU: 75.592 95.899 57.137 69.159 85.519 84.420 65.505 43.247 31.598 67.353 12.853 61.072 51.691 63.184 48.397 48.100 84.711 56.241 76.298 37.882 +mAP: 79.277 94.476 60.922 73.062 87.733 83.362 71.166 59.980 48.349 71.205 34.429 60.242 68.822 73.546 57.083 90.602 94.866 82.708 84.902 50.126 +mAcc: 86.206 97.584 73.490 80.185 94.596 90.692 70.469 60.611 33.466 88.691 14.966 84.587 78.151 66.270 57.202 53.143 85.827 59.665 76.707 70.690 + +thomas 04/07 17:18:25 301/312: Data time: 0.0025, Iter time: 0.7498 Loss 0.772 (AVG: 0.604) Score 79.697 (AVG: 83.397) mIOU 60.303 mAP 71.120 mAcc 70.888 +IOU: 75.645 95.854 56.631 70.006 85.476 80.088 67.284 42.960 28.568 68.621 11.723 56.965 55.471 59.094 46.034 51.567 85.533 54.045 79.015 35.475 +mAP: 79.527 94.786 61.662 73.646 88.419 83.784 72.915 60.623 45.342 69.201 37.273 58.472 68.434 72.001 58.441 88.700 94.476 81.728 82.929 50.047 +mAcc: 86.815 97.638 72.200 81.092 94.639 90.129 72.445 60.348 30.074 90.037 13.528 81.192 79.072 61.691 56.511 58.277 86.589 59.510 79.429 66.552 + +thomas 04/07 17:18:38 312/312: Data 
time: 0.0026, Iter time: 0.3479 Loss 0.839 (AVG: 0.604) Score 75.897 (AVG: 83.373) mIOU 60.227 mAP 71.037 mAcc 70.855 +IOU: 75.577 95.848 56.465 69.216 85.656 79.616 67.602 43.056 28.816 68.333 11.115 56.494 55.924 59.715 45.986 51.549 85.533 54.038 79.014 34.979 +mAP: 79.237 94.654 60.762 72.570 88.444 83.862 73.112 60.827 45.964 70.004 36.539 58.472 67.890 72.631 58.441 88.700 94.476 81.728 82.929 49.503 +mAcc: 86.719 97.640 71.899 80.175 94.681 90.011 72.952 60.769 30.312 89.842 12.751 81.192 79.489 62.173 56.511 58.277 86.589 59.510 79.429 66.190 + +thomas 04/07 17:18:38 Finished test. Elapsed time: 376.1601 +thomas 04/07 17:18:40 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/07 17:18:40 Current best mIoU: 60.227 at iter 32000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 17:22:29 ===> Epoch[107](32040/301): Loss 0.3115 LR: 7.561e-02 Score 90.009 Data time: 2.2221, Total iter time: 5.6606 +thomas 04/07 17:26:22 ===> Epoch[107](32080/301): Loss 0.3475 LR: 7.558e-02 Score 89.345 Data time: 2.2180, Total iter time: 5.7644 +thomas 04/07 17:30:25 ===> Epoch[107](32120/301): Loss 0.3172 LR: 7.555e-02 Score 89.998 Data time: 2.4070, Total iter time: 5.9980 +thomas 04/07 17:34:35 ===> Epoch[107](32160/301): Loss 0.3330 LR: 7.552e-02 Score 89.212 Data time: 2.4408, Total iter time: 6.1678 +thomas 04/07 17:38:27 ===> Epoch[107](32200/301): Loss 0.3861 LR: 7.549e-02 Score 87.762 Data time: 2.2660, Total iter time: 5.7400 +thomas 04/07 17:42:25 ===> Epoch[108](32240/301): Loss 0.3162 LR: 7.546e-02 Score 89.928 Data time: 2.2932, Total iter time: 5.8667 +thomas 04/07 17:46:25 ===> Epoch[108](32280/301): Loss 0.3373 LR: 7.543e-02 Score 89.489 Data time: 2.2970, Total iter time: 5.9216 +thomas 04/07 17:50:36 ===> Epoch[108](32320/301): Loss 0.3725 LR: 7.540e-02 Score 88.119 Data time: 2.4109, Total iter time: 6.2059 +thomas 04/07 17:54:38 ===> Epoch[108](32360/301): Loss 0.3592 LR: 7.537e-02 Score 88.871 Data time: 2.3530, Total iter time: 5.9729 +thomas 04/07 17:58:48 ===> Epoch[108](32400/301): Loss 0.3691 LR: 7.533e-02 Score 88.611 Data time: 2.4308, Total iter time: 6.1650 +thomas 04/07 18:02:58 ===> Epoch[108](32440/301): Loss 0.3310 LR: 7.530e-02 Score 89.831 Data time: 2.4404, Total iter time: 6.1686 +thomas 04/07 18:07:15 ===> Epoch[108](32480/301): Loss 0.3296 LR: 7.527e-02 Score 89.390 Data time: 2.5197, Total iter time: 6.3253 +thomas 04/07 18:11:00 ===> Epoch[109](32520/301): Loss 0.3637 LR: 7.524e-02 Score 88.434 Data time: 2.1618, Total iter time: 5.5656 +thomas 04/07 18:15:07 ===> Epoch[109](32560/301): Loss 0.3315 LR: 7.521e-02 Score 89.295 Data time: 2.3845, Total iter time: 6.0918 +thomas 04/07 18:19:33 ===> Epoch[109](32600/301): Loss 
0.3348 LR: 7.518e-02 Score 89.591 Data time: 2.5642, Total iter time: 6.5545 +thomas 04/07 18:23:45 ===> Epoch[109](32640/301): Loss 0.3885 LR: 7.515e-02 Score 88.484 Data time: 2.5024, Total iter time: 6.2298 +thomas 04/07 18:27:46 ===> Epoch[109](32680/301): Loss 0.3292 LR: 7.512e-02 Score 89.918 Data time: 2.3545, Total iter time: 5.9658 +thomas 04/07 18:31:54 ===> Epoch[109](32720/301): Loss 0.3275 LR: 7.509e-02 Score 89.676 Data time: 2.4045, Total iter time: 6.1184 +thomas 04/07 18:35:50 ===> Epoch[109](32760/301): Loss 0.3238 LR: 7.506e-02 Score 89.471 Data time: 2.2452, Total iter time: 5.8394 +thomas 04/07 18:39:44 ===> Epoch[109](32800/301): Loss 0.3748 LR: 7.502e-02 Score 88.571 Data time: 2.2355, Total iter time: 5.7727 +thomas 04/07 18:43:56 ===> Epoch[110](32840/301): Loss 0.3291 LR: 7.499e-02 Score 89.745 Data time: 2.4522, Total iter time: 6.2162 +thomas 04/07 18:48:29 ===> Epoch[110](32880/301): Loss 0.3404 LR: 7.496e-02 Score 89.372 Data time: 2.7086, Total iter time: 6.7422 +thomas 04/07 18:52:38 ===> Epoch[110](32920/301): Loss 0.3205 LR: 7.493e-02 Score 90.040 Data time: 2.4205, Total iter time: 6.1461 +thomas 04/07 18:56:43 ===> Epoch[110](32960/301): Loss 0.3225 LR: 7.490e-02 Score 89.487 Data time: 2.3518, Total iter time: 6.0505 +thomas 04/07 19:00:47 ===> Epoch[110](33000/301): Loss 0.3177 LR: 7.487e-02 Score 90.127 Data time: 2.3571, Total iter time: 6.0231 +thomas 04/07 19:00:48 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 19:00:48 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 19:02:55 101/312: Data time: 0.0023, Iter time: 0.6285 Loss 0.560 (AVG: 0.554) Score 82.423 (AVG: 83.766) mIOU 59.926 mAP 71.767 mAcc 71.662 +IOU: 76.202 95.760 55.406 79.129 84.267 70.662 56.448 44.131 37.283 60.229 11.398 58.023 59.629 71.275 45.488 49.029 74.521 38.714 86.150 44.780 +mAP: 76.966 97.108 57.203 79.151 87.206 84.495 64.539 56.655 53.638 80.949 
36.449 63.126 70.823 82.372 56.685 81.904 83.028 70.122 94.991 57.938 +mAcc: 86.768 98.753 79.890 89.687 94.524 93.932 59.322 59.597 41.469 85.714 14.805 73.377 77.584 87.201 56.242 67.098 76.318 40.162 86.683 64.112 + +thomas 04/07 19:04:57 201/312: Data time: 0.0026, Iter time: 0.6802 Loss 0.261 (AVG: 0.551) Score 94.495 (AVG: 83.996) mIOU 60.204 mAP 72.740 mAcc 71.498 +IOU: 77.013 95.582 52.014 73.510 83.331 70.870 62.036 45.238 39.351 63.487 17.360 53.034 59.268 63.080 42.066 50.756 81.471 43.585 85.845 45.188 +mAP: 78.287 97.278 58.014 76.115 87.605 82.094 70.461 61.825 52.533 72.563 46.503 64.538 68.210 76.857 60.933 87.503 88.980 76.541 90.378 57.579 +mAcc: 87.398 98.798 80.732 84.025 92.679 95.496 65.562 61.092 45.151 72.129 21.592 67.046 76.794 82.423 58.361 61.244 83.168 45.863 87.748 62.654 + +thomas 04/07 19:07:07 301/312: Data time: 0.0032, Iter time: 1.0977 Loss 0.673 (AVG: 0.534) Score 79.772 (AVG: 84.337) mIOU 60.888 mAP 72.588 mAcc 72.178 +IOU: 77.010 95.800 52.495 73.245 83.571 71.539 60.790 45.182 44.318 69.628 17.566 52.472 56.977 61.684 45.332 52.852 84.189 47.647 82.468 43.000 +mAP: 77.660 97.268 58.679 76.262 88.228 82.515 71.910 62.549 52.348 72.559 42.232 64.671 67.970 76.709 62.492 88.656 91.023 79.622 82.324 56.084 +mAcc: 87.100 98.855 78.868 84.755 93.246 95.888 64.427 60.363 50.745 77.262 24.072 65.468 76.842 80.737 60.633 61.509 86.277 49.914 84.053 62.543 + +thomas 04/07 19:07:23 312/312: Data time: 0.0027, Iter time: 0.3885 Loss 0.282 (AVG: 0.533) Score 91.441 (AVG: 84.333) mIOU 60.891 mAP 72.528 mAcc 72.264 +IOU: 76.955 95.795 53.063 73.738 83.398 70.745 61.454 44.151 44.177 69.742 18.281 53.093 57.524 59.780 46.217 53.584 84.401 47.732 81.161 42.833 +mAP: 77.593 97.244 58.677 76.312 88.425 82.656 71.398 62.631 52.076 73.155 42.504 63.247 67.499 75.828 62.963 88.980 91.337 79.819 82.828 55.379 +mAcc: 86.986 98.856 78.568 85.156 93.052 95.957 65.093 58.757 50.962 77.373 25.121 66.098 76.969 80.498 61.638 62.383 86.423 50.014 82.647 
62.721 + +thomas 04/07 19:07:23 Finished test. Elapsed time: 395.3904 +thomas 04/07 19:07:25 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/07 19:07:25 Current best mIoU: 60.891 at iter 33000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 19:11:58 ===> Epoch[110](33040/301): Loss 0.3039 LR: 7.484e-02 Score 90.481 Data time: 2.6775, Total iter time: 6.7429 +thomas 04/07 19:16:16 ===> Epoch[110](33080/301): Loss 0.2942 LR: 7.481e-02 Score 90.489 Data time: 2.5033, Total iter time: 6.3595 +thomas 04/07 19:20:20 ===> Epoch[111](33120/301): Loss 0.3365 LR: 7.478e-02 Score 89.445 Data time: 2.3701, Total iter time: 6.0460 +thomas 04/07 19:24:19 ===> Epoch[111](33160/301): Loss 0.3314 LR: 7.475e-02 Score 89.526 Data time: 2.2760, Total iter time: 5.8796 +thomas 04/07 19:28:35 ===> Epoch[111](33200/301): Loss 0.3472 LR: 7.471e-02 Score 89.098 Data time: 2.4434, Total iter time: 6.3422 +thomas 04/07 19:32:40 ===> Epoch[111](33240/301): Loss 0.3500 LR: 7.468e-02 Score 89.188 Data time: 2.4098, Total iter time: 6.0322 +thomas 04/07 19:37:10 ===> Epoch[111](33280/301): Loss 0.3870 LR: 7.465e-02 Score 87.834 Data time: 2.6774, Total iter time: 6.6804 +thomas 04/07 19:41:06 ===> Epoch[111](33320/301): Loss 0.3680 LR: 7.462e-02 Score 88.778 Data time: 2.2704, Total iter time: 5.8423 +thomas 04/07 19:45:00 ===> Epoch[111](33360/301): Loss 0.3686 LR: 7.459e-02 Score 88.326 Data time: 2.2443, Total iter time: 5.7617 +thomas 04/07 19:49:06 ===> Epoch[111](33400/301): Loss 0.3231 LR: 7.456e-02 Score 89.619 Data time: 2.3381, Total iter time: 6.0705 +thomas 04/07 19:53:16 ===> Epoch[112](33440/301): Loss 0.3869 LR: 7.453e-02 Score 87.538 
Data time: 2.4121, Total iter time: 6.1851 +thomas 04/07 19:57:41 ===> Epoch[112](33480/301): Loss 0.3351 LR: 7.450e-02 Score 89.961 Data time: 2.6105, Total iter time: 6.5292 +thomas 04/07 20:01:53 ===> Epoch[112](33520/301): Loss 0.3743 LR: 7.447e-02 Score 88.519 Data time: 2.5237, Total iter time: 6.2209 +thomas 04/07 20:05:55 ===> Epoch[112](33560/301): Loss 0.3381 LR: 7.444e-02 Score 89.622 Data time: 2.3501, Total iter time: 5.9833 +thomas 04/07 20:09:56 ===> Epoch[112](33600/301): Loss 0.3202 LR: 7.440e-02 Score 89.770 Data time: 2.3035, Total iter time: 5.9385 +thomas 04/07 20:14:10 ===> Epoch[112](33640/301): Loss 0.3163 LR: 7.437e-02 Score 90.040 Data time: 2.4399, Total iter time: 6.2597 +thomas 04/07 20:18:20 ===> Epoch[112](33680/301): Loss 0.3243 LR: 7.434e-02 Score 90.094 Data time: 2.4207, Total iter time: 6.1869 +thomas 04/07 20:22:37 ===> Epoch[113](33720/301): Loss 0.3039 LR: 7.431e-02 Score 90.415 Data time: 2.5793, Total iter time: 6.3324 +thomas 04/07 20:26:42 ===> Epoch[113](33760/301): Loss 0.3330 LR: 7.428e-02 Score 89.355 Data time: 2.4102, Total iter time: 6.0556 +thomas 04/07 20:30:55 ===> Epoch[113](33800/301): Loss 0.3206 LR: 7.425e-02 Score 89.908 Data time: 2.4329, Total iter time: 6.2343 +thomas 04/07 20:34:51 ===> Epoch[113](33840/301): Loss 0.3454 LR: 7.422e-02 Score 89.391 Data time: 2.2563, Total iter time: 5.8246 +thomas 04/07 20:38:45 ===> Epoch[113](33880/301): Loss 0.3526 LR: 7.419e-02 Score 88.952 Data time: 2.2762, Total iter time: 5.7938 +thomas 04/07 20:42:16 ===> Epoch[113](33920/301): Loss 0.3264 LR: 7.416e-02 Score 89.780 Data time: 2.0586, Total iter time: 5.1980 +thomas 04/07 20:46:36 ===> Epoch[113](33960/301): Loss 0.3255 LR: 7.413e-02 Score 89.443 Data time: 2.5870, Total iter time: 6.4239 +thomas 04/07 20:50:35 ===> Epoch[113](34000/301): Loss 0.3272 LR: 7.409e-02 Score 89.401 Data time: 2.3741, Total iter time: 5.8975 +thomas 04/07 20:50:37 Checkpoint saved to 
./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 20:50:37 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 20:52:38 101/312: Data time: 0.0024, Iter time: 0.4983 Loss 0.610 (AVG: 0.595) Score 79.701 (AVG: 83.682) mIOU 58.491 mAP 70.189 mAcc 69.382 +IOU: 77.546 95.873 56.001 72.438 84.336 71.570 62.154 44.840 40.994 62.844 18.778 46.955 52.456 46.367 52.979 26.997 90.924 51.022 77.431 37.307 +mAP: 77.997 97.177 61.084 65.747 92.215 78.461 70.128 57.269 52.263 64.490 46.202 53.058 63.872 68.278 66.722 84.933 97.518 84.069 71.669 50.623 +mAcc: 90.891 98.339 86.062 86.955 87.029 93.860 76.777 57.077 45.641 90.524 20.943 62.835 75.863 47.151 67.709 28.245 91.750 51.794 78.826 49.369 + +thomas 04/07 20:54:32 201/312: Data time: 0.0033, Iter time: 0.3065 Loss 0.478 (AVG: 0.569) Score 85.360 (AVG: 84.193) mIOU 58.722 mAP 71.195 mAcc 68.944 +IOU: 77.310 95.913 50.462 73.157 86.370 74.631 66.319 42.274 44.720 64.296 13.809 50.125 54.771 44.490 51.006 28.591 86.582 49.056 80.376 40.191 +mAP: 77.593 97.303 59.611 71.669 91.068 81.986 72.060 58.737 55.390 67.283 42.270 57.275 64.417 72.445 64.737 78.168 94.400 83.627 81.421 52.440 +mAcc: 91.187 98.294 79.439 84.823 89.026 94.463 80.451 54.144 52.254 84.011 15.440 66.459 76.307 44.989 65.795 30.051 88.353 49.882 81.544 51.977 + +thomas 04/07 20:56:31 301/312: Data time: 0.0025, Iter time: 0.7376 Loss 0.466 (AVG: 0.552) Score 85.181 (AVG: 84.753) mIOU 59.419 mAP 71.235 mAcc 69.177 +IOU: 77.339 96.190 54.102 72.069 86.877 75.763 67.541 44.567 45.307 68.764 12.692 57.884 55.192 44.442 52.183 25.270 86.714 45.689 79.289 40.497 +mAP: 77.688 97.459 60.225 72.431 90.517 83.749 70.677 60.732 53.607 69.325 41.260 59.429 62.376 71.803 67.206 79.025 94.428 81.353 78.362 53.044 +mAcc: 91.092 98.340 79.574 83.961 89.526 94.134 81.376 57.285 51.945 86.447 13.949 73.576 74.889 45.026 68.554 26.154 88.313 46.267 80.371 52.754 + +thomas 04/07 20:56:45 312/312: Data 
time: 0.0024, Iter time: 0.5337 Loss 0.409 (AVG: 0.554) Score 84.421 (AVG: 84.628) mIOU 59.453 mAP 71.107 mAcc 69.167 +IOU: 77.019 96.172 53.493 72.451 86.731 76.569 67.446 44.631 44.867 70.332 13.035 57.796 55.403 44.442 50.225 26.726 87.325 45.876 78.185 40.332 +mAP: 77.794 97.380 59.586 72.612 90.158 83.629 70.404 60.912 53.646 69.827 40.800 58.686 62.196 71.803 64.407 80.392 94.588 81.572 79.446 52.295 +mAcc: 90.848 98.320 79.208 84.309 89.366 94.443 81.570 56.867 51.302 87.681 14.424 73.795 74.411 45.026 66.597 27.637 88.895 46.476 79.312 52.858 + +thomas 04/07 20:56:45 Finished test. Elapsed time: 368.0777 +thomas 04/07 20:56:45 Current best mIoU: 60.891 at iter 33000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 21:00:40 ===> Epoch[114](34040/301): Loss 0.3370 LR: 7.406e-02 Score 89.277 Data time: 2.2461, Total iter time: 5.7990 +thomas 04/07 21:04:23 ===> Epoch[114](34080/301): Loss 0.3201 LR: 7.403e-02 Score 90.134 Data time: 2.1546, Total iter time: 5.4960 +thomas 04/07 21:08:31 ===> Epoch[114](34120/301): Loss 0.3585 LR: 7.400e-02 Score 88.887 Data time: 2.4024, Total iter time: 6.1182 +thomas 04/07 21:12:30 ===> Epoch[114](34160/301): Loss 0.3454 LR: 7.397e-02 Score 89.457 Data time: 2.4093, Total iter time: 5.9130 +thomas 04/07 21:16:42 ===> Epoch[114](34200/301): Loss 0.3232 LR: 7.394e-02 Score 89.893 Data time: 2.4368, Total iter time: 6.1997 +thomas 04/07 21:20:34 ===> Epoch[114](34240/301): Loss 0.3173 LR: 7.391e-02 Score 90.309 Data time: 2.2531, Total iter time: 5.7494 +thomas 04/07 21:24:53 ===> Epoch[114](34280/301): Loss 0.3158 LR: 7.388e-02 Score 89.910 Data time: 2.4983, Total iter time: 6.3946 +thomas 04/07 21:28:43 ===> Epoch[115](34320/301): Loss 0.3121 LR: 
7.385e-02 Score 90.043 Data time: 2.2303, Total iter time: 5.6783 +thomas 04/07 21:32:50 ===> Epoch[115](34360/301): Loss 0.3243 LR: 7.382e-02 Score 89.669 Data time: 2.4215, Total iter time: 6.0857 +thomas 04/07 21:36:46 ===> Epoch[115](34400/301): Loss 0.3608 LR: 7.378e-02 Score 88.721 Data time: 2.3806, Total iter time: 5.8365 +thomas 04/07 21:40:30 ===> Epoch[115](34440/301): Loss 0.3020 LR: 7.375e-02 Score 90.447 Data time: 2.1687, Total iter time: 5.5336 +thomas 04/07 21:44:19 ===> Epoch[115](34480/301): Loss 0.3370 LR: 7.372e-02 Score 89.527 Data time: 2.1812, Total iter time: 5.6569 +thomas 04/07 21:48:20 ===> Epoch[115](34520/301): Loss 0.3289 LR: 7.369e-02 Score 89.739 Data time: 2.3104, Total iter time: 5.9481 +thomas 04/07 21:52:06 ===> Epoch[115](34560/301): Loss 0.3558 LR: 7.366e-02 Score 88.861 Data time: 2.2071, Total iter time: 5.5741 +thomas 04/07 21:56:05 ===> Epoch[115](34600/301): Loss 0.3172 LR: 7.363e-02 Score 90.037 Data time: 2.3505, Total iter time: 5.9063 +thomas 04/07 22:00:18 ===> Epoch[116](34640/301): Loss 0.3263 LR: 7.360e-02 Score 89.573 Data time: 2.5492, Total iter time: 6.2455 +thomas 04/07 22:04:08 ===> Epoch[116](34680/301): Loss 0.3285 LR: 7.357e-02 Score 90.113 Data time: 2.2435, Total iter time: 5.6524 +thomas 04/07 22:08:07 ===> Epoch[116](34720/301): Loss 0.3494 LR: 7.354e-02 Score 88.968 Data time: 2.2860, Total iter time: 5.8929 +thomas 04/07 22:12:02 ===> Epoch[116](34760/301): Loss 0.3261 LR: 7.351e-02 Score 89.803 Data time: 2.2908, Total iter time: 5.8030 +thomas 04/07 22:16:02 ===> Epoch[116](34800/301): Loss 0.3346 LR: 7.347e-02 Score 89.590 Data time: 2.2871, Total iter time: 5.9070 +thomas 04/07 22:20:04 ===> Epoch[116](34840/301): Loss 0.3366 LR: 7.344e-02 Score 89.655 Data time: 2.3943, Total iter time: 5.9563 +thomas 04/07 22:24:16 ===> Epoch[116](34880/301): Loss 0.3286 LR: 7.341e-02 Score 89.612 Data time: 2.5194, Total iter time: 6.2299 +thomas 04/07 22:28:22 ===> Epoch[117](34920/301): Loss 0.3261 LR: 
7.338e-02 Score 89.547 Data time: 2.3674, Total iter time: 6.0585 +thomas 04/07 22:32:14 ===> Epoch[117](34960/301): Loss 0.3195 LR: 7.335e-02 Score 89.855 Data time: 2.2196, Total iter time: 5.7338 +thomas 04/07 22:36:10 ===> Epoch[117](35000/301): Loss 0.3923 LR: 7.332e-02 Score 88.479 Data time: 2.2662, Total iter time: 5.8406 +thomas 04/07 22:36:12 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/07 22:36:12 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/07 22:38:04 101/312: Data time: 0.0027, Iter time: 0.2534 Loss 0.234 (AVG: 0.541) Score 90.804 (AVG: 84.653) mIOU 60.479 mAP 71.159 mAcc 71.222 +IOU: 78.533 95.864 56.289 71.104 87.305 80.528 68.307 38.878 39.177 67.303 9.749 32.000 46.846 49.726 58.430 68.807 87.845 54.629 87.802 30.462 +mAP: 78.696 96.947 56.035 67.193 88.260 78.575 75.318 57.296 55.838 79.622 34.661 51.636 59.872 69.556 70.205 92.993 93.883 79.976 89.636 46.978 +mAcc: 90.611 97.730 80.899 81.909 92.349 90.592 78.631 54.333 51.891 88.360 11.272 33.892 91.859 54.649 65.851 85.751 90.413 56.171 92.030 35.243 + +thomas 04/07 22:40:05 201/312: Data time: 0.0028, Iter time: 0.9097 Loss 0.412 (AVG: 0.566) Score 87.526 (AVG: 83.875) mIOU 57.774 mAP 70.196 mAcc 68.088 +IOU: 77.441 96.269 50.508 67.116 87.812 82.411 63.896 39.573 42.998 57.967 12.436 34.725 49.512 49.941 48.887 53.096 86.896 42.986 77.869 33.139 +mAP: 76.965 97.510 53.780 72.228 88.530 82.609 72.927 58.023 57.759 72.710 33.223 53.144 63.287 69.541 63.199 87.814 95.235 76.659 79.975 48.797 +mAcc: 90.246 98.070 76.935 84.577 93.857 89.349 76.180 54.251 54.196 85.657 15.009 37.327 87.728 54.536 53.650 58.175 89.769 43.861 81.059 37.334 + +thomas 04/07 22:42:00 301/312: Data time: 0.0031, Iter time: 0.3272 Loss 0.178 (AVG: 0.572) Score 94.610 (AVG: 83.664) mIOU 58.275 mAP 70.731 mAcc 68.545 +IOU: 77.820 96.049 47.703 67.162 87.095 82.918 62.889 41.107 43.637 60.343 14.846 35.910 48.151 60.200 
47.939 54.258 84.646 39.268 81.749 31.798 +mAP: 76.712 97.125 55.607 74.136 88.772 83.287 69.038 59.905 55.529 70.809 40.671 54.825 63.571 74.771 62.811 89.790 91.302 76.469 80.749 48.739 +mAcc: 90.230 97.978 77.340 82.192 93.003 90.516 74.306 56.125 54.318 83.274 18.067 38.849 88.555 64.621 54.631 59.346 87.877 40.025 84.280 35.376 + +thomas 04/07 22:42:15 312/312: Data time: 0.0030, Iter time: 0.8162 Loss 0.771 (AVG: 0.578) Score 82.222 (AVG: 83.601) mIOU 58.216 mAP 70.704 mAcc 68.363 +IOU: 77.749 96.039 47.128 64.955 87.219 82.378 63.325 41.604 43.123 60.162 15.155 35.905 49.489 60.217 47.902 53.638 84.938 39.268 82.473 31.658 +mAP: 76.766 97.111 55.697 72.299 88.499 83.014 69.189 60.509 56.438 71.507 40.808 54.825 62.336 74.375 62.811 89.790 91.470 76.469 81.366 48.802 +mAcc: 90.354 97.965 77.218 80.900 93.045 90.282 74.745 56.586 52.946 80.774 18.372 38.849 88.283 64.463 54.631 59.346 88.153 40.025 84.974 35.351 + +thomas 04/07 22:42:15 Finished test. Elapsed time: 363.5837 +thomas 04/07 22:42:16 Current best mIoU: 60.891 at iter 33000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/07 22:46:16 ===> Epoch[117](35040/301): Loss 0.3284 LR: 7.329e-02 Score 89.942 Data time: 2.3747, Total iter time: 5.9373 +thomas 04/07 22:50:03 ===> Epoch[117](35080/301): Loss 0.3232 LR: 7.326e-02 Score 89.689 Data time: 2.2222, Total iter time: 5.5934 +thomas 04/07 22:53:47 ===> Epoch[117](35120/301): Loss 0.3314 LR: 7.323e-02 Score 89.599 Data time: 2.1435, Total iter time: 5.5390 +thomas 04/07 22:57:28 ===> Epoch[117](35160/301): Loss 0.3475 LR: 7.319e-02 Score 89.113 Data time: 2.1169, Total iter time: 5.4591 +thomas 04/07 23:01:20 ===> Epoch[117](35200/301): Loss 0.3172 LR: 7.316e-02 Score 90.246 Data time: 2.1910, Total iter time: 5.7220 +thomas 04/07 23:05:24 ===> Epoch[118](35240/301): Loss 0.3281 LR: 7.313e-02 Score 89.486 Data time: 2.3283, Total iter time: 6.0258 +thomas 04/07 23:09:41 ===> Epoch[118](35280/301): Loss 0.3255 LR: 7.310e-02 Score 89.795 Data time: 2.5203, Total iter time: 6.3709 +thomas 04/07 23:13:30 ===> Epoch[118](35320/301): Loss 0.3468 LR: 7.307e-02 Score 89.132 Data time: 2.2777, Total iter time: 5.6346 +thomas 04/07 23:17:18 ===> Epoch[118](35360/301): Loss 0.3179 LR: 7.304e-02 Score 90.038 Data time: 2.1546, Total iter time: 5.6257 +thomas 04/07 23:21:06 ===> Epoch[118](35400/301): Loss 0.3006 LR: 7.301e-02 Score 90.636 Data time: 2.1986, Total iter time: 5.6267 +thomas 04/07 23:24:50 ===> Epoch[118](35440/301): Loss 0.3417 LR: 7.298e-02 Score 89.489 Data time: 2.1208, Total iter time: 5.5318 +thomas 04/07 23:28:39 ===> Epoch[118](35480/301): Loss 0.3299 LR: 7.295e-02 Score 89.581 Data time: 2.1931, Total iter time: 5.6529 +thomas 04/07 23:32:41 ===> Epoch[119](35520/301): Loss 0.3326 LR: 7.291e-02 Score 89.486 Data time: 2.4281, Total iter time: 5.9921 +thomas 04/07 23:36:55 ===> Epoch[119](35560/301): Loss 0.3373 LR: 7.288e-02 Score 89.708 Data time: 2.4888, Total iter time: 6.2510 +thomas 04/07 23:40:51 ===> Epoch[119](35600/301): Loss 
0.3039 LR: 7.285e-02 Score 90.274 Data time: 2.2129, Total iter time: 5.8149 +thomas 04/07 23:44:39 ===> Epoch[119](35640/301): Loss 0.3062 LR: 7.282e-02 Score 90.554 Data time: 2.1797, Total iter time: 5.6272 +thomas 04/07 23:48:34 ===> Epoch[119](35680/301): Loss 0.3374 LR: 7.279e-02 Score 89.716 Data time: 2.2210, Total iter time: 5.7964 +thomas 04/07 23:52:15 ===> Epoch[119](35720/301): Loss 0.3111 LR: 7.276e-02 Score 90.339 Data time: 2.1006, Total iter time: 5.4494 +thomas 04/07 23:56:20 ===> Epoch[119](35760/301): Loss 0.3235 LR: 7.273e-02 Score 89.967 Data time: 2.4058, Total iter time: 6.0652 +thomas 04/08 00:00:28 ===> Epoch[119](35800/301): Loss 0.3253 LR: 7.270e-02 Score 89.970 Data time: 2.4316, Total iter time: 6.1001 +thomas 04/08 00:04:10 ===> Epoch[120](35840/301): Loss 0.3625 LR: 7.267e-02 Score 88.750 Data time: 2.1304, Total iter time: 5.4958 +thomas 04/08 00:07:59 ===> Epoch[120](35880/301): Loss 0.3784 LR: 7.264e-02 Score 88.326 Data time: 2.1999, Total iter time: 5.6592 +thomas 04/08 00:11:52 ===> Epoch[120](35920/301): Loss 0.3187 LR: 7.260e-02 Score 90.162 Data time: 2.2157, Total iter time: 5.7429 +thomas 04/08 00:15:43 ===> Epoch[120](35960/301): Loss 0.3162 LR: 7.257e-02 Score 90.015 Data time: 2.2174, Total iter time: 5.7223 +thomas 04/08 00:19:56 ===> Epoch[120](36000/301): Loss 0.3446 LR: 7.254e-02 Score 88.971 Data time: 2.4825, Total iter time: 6.2363 +thomas 04/08 00:19:57 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 00:19:57 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 00:22:14 101/312: Data time: 0.0031, Iter time: 0.5536 Loss 1.097 (AVG: 0.613) Score 71.817 (AVG: 82.687) mIOU 53.495 mAP 69.355 mAcc 64.450 +IOU: 75.583 96.179 38.330 54.300 87.520 70.280 64.394 37.507 39.134 60.416 6.412 46.377 54.382 60.751 40.025 6.590 80.192 19.961 85.072 46.504 +mAP: 77.825 97.748 43.562 68.515 88.706 90.075 69.999 60.881 53.443 61.371 
28.898 52.768 64.869 72.959 67.971 84.698 92.466 69.753 83.949 56.644 +mAcc: 88.877 98.434 56.594 81.010 93.961 93.214 73.133 48.997 41.265 91.678 6.825 53.530 68.964 64.279 62.300 6.625 84.717 20.237 91.762 62.602 + +thomas 04/08 00:24:25 201/312: Data time: 0.0024, Iter time: 0.7454 Loss 0.348 (AVG: 0.621) Score 89.118 (AVG: 83.095) mIOU 54.997 mAP 69.068 mAcc 65.809 +IOU: 76.986 95.652 45.004 56.647 86.391 75.455 65.638 39.999 31.770 57.339 3.742 53.182 56.196 57.115 44.509 14.603 81.491 30.751 81.767 45.694 +mAP: 77.746 96.756 50.078 64.901 88.305 83.849 72.597 60.913 45.061 64.081 25.387 57.255 63.440 73.496 68.791 83.683 93.842 74.386 81.832 54.960 +mAcc: 89.155 98.300 64.830 78.327 92.954 93.160 74.896 52.458 33.185 92.467 4.244 59.026 69.590 60.622 65.974 14.851 87.768 31.211 89.361 63.810 + +thomas 04/08 00:26:14 301/312: Data time: 0.0026, Iter time: 0.5192 Loss 0.994 (AVG: 0.625) Score 81.712 (AVG: 83.115) mIOU 55.656 mAP 69.313 mAcc 66.484 +IOU: 77.189 95.726 46.974 61.366 86.134 74.884 65.532 40.133 28.977 54.713 5.760 54.972 52.993 60.154 45.365 13.324 85.238 37.545 82.967 43.162 +mAP: 77.465 96.750 51.911 67.821 87.692 82.683 69.666 57.613 45.110 66.306 27.065 57.574 62.632 74.170 69.382 79.871 95.322 77.259 85.936 54.029 +mAcc: 89.346 98.406 64.275 80.606 92.678 92.464 74.570 53.244 30.201 92.653 6.563 61.949 67.727 64.182 66.902 13.690 89.729 37.916 89.397 63.176 + +thomas 04/08 00:26:29 312/312: Data time: 0.0026, Iter time: 0.4747 Loss 0.672 (AVG: 0.621) Score 82.817 (AVG: 83.175) mIOU 55.750 mAP 69.283 mAcc 66.567 +IOU: 77.170 95.762 46.002 62.989 86.164 75.082 65.813 40.129 28.959 54.470 6.615 54.657 53.052 62.264 44.880 12.367 84.490 36.645 82.366 45.118 +mAP: 77.733 96.820 51.623 69.315 87.890 82.880 69.817 57.740 44.892 66.145 29.256 57.276 62.903 75.154 68.256 78.382 94.919 77.529 83.600 53.523 +mAcc: 89.222 98.418 63.799 82.401 92.679 92.622 74.880 53.647 30.212 92.706 7.485 61.602 67.746 66.316 66.009 12.645 88.820 36.989 88.519 64.624 + 
+thomas 04/08 00:26:29 Finished test. Elapsed time: 391.8883 +thomas 04/08 00:26:29 Current best mIoU: 60.891 at iter 33000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 00:30:28 ===> Epoch[120](36040/301): Loss 0.3481 LR: 7.251e-02 Score 89.219 Data time: 2.3078, Total iter time: 5.8968 +thomas 04/08 00:34:29 ===> Epoch[120](36080/301): Loss 0.3164 LR: 7.248e-02 Score 89.965 Data time: 2.3083, Total iter time: 5.9319 +thomas 04/08 00:38:21 ===> Epoch[120](36120/301): Loss 0.3473 LR: 7.245e-02 Score 89.360 Data time: 2.2269, Total iter time: 5.7277 +thomas 04/08 00:42:06 ===> Epoch[121](36160/301): Loss 0.3341 LR: 7.242e-02 Score 89.463 Data time: 2.1579, Total iter time: 5.5566 +thomas 04/08 00:46:04 ===> Epoch[121](36200/301): Loss 0.3314 LR: 7.239e-02 Score 89.957 Data time: 2.3621, Total iter time: 5.8655 +thomas 04/08 00:50:08 ===> Epoch[121](36240/301): Loss 0.3141 LR: 7.236e-02 Score 89.955 Data time: 2.4107, Total iter time: 6.0359 +thomas 04/08 00:53:57 ===> Epoch[121](36280/301): Loss 0.3117 LR: 7.232e-02 Score 90.208 Data time: 2.1790, Total iter time: 5.6360 +thomas 04/08 00:57:47 ===> Epoch[121](36320/301): Loss 0.2978 LR: 7.229e-02 Score 90.608 Data time: 2.2096, Total iter time: 5.6824 +thomas 04/08 01:01:34 ===> Epoch[121](36360/301): Loss 0.3157 LR: 7.226e-02 Score 89.699 Data time: 2.1641, Total iter time: 5.6015 +thomas 04/08 01:05:34 ===> Epoch[121](36400/301): Loss 0.3379 LR: 7.223e-02 Score 89.527 Data time: 2.3189, Total iter time: 5.9242 +thomas 04/08 01:09:51 ===> Epoch[122](36440/301): Loss 0.3398 LR: 7.220e-02 Score 89.189 Data time: 2.5463, Total iter time: 6.3360 +thomas 04/08 01:13:52 ===> Epoch[122](36480/301): Loss 0.3009 LR: 7.217e-02 Score 90.596 Data 
time: 2.3527, Total iter time: 5.9464 +thomas 04/08 01:17:44 ===> Epoch[122](36520/301): Loss 0.3353 LR: 7.214e-02 Score 89.483 Data time: 2.2436, Total iter time: 5.7312 +thomas 04/08 01:21:22 ===> Epoch[122](36560/301): Loss 0.3212 LR: 7.211e-02 Score 90.020 Data time: 2.1105, Total iter time: 5.3766 +thomas 04/08 01:25:28 ===> Epoch[122](36600/301): Loss 0.3273 LR: 7.208e-02 Score 89.856 Data time: 2.3342, Total iter time: 6.0526 +thomas 04/08 01:29:44 ===> Epoch[122](36640/301): Loss 0.3245 LR: 7.204e-02 Score 90.065 Data time: 2.4880, Total iter time: 6.3430 +thomas 04/08 01:33:51 ===> Epoch[122](36680/301): Loss 0.3285 LR: 7.201e-02 Score 89.821 Data time: 2.4279, Total iter time: 6.1039 +thomas 04/08 01:37:58 ===> Epoch[122](36720/301): Loss 0.2999 LR: 7.198e-02 Score 90.450 Data time: 2.4303, Total iter time: 6.0833 +thomas 04/08 01:42:09 ===> Epoch[123](36760/301): Loss 0.3204 LR: 7.195e-02 Score 90.077 Data time: 2.4218, Total iter time: 6.1906 +thomas 04/08 01:45:47 ===> Epoch[123](36800/301): Loss 0.3317 LR: 7.192e-02 Score 89.609 Data time: 2.1056, Total iter time: 5.3827 +thomas 04/08 01:49:38 ===> Epoch[123](36840/301): Loss 0.3614 LR: 7.189e-02 Score 88.680 Data time: 2.2346, Total iter time: 5.7042 +thomas 04/08 01:53:27 ===> Epoch[123](36880/301): Loss 0.3074 LR: 7.186e-02 Score 90.530 Data time: 2.2435, Total iter time: 5.6623 +thomas 04/08 01:57:41 ===> Epoch[123](36920/301): Loss 0.3031 LR: 7.183e-02 Score 90.529 Data time: 2.4926, Total iter time: 6.2696 +thomas 04/08 02:01:43 ===> Epoch[123](36960/301): Loss 0.3386 LR: 7.180e-02 Score 89.239 Data time: 2.3641, Total iter time: 5.9700 +thomas 04/08 02:05:34 ===> Epoch[123](37000/301): Loss 0.3717 LR: 7.176e-02 Score 88.734 Data time: 2.2115, Total iter time: 5.6993 +thomas 04/08 02:05:35 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 02:05:36 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 
02:07:26 101/312: Data time: 0.0026, Iter time: 0.8462 Loss 1.044 (AVG: 0.658) Score 70.422 (AVG: 81.928) mIOU 55.643 mAP 69.655 mAcc 68.696 +IOU: 77.016 95.471 48.466 74.474 82.284 72.707 47.509 34.521 34.862 77.121 7.512 46.780 42.012 40.019 36.011 44.327 80.270 49.816 82.671 39.019 +mAP: 75.356 97.623 50.305 83.364 89.177 75.444 65.149 61.255 52.504 66.553 23.679 51.169 69.812 72.763 64.812 79.940 94.107 80.911 87.661 51.507 +mAcc: 86.724 98.433 61.249 93.093 91.438 78.455 49.348 63.906 37.671 93.078 7.903 57.447 86.995 40.762 81.192 56.598 84.838 52.100 87.746 64.946 + +thomas 04/08 02:09:24 201/312: Data time: 0.0024, Iter time: 0.7781 Loss 0.567 (AVG: 0.641) Score 82.556 (AVG: 82.629) mIOU 56.499 mAP 69.954 mAcc 68.033 +IOU: 77.113 95.906 49.717 66.637 83.804 74.025 49.523 39.503 39.199 74.848 4.941 51.423 44.164 42.410 41.808 41.037 82.176 55.868 75.156 40.718 +mAP: 74.791 97.697 50.821 78.371 88.814 74.507 67.452 61.488 51.769 68.225 25.644 57.634 67.640 73.007 64.729 79.529 93.975 82.697 88.679 51.611 +mAcc: 87.685 98.535 63.231 86.619 91.662 79.542 51.917 69.079 41.542 91.756 5.110 62.081 86.792 43.628 63.767 52.112 86.991 58.235 78.552 61.822 + +thomas 04/08 02:11:30 301/312: Data time: 0.0025, Iter time: 0.3534 Loss 0.153 (AVG: 0.658) Score 97.448 (AVG: 82.565) mIOU 55.861 mAP 69.084 mAcc 67.346 +IOU: 76.754 96.057 50.523 63.545 83.799 73.082 53.159 40.562 35.853 71.899 6.644 57.388 42.373 45.033 35.385 38.211 81.195 54.844 70.725 40.193 +mAP: 74.837 97.260 52.364 71.080 87.819 77.486 68.750 60.321 48.650 65.898 30.942 58.596 66.989 74.334 55.399 81.387 94.876 82.978 80.635 51.070 +mAcc: 87.939 98.649 63.254 81.198 91.370 78.089 55.571 69.153 37.942 88.145 6.917 68.963 87.194 46.363 58.537 45.192 87.828 57.344 74.166 63.110 + +thomas 04/08 02:11:41 312/312: Data time: 0.0024, Iter time: 0.4161 Loss 0.796 (AVG: 0.655) Score 75.710 (AVG: 82.637) mIOU 56.131 mAP 68.986 mAcc 67.496 +IOU: 76.755 96.075 50.964 64.028 84.121 73.407 53.694 40.750 35.527 71.143 
6.266 57.390 43.603 45.796 36.955 38.823 80.683 55.192 71.471 39.983 +mAP: 74.784 97.264 52.566 71.721 88.119 76.551 68.888 60.707 48.661 63.652 29.602 58.521 67.413 74.140 56.770 80.812 94.285 83.072 81.187 51.007 +mAcc: 87.852 98.641 63.893 81.598 91.510 78.335 56.094 69.586 37.597 87.262 6.508 68.293 87.860 47.363 59.079 45.692 87.001 57.654 74.834 63.264 + +thomas 04/08 02:11:41 Finished test. Elapsed time: 365.8801 +thomas 04/08 02:11:41 Current best mIoU: 60.891 at iter 33000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 02:15:40 ===> Epoch[124](37040/301): Loss 0.3369 LR: 7.173e-02 Score 89.651 Data time: 2.2795, Total iter time: 5.8782 +thomas 04/08 02:19:49 ===> Epoch[124](37080/301): Loss 0.2963 LR: 7.170e-02 Score 90.811 Data time: 2.4698, Total iter time: 6.1439 +thomas 04/08 02:23:55 ===> Epoch[124](37120/301): Loss 0.3205 LR: 7.167e-02 Score 89.791 Data time: 2.4088, Total iter time: 6.0704 +thomas 04/08 02:27:57 ===> Epoch[124](37160/301): Loss 0.3056 LR: 7.164e-02 Score 90.141 Data time: 2.3472, Total iter time: 5.9761 +thomas 04/08 02:31:45 ===> Epoch[124](37200/301): Loss 0.3402 LR: 7.161e-02 Score 89.324 Data time: 2.2222, Total iter time: 5.6416 +thomas 04/08 02:35:32 ===> Epoch[124](37240/301): Loss 0.3212 LR: 7.158e-02 Score 89.581 Data time: 2.1678, Total iter time: 5.5940 +thomas 04/08 02:39:25 ===> Epoch[124](37280/301): Loss 0.3080 LR: 7.155e-02 Score 90.523 Data time: 2.2975, Total iter time: 5.7483 +thomas 04/08 02:43:29 ===> Epoch[124](37320/301): Loss 0.3131 LR: 7.152e-02 Score 89.959 Data time: 2.3597, Total iter time: 6.0192 +thomas 04/08 02:47:28 ===> Epoch[125](37360/301): Loss 0.3361 LR: 7.148e-02 Score 89.430 Data time: 2.3373, Total iter time: 
5.8960 +thomas 04/08 02:51:31 ===> Epoch[125](37400/301): Loss 0.3630 LR: 7.145e-02 Score 88.938 Data time: 2.3798, Total iter time: 6.0008 +thomas 04/08 02:55:24 ===> Epoch[125](37440/301): Loss 0.3508 LR: 7.142e-02 Score 89.184 Data time: 2.2208, Total iter time: 5.7461 +thomas 04/08 02:59:21 ===> Epoch[125](37480/301): Loss 0.3045 LR: 7.139e-02 Score 90.267 Data time: 2.2846, Total iter time: 5.8362 +thomas 04/08 03:03:28 ===> Epoch[125](37520/301): Loss 0.3238 LR: 7.136e-02 Score 89.745 Data time: 2.4306, Total iter time: 6.1147 +thomas 04/08 03:07:29 ===> Epoch[125](37560/301): Loss 0.3051 LR: 7.133e-02 Score 90.411 Data time: 2.3878, Total iter time: 5.9405 +thomas 04/08 03:11:35 ===> Epoch[125](37600/301): Loss 0.2873 LR: 7.130e-02 Score 91.049 Data time: 2.4120, Total iter time: 6.0703 +thomas 04/08 03:15:41 ===> Epoch[126](37640/301): Loss 0.3114 LR: 7.127e-02 Score 90.214 Data time: 2.3956, Total iter time: 6.0731 +thomas 04/08 03:19:34 ===> Epoch[126](37680/301): Loss 0.3090 LR: 7.123e-02 Score 90.473 Data time: 2.2588, Total iter time: 5.7586 +thomas 04/08 03:23:28 ===> Epoch[126](37720/301): Loss 0.3501 LR: 7.120e-02 Score 88.753 Data time: 2.2644, Total iter time: 5.7720 +thomas 04/08 03:27:19 ===> Epoch[126](37760/301): Loss 0.2982 LR: 7.117e-02 Score 90.689 Data time: 2.2511, Total iter time: 5.6814 +thomas 04/08 03:31:15 ===> Epoch[126](37800/301): Loss 0.3179 LR: 7.114e-02 Score 89.947 Data time: 2.3376, Total iter time: 5.8393 +thomas 04/08 03:35:15 ===> Epoch[126](37840/301): Loss 0.3201 LR: 7.111e-02 Score 89.968 Data time: 2.3184, Total iter time: 5.9044 +thomas 04/08 03:39:03 ===> Epoch[126](37880/301): Loss 0.3170 LR: 7.108e-02 Score 90.184 Data time: 2.2572, Total iter time: 5.6294 +thomas 04/08 03:43:17 ===> Epoch[126](37920/301): Loss 0.2854 LR: 7.105e-02 Score 90.895 Data time: 2.4500, Total iter time: 6.2744 +thomas 04/08 03:47:11 ===> Epoch[127](37960/301): Loss 0.3067 LR: 7.102e-02 Score 90.319 Data time: 2.2821, Total iter time: 
5.7765 +thomas 04/08 03:51:15 ===> Epoch[127](38000/301): Loss 0.2911 LR: 7.099e-02 Score 90.927 Data time: 2.4109, Total iter time: 6.0293 +thomas 04/08 03:51:16 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 03:51:17 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 03:53:17 101/312: Data time: 0.1612, Iter time: 0.4697 Loss 0.535 (AVG: 0.545) Score 86.317 (AVG: 84.997) mIOU 61.276 mAP 71.851 mAcc 70.574 +IOU: 76.953 95.405 53.254 67.598 86.214 83.775 57.223 41.780 42.042 69.971 6.987 58.135 45.718 74.605 54.115 54.607 75.887 40.251 85.720 55.271 +mAP: 79.945 96.812 51.235 76.658 90.205 88.377 71.823 55.887 55.318 62.548 30.591 60.982 68.358 84.642 75.927 77.428 87.698 81.008 87.941 53.642 +mAcc: 94.433 98.273 72.834 81.632 89.819 95.442 59.359 50.723 43.900 78.067 7.314 66.875 84.037 81.418 71.328 60.951 76.196 40.710 93.551 64.616 + +thomas 04/08 03:55:28 201/312: Data time: 0.1975, Iter time: 0.7190 Loss 0.260 (AVG: 0.557) Score 91.149 (AVG: 84.855) mIOU 61.329 mAP 71.154 mAcc 70.821 +IOU: 77.609 95.752 53.938 69.054 87.255 81.639 61.241 41.982 38.561 71.360 9.811 58.774 50.094 71.065 41.147 54.933 84.034 47.916 82.516 47.901 +mAP: 79.764 96.421 54.535 72.032 90.387 87.548 71.103 57.590 52.344 65.300 32.857 63.800 65.408 81.591 61.796 82.686 92.903 82.093 80.138 52.789 +mAcc: 94.110 98.423 72.497 78.742 91.679 96.376 64.507 51.204 42.519 82.758 10.403 71.922 86.115 78.710 52.653 64.153 84.527 48.730 89.454 56.943 + +thomas 04/08 03:57:38 301/312: Data time: 0.0029, Iter time: 0.5761 Loss 0.530 (AVG: 0.549) Score 87.097 (AVG: 85.132) mIOU 61.611 mAP 70.922 mAcc 70.937 +IOU: 77.489 96.025 56.598 69.724 86.919 80.543 63.404 41.053 40.491 72.440 10.149 58.431 51.757 70.155 40.999 57.437 84.727 46.608 81.144 46.129 +mAP: 79.293 96.585 57.675 72.509 90.285 85.628 72.307 58.661 52.013 65.117 30.542 61.392 64.310 79.875 56.223 84.908 93.658 80.508 84.127 52.833 +mAcc: 
93.945 98.513 75.708 79.780 91.619 94.382 66.799 49.903 44.971 81.625 10.816 73.721 86.714 76.913 49.072 68.114 86.374 47.335 86.325 56.120 + +thomas 04/08 03:57:50 312/312: Data time: 0.0031, Iter time: 0.3095 Loss 0.014 (AVG: 0.544) Score 99.946 (AVG: 85.276) mIOU 61.872 mAP 70.985 mAcc 71.064 +IOU: 77.465 96.057 56.954 70.005 87.176 80.541 64.253 41.277 39.712 73.633 10.329 58.607 53.385 70.424 41.761 58.121 84.903 46.776 80.282 45.777 +mAP: 79.287 96.652 57.779 72.944 90.227 85.505 73.010 58.861 52.142 66.596 31.241 60.664 64.247 80.095 57.105 85.020 93.747 80.604 81.659 52.308 +mAcc: 94.027 98.514 75.588 80.150 91.827 94.448 67.609 50.097 43.975 82.445 10.998 74.070 87.261 77.216 50.071 67.568 86.551 47.501 85.584 55.777 + +thomas 04/08 03:57:50 Finished test. Elapsed time: 393.6701 +thomas 04/08 03:57:52 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/08 03:57:52 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 04:01:49 ===> Epoch[127](38040/301): Loss 0.3138 LR: 7.095e-02 Score 90.011 Data time: 2.3626, Total iter time: 5.8582 +thomas 04/08 04:05:36 ===> Epoch[127](38080/301): Loss 0.3574 LR: 7.092e-02 Score 88.949 Data time: 2.2264, Total iter time: 5.5959 +thomas 04/08 04:09:37 ===> Epoch[127](38120/301): Loss 0.3554 LR: 7.089e-02 Score 89.205 Data time: 2.3120, Total iter time: 5.9293 +thomas 04/08 04:13:36 ===> Epoch[127](38160/301): Loss 0.3118 LR: 7.086e-02 Score 90.359 Data time: 2.3347, Total iter time: 5.8958 +thomas 04/08 04:17:32 ===> Epoch[127](38200/301): Loss 0.2879 LR: 7.083e-02 Score 90.945 Data time: 2.2960, Total iter time: 5.8258 +thomas 04/08 04:21:31 ===> Epoch[128](38240/301): Loss 0.3393 LR: 7.080e-02 Score 89.761 Data time: 2.3351, Total iter time: 5.8970 +thomas 04/08 04:25:37 ===> Epoch[128](38280/301): Loss 0.3252 LR: 7.077e-02 Score 89.599 Data time: 2.4206, Total iter time: 6.0852 +thomas 04/08 04:29:36 ===> Epoch[128](38320/301): Loss 0.3249 LR: 7.074e-02 Score 89.701 Data time: 2.3323, Total iter time: 5.9037 +thomas 04/08 04:33:36 ===> Epoch[128](38360/301): Loss 0.3406 LR: 7.071e-02 Score 89.395 Data time: 2.2961, Total iter time: 5.9173 +thomas 04/08 04:37:25 ===> Epoch[128](38400/301): Loss 0.3085 LR: 7.067e-02 Score 90.231 Data time: 2.2042, Total iter time: 5.6693 +thomas 04/08 04:41:38 ===> Epoch[128](38440/301): Loss 0.3235 LR: 7.064e-02 Score 90.294 Data time: 2.4756, Total iter time: 6.2378 +thomas 04/08 04:45:57 ===> Epoch[128](38480/301): Loss 0.3487 LR: 7.061e-02 Score 89.403 Data time: 2.4903, Total iter time: 6.3908 +thomas 04/08 04:50:00 ===> Epoch[128](38520/301): Loss 0.2987 LR: 7.058e-02 Score 90.527 Data time: 2.3836, Total iter time: 6.0205 +thomas 04/08 04:53:56 ===> Epoch[129](38560/301): Loss 0.3237 LR: 7.055e-02 Score 89.877 Data time: 2.2787, Total iter time: 5.8251 +thomas 04/08 04:57:52 ===> Epoch[129](38600/301): Loss 
0.3127 LR: 7.052e-02 Score 90.104 Data time: 2.2688, Total iter time: 5.8067 +thomas 04/08 05:01:53 ===> Epoch[129](38640/301): Loss 0.3491 LR: 7.049e-02 Score 88.766 Data time: 2.3466, Total iter time: 5.9647 +thomas 04/08 05:05:36 ===> Epoch[129](38680/301): Loss 0.3128 LR: 7.046e-02 Score 90.121 Data time: 2.1581, Total iter time: 5.4858 +thomas 04/08 05:09:22 ===> Epoch[129](38720/301): Loss 0.3194 LR: 7.042e-02 Score 90.130 Data time: 2.1927, Total iter time: 5.5893 +thomas 04/08 05:13:23 ===> Epoch[129](38760/301): Loss 0.3011 LR: 7.039e-02 Score 90.512 Data time: 2.3712, Total iter time: 5.9521 +thomas 04/08 05:17:27 ===> Epoch[129](38800/301): Loss 0.2994 LR: 7.036e-02 Score 90.521 Data time: 2.3458, Total iter time: 6.0035 +thomas 04/08 05:21:19 ===> Epoch[130](38840/301): Loss 0.3431 LR: 7.033e-02 Score 89.283 Data time: 2.2654, Total iter time: 5.7401 +thomas 04/08 05:25:18 ===> Epoch[130](38880/301): Loss 0.3096 LR: 7.030e-02 Score 90.372 Data time: 2.3103, Total iter time: 5.8881 +thomas 04/08 05:29:30 ===> Epoch[130](38920/301): Loss 0.3221 LR: 7.027e-02 Score 89.854 Data time: 2.4370, Total iter time: 6.2206 +thomas 04/08 05:33:33 ===> Epoch[130](38960/301): Loss 0.3408 LR: 7.024e-02 Score 89.779 Data time: 2.3584, Total iter time: 6.0018 +thomas 04/08 05:37:45 ===> Epoch[130](39000/301): Loss 0.2794 LR: 7.021e-02 Score 91.369 Data time: 2.4400, Total iter time: 6.2300 +thomas 04/08 05:37:47 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 05:37:47 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 05:39:53 101/312: Data time: 0.0031, Iter time: 0.3286 Loss 0.012 (AVG: 0.550) Score 99.837 (AVG: 84.594) mIOU 57.419 mAP 69.737 mAcc 68.589 +IOU: 76.082 95.941 59.648 65.748 86.488 63.404 70.519 39.529 40.988 59.511 24.180 51.635 63.759 55.781 47.726 27.574 58.019 46.373 72.448 43.035 +mAP: 76.388 97.899 61.145 69.545 88.981 80.645 72.175 55.054 49.168 62.124 
49.420 63.726 64.642 69.956 56.479 78.987 92.895 81.377 73.402 50.726 +mAcc: 88.550 99.094 74.999 77.628 93.121 92.405 82.898 60.637 43.673 86.465 29.750 89.319 80.457 66.204 50.869 27.699 58.154 48.005 72.621 49.238 + +thomas 04/08 05:41:56 201/312: Data time: 0.0027, Iter time: 0.8615 Loss 0.141 (AVG: 0.558) Score 96.079 (AVG: 84.378) mIOU 57.028 mAP 70.485 mAcc 68.425 +IOU: 76.637 95.805 56.394 68.578 85.113 68.751 68.779 41.966 38.641 60.553 22.707 49.912 58.032 63.875 45.281 16.981 68.007 49.282 62.455 42.800 +mAP: 76.712 97.492 60.616 72.815 88.765 82.910 72.258 58.655 48.407 64.599 45.033 60.474 66.352 74.516 62.556 77.969 93.802 81.205 72.820 51.753 +mAcc: 88.470 98.958 73.001 80.674 91.783 95.360 79.556 65.365 40.797 89.212 29.628 81.725 82.528 74.314 49.404 17.617 68.230 50.934 62.599 48.338 + +thomas 04/08 05:44:01 301/312: Data time: 0.0031, Iter time: 0.8324 Loss 0.307 (AVG: 0.575) Score 92.698 (AVG: 84.028) mIOU 56.779 mAP 70.292 mAcc 67.879 +IOU: 76.774 95.819 54.357 63.647 85.346 67.781 69.418 41.757 36.939 63.038 18.497 50.879 55.604 66.534 47.506 20.058 66.731 48.165 66.820 39.919 +mAP: 77.319 97.476 56.325 65.478 88.620 84.822 73.827 57.985 48.063 66.481 44.663 61.530 65.223 76.480 60.159 78.025 93.446 80.032 78.595 51.287 +mAcc: 89.044 98.976 71.385 73.397 92.406 95.129 80.095 63.804 39.123 87.542 22.924 83.553 82.612 76.897 51.631 20.677 66.930 49.611 67.004 44.841 + +thomas 04/08 05:44:12 312/312: Data time: 0.0033, Iter time: 0.6473 Loss 0.528 (AVG: 0.572) Score 84.170 (AVG: 84.057) mIOU 57.022 mAP 70.400 mAcc 68.142 +IOU: 76.848 95.822 54.565 63.943 85.488 68.007 69.291 41.795 36.779 63.272 18.450 50.841 55.315 66.358 46.778 21.967 68.206 48.004 68.645 40.067 +mAP: 77.541 97.365 56.716 66.417 88.805 83.119 73.237 58.342 47.685 67.036 44.930 60.028 65.307 76.966 58.915 79.307 93.798 80.144 80.763 51.577 +mAcc: 88.968 98.977 71.604 73.917 92.485 95.150 79.730 63.931 39.111 87.645 22.832 83.428 82.988 76.665 50.777 22.896 68.404 49.513 68.870 
44.940 + +thomas 04/08 05:44:12 Finished test. Elapsed time: 384.8406 +thomas 04/08 05:44:12 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 05:47:54 ===> Epoch[130](39040/301): Loss 0.3396 LR: 7.017e-02 Score 89.161 Data time: 2.1959, Total iter time: 5.4897 +thomas 04/08 05:51:59 ===> Epoch[130](39080/301): Loss 0.3090 LR: 7.014e-02 Score 90.364 Data time: 2.3883, Total iter time: 6.0355 +thomas 04/08 05:55:44 ===> Epoch[130](39120/301): Loss 0.3143 LR: 7.011e-02 Score 89.820 Data time: 2.1421, Total iter time: 5.5529 +thomas 04/08 05:59:43 ===> Epoch[131](39160/301): Loss 0.3186 LR: 7.008e-02 Score 89.862 Data time: 2.3251, Total iter time: 5.8901 +thomas 04/08 06:03:57 ===> Epoch[131](39200/301): Loss 0.3454 LR: 7.005e-02 Score 89.297 Data time: 2.5069, Total iter time: 6.2749 +thomas 04/08 06:07:47 ===> Epoch[131](39240/301): Loss 0.3351 LR: 7.002e-02 Score 89.180 Data time: 2.2707, Total iter time: 5.6816 +thomas 04/08 06:11:59 ===> Epoch[131](39280/301): Loss 0.3189 LR: 6.999e-02 Score 90.156 Data time: 2.4746, Total iter time: 6.2098 +thomas 04/08 06:16:14 ===> Epoch[131](39320/301): Loss 0.2757 LR: 6.996e-02 Score 91.399 Data time: 2.4578, Total iter time: 6.2897 +thomas 04/08 06:19:47 ===> Epoch[131](39360/301): Loss 0.3325 LR: 6.993e-02 Score 89.377 Data time: 2.0747, Total iter time: 5.2663 +thomas 04/08 06:23:59 ===> Epoch[131](39400/301): Loss 0.2974 LR: 6.989e-02 Score 90.647 Data time: 2.4923, Total iter time: 6.1988 +thomas 04/08 06:28:19 ===> Epoch[132](39440/301): Loss 0.2989 LR: 6.986e-02 Score 90.795 Data time: 2.5350, Total iter time: 6.4279 +thomas 04/08 06:32:38 ===> Epoch[132](39480/301): Loss 0.3097 LR: 6.983e-02 Score 
89.918 Data time: 2.5623, Total iter time: 6.3980 +thomas 04/08 06:36:48 ===> Epoch[132](39520/301): Loss 0.3086 LR: 6.980e-02 Score 90.237 Data time: 2.4193, Total iter time: 6.1638 +thomas 04/08 06:40:36 ===> Epoch[132](39560/301): Loss 0.3068 LR: 6.977e-02 Score 90.299 Data time: 2.2193, Total iter time: 5.6415 +thomas 04/08 06:44:28 ===> Epoch[132](39600/301): Loss 0.3083 LR: 6.974e-02 Score 90.385 Data time: 2.2504, Total iter time: 5.7236 +thomas 04/08 06:48:08 ===> Epoch[132](39640/301): Loss 0.3169 LR: 6.971e-02 Score 89.668 Data time: 2.1742, Total iter time: 5.4249 +thomas 04/08 06:52:01 ===> Epoch[132](39680/301): Loss 0.3056 LR: 6.968e-02 Score 90.215 Data time: 2.3024, Total iter time: 5.7599 +thomas 04/08 06:56:02 ===> Epoch[132](39720/301): Loss 0.3218 LR: 6.964e-02 Score 89.846 Data time: 2.3664, Total iter time: 5.9416 +thomas 04/08 07:00:02 ===> Epoch[133](39760/301): Loss 0.2840 LR: 6.961e-02 Score 91.011 Data time: 2.3557, Total iter time: 5.9318 +thomas 04/08 07:04:02 ===> Epoch[133](39800/301): Loss 0.3116 LR: 6.958e-02 Score 90.114 Data time: 2.3053, Total iter time: 5.9112 +thomas 04/08 07:08:11 ===> Epoch[133](39840/301): Loss 0.3390 LR: 6.955e-02 Score 89.693 Data time: 2.4315, Total iter time: 6.1562 +thomas 04/08 07:12:14 ===> Epoch[133](39880/301): Loss 0.3177 LR: 6.952e-02 Score 90.089 Data time: 2.4000, Total iter time: 5.9860 +thomas 04/08 07:16:26 ===> Epoch[133](39920/301): Loss 0.3206 LR: 6.949e-02 Score 89.775 Data time: 2.4836, Total iter time: 6.2262 +thomas 04/08 07:20:19 ===> Epoch[133](39960/301): Loss 0.3241 LR: 6.946e-02 Score 89.680 Data time: 2.2562, Total iter time: 5.7312 +thomas 04/08 07:24:21 ===> Epoch[133](40000/301): Loss 0.3041 LR: 6.943e-02 Score 90.756 Data time: 2.3683, Total iter time: 5.9728 +thomas 04/08 07:24:22 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 07:24:22 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/08 07:26:28 101/312: Data time: 0.0025, Iter time: 1.1377 Loss 1.126 (AVG: 0.613) Score 69.673 (AVG: 83.064) mIOU 57.283 mAP 69.434 mAcc 68.524 +IOU: 74.666 95.722 56.302 69.690 88.900 74.974 69.656 40.559 21.365 72.837 16.547 56.365 56.809 42.888 27.807 45.532 73.953 51.783 77.255 32.056 +mAP: 76.854 96.036 61.988 73.812 89.173 78.861 67.867 55.981 52.447 65.219 43.310 57.673 61.097 66.648 48.793 87.516 93.214 84.140 77.809 50.232 +mAcc: 86.216 98.058 73.229 83.742 95.405 87.639 81.703 68.534 21.691 93.399 23.542 66.478 79.927 45.632 57.574 50.607 74.618 52.882 78.644 50.960 + +thomas 04/08 07:28:27 201/312: Data time: 0.0040, Iter time: 0.5267 Loss 0.298 (AVG: 0.618) Score 92.243 (AVG: 82.886) mIOU 56.926 mAP 70.153 mAcc 68.378 +IOU: 74.057 95.882 54.097 59.330 87.727 79.453 68.718 41.812 21.298 68.847 18.425 52.827 52.919 54.761 39.359 36.950 77.794 45.526 69.794 38.935 +mAP: 77.396 96.528 61.602 67.597 88.189 81.564 69.742 61.793 49.300 67.245 44.187 53.275 62.039 76.307 58.408 81.498 94.794 77.753 79.504 54.333 +mAcc: 85.546 98.199 74.206 79.863 93.571 90.664 76.967 71.407 21.626 90.347 24.028 59.807 83.683 56.943 65.272 43.005 78.564 46.506 72.052 55.314 + +thomas 04/08 07:30:24 301/312: Data time: 0.0027, Iter time: 0.5208 Loss 0.257 (AVG: 0.614) Score 93.616 (AVG: 83.071) mIOU 57.208 mAP 70.603 mAcc 68.935 +IOU: 74.706 96.030 55.764 60.594 86.922 77.039 67.738 43.087 21.720 71.020 17.597 53.349 52.639 51.238 38.924 36.964 80.968 47.004 71.051 39.803 +mAP: 78.385 96.636 61.152 71.588 89.118 81.625 70.192 61.743 48.681 65.926 41.852 52.946 61.375 75.050 63.506 82.663 95.406 79.694 79.712 54.809 +mAcc: 85.506 98.384 73.552 82.670 92.697 91.176 76.917 73.577 22.234 91.906 23.042 58.479 80.436 53.107 67.120 47.222 81.769 48.254 74.593 56.058 + +thomas 04/08 07:30:35 312/312: Data time: 0.0030, Iter time: 0.3571 Loss 0.793 (AVG: 0.615) Score 73.160 (AVG: 83.051) mIOU 57.339 mAP 70.798 mAcc 69.146 +IOU: 74.782 95.996 56.143 60.754 86.916 76.329 67.647 
43.072 21.245 71.107 17.391 53.888 52.931 53.277 39.952 36.996 79.984 46.868 71.568 39.923 +mAP: 78.472 96.689 60.403 72.213 89.271 80.881 69.982 62.360 48.835 67.038 41.496 53.394 61.246 76.097 64.257 83.279 95.502 79.589 80.220 54.741 +mAcc: 85.574 98.375 73.403 83.431 92.725 90.358 76.682 73.735 21.727 91.884 22.968 59.124 80.704 55.193 68.792 47.658 80.754 48.114 75.313 56.401 + +thomas 04/08 07:30:35 Finished test. Elapsed time: 372.5435 +thomas 04/08 07:30:35 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 07:34:32 ===> Epoch[134](40040/301): Loss 0.3510 LR: 6.939e-02 Score 89.543 Data time: 2.3584, Total iter time: 5.8459 +thomas 04/08 07:38:44 ===> Epoch[134](40080/301): Loss 0.3448 LR: 6.936e-02 Score 89.060 Data time: 2.4513, Total iter time: 6.2179 +thomas 04/08 07:42:43 ===> Epoch[134](40120/301): Loss 0.3203 LR: 6.933e-02 Score 90.165 Data time: 2.3554, Total iter time: 5.9027 +thomas 04/08 07:46:38 ===> Epoch[134](40160/301): Loss 0.3090 LR: 6.930e-02 Score 90.377 Data time: 2.3175, Total iter time: 5.8081 +thomas 04/08 07:50:28 ===> Epoch[134](40200/301): Loss 0.3185 LR: 6.927e-02 Score 90.085 Data time: 2.2429, Total iter time: 5.6851 +thomas 04/08 07:54:20 ===> Epoch[134](40240/301): Loss 0.3106 LR: 6.924e-02 Score 89.833 Data time: 2.2315, Total iter time: 5.7153 +thomas 04/08 07:58:30 ===> Epoch[134](40280/301): Loss 0.2959 LR: 6.921e-02 Score 90.708 Data time: 2.4274, Total iter time: 6.1738 +thomas 04/08 08:02:17 ===> Epoch[134](40320/301): Loss 0.2969 LR: 6.918e-02 Score 90.812 Data time: 2.2353, Total iter time: 5.6111 +thomas 04/08 08:06:13 ===> Epoch[135](40360/301): Loss 0.2897 LR: 6.914e-02 Score 90.954 Data time: 
2.3137, Total iter time: 5.8067 +thomas 04/08 08:10:18 ===> Epoch[135](40400/301): Loss 0.2999 LR: 6.911e-02 Score 90.672 Data time: 2.3979, Total iter time: 6.0503 +thomas 04/08 08:14:01 ===> Epoch[135](40440/301): Loss 0.3031 LR: 6.908e-02 Score 90.412 Data time: 2.1541, Total iter time: 5.4861 +thomas 04/08 08:18:11 ===> Epoch[135](40480/301): Loss 0.3299 LR: 6.905e-02 Score 89.346 Data time: 2.4067, Total iter time: 6.1691 +thomas 04/08 08:22:01 ===> Epoch[135](40520/301): Loss 0.2925 LR: 6.902e-02 Score 90.827 Data time: 2.2389, Total iter time: 5.6663 +thomas 04/08 08:25:56 ===> Epoch[135](40560/301): Loss 0.2990 LR: 6.899e-02 Score 90.576 Data time: 2.3175, Total iter time: 5.8126 +thomas 04/08 08:30:10 ===> Epoch[135](40600/301): Loss 0.2942 LR: 6.896e-02 Score 90.695 Data time: 2.5145, Total iter time: 6.2677 +thomas 04/08 08:33:59 ===> Epoch[136](40640/301): Loss 0.2882 LR: 6.893e-02 Score 90.821 Data time: 2.2189, Total iter time: 5.6430 +thomas 04/08 08:37:41 ===> Epoch[136](40680/301): Loss 0.2938 LR: 6.889e-02 Score 90.783 Data time: 2.1668, Total iter time: 5.4919 +thomas 04/08 08:41:38 ===> Epoch[136](40720/301): Loss 0.2781 LR: 6.886e-02 Score 91.300 Data time: 2.2847, Total iter time: 5.8494 +thomas 04/08 08:45:17 ===> Epoch[136](40760/301): Loss 0.3311 LR: 6.883e-02 Score 89.644 Data time: 2.1465, Total iter time: 5.4049 +thomas 04/08 08:49:26 ===> Epoch[136](40800/301): Loss 0.2830 LR: 6.880e-02 Score 91.401 Data time: 2.4376, Total iter time: 6.1439 +thomas 04/08 08:53:49 ===> Epoch[136](40840/301): Loss 0.3118 LR: 6.877e-02 Score 90.116 Data time: 2.5774, Total iter time: 6.5034 +thomas 04/08 08:58:01 ===> Epoch[136](40880/301): Loss 0.2991 LR: 6.874e-02 Score 90.308 Data time: 2.4616, Total iter time: 6.2146 +thomas 04/08 09:01:44 ===> Epoch[136](40920/301): Loss 0.3148 LR: 6.871e-02 Score 90.084 Data time: 2.1609, Total iter time: 5.5165 +thomas 04/08 09:05:52 ===> Epoch[137](40960/301): Loss 0.3248 LR: 6.868e-02 Score 89.874 Data time: 
2.4152, Total iter time: 6.1182 +thomas 04/08 09:09:55 ===> Epoch[137](41000/301): Loss 0.3199 LR: 6.864e-02 Score 90.089 Data time: 2.3605, Total iter time: 5.9979 +thomas 04/08 09:09:56 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 09:09:56 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 09:12:19 101/312: Data time: 0.2274, Iter time: 0.8869 Loss 0.212 (AVG: 0.599) Score 91.469 (AVG: 83.114) mIOU 58.466 mAP 69.081 mAcc 70.235 +IOU: 76.675 95.833 42.971 53.303 88.186 75.255 66.197 50.429 42.552 41.458 17.084 49.102 60.012 59.202 47.046 44.100 81.020 60.334 83.653 34.918 +mAP: 78.838 96.472 49.210 70.787 89.923 83.264 64.128 65.771 42.294 73.110 31.378 57.565 63.584 73.558 60.179 75.424 85.117 81.711 94.320 44.996 +mAcc: 86.765 98.603 52.152 85.348 93.204 87.826 77.983 80.607 49.494 45.091 24.923 86.716 76.080 60.255 56.822 51.002 82.575 65.900 90.865 52.497 + +thomas 04/08 09:14:22 201/312: Data time: 0.0027, Iter time: 0.3639 Loss 0.420 (AVG: 0.631) Score 84.186 (AVG: 82.482) mIOU 57.132 mAP 69.219 mAcc 69.215 +IOU: 75.306 95.687 40.362 49.823 87.829 75.051 64.290 47.387 35.375 53.166 14.335 45.416 57.986 59.275 46.104 32.822 82.532 61.893 79.156 38.855 +mAP: 78.088 96.724 47.660 69.558 88.766 80.526 65.676 67.628 42.926 69.031 39.650 64.072 62.915 70.593 63.997 75.709 88.122 84.824 79.896 48.006 +mAcc: 86.371 98.618 50.934 84.958 92.285 87.222 74.142 80.414 40.400 55.900 19.115 89.970 78.039 60.579 59.274 34.428 84.409 66.443 83.142 57.662 + +thomas 04/08 09:16:35 301/312: Data time: 0.0024, Iter time: 0.5826 Loss 1.594 (AVG: 0.634) Score 63.969 (AVG: 82.520) mIOU 57.760 mAP 70.600 mAcc 69.708 +IOU: 75.203 95.762 43.929 51.345 87.777 77.089 64.059 46.275 38.895 50.094 15.949 49.628 59.859 56.564 46.887 33.377 86.013 59.591 77.997 38.904 +mAP: 78.188 96.936 50.736 75.178 89.175 81.732 68.379 66.384 45.826 65.662 39.906 65.762 65.708 71.494 67.496 75.495 91.417 
82.344 84.052 50.141 +mAcc: 85.928 98.480 53.602 88.994 91.790 88.680 74.190 79.418 44.143 52.175 21.327 84.595 80.079 58.722 58.103 35.662 88.151 64.121 85.154 60.841 + +thomas 04/08 09:16:48 312/312: Data time: 0.0024, Iter time: 0.8868 Loss 0.419 (AVG: 0.633) Score 86.310 (AVG: 82.509) mIOU 57.614 mAP 70.408 mAcc 69.582 +IOU: 75.269 95.792 43.743 51.516 87.864 77.029 64.015 46.269 39.249 50.244 15.814 48.921 59.391 56.812 46.581 32.675 86.149 58.563 77.997 38.380 +mAP: 78.161 96.959 51.012 74.558 88.807 81.285 68.250 66.168 46.202 66.251 39.496 65.056 64.983 71.459 67.496 74.233 91.562 81.925 84.052 50.247 +mAcc: 86.022 98.488 53.218 88.886 91.874 88.731 73.960 79.312 44.476 52.327 21.087 83.945 80.170 59.426 58.103 34.862 88.278 63.052 85.154 60.277 + +thomas 04/08 09:16:48 Finished test. Elapsed time: 411.4510 +thomas 04/08 09:16:48 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 09:20:28 ===> Epoch[137](41040/301): Loss 0.3066 LR: 6.861e-02 Score 90.092 Data time: 2.1716, Total iter time: 5.4512 +thomas 04/08 09:24:20 ===> Epoch[137](41080/301): Loss 0.3342 LR: 6.858e-02 Score 89.583 Data time: 2.2349, Total iter time: 5.7263 +thomas 04/08 09:28:26 ===> Epoch[137](41120/301): Loss 0.3401 LR: 6.855e-02 Score 89.394 Data time: 2.3670, Total iter time: 6.0647 +thomas 04/08 09:32:09 ===> Epoch[137](41160/301): Loss 0.2998 LR: 6.852e-02 Score 90.595 Data time: 2.1740, Total iter time: 5.5287 +thomas 04/08 09:36:06 ===> Epoch[137](41200/301): Loss 0.2873 LR: 6.849e-02 Score 90.679 Data time: 2.3398, Total iter time: 5.8601 +thomas 04/08 09:40:10 ===> Epoch[138](41240/301): Loss 0.3179 LR: 6.846e-02 Score 90.209 Data time: 2.3989, Total iter time: 6.0096 +thomas 04/08 09:44:07 ===> Epoch[138](41280/301): Loss 0.3724 LR: 6.843e-02 Score 88.990 Data time: 2.2972, Total iter time: 5.8554 +thomas 04/08 09:48:08 ===> Epoch[138](41320/301): Loss 0.3430 LR: 6.839e-02 Score 88.985 Data time: 2.3622, Total iter time: 5.9473 +thomas 04/08 09:51:58 ===> Epoch[138](41360/301): Loss 0.2928 LR: 6.836e-02 Score 90.651 Data time: 2.2234, Total iter time: 5.6978 +thomas 04/08 09:55:54 ===> Epoch[138](41400/301): Loss 0.3054 LR: 6.833e-02 Score 90.499 Data time: 2.2543, Total iter time: 5.8156 +thomas 04/08 10:00:03 ===> Epoch[138](41440/301): Loss 0.2738 LR: 6.830e-02 Score 91.457 Data time: 2.4379, Total iter time: 6.1552 +thomas 04/08 10:04:33 ===> Epoch[138](41480/301): Loss 0.2886 LR: 6.827e-02 Score 90.640 Data time: 2.7314, Total iter time: 6.6667 +thomas 04/08 10:08:23 ===> Epoch[138](41520/301): Loss 0.2787 LR: 6.824e-02 Score 91.226 Data time: 2.2506, Total iter time: 5.6955 +thomas 04/08 10:12:15 ===> Epoch[139](41560/301): Loss 0.3375 LR: 6.821e-02 Score 89.281 Data time: 2.2510, Total iter time: 5.7247 +thomas 04/08 10:16:00 ===> Epoch[139](41600/301): Loss 
0.3301 LR: 6.817e-02 Score 89.439 Data time: 2.1719, Total iter time: 5.5432 +thomas 04/08 10:19:44 ===> Epoch[139](41640/301): Loss 0.2915 LR: 6.814e-02 Score 90.637 Data time: 2.1781, Total iter time: 5.5375 +thomas 04/08 10:23:49 ===> Epoch[139](41680/301): Loss 0.3057 LR: 6.811e-02 Score 90.430 Data time: 2.3804, Total iter time: 6.0608 +thomas 04/08 10:27:59 ===> Epoch[139](41720/301): Loss 0.2962 LR: 6.808e-02 Score 90.753 Data time: 2.4726, Total iter time: 6.1659 +thomas 04/08 10:32:21 ===> Epoch[139](41760/301): Loss 0.3138 LR: 6.805e-02 Score 90.252 Data time: 2.5591, Total iter time: 6.4374 +thomas 04/08 10:36:18 ===> Epoch[139](41800/301): Loss 0.2746 LR: 6.802e-02 Score 91.190 Data time: 2.3245, Total iter time: 5.8592 +thomas 04/08 10:40:22 ===> Epoch[140](41840/301): Loss 0.2962 LR: 6.799e-02 Score 90.782 Data time: 2.3239, Total iter time: 6.0052 +thomas 04/08 10:43:58 ===> Epoch[140](41880/301): Loss 0.2899 LR: 6.796e-02 Score 90.699 Data time: 2.0936, Total iter time: 5.3536 +thomas 04/08 10:47:57 ===> Epoch[140](41920/301): Loss 0.3135 LR: 6.792e-02 Score 90.171 Data time: 2.3191, Total iter time: 5.8872 +thomas 04/08 10:52:06 ===> Epoch[140](41960/301): Loss 0.3382 LR: 6.789e-02 Score 89.693 Data time: 2.4839, Total iter time: 6.1540 +thomas 04/08 10:56:02 ===> Epoch[140](42000/301): Loss 0.3376 LR: 6.786e-02 Score 89.278 Data time: 2.3109, Total iter time: 5.8260 +thomas 04/08 10:56:04 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 10:56:04 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 10:58:11 101/312: Data time: 0.0035, Iter time: 0.6132 Loss 0.688 (AVG: 0.636) Score 80.712 (AVG: 81.702) mIOU 55.640 mAP 68.816 mAcc 68.492 +IOU: 70.973 95.872 51.838 38.790 88.449 64.733 63.061 47.753 47.938 71.376 14.275 57.073 58.068 40.445 10.337 39.760 84.523 44.907 81.493 41.146 +mAP: 77.352 98.050 57.507 46.516 86.204 72.041 64.447 61.952 54.239 68.829 
39.264 60.374 71.763 87.942 27.997 90.433 96.203 78.931 78.572 57.706 +mAcc: 79.445 98.150 72.079 55.812 96.128 94.782 70.126 68.971 54.746 88.992 18.011 68.954 76.547 95.232 14.382 41.070 84.806 45.841 82.498 63.263 + +thomas 04/08 11:00:04 201/312: Data time: 0.0030, Iter time: 0.3417 Loss 0.439 (AVG: 0.608) Score 85.678 (AVG: 82.339) mIOU 57.880 mAP 69.455 mAcc 70.507 +IOU: 72.107 96.035 54.688 53.377 87.275 69.803 63.451 46.651 41.868 74.474 16.329 59.134 55.039 50.260 30.098 39.048 84.785 47.826 78.901 36.450 +mAP: 77.102 97.125 59.590 55.156 88.806 80.052 66.086 62.706 49.139 67.554 37.486 57.241 69.348 81.056 46.505 89.332 93.785 80.165 77.874 52.993 +mAcc: 79.839 98.096 72.305 66.986 95.871 92.255 71.602 70.515 48.302 90.188 21.168 72.624 75.280 86.206 40.772 46.977 85.057 48.898 84.427 62.770 + +thomas 04/08 11:02:02 301/312: Data time: 0.0026, Iter time: 0.8160 Loss 0.202 (AVG: 0.606) Score 94.377 (AVG: 82.634) mIOU 58.206 mAP 70.305 mAcc 70.695 +IOU: 72.320 96.153 56.583 56.494 87.352 71.065 66.520 45.515 43.077 72.942 15.838 61.323 54.833 49.386 33.432 38.186 83.165 46.297 78.370 35.278 +mAP: 76.967 97.328 60.922 57.628 89.359 81.185 69.113 62.980 50.228 67.728 42.421 58.273 70.570 77.797 51.156 87.457 90.832 80.212 80.583 53.362 +mAcc: 79.946 98.232 74.509 69.938 94.764 93.257 75.746 68.557 49.356 91.329 19.400 74.875 76.600 84.777 43.493 42.887 83.772 47.079 82.371 63.008 + +thomas 04/08 11:02:12 312/312: Data time: 0.0024, Iter time: 0.1928 Loss 0.229 (AVG: 0.609) Score 93.846 (AVG: 82.592) mIOU 58.268 mAP 70.340 mAcc 70.792 +IOU: 72.361 96.146 56.609 57.096 87.303 70.645 66.761 45.510 42.678 72.696 16.330 60.906 54.788 49.045 34.792 37.757 83.754 46.454 78.826 34.908 +mAP: 77.156 97.358 60.552 58.266 89.274 81.153 69.588 63.469 49.482 68.080 42.197 57.885 70.416 77.360 51.564 87.457 91.189 80.381 80.972 53.009 +mAcc: 79.934 98.211 74.378 70.571 94.685 93.372 75.923 68.900 48.801 90.958 19.853 74.426 76.595 84.786 44.538 42.887 84.369 47.232 82.583 
62.845 + +thomas 04/08 11:02:12 Finished test. Elapsed time: 368.5821 +thomas 04/08 11:02:12 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 11:06:18 ===> Epoch[140](42040/301): Loss 0.2763 LR: 6.783e-02 Score 91.056 Data time: 2.3670, Total iter time: 6.0775 +thomas 04/08 11:10:16 ===> Epoch[140](42080/301): Loss 0.2641 LR: 6.780e-02 Score 91.645 Data time: 2.3210, Total iter time: 5.8728 +thomas 04/08 11:14:21 ===> Epoch[140](42120/301): Loss 0.3146 LR: 6.777e-02 Score 90.399 Data time: 2.4135, Total iter time: 6.0223 +thomas 04/08 11:18:23 ===> Epoch[141](42160/301): Loss 0.3248 LR: 6.774e-02 Score 89.900 Data time: 2.4086, Total iter time: 5.9707 +thomas 04/08 11:22:17 ===> Epoch[141](42200/301): Loss 0.3168 LR: 6.770e-02 Score 90.084 Data time: 2.2438, Total iter time: 5.7935 +thomas 04/08 11:26:01 ===> Epoch[141](42240/301): Loss 0.3254 LR: 6.767e-02 Score 89.967 Data time: 2.1408, Total iter time: 5.5311 +thomas 04/08 11:29:50 ===> Epoch[141](42280/301): Loss 0.2983 LR: 6.764e-02 Score 90.491 Data time: 2.2117, Total iter time: 5.6302 +thomas 04/08 11:33:48 ===> Epoch[141](42320/301): Loss 0.2864 LR: 6.761e-02 Score 90.958 Data time: 2.2881, Total iter time: 5.8726 +thomas 04/08 11:38:00 ===> Epoch[141](42360/301): Loss 0.2741 LR: 6.758e-02 Score 91.299 Data time: 2.5030, Total iter time: 6.2417 +thomas 04/08 11:42:23 ===> Epoch[141](42400/301): Loss 0.2648 LR: 6.755e-02 Score 91.365 Data time: 2.6422, Total iter time: 6.4998 +thomas 04/08 11:46:15 ===> Epoch[141](42440/301): Loss 0.2957 LR: 6.752e-02 Score 90.687 Data time: 2.2116, Total iter time: 5.7162 +thomas 04/08 11:50:28 ===> Epoch[142](42480/301): Loss 0.3495 LR: 6.749e-02 Score 
89.327 Data time: 2.4256, Total iter time: 6.2532 +thomas 04/08 11:54:25 ===> Epoch[142](42520/301): Loss 0.2969 LR: 6.745e-02 Score 90.983 Data time: 2.2921, Total iter time: 5.8505 +thomas 04/08 11:58:14 ===> Epoch[142](42560/301): Loss 0.3209 LR: 6.742e-02 Score 89.772 Data time: 2.2147, Total iter time: 5.6465 +thomas 04/08 12:02:07 ===> Epoch[142](42600/301): Loss 0.3136 LR: 6.739e-02 Score 90.318 Data time: 2.3474, Total iter time: 5.7598 +thomas 04/08 12:06:02 ===> Epoch[142](42640/301): Loss 0.3049 LR: 6.736e-02 Score 90.343 Data time: 2.2993, Total iter time: 5.7807 +thomas 04/08 12:09:56 ===> Epoch[142](42680/301): Loss 0.2746 LR: 6.733e-02 Score 91.291 Data time: 2.2565, Total iter time: 5.7664 +thomas 04/08 12:13:51 ===> Epoch[142](42720/301): Loss 0.2934 LR: 6.730e-02 Score 90.723 Data time: 2.2563, Total iter time: 5.7967 +thomas 04/08 12:17:44 ===> Epoch[143](42760/301): Loss 0.3094 LR: 6.727e-02 Score 90.413 Data time: 2.2593, Total iter time: 5.7605 +thomas 04/08 12:21:35 ===> Epoch[143](42800/301): Loss 0.3016 LR: 6.723e-02 Score 90.288 Data time: 2.2222, Total iter time: 5.6934 +thomas 04/08 12:25:50 ===> Epoch[143](42840/301): Loss 0.3128 LR: 6.720e-02 Score 90.331 Data time: 2.5037, Total iter time: 6.3132 +thomas 04/08 12:29:47 ===> Epoch[143](42880/301): Loss 0.2853 LR: 6.717e-02 Score 90.933 Data time: 2.3241, Total iter time: 5.8308 +thomas 04/08 12:33:50 ===> Epoch[143](42920/301): Loss 0.3130 LR: 6.714e-02 Score 89.907 Data time: 2.3199, Total iter time: 5.9941 +thomas 04/08 12:37:36 ===> Epoch[143](42960/301): Loss 0.2874 LR: 6.711e-02 Score 91.042 Data time: 2.1611, Total iter time: 5.5923 +thomas 04/08 12:41:31 ===> Epoch[143](43000/301): Loss 0.3044 LR: 6.708e-02 Score 90.646 Data time: 2.2378, Total iter time: 5.7946 +thomas 04/08 12:41:33 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 12:41:33 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/08 12:43:33 101/312: Data time: 0.0024, Iter time: 0.9089 Loss 0.296 (AVG: 0.631) Score 92.896 (AVG: 82.405) mIOU 53.372 mAP 68.478 mAcc 65.248 +IOU: 75.000 95.858 46.290 50.238 87.100 74.125 70.749 40.648 42.581 54.211 5.650 60.242 41.613 59.732 31.975 24.491 65.221 41.897 57.469 42.347 +mAP: 78.523 97.802 53.716 59.544 86.921 76.408 73.379 58.377 50.889 64.369 38.536 61.247 59.041 79.965 62.672 65.145 82.049 76.301 86.581 58.097 +mAcc: 87.416 98.730 64.042 56.572 90.864 88.617 77.534 59.096 47.533 59.892 5.898 69.633 73.465 72.969 75.327 30.615 72.307 42.808 58.084 73.554 + +thomas 04/08 12:45:26 201/312: Data time: 0.0029, Iter time: 0.3645 Loss 0.173 (AVG: 0.605) Score 96.366 (AVG: 83.035) mIOU 55.168 mAP 68.130 mAcc 66.387 +IOU: 75.936 95.640 45.770 61.026 87.756 74.559 68.638 41.209 37.176 65.160 9.813 58.428 50.977 62.487 30.478 25.938 69.707 41.508 59.764 41.379 +mAP: 77.820 97.732 50.900 65.412 88.793 80.645 69.707 58.333 49.759 66.409 36.671 56.985 62.632 76.783 60.153 72.523 84.750 72.941 79.496 54.157 +mAcc: 88.036 98.810 64.933 70.118 92.430 88.397 76.746 60.027 40.785 74.346 10.707 65.387 79.505 76.362 73.408 29.245 73.662 42.175 60.610 62.050 + +thomas 04/08 12:47:28 301/312: Data time: 0.0026, Iter time: 1.4618 Loss 0.749 (AVG: 0.598) Score 81.080 (AVG: 83.339) mIOU 55.231 mAP 68.191 mAcc 66.287 +IOU: 76.695 95.807 46.632 60.876 88.962 76.052 66.664 42.864 35.639 65.700 12.549 56.803 48.454 63.661 31.161 22.907 73.536 39.725 56.667 43.262 +mAP: 78.156 97.473 51.415 64.667 89.786 80.678 70.983 60.257 47.685 65.819 39.213 54.823 62.850 77.240 60.454 72.567 87.759 72.380 77.402 52.209 +mAcc: 88.640 98.786 65.979 70.654 93.541 89.448 75.785 62.096 39.031 74.967 14.044 62.825 77.382 76.267 74.692 24.514 77.050 40.448 57.224 62.365 + +thomas 04/08 12:47:45 312/312: Data time: 0.0026, Iter time: 0.5585 Loss 0.254 (AVG: 0.592) Score 93.662 (AVG: 83.538) mIOU 55.459 mAP 68.122 mAcc 66.481 +IOU: 77.023 95.879 46.500 61.057 88.909 76.096 66.679 
42.653 36.072 65.394 12.655 56.803 50.555 62.880 30.621 23.691 73.896 40.750 57.866 43.198 +mAP: 78.318 97.520 51.694 64.079 89.637 79.611 71.193 60.255 47.415 66.317 38.735 54.823 62.492 77.043 60.220 72.697 88.096 72.988 77.417 51.885 +mAcc: 88.815 98.782 65.954 70.961 93.559 89.541 75.883 61.658 39.557 74.733 14.323 62.825 78.382 75.257 74.322 25.318 77.304 41.480 58.421 62.547 + +thomas 04/08 12:47:45 Finished test. Elapsed time: 371.7448 +thomas 04/08 12:47:45 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 12:51:49 ===> Epoch[143](43040/301): Loss 0.2820 LR: 6.705e-02 Score 90.977 Data time: 2.4428, Total iter time: 6.0178 +thomas 04/08 12:56:00 ===> Epoch[144](43080/301): Loss 0.2882 LR: 6.702e-02 Score 90.968 Data time: 2.4771, Total iter time: 6.2071 +thomas 04/08 12:59:54 ===> Epoch[144](43120/301): Loss 0.3033 LR: 6.698e-02 Score 90.312 Data time: 2.2389, Total iter time: 5.7843 +thomas 04/08 13:03:45 ===> Epoch[144](43160/301): Loss 0.3206 LR: 6.695e-02 Score 90.084 Data time: 2.2081, Total iter time: 5.7063 +thomas 04/08 13:07:40 ===> Epoch[144](43200/301): Loss 0.3128 LR: 6.692e-02 Score 89.873 Data time: 2.2721, Total iter time: 5.8003 +thomas 04/08 13:11:34 ===> Epoch[144](43240/301): Loss 0.3435 LR: 6.689e-02 Score 89.213 Data time: 2.2559, Total iter time: 5.7795 +thomas 04/08 13:15:58 ===> Epoch[144](43280/301): Loss 0.3383 LR: 6.686e-02 Score 89.405 Data time: 2.6173, Total iter time: 6.5016 +thomas 04/08 13:20:12 ===> Epoch[144](43320/301): Loss 0.2980 LR: 6.683e-02 Score 90.255 Data time: 2.5015, Total iter time: 6.2676 +thomas 04/08 13:23:57 ===> Epoch[145](43360/301): Loss 0.3092 LR: 6.680e-02 Score 89.944 Data time: 
2.1582, Total iter time: 5.5555 +thomas 04/08 13:27:55 ===> Epoch[145](43400/301): Loss 0.3219 LR: 6.676e-02 Score 89.964 Data time: 2.2897, Total iter time: 5.8951 +thomas 04/08 13:31:44 ===> Epoch[145](43440/301): Loss 0.3083 LR: 6.673e-02 Score 90.469 Data time: 2.1767, Total iter time: 5.6490 +thomas 04/08 13:35:28 ===> Epoch[145](43480/301): Loss 0.3095 LR: 6.670e-02 Score 90.462 Data time: 2.1631, Total iter time: 5.5166 +thomas 04/08 13:39:41 ===> Epoch[145](43520/301): Loss 0.3322 LR: 6.667e-02 Score 89.736 Data time: 2.5103, Total iter time: 6.2650 +thomas 04/08 13:43:48 ===> Epoch[145](43560/301): Loss 0.3173 LR: 6.664e-02 Score 89.958 Data time: 2.4413, Total iter time: 6.1135 +thomas 04/08 13:47:36 ===> Epoch[145](43600/301): Loss 0.3180 LR: 6.661e-02 Score 90.042 Data time: 2.1704, Total iter time: 5.6171 +thomas 04/08 13:51:39 ===> Epoch[145](43640/301): Loss 0.2826 LR: 6.658e-02 Score 91.006 Data time: 2.3529, Total iter time: 5.9990 +thomas 04/08 13:55:29 ===> Epoch[146](43680/301): Loss 0.2847 LR: 6.654e-02 Score 91.077 Data time: 2.1954, Total iter time: 5.6890 +thomas 04/08 13:59:41 ===> Epoch[146](43720/301): Loss 0.3075 LR: 6.651e-02 Score 90.330 Data time: 2.4490, Total iter time: 6.2228 +thomas 04/08 14:04:06 ===> Epoch[146](43760/301): Loss 0.3486 LR: 6.648e-02 Score 89.253 Data time: 2.6290, Total iter time: 6.5345 +thomas 04/08 14:08:22 ===> Epoch[146](43800/301): Loss 0.3070 LR: 6.645e-02 Score 90.129 Data time: 2.5085, Total iter time: 6.3279 +thomas 04/08 14:12:33 ===> Epoch[146](43840/301): Loss 0.3069 LR: 6.642e-02 Score 90.251 Data time: 2.3967, Total iter time: 6.1916 +thomas 04/08 14:16:43 ===> Epoch[146](43880/301): Loss 0.3026 LR: 6.639e-02 Score 90.622 Data time: 2.3914, Total iter time: 6.1817 +thomas 04/08 14:20:41 ===> Epoch[146](43920/301): Loss 0.2748 LR: 6.636e-02 Score 91.129 Data time: 2.2689, Total iter time: 5.8627 +thomas 04/08 14:24:52 ===> Epoch[147](43960/301): Loss 0.2785 LR: 6.632e-02 Score 91.345 Data time: 
2.4611, Total iter time: 6.2084 +thomas 04/08 14:29:12 ===> Epoch[147](44000/301): Loss 0.2887 LR: 6.629e-02 Score 90.959 Data time: 2.5986, Total iter time: 6.4106 +thomas 04/08 14:29:13 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 14:29:13 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 14:31:34 101/312: Data time: 0.0028, Iter time: 0.3896 Loss 0.294 (AVG: 0.639) Score 91.600 (AVG: 82.139) mIOU 56.237 mAP 71.578 mAcc 68.021 +IOU: 72.891 95.355 48.095 73.884 85.054 69.734 63.248 41.410 29.333 69.329 19.936 48.732 54.376 57.275 52.942 13.409 79.738 37.874 68.166 43.954 +mAP: 76.586 97.493 59.270 76.006 88.242 80.913 66.628 63.779 43.805 74.514 39.966 56.200 69.083 72.109 78.835 82.126 92.244 81.813 82.774 49.166 +mAcc: 81.831 96.994 65.742 83.924 87.902 92.463 77.262 87.696 30.953 97.317 22.044 71.213 62.164 61.089 73.427 13.454 80.584 38.233 68.825 67.303 + +thomas 04/08 14:33:34 201/312: Data time: 0.0025, Iter time: 0.3250 Loss 0.555 (AVG: 0.629) Score 84.546 (AVG: 82.616) mIOU 56.302 mAP 70.927 mAcc 68.292 +IOU: 74.059 95.140 51.407 74.322 84.900 70.882 63.991 40.393 27.866 64.056 21.447 51.861 54.664 58.431 49.946 12.355 77.923 41.639 70.647 40.104 +mAP: 78.234 97.752 58.884 74.372 88.750 79.383 69.923 63.001 45.333 73.447 45.831 56.860 71.597 72.847 67.984 80.390 91.593 77.757 76.935 47.674 +mAcc: 82.582 96.818 69.723 81.701 88.123 93.288 74.661 82.880 29.932 97.016 24.035 76.534 65.617 62.844 70.176 12.382 78.504 42.246 71.259 65.521 + +thomas 04/08 14:35:33 301/312: Data time: 0.0030, Iter time: 0.8630 Loss 0.215 (AVG: 0.655) Score 93.346 (AVG: 81.983) mIOU 56.073 mAP 70.852 mAcc 67.890 +IOU: 73.116 95.051 52.980 73.937 86.384 69.197 64.404 38.056 30.435 61.388 16.714 53.940 53.630 54.934 48.950 11.501 80.130 45.136 71.536 40.042 +mAP: 78.114 97.656 57.726 75.954 89.424 79.434 68.899 58.483 49.070 71.120 41.629 57.640 69.265 71.036 66.732 81.058 92.454 
81.396 80.683 49.267 +mAcc: 82.406 96.940 70.338 81.640 89.558 93.206 75.053 77.305 32.498 95.105 18.038 75.270 65.746 61.321 70.460 11.646 80.674 45.850 72.106 62.640 + +thomas 04/08 14:35:47 312/312: Data time: 0.0024, Iter time: 0.9331 Loss 0.720 (AVG: 0.651) Score 82.058 (AVG: 82.079) mIOU 56.242 mAP 70.801 mAcc 67.929 +IOU: 73.144 95.019 52.545 73.504 86.442 70.186 64.844 38.009 31.491 62.063 16.515 53.599 54.058 54.566 49.414 11.501 80.036 45.884 71.536 40.492 +mAP: 78.123 97.698 57.531 74.767 89.429 79.751 68.728 58.368 49.528 70.672 41.395 57.094 70.001 70.654 67.116 81.058 92.466 81.541 80.683 49.427 +mAcc: 82.414 96.899 70.252 81.122 89.627 93.384 75.203 77.224 33.557 95.254 17.976 73.836 66.477 60.873 70.373 11.646 80.575 46.595 72.106 63.188 + +thomas 04/08 14:35:47 Finished test. Elapsed time: 393.0947 +thomas 04/08 14:35:47 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 14:40:06 ===> Epoch[147](44040/301): Loss 0.2979 LR: 6.626e-02 Score 91.091 Data time: 2.4627, Total iter time: 6.4032 +thomas 04/08 14:44:15 ===> Epoch[147](44080/301): Loss 0.2898 LR: 6.623e-02 Score 91.016 Data time: 2.3722, Total iter time: 6.1422 +thomas 04/08 14:48:03 ===> Epoch[147](44120/301): Loss 0.2967 LR: 6.620e-02 Score 90.220 Data time: 2.2208, Total iter time: 5.6156 +thomas 04/08 14:52:03 ===> Epoch[147](44160/301): Loss 0.2928 LR: 6.617e-02 Score 90.543 Data time: 2.3966, Total iter time: 5.9429 +thomas 04/08 14:56:04 ===> Epoch[147](44200/301): Loss 0.2624 LR: 6.614e-02 Score 91.632 Data time: 2.3898, Total iter time: 5.9525 +thomas 04/08 15:00:06 ===> Epoch[147](44240/301): Loss 0.2984 LR: 6.611e-02 Score 90.321 Data time: 2.3084, Total iter time: 5.9509 +thomas 04/08 15:04:02 ===> Epoch[148](44280/301): Loss 0.3225 LR: 6.607e-02 Score 89.657 Data time: 2.2597, Total iter time: 5.8239 +thomas 04/08 15:08:05 ===> Epoch[148](44320/301): Loss 0.2766 LR: 6.604e-02 Score 91.083 Data time: 2.3440, Total iter time: 5.9982 +thomas 04/08 15:12:13 ===> Epoch[148](44360/301): Loss 0.2883 LR: 6.601e-02 Score 90.855 Data time: 2.4414, Total iter time: 6.1388 +thomas 04/08 15:16:11 ===> Epoch[148](44400/301): Loss 0.3090 LR: 6.598e-02 Score 90.432 Data time: 2.3771, Total iter time: 5.8730 +thomas 04/08 15:19:59 ===> Epoch[148](44440/301): Loss 0.3040 LR: 6.595e-02 Score 90.307 Data time: 2.2133, Total iter time: 5.6127 +thomas 04/08 15:23:47 ===> Epoch[148](44480/301): Loss 0.3069 LR: 6.592e-02 Score 90.479 Data time: 2.2017, Total iter time: 5.6281 +thomas 04/08 15:27:39 ===> Epoch[148](44520/301): Loss 0.3062 LR: 6.589e-02 Score 90.302 Data time: 2.1972, Total iter time: 5.7058 +thomas 04/08 15:31:54 ===> Epoch[149](44560/301): Loss 0.3286 LR: 6.585e-02 Score 89.763 Data time: 2.4644, Total iter time: 6.3032 +thomas 04/08 15:36:08 ===> Epoch[149](44600/301): Loss 
0.3359 LR: 6.582e-02 Score 89.305 Data time: 2.5125, Total iter time: 6.2900 +thomas 04/08 15:40:01 ===> Epoch[149](44640/301): Loss 0.3018 LR: 6.579e-02 Score 90.276 Data time: 2.2822, Total iter time: 5.7393 +thomas 04/08 15:43:52 ===> Epoch[149](44680/301): Loss 0.3184 LR: 6.576e-02 Score 89.914 Data time: 2.2409, Total iter time: 5.7070 +thomas 04/08 15:47:43 ===> Epoch[149](44720/301): Loss 0.3127 LR: 6.573e-02 Score 90.003 Data time: 2.2211, Total iter time: 5.7058 +thomas 04/08 15:51:47 ===> Epoch[149](44760/301): Loss 0.2906 LR: 6.570e-02 Score 90.700 Data time: 2.3234, Total iter time: 6.0167 +thomas 04/08 15:55:41 ===> Epoch[149](44800/301): Loss 0.3200 LR: 6.567e-02 Score 90.193 Data time: 2.2710, Total iter time: 5.7739 +thomas 04/08 15:59:47 ===> Epoch[149](44840/301): Loss 0.2794 LR: 6.563e-02 Score 91.083 Data time: 2.4023, Total iter time: 6.0777 +thomas 04/08 16:04:06 ===> Epoch[150](44880/301): Loss 0.2774 LR: 6.560e-02 Score 91.207 Data time: 2.5536, Total iter time: 6.3966 +thomas 04/08 16:07:59 ===> Epoch[150](44920/301): Loss 0.3009 LR: 6.557e-02 Score 90.151 Data time: 2.2744, Total iter time: 5.7560 +thomas 04/08 16:11:44 ===> Epoch[150](44960/301): Loss 0.2900 LR: 6.554e-02 Score 91.035 Data time: 2.1685, Total iter time: 5.5579 +thomas 04/08 16:15:37 ===> Epoch[150](45000/301): Loss 0.2915 LR: 6.551e-02 Score 90.500 Data time: 2.2342, Total iter time: 5.7481 +thomas 04/08 16:15:38 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 16:15:38 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 16:17:37 101/312: Data time: 0.0024, Iter time: 0.5619 Loss 1.591 (AVG: 0.616) Score 60.336 (AVG: 83.574) mIOU 56.495 mAP 68.097 mAcc 66.342 +IOU: 77.898 95.429 49.267 68.929 86.601 69.262 66.171 23.586 38.975 68.109 4.540 60.394 66.305 53.714 36.143 33.599 76.252 50.863 69.667 34.203 +mAP: 79.474 97.144 56.422 69.820 88.415 70.875 70.886 56.386 51.679 61.778 
24.993 64.586 64.081 79.407 48.220 89.073 86.857 78.804 76.630 46.410 +mAcc: 93.881 98.421 69.615 81.217 91.814 77.606 77.417 25.152 44.951 78.651 4.783 70.753 80.421 83.090 57.062 34.952 77.085 54.354 69.825 55.794 + +thomas 04/08 16:19:44 201/312: Data time: 0.0024, Iter time: 0.4345 Loss 0.141 (AVG: 0.589) Score 95.495 (AVG: 83.755) mIOU 56.875 mAP 68.527 mAcc 66.784 +IOU: 77.558 95.991 47.796 66.853 87.976 70.755 65.314 28.218 38.237 64.383 8.342 54.516 59.139 60.702 36.147 36.180 80.930 56.164 67.566 34.729 +mAP: 78.470 97.481 52.317 71.170 89.633 75.334 69.628 56.620 50.516 61.840 28.967 63.689 64.597 82.070 50.447 86.865 91.565 78.183 76.459 44.693 +mAcc: 93.329 98.633 64.849 77.412 93.040 80.702 75.070 30.328 44.027 78.617 9.058 66.715 78.491 85.881 54.520 38.325 81.390 61.694 67.782 55.818 + +thomas 04/08 16:21:48 301/312: Data time: 0.0023, Iter time: 0.4364 Loss 0.459 (AVG: 0.603) Score 85.918 (AVG: 83.330) mIOU 57.076 mAP 68.168 mAcc 66.837 +IOU: 76.835 96.206 45.675 66.654 87.666 71.062 65.677 27.435 39.593 69.429 8.992 54.050 54.791 59.740 38.105 37.477 82.509 55.898 70.405 33.317 +mAP: 77.532 97.698 49.446 69.950 89.226 76.332 70.205 54.506 50.533 61.433 28.772 61.943 63.803 76.842 51.903 87.746 92.804 78.607 79.359 44.713 +mAcc: 93.002 98.589 62.241 77.697 93.195 81.124 75.371 29.531 45.569 82.122 9.695 66.456 76.390 83.091 54.990 39.425 83.237 61.234 70.619 53.169 + +thomas 04/08 16:21:59 312/312: Data time: 0.0026, Iter time: 0.3467 Loss 0.474 (AVG: 0.604) Score 83.599 (AVG: 83.232) mIOU 57.103 mAP 68.150 mAcc 66.924 +IOU: 76.712 96.171 46.312 65.736 87.638 71.193 66.126 27.036 39.783 69.244 8.805 55.093 54.420 59.508 38.532 37.477 82.509 56.137 70.405 33.222 +mAP: 77.629 97.591 49.735 68.960 89.123 76.447 70.663 54.150 51.138 61.716 28.132 61.014 63.967 76.638 52.599 87.746 92.804 78.698 79.359 44.893 +mAcc: 92.850 98.561 63.514 76.645 93.207 81.576 75.844 29.184 45.741 82.202 9.488 67.149 76.501 82.714 55.818 39.425 83.237 61.757 70.619 52.457 + 
+thomas 04/08 16:21:59 Finished test. Elapsed time: 380.9215 +thomas 04/08 16:21:59 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 16:25:55 ===> Epoch[150](45040/301): Loss 0.3135 LR: 6.548e-02 Score 90.194 Data time: 2.3227, Total iter time: 5.8310 +thomas 04/08 16:30:01 ===> Epoch[150](45080/301): Loss 0.2892 LR: 6.545e-02 Score 90.843 Data time: 2.4019, Total iter time: 6.0784 +thomas 04/08 16:33:57 ===> Epoch[150](45120/301): Loss 0.2766 LR: 6.541e-02 Score 91.263 Data time: 2.2909, Total iter time: 5.8088 +thomas 04/08 16:37:50 ===> Epoch[151](45160/301): Loss 0.3124 LR: 6.538e-02 Score 90.009 Data time: 2.2471, Total iter time: 5.7629 +thomas 04/08 16:41:49 ===> Epoch[151](45200/301): Loss 0.3065 LR: 6.535e-02 Score 90.463 Data time: 2.2986, Total iter time: 5.8833 +thomas 04/08 16:45:48 ===> Epoch[151](45240/301): Loss 0.3021 LR: 6.532e-02 Score 90.436 Data time: 2.3636, Total iter time: 5.9147 +thomas 04/08 16:50:13 ===> Epoch[151](45280/301): Loss 0.2877 LR: 6.529e-02 Score 90.884 Data time: 2.6168, Total iter time: 6.5261 +thomas 04/08 16:54:20 ===> Epoch[151](45320/301): Loss 0.3012 LR: 6.526e-02 Score 90.603 Data time: 2.4039, Total iter time: 6.1072 +thomas 04/08 16:58:04 ===> Epoch[151](45360/301): Loss 0.3230 LR: 6.522e-02 Score 89.917 Data time: 2.1908, Total iter time: 5.5203 +thomas 04/08 17:02:00 ===> Epoch[151](45400/301): Loss 0.2968 LR: 6.519e-02 Score 90.657 Data time: 2.2560, Total iter time: 5.8207 +thomas 04/08 17:06:00 ===> Epoch[151](45440/301): Loss 0.3090 LR: 6.516e-02 Score 89.983 Data time: 2.3314, Total iter time: 5.9103 +thomas 04/08 17:09:58 ===> Epoch[152](45480/301): Loss 0.2999 LR: 6.513e-02 Score 90.530 Data 
time: 2.3199, Total iter time: 5.8718 +thomas 04/08 17:14:00 ===> Epoch[152](45520/301): Loss 0.2874 LR: 6.510e-02 Score 90.961 Data time: 2.3595, Total iter time: 5.9686 +thomas 04/08 17:18:09 ===> Epoch[152](45560/301): Loss 0.2902 LR: 6.507e-02 Score 91.045 Data time: 2.4580, Total iter time: 6.1424 +thomas 04/08 17:22:05 ===> Epoch[152](45600/301): Loss 0.2726 LR: 6.504e-02 Score 91.435 Data time: 2.2872, Total iter time: 5.8152 +thomas 04/08 17:25:55 ===> Epoch[152](45640/301): Loss 0.2991 LR: 6.500e-02 Score 90.684 Data time: 2.2081, Total iter time: 5.6928 +thomas 04/08 17:30:02 ===> Epoch[152](45680/301): Loss 0.3116 LR: 6.497e-02 Score 90.064 Data time: 2.3771, Total iter time: 6.1004 +thomas 04/08 17:34:01 ===> Epoch[152](45720/301): Loss 0.3016 LR: 6.494e-02 Score 90.563 Data time: 2.3549, Total iter time: 5.8920 +thomas 04/08 17:38:06 ===> Epoch[153](45760/301): Loss 0.3028 LR: 6.491e-02 Score 90.651 Data time: 2.3968, Total iter time: 6.0581 +thomas 04/08 17:42:18 ===> Epoch[153](45800/301): Loss 0.3028 LR: 6.488e-02 Score 90.467 Data time: 2.4541, Total iter time: 6.2171 +thomas 04/08 17:46:07 ===> Epoch[153](45840/301): Loss 0.2942 LR: 6.485e-02 Score 90.571 Data time: 2.2339, Total iter time: 5.6316 +thomas 04/08 17:50:00 ===> Epoch[153](45880/301): Loss 0.2821 LR: 6.482e-02 Score 91.192 Data time: 2.2535, Total iter time: 5.7465 +thomas 04/08 17:54:08 ===> Epoch[153](45920/301): Loss 0.2673 LR: 6.478e-02 Score 91.585 Data time: 2.4186, Total iter time: 6.1196 +thomas 04/08 17:58:01 ===> Epoch[153](45960/301): Loss 0.2670 LR: 6.475e-02 Score 92.012 Data time: 2.2810, Total iter time: 5.7816 +thomas 04/08 18:02:06 ===> Epoch[153](46000/301): Loss 0.2522 LR: 6.472e-02 Score 92.219 Data time: 2.3943, Total iter time: 6.0455 +thomas 04/08 18:02:08 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 18:02:08 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 
18:04:17 101/312: Data time: 0.0029, Iter time: 0.6647 Loss 0.077 (AVG: 0.591) Score 98.285 (AVG: 84.244) mIOU 58.441 mAP 71.617 mAcc 69.926 +IOU: 78.069 96.177 49.317 70.031 81.434 66.909 63.721 45.811 34.780 73.093 15.112 40.306 41.103 70.505 41.245 51.797 72.719 51.407 81.875 43.407 +mAP: 79.257 96.566 53.091 76.585 87.458 82.093 73.196 61.961 48.819 63.191 36.180 50.351 71.885 84.174 66.911 92.794 88.855 85.648 88.437 44.879 +mAcc: 91.157 98.667 71.412 78.376 83.889 96.070 67.933 70.735 38.852 81.783 16.292 74.621 76.531 75.760 55.684 56.118 72.911 52.840 85.169 53.725 + +thomas 04/08 18:06:09 201/312: Data time: 0.0023, Iter time: 0.6261 Loss 0.377 (AVG: 0.574) Score 89.589 (AVG: 84.698) mIOU 60.257 mAP 72.868 mAcc 71.021 +IOU: 77.037 96.337 53.170 73.871 84.724 70.014 66.177 44.470 40.930 69.663 13.236 49.573 52.174 68.254 53.210 40.372 80.525 55.235 72.430 43.742 +mAP: 79.814 97.261 56.835 79.041 88.976 85.391 74.760 62.467 55.181 65.119 35.582 58.777 72.056 80.612 67.541 90.002 93.529 84.654 78.019 51.748 +mAcc: 89.906 98.734 73.033 83.995 87.713 96.259 70.736 71.007 45.711 77.144 15.242 78.855 77.746 74.361 66.384 43.762 80.861 56.888 76.761 55.327 + +thomas 04/08 18:08:15 301/312: Data time: 0.1733, Iter time: 0.6572 Loss 0.272 (AVG: 0.576) Score 90.432 (AVG: 84.792) mIOU 60.638 mAP 71.835 mAcc 70.963 +IOU: 77.277 96.248 53.665 72.259 84.650 68.133 66.739 46.527 38.905 70.689 9.438 55.643 58.159 66.637 51.472 43.412 78.365 54.241 76.914 43.394 +mAP: 79.147 97.536 56.599 77.075 87.816 81.920 72.090 63.541 53.644 64.583 31.356 62.274 69.782 77.295 65.885 87.428 92.390 82.199 80.488 53.647 +mAcc: 90.510 98.679 73.250 82.588 87.549 95.173 70.914 71.669 42.870 78.936 10.394 79.871 79.518 72.962 65.478 46.420 78.974 56.755 80.639 56.103 + +thomas 04/08 18:08:30 312/312: Data time: 0.0024, Iter time: 0.6471 Loss 0.601 (AVG: 0.583) Score 90.040 (AVG: 84.678) mIOU 60.503 mAP 71.745 mAcc 70.643 +IOU: 77.042 96.210 54.373 71.076 84.770 67.761 66.931 46.793 39.120 
70.615 9.249 54.269 58.722 66.785 50.444 43.412 78.365 53.798 76.914 43.420 +mAP: 78.725 97.597 56.676 76.157 87.734 82.003 72.335 64.051 53.396 64.583 30.679 61.169 69.872 77.438 66.363 87.428 92.390 82.272 80.488 53.551 +mAcc: 90.494 98.664 73.950 82.154 87.664 95.207 71.193 71.863 43.073 78.936 10.166 74.975 79.981 72.857 63.540 46.420 78.974 56.229 80.639 55.881 + +thomas 04/08 18:08:30 Finished test. Elapsed time: 381.7685 +thomas 04/08 18:08:30 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 18:12:22 ===> Epoch[153](46040/301): Loss 0.3015 LR: 6.469e-02 Score 90.618 Data time: 2.2487, Total iter time: 5.7249 +thomas 04/08 18:16:29 ===> Epoch[154](46080/301): Loss 0.2751 LR: 6.466e-02 Score 91.038 Data time: 2.3713, Total iter time: 6.1096 +thomas 04/08 18:20:32 ===> Epoch[154](46120/301): Loss 0.2824 LR: 6.463e-02 Score 90.982 Data time: 2.3692, Total iter time: 5.9793 +thomas 04/08 18:24:22 ===> Epoch[154](46160/301): Loss 0.2874 LR: 6.460e-02 Score 90.530 Data time: 2.2583, Total iter time: 5.6685 +thomas 04/08 18:28:24 ===> Epoch[154](46200/301): Loss 0.2999 LR: 6.456e-02 Score 90.722 Data time: 2.3861, Total iter time: 5.9902 +thomas 04/08 18:32:24 ===> Epoch[154](46240/301): Loss 0.2903 LR: 6.453e-02 Score 91.170 Data time: 2.2993, Total iter time: 5.9048 +thomas 04/08 18:36:27 ===> Epoch[154](46280/301): Loss 0.2706 LR: 6.450e-02 Score 91.453 Data time: 2.3413, Total iter time: 6.0126 +thomas 04/08 18:40:21 ===> Epoch[154](46320/301): Loss 0.2698 LR: 6.447e-02 Score 91.362 Data time: 2.2656, Total iter time: 5.7756 +thomas 04/08 18:44:14 ===> Epoch[155](46360/301): Loss 0.2612 LR: 6.444e-02 Score 91.517 Data time: 2.2761, Total iter 
time: 5.7448 +thomas 04/08 18:48:02 ===> Epoch[155](46400/301): Loss 0.3127 LR: 6.441e-02 Score 90.232 Data time: 2.2333, Total iter time: 5.6356 +thomas 04/08 18:52:04 ===> Epoch[155](46440/301): Loss 0.2950 LR: 6.437e-02 Score 90.587 Data time: 2.3861, Total iter time: 5.9581 +thomas 04/08 18:56:04 ===> Epoch[155](46480/301): Loss 0.2841 LR: 6.434e-02 Score 91.193 Data time: 2.3504, Total iter time: 5.9263 +thomas 04/08 19:00:03 ===> Epoch[155](46520/301): Loss 0.2765 LR: 6.431e-02 Score 91.313 Data time: 2.3276, Total iter time: 5.9065 +thomas 04/08 19:04:06 ===> Epoch[155](46560/301): Loss 0.2685 LR: 6.428e-02 Score 91.754 Data time: 2.3879, Total iter time: 6.0090 +thomas 04/08 19:07:59 ===> Epoch[155](46600/301): Loss 0.2554 LR: 6.425e-02 Score 91.839 Data time: 2.2794, Total iter time: 5.7414 +thomas 04/08 19:11:53 ===> Epoch[155](46640/301): Loss 0.2784 LR: 6.422e-02 Score 91.040 Data time: 2.2926, Total iter time: 5.7861 +thomas 04/08 19:16:07 ===> Epoch[156](46680/301): Loss 0.2810 LR: 6.419e-02 Score 91.027 Data time: 2.4374, Total iter time: 6.2603 +thomas 04/08 19:20:19 ===> Epoch[156](46720/301): Loss 0.2894 LR: 6.415e-02 Score 90.802 Data time: 2.4893, Total iter time: 6.2122 +thomas 04/08 19:24:09 ===> Epoch[156](46760/301): Loss 0.2664 LR: 6.412e-02 Score 91.609 Data time: 2.2506, Total iter time: 5.6704 +thomas 04/08 19:28:21 ===> Epoch[156](46800/301): Loss 0.2675 LR: 6.409e-02 Score 91.486 Data time: 2.4499, Total iter time: 6.2205 +thomas 04/08 19:32:25 ===> Epoch[156](46840/301): Loss 0.3324 LR: 6.406e-02 Score 89.552 Data time: 2.3753, Total iter time: 6.0342 +thomas 04/08 19:36:22 ===> Epoch[156](46880/301): Loss 0.2864 LR: 6.403e-02 Score 90.887 Data time: 2.3394, Total iter time: 5.8546 +thomas 04/08 19:40:36 ===> Epoch[156](46920/301): Loss 0.2907 LR: 6.400e-02 Score 90.913 Data time: 2.4649, Total iter time: 6.2615 +thomas 04/08 19:44:31 ===> Epoch[157](46960/301): Loss 0.2840 LR: 6.397e-02 Score 90.987 Data time: 2.3189, Total iter 
time: 5.7935 +thomas 04/08 19:48:14 ===> Epoch[157](47000/301): Loss 0.2689 LR: 6.393e-02 Score 91.568 Data time: 2.1489, Total iter time: 5.4862 +thomas 04/08 19:48:15 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 19:48:15 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 19:50:19 101/312: Data time: 0.0030, Iter time: 1.2169 Loss 0.263 (AVG: 0.587) Score 91.555 (AVG: 84.282) mIOU 62.324 mAP 72.170 mAcc 73.078 +IOU: 78.079 95.548 59.905 59.425 82.948 65.890 67.719 40.731 41.102 66.259 12.688 66.413 58.083 72.299 68.080 62.472 66.963 50.258 87.897 43.727 +mAP: 78.828 97.110 62.296 55.865 86.271 88.919 71.165 57.046 51.029 79.332 40.316 59.875 61.617 78.129 78.622 88.011 85.648 75.726 91.554 56.039 +mAcc: 92.131 97.943 77.924 67.060 88.222 99.578 75.018 47.521 42.993 87.351 16.119 85.978 71.771 80.437 77.584 78.282 67.053 54.148 92.408 62.046 + +thomas 04/08 19:52:19 201/312: Data time: 0.0026, Iter time: 0.5745 Loss 0.904 (AVG: 0.601) Score 65.384 (AVG: 84.148) mIOU 60.053 mAP 69.990 mAcc 70.441 +IOU: 77.260 96.015 58.724 64.448 82.992 59.813 71.505 40.067 34.801 72.085 16.173 59.994 56.134 67.758 56.643 49.565 69.293 49.208 78.669 39.906 +mAP: 78.272 97.130 62.069 64.459 86.793 81.921 73.100 55.950 47.552 70.203 39.096 57.372 64.703 76.730 64.386 86.416 86.583 78.847 76.623 51.597 +mAcc: 91.105 98.195 77.942 72.666 87.824 98.617 79.365 46.396 36.780 91.712 21.682 78.040 68.423 74.847 64.428 54.271 69.503 52.626 84.035 60.360 + +thomas 04/08 19:54:23 301/312: Data time: 0.0031, Iter time: 0.5349 Loss 0.444 (AVG: 0.606) Score 89.507 (AVG: 83.825) mIOU 58.969 mAP 69.674 mAcc 69.624 +IOU: 77.560 95.736 55.266 63.492 84.327 58.408 69.481 38.049 34.611 69.295 14.887 60.712 57.909 66.328 46.619 48.583 69.424 51.109 79.536 38.043 +mAP: 79.303 96.819 58.043 65.358 88.389 80.675 71.856 55.873 46.626 69.070 39.185 59.919 66.706 76.482 58.152 85.176 85.160 79.462 80.606 
50.625 +mAcc: 91.693 98.061 74.217 71.463 89.235 97.035 79.600 43.575 37.159 89.377 21.021 79.394 69.000 71.815 53.519 56.532 69.827 54.823 86.598 58.545 + +thomas 04/08 19:54:38 312/312: Data time: 0.0029, Iter time: 0.2700 Loss 0.030 (AVG: 0.603) Score 99.787 (AVG: 83.875) mIOU 59.162 mAP 69.809 mAcc 69.744 +IOU: 77.470 95.773 54.589 64.226 83.631 58.738 69.192 37.944 36.812 71.533 14.567 60.682 57.065 66.217 46.471 49.286 69.749 51.201 80.082 38.014 +mAP: 79.024 96.891 58.536 65.876 88.270 80.421 72.172 55.957 47.296 69.376 38.395 59.919 66.145 77.143 58.152 85.205 85.429 79.695 81.170 51.107 +mAcc: 91.508 98.076 73.315 72.165 88.322 97.079 79.656 43.409 39.686 89.888 20.219 79.394 67.712 72.351 53.519 57.272 70.153 54.962 86.971 59.225 + +thomas 04/08 19:54:38 Finished test. Elapsed time: 382.1726 +thomas 04/08 19:54:38 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 19:58:40 ===> Epoch[157](47040/301): Loss 0.3047 LR: 6.390e-02 Score 90.640 Data time: 2.3296, Total iter time: 6.0017 +thomas 04/08 20:02:36 ===> Epoch[157](47080/301): Loss 0.3154 LR: 6.387e-02 Score 90.160 Data time: 2.3081, Total iter time: 5.8212 +thomas 04/08 20:06:42 ===> Epoch[157](47120/301): Loss 0.3155 LR: 6.384e-02 Score 90.406 Data time: 2.4006, Total iter time: 6.0678 +thomas 04/08 20:10:30 ===> Epoch[157](47160/301): Loss 0.2978 LR: 6.381e-02 Score 90.482 Data time: 2.2473, Total iter time: 5.6294 +thomas 04/08 20:14:28 ===> Epoch[157](47200/301): Loss 0.2692 LR: 6.378e-02 Score 91.509 Data time: 2.3384, Total iter time: 5.8759 +thomas 04/08 20:18:38 ===> Epoch[157](47240/301): Loss 0.2710 LR: 6.374e-02 Score 91.314 Data time: 2.4273, Total iter time: 6.1747 +thomas 04/08 20:22:34 ===> Epoch[158](47280/301): Loss 0.3169 LR: 6.371e-02 Score 90.169 Data time: 2.2745, Total iter time: 5.8510 +thomas 04/08 20:26:34 ===> Epoch[158](47320/301): Loss 0.3008 LR: 6.368e-02 Score 90.631 Data time: 2.3425, Total iter time: 5.9123 +thomas 04/08 20:30:34 ===> Epoch[158](47360/301): Loss 0.3308 LR: 6.365e-02 Score 89.624 Data time: 2.3704, Total iter time: 5.9432 +thomas 04/08 20:34:24 ===> Epoch[158](47400/301): Loss 0.2711 LR: 6.362e-02 Score 91.692 Data time: 2.2307, Total iter time: 5.6708 +thomas 04/08 20:38:30 ===> Epoch[158](47440/301): Loss 0.3002 LR: 6.359e-02 Score 90.589 Data time: 2.4082, Total iter time: 6.0685 +thomas 04/08 20:42:19 ===> Epoch[158](47480/301): Loss 0.2820 LR: 6.356e-02 Score 91.089 Data time: 2.2447, Total iter time: 5.6467 +thomas 04/08 20:45:56 ===> Epoch[158](47520/301): Loss 0.2917 LR: 6.352e-02 Score 90.609 Data time: 2.0862, Total iter time: 5.3631 +thomas 04/08 20:49:50 ===> Epoch[159](47560/301): Loss 0.2735 LR: 6.349e-02 Score 91.421 Data time: 2.2523, Total iter time: 5.7565 +thomas 04/08 20:54:01 ===> Epoch[159](47600/301): Loss 
0.2924 LR: 6.346e-02 Score 90.780 Data time: 2.4325, Total iter time: 6.2171 +thomas 04/08 20:58:03 ===> Epoch[159](47640/301): Loss 0.3038 LR: 6.343e-02 Score 90.542 Data time: 2.3478, Total iter time: 5.9567 +thomas 04/08 21:02:03 ===> Epoch[159](47680/301): Loss 0.3271 LR: 6.340e-02 Score 90.295 Data time: 2.3225, Total iter time: 5.9178 +thomas 04/08 21:06:09 ===> Epoch[159](47720/301): Loss 0.3145 LR: 6.337e-02 Score 90.074 Data time: 2.4094, Total iter time: 6.0685 +thomas 04/08 21:09:47 ===> Epoch[159](47760/301): Loss 0.2934 LR: 6.333e-02 Score 90.697 Data time: 2.0869, Total iter time: 5.3903 +thomas 04/08 21:13:54 ===> Epoch[159](47800/301): Loss 0.3000 LR: 6.330e-02 Score 90.374 Data time: 2.3908, Total iter time: 6.0926 +thomas 04/08 21:17:51 ===> Epoch[159](47840/301): Loss 0.2501 LR: 6.327e-02 Score 91.997 Data time: 2.3158, Total iter time: 5.8312 +thomas 04/08 21:22:01 ===> Epoch[160](47880/301): Loss 0.3012 LR: 6.324e-02 Score 90.744 Data time: 2.4259, Total iter time: 6.1713 +thomas 04/08 21:26:00 ===> Epoch[160](47920/301): Loss 0.2957 LR: 6.321e-02 Score 90.574 Data time: 2.3112, Total iter time: 5.9219 +thomas 04/08 21:29:58 ===> Epoch[160](47960/301): Loss 0.3274 LR: 6.318e-02 Score 89.628 Data time: 2.3005, Total iter time: 5.8685 +thomas 04/08 21:34:09 ===> Epoch[160](48000/301): Loss 0.2878 LR: 6.314e-02 Score 90.900 Data time: 2.4071, Total iter time: 6.2046 +thomas 04/08 21:34:11 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 21:34:11 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/08 21:36:11 101/312: Data time: 0.0029, Iter time: 0.4231 Loss 1.344 (AVG: 0.654) Score 66.742 (AVG: 83.424) mIOU 56.024 mAP 69.659 mAcc 68.118 +IOU: 75.174 96.504 45.400 47.020 86.011 59.928 72.312 37.545 29.478 75.439 7.815 48.772 59.325 68.669 38.955 38.604 71.614 35.976 80.856 45.090 +mAP: 80.085 98.065 46.046 64.634 90.862 79.480 71.824 55.882 41.666 73.195 
26.314 53.342 66.421 70.679 71.982 89.122 88.338 79.647 94.421 51.176 +mAcc: 89.806 99.013 57.703 79.221 87.899 98.071 82.275 47.969 30.650 96.261 12.241 74.781 81.781 72.218 62.366 44.605 72.901 37.203 81.481 53.913 + +thomas 04/08 21:38:03 201/312: Data time: 0.0024, Iter time: 0.8707 Loss 0.802 (AVG: 0.646) Score 78.595 (AVG: 83.574) mIOU 57.279 mAP 70.216 mAcc 68.611 +IOU: 75.564 96.177 52.677 50.951 86.455 63.858 71.659 39.007 27.630 67.701 4.832 54.738 60.086 59.021 44.104 43.875 77.250 44.758 80.761 44.479 +mAP: 79.195 97.851 53.016 65.330 89.466 81.333 73.157 54.747 45.430 69.056 24.272 59.292 67.310 68.017 64.454 92.141 92.609 82.858 89.198 55.579 +mAcc: 89.684 99.061 63.465 80.691 88.812 95.790 82.710 50.276 28.751 91.986 6.671 81.243 80.393 62.739 59.862 48.248 78.183 46.596 81.361 55.707 + +thomas 04/08 21:40:07 301/312: Data time: 0.0025, Iter time: 0.6848 Loss 0.753 (AVG: 0.649) Score 79.841 (AVG: 83.554) mIOU 56.882 mAP 69.798 mAcc 68.369 +IOU: 75.670 95.967 54.415 53.891 86.058 63.415 70.679 39.164 26.737 72.520 9.494 54.067 59.075 56.969 47.473 27.436 75.511 47.830 79.045 42.230 +mAP: 77.135 97.713 55.835 67.595 90.114 82.060 73.516 53.161 42.695 70.166 30.507 59.128 66.369 68.625 66.187 83.832 93.164 81.240 82.621 54.295 +mAcc: 90.011 98.922 65.316 82.957 88.479 96.719 82.662 49.983 27.677 92.251 14.045 84.233 78.756 61.455 63.626 28.627 77.692 49.577 79.856 54.524 + +thomas 04/08 21:40:24 312/312: Data time: 0.0030, Iter time: 0.9545 Loss 0.917 (AVG: 0.657) Score 79.076 (AVG: 83.358) mIOU 56.626 mAP 69.475 mAcc 68.142 +IOU: 75.458 95.983 54.822 53.665 86.113 63.454 70.094 39.143 26.081 71.549 9.695 54.559 59.640 58.552 46.317 26.173 74.691 46.198 78.372 41.953 +mAP: 76.993 97.699 55.863 67.170 89.871 82.516 72.134 53.526 42.272 69.213 32.071 58.885 66.673 70.374 64.339 83.862 91.896 80.194 80.455 53.496 +mAcc: 90.023 98.929 65.487 82.591 88.549 96.593 82.432 49.714 27.162 91.967 13.966 84.475 79.377 62.938 63.200 27.133 77.004 47.758 79.168 
54.382 + +thomas 04/08 21:40:24 Finished test. Elapsed time: 372.9255 +thomas 04/08 21:40:24 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 21:44:11 ===> Epoch[160](48040/301): Loss 0.2712 LR: 6.311e-02 Score 91.592 Data time: 2.2151, Total iter time: 5.6122 +thomas 04/08 21:48:08 ===> Epoch[160](48080/301): Loss 0.2744 LR: 6.308e-02 Score 91.349 Data time: 2.2608, Total iter time: 5.8408 +thomas 04/08 21:52:10 ===> Epoch[160](48120/301): Loss 0.3036 LR: 6.305e-02 Score 90.556 Data time: 2.3208, Total iter time: 5.9836 +thomas 04/08 21:56:09 ===> Epoch[160](48160/301): Loss 0.2797 LR: 6.302e-02 Score 91.015 Data time: 2.2727, Total iter time: 5.8970 +thomas 04/08 21:59:55 ===> Epoch[161](48200/301): Loss 0.3389 LR: 6.299e-02 Score 89.604 Data time: 2.1751, Total iter time: 5.5812 +thomas 04/08 22:03:47 ===> Epoch[161](48240/301): Loss 0.2477 LR: 6.296e-02 Score 92.018 Data time: 2.2504, Total iter time: 5.7282 +thomas 04/08 22:07:50 ===> Epoch[161](48280/301): Loss 0.3073 LR: 6.292e-02 Score 90.447 Data time: 2.3764, Total iter time: 6.0052 +thomas 04/08 22:11:51 ===> Epoch[161](48320/301): Loss 0.2887 LR: 6.289e-02 Score 90.914 Data time: 2.3502, Total iter time: 5.9450 +thomas 04/08 22:15:39 ===> Epoch[161](48360/301): Loss 0.3021 LR: 6.286e-02 Score 90.569 Data time: 2.2018, Total iter time: 5.6291 +thomas 04/08 22:19:39 ===> Epoch[161](48400/301): Loss 0.2742 LR: 6.283e-02 Score 91.494 Data time: 2.3010, Total iter time: 5.9219 +thomas 04/08 22:23:27 ===> Epoch[161](48440/301): Loss 0.2978 LR: 6.280e-02 Score 90.760 Data time: 2.1882, Total iter time: 5.6302 +thomas 04/08 22:27:35 ===> Epoch[162](48480/301): Loss 0.3541 LR: 6.277e-02 Score 
88.752 Data time: 2.4120, Total iter time: 6.1076 +thomas 04/08 22:31:29 ===> Epoch[162](48520/301): Loss 0.3392 LR: 6.273e-02 Score 89.545 Data time: 2.2738, Total iter time: 5.7952 +thomas 04/08 22:35:20 ===> Epoch[162](48560/301): Loss 0.2966 LR: 6.270e-02 Score 90.658 Data time: 2.2595, Total iter time: 5.7113 +thomas 04/08 22:39:18 ===> Epoch[162](48600/301): Loss 0.2744 LR: 6.267e-02 Score 91.436 Data time: 2.3015, Total iter time: 5.8528 +thomas 04/08 22:43:06 ===> Epoch[162](48640/301): Loss 0.2334 LR: 6.264e-02 Score 92.601 Data time: 2.1891, Total iter time: 5.6395 +thomas 04/08 22:46:55 ===> Epoch[162](48680/301): Loss 0.2525 LR: 6.261e-02 Score 91.823 Data time: 2.1906, Total iter time: 5.6456 +thomas 04/08 22:50:35 ===> Epoch[162](48720/301): Loss 0.2597 LR: 6.258e-02 Score 91.731 Data time: 2.1585, Total iter time: 5.4368 +thomas 04/08 22:54:31 ===> Epoch[162](48760/301): Loss 0.2919 LR: 6.254e-02 Score 90.817 Data time: 2.2761, Total iter time: 5.8000 +thomas 04/08 22:58:35 ===> Epoch[163](48800/301): Loss 0.2969 LR: 6.251e-02 Score 90.947 Data time: 2.3593, Total iter time: 6.0191 +thomas 04/08 23:02:27 ===> Epoch[163](48840/301): Loss 0.2938 LR: 6.248e-02 Score 90.824 Data time: 2.2724, Total iter time: 5.7194 +thomas 04/08 23:06:26 ===> Epoch[163](48880/301): Loss 0.3248 LR: 6.245e-02 Score 89.937 Data time: 2.3057, Total iter time: 5.9058 +thomas 04/08 23:10:14 ===> Epoch[163](48920/301): Loss 0.2858 LR: 6.242e-02 Score 90.993 Data time: 2.1943, Total iter time: 5.6346 +thomas 04/08 23:14:29 ===> Epoch[163](48960/301): Loss 0.2784 LR: 6.239e-02 Score 91.188 Data time: 2.4950, Total iter time: 6.2867 +thomas 04/08 23:18:35 ===> Epoch[163](49000/301): Loss 0.2909 LR: 6.236e-02 Score 91.038 Data time: 2.3900, Total iter time: 6.0638 +thomas 04/08 23:18:36 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/08 23:18:37 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/08 23:20:48 101/312: Data time: 0.0027, Iter time: 0.7376 Loss 0.486 (AVG: 0.656) Score 86.537 (AVG: 82.742) mIOU 57.398 mAP 69.883 mAcc 67.393 +IOU: 74.188 95.834 43.601 67.571 87.401 76.082 65.335 40.817 31.150 62.578 15.081 50.243 62.661 59.559 31.654 37.714 87.053 38.219 76.423 44.799 +mAP: 77.608 97.276 61.127 75.957 88.711 82.273 73.138 58.667 47.403 62.794 45.851 59.174 66.269 69.911 51.950 84.265 90.976 79.087 76.110 49.103 +mAcc: 88.650 98.685 65.254 80.808 91.855 95.521 68.551 55.369 33.216 90.695 22.686 69.707 82.150 66.363 37.646 37.921 88.004 39.166 77.260 58.351 + +thomas 04/08 23:22:43 201/312: Data time: 0.0026, Iter time: 0.6285 Loss 0.373 (AVG: 0.629) Score 92.807 (AVG: 83.541) mIOU 59.564 mAP 71.732 mAcc 70.313 +IOU: 76.433 95.726 47.421 70.514 88.264 72.911 65.423 39.620 32.948 59.400 17.986 52.418 59.819 70.517 48.322 35.926 88.331 43.211 80.445 45.640 +mAP: 77.750 97.100 59.841 73.064 89.667 80.869 74.204 54.197 50.971 67.127 50.329 59.646 67.380 79.157 62.001 86.096 91.082 79.505 81.594 53.061 +mAcc: 89.319 98.730 69.540 80.506 92.438 94.077 69.053 55.420 34.609 90.589 27.690 73.657 81.490 76.509 58.896 38.186 89.124 43.840 82.435 60.143 + +thomas 04/08 23:24:46 301/312: Data time: 0.0026, Iter time: 0.7056 Loss 0.534 (AVG: 0.594) Score 82.378 (AVG: 84.296) mIOU 60.466 mAP 70.966 mAcc 71.028 +IOU: 77.190 96.019 50.461 74.098 89.013 73.211 66.960 44.206 31.154 59.459 17.475 59.209 59.568 68.124 46.902 35.995 89.571 45.790 81.056 43.866 +mAP: 78.617 97.534 59.798 75.562 89.962 79.561 73.806 58.149 47.001 65.174 44.600 60.858 67.499 77.045 58.242 79.291 92.944 78.876 82.854 51.953 +mAcc: 89.175 98.836 72.569 84.017 92.672 94.563 70.772 59.566 32.751 90.881 27.964 78.057 80.025 73.918 56.897 37.752 90.705 46.517 82.600 60.316 + +thomas 04/08 23:24:59 312/312: Data time: 0.0027, Iter time: 0.2889 Loss 0.096 (AVG: 0.588) Score 97.467 (AVG: 84.457) mIOU 60.739 mAP 71.266 mAcc 71.266 +IOU: 77.360 96.107 50.338 74.079 89.117 75.332 66.968 
44.140 30.573 58.878 17.105 59.242 59.322 67.929 48.273 37.095 90.084 46.649 82.094 44.086 +mAP: 79.001 97.572 59.951 75.562 90.107 80.770 74.033 58.616 46.540 65.094 43.434 60.879 67.353 77.045 59.536 80.568 93.267 79.362 83.713 52.922 +mAcc: 89.140 98.859 72.681 84.017 92.800 95.201 70.886 59.155 32.276 90.947 27.567 77.961 80.101 73.918 58.210 38.810 91.166 47.442 83.602 60.591 + +thomas 04/08 23:24:59 Finished test. Elapsed time: 382.0639 +thomas 04/08 23:24:59 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/08 23:28:41 ===> Epoch[163](49040/301): Loss 0.2771 LR: 6.232e-02 Score 91.072 Data time: 2.1499, Total iter time: 5.4904 +thomas 04/08 23:32:39 ===> Epoch[164](49080/301): Loss 0.3054 LR: 6.229e-02 Score 90.233 Data time: 2.2832, Total iter time: 5.8781 +thomas 04/08 23:36:27 ===> Epoch[164](49120/301): Loss 0.2804 LR: 6.226e-02 Score 91.130 Data time: 2.2030, Total iter time: 5.6189 +thomas 04/08 23:40:16 ===> Epoch[164](49160/301): Loss 0.3109 LR: 6.223e-02 Score 89.990 Data time: 2.2415, Total iter time: 5.6491 +thomas 04/08 23:44:22 ===> Epoch[164](49200/301): Loss 0.2814 LR: 6.220e-02 Score 90.908 Data time: 2.4067, Total iter time: 6.0666 +thomas 04/08 23:48:21 ===> Epoch[164](49240/301): Loss 0.2796 LR: 6.217e-02 Score 91.088 Data time: 2.3312, Total iter time: 5.9235 +thomas 04/08 23:52:24 ===> Epoch[164](49280/301): Loss 0.3000 LR: 6.213e-02 Score 90.672 Data time: 2.3229, Total iter time: 5.9954 +thomas 04/08 23:56:16 ===> Epoch[164](49320/301): Loss 0.2954 LR: 6.210e-02 Score 90.455 Data time: 2.2141, Total iter time: 5.7300 +thomas 04/09 00:00:16 ===> Epoch[164](49360/301): Loss 0.2962 LR: 6.207e-02 Score 90.513 Data time: 
2.3044, Total iter time: 5.9193 +thomas 04/09 00:04:14 ===> Epoch[165](49400/301): Loss 0.2884 LR: 6.204e-02 Score 90.514 Data time: 2.3032, Total iter time: 5.8765 +thomas 04/09 00:08:15 ===> Epoch[165](49440/301): Loss 0.2496 LR: 6.201e-02 Score 92.253 Data time: 2.3729, Total iter time: 5.9461 +thomas 04/09 00:12:26 ===> Epoch[165](49480/301): Loss 0.2787 LR: 6.198e-02 Score 90.987 Data time: 2.4437, Total iter time: 6.1978 +thomas 04/09 00:16:25 ===> Epoch[165](49520/301): Loss 0.3042 LR: 6.194e-02 Score 90.749 Data time: 2.2975, Total iter time: 5.8986 +thomas 04/09 00:20:11 ===> Epoch[165](49560/301): Loss 0.2903 LR: 6.191e-02 Score 90.765 Data time: 2.1555, Total iter time: 5.5853 +thomas 04/09 00:24:03 ===> Epoch[165](49600/301): Loss 0.2565 LR: 6.188e-02 Score 91.715 Data time: 2.2002, Total iter time: 5.7270 +thomas 04/09 00:27:48 ===> Epoch[165](49640/301): Loss 0.2439 LR: 6.185e-02 Score 92.380 Data time: 2.1623, Total iter time: 5.5336 +thomas 04/09 00:31:44 ===> Epoch[166](49680/301): Loss 0.2786 LR: 6.182e-02 Score 91.245 Data time: 2.3039, Total iter time: 5.8102 +thomas 04/09 00:35:32 ===> Epoch[166](49720/301): Loss 0.2787 LR: 6.179e-02 Score 91.459 Data time: 2.1892, Total iter time: 5.6206 +thomas 04/09 00:39:28 ===> Epoch[166](49760/301): Loss 0.2659 LR: 6.175e-02 Score 91.659 Data time: 2.2666, Total iter time: 5.8328 +thomas 04/09 00:43:18 ===> Epoch[166](49800/301): Loss 0.2897 LR: 6.172e-02 Score 90.848 Data time: 2.2002, Total iter time: 5.6848 +thomas 04/09 00:47:07 ===> Epoch[166](49840/301): Loss 0.2832 LR: 6.169e-02 Score 91.063 Data time: 2.1835, Total iter time: 5.6425 +thomas 04/09 00:51:18 ===> Epoch[166](49880/301): Loss 0.2957 LR: 6.166e-02 Score 90.502 Data time: 2.3846, Total iter time: 6.1786 +thomas 04/09 00:55:22 ===> Epoch[166](49920/301): Loss 0.2889 LR: 6.163e-02 Score 90.653 Data time: 2.4002, Total iter time: 6.0174 +thomas 04/09 00:59:10 ===> Epoch[166](49960/301): Loss 0.2720 LR: 6.160e-02 Score 91.573 Data time: 
2.1771, Total iter time: 5.6217 +thomas 04/09 01:03:00 ===> Epoch[167](50000/301): Loss 0.2936 LR: 6.156e-02 Score 91.175 Data time: 2.2053, Total iter time: 5.6864 +thomas 04/09 01:03:02 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 01:03:02 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 01:05:07 101/312: Data time: 0.0031, Iter time: 0.4071 Loss 0.076 (AVG: 0.669) Score 98.135 (AVG: 82.528) mIOU 56.541 mAP 69.154 mAcc 67.954 +IOU: 78.222 89.537 51.632 46.366 86.562 73.657 62.371 38.705 27.815 65.040 7.816 55.502 53.531 63.303 19.670 49.807 83.841 60.054 71.257 46.126 +mAP: 78.190 95.071 59.707 71.313 85.435 86.557 62.651 58.241 44.229 71.342 30.761 56.078 64.025 81.976 43.855 73.901 95.037 84.603 86.386 53.715 +mAcc: 93.345 92.477 63.389 75.761 88.343 90.428 74.699 46.076 28.417 91.398 8.349 61.244 81.857 83.149 31.254 58.896 85.301 63.987 75.083 65.623 + +thomas 04/09 01:06:56 201/312: Data time: 0.0879, Iter time: 0.8112 Loss 0.731 (AVG: 0.635) Score 77.537 (AVG: 83.590) mIOU 58.926 mAP 70.894 mAcc 70.272 +IOU: 78.461 90.456 56.903 49.568 87.444 73.718 66.463 43.286 28.659 71.085 9.975 60.948 55.047 61.952 44.794 50.607 85.501 54.265 70.174 39.212 +mAP: 78.669 95.607 60.667 74.365 87.624 80.826 68.280 60.366 47.457 69.741 33.107 57.762 67.579 83.289 61.734 78.235 95.810 83.693 81.658 51.404 +mAcc: 92.349 93.764 69.427 78.154 89.282 89.653 77.190 52.526 29.467 90.542 10.461 67.590 81.621 83.512 61.686 56.337 86.819 60.312 75.331 59.414 + +thomas 04/09 01:08:51 301/312: Data time: 0.0024, Iter time: 1.1476 Loss 0.955 (AVG: 0.644) Score 75.015 (AVG: 83.364) mIOU 58.980 mAP 70.996 mAcc 70.559 +IOU: 78.328 90.809 56.970 45.771 86.764 71.515 63.730 43.618 29.378 70.930 9.405 59.914 54.835 58.834 49.932 48.534 85.958 57.142 72.381 44.853 +mAP: 78.351 95.497 60.144 69.976 87.906 81.720 67.406 60.702 47.489 70.335 31.803 57.815 68.191 83.927 65.523 83.165 94.659 
83.902 78.570 52.844 +mAcc: 92.560 94.115 67.941 73.637 89.055 88.954 73.923 53.429 30.184 91.937 9.803 67.814 83.576 82.752 67.677 52.542 86.897 63.585 76.317 64.490 + +thomas 04/09 01:09:02 312/312: Data time: 0.0024, Iter time: 0.3600 Loss 0.988 (AVG: 0.640) Score 79.340 (AVG: 83.500) mIOU 58.974 mAP 70.838 mAcc 70.464 +IOU: 78.635 90.821 56.676 44.793 86.858 72.249 63.677 43.514 29.328 70.739 9.405 59.910 55.491 58.332 49.295 48.534 86.059 57.822 72.381 44.954 +mAP: 78.763 95.571 59.763 67.979 88.141 81.169 68.154 60.171 48.200 70.335 31.803 57.815 68.095 83.342 63.389 83.165 94.758 84.087 78.570 53.488 +mAcc: 92.607 94.079 67.932 73.141 89.169 88.484 74.474 53.368 30.124 91.937 9.803 67.814 82.491 82.702 66.377 52.542 86.985 64.211 76.317 64.721 + +thomas 04/09 01:09:02 Finished test. Elapsed time: 360.4787 +thomas 04/09 01:09:02 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 01:12:59 ===> Epoch[167](50040/301): Loss 0.2759 LR: 6.153e-02 Score 91.374 Data time: 2.2691, Total iter time: 5.8407 +thomas 04/09 01:16:49 ===> Epoch[167](50080/301): Loss 0.3054 LR: 6.150e-02 Score 90.425 Data time: 2.2348, Total iter time: 5.6884 +thomas 04/09 01:20:45 ===> Epoch[167](50120/301): Loss 0.2969 LR: 6.147e-02 Score 90.609 Data time: 2.3310, Total iter time: 5.8250 +thomas 04/09 01:24:43 ===> Epoch[167](50160/301): Loss 0.2982 LR: 6.144e-02 Score 90.771 Data time: 2.2700, Total iter time: 5.8648 +thomas 04/09 01:28:30 ===> Epoch[167](50200/301): Loss 0.3006 LR: 6.141e-02 Score 90.665 Data time: 2.1611, Total iter time: 5.6033 +thomas 04/09 01:32:17 ===> Epoch[167](50240/301): Loss 0.2511 LR: 6.137e-02 Score 92.299 Data time: 2.1912, Total iter time: 5.6063 +thomas 04/09 01:36:29 ===> Epoch[168](50280/301): Loss 0.2584 LR: 6.134e-02 Score 91.706 Data time: 2.4132, Total iter time: 6.2067 +thomas 04/09 01:40:20 ===> Epoch[168](50320/301): Loss 0.2795 LR: 6.131e-02 Score 90.803 Data time: 2.2370, Total iter time: 5.7086 +thomas 04/09 01:44:16 ===> Epoch[168](50360/301): Loss 0.2705 LR: 6.128e-02 Score 91.454 Data time: 2.3786, Total iter time: 5.8310 +thomas 04/09 01:48:13 ===> Epoch[168](50400/301): Loss 0.2937 LR: 6.125e-02 Score 90.736 Data time: 2.2991, Total iter time: 5.8580 +thomas 04/09 01:52:09 ===> Epoch[168](50440/301): Loss 0.2596 LR: 6.122e-02 Score 91.805 Data time: 2.2681, Total iter time: 5.8256 +thomas 04/09 01:56:01 ===> Epoch[168](50480/301): Loss 0.2852 LR: 6.118e-02 Score 90.959 Data time: 2.2098, Total iter time: 5.7271 +thomas 04/09 01:59:55 ===> Epoch[168](50520/301): Loss 0.2505 LR: 6.115e-02 Score 91.994 Data time: 2.2369, Total iter time: 5.7696 +thomas 04/09 02:04:07 ===> Epoch[168](50560/301): Loss 0.2673 LR: 6.112e-02 Score 91.283 Data time: 2.5055, Total iter time: 6.2258 +thomas 04/09 02:08:29 ===> Epoch[169](50600/301): Loss 
0.2971 LR: 6.109e-02 Score 90.650 Data time: 2.5821, Total iter time: 6.4697 +thomas 04/09 02:12:34 ===> Epoch[169](50640/301): Loss 0.2909 LR: 6.106e-02 Score 91.183 Data time: 2.3827, Total iter time: 6.0343 +thomas 04/09 02:16:34 ===> Epoch[169](50680/301): Loss 0.2695 LR: 6.103e-02 Score 91.217 Data time: 2.3208, Total iter time: 5.9420 +thomas 04/09 02:20:25 ===> Epoch[169](50720/301): Loss 0.3004 LR: 6.099e-02 Score 90.366 Data time: 2.2289, Total iter time: 5.6929 +thomas 04/09 02:24:15 ===> Epoch[169](50760/301): Loss 0.2953 LR: 6.096e-02 Score 90.778 Data time: 2.2274, Total iter time: 5.6787 +thomas 04/09 02:28:34 ===> Epoch[169](50800/301): Loss 0.2685 LR: 6.093e-02 Score 91.898 Data time: 2.5866, Total iter time: 6.3898 +thomas 04/09 02:32:45 ===> Epoch[169](50840/301): Loss 0.2580 LR: 6.090e-02 Score 91.679 Data time: 2.4986, Total iter time: 6.1908 +thomas 04/09 02:36:41 ===> Epoch[170](50880/301): Loss 0.2927 LR: 6.087e-02 Score 90.921 Data time: 2.2834, Total iter time: 5.8069 +thomas 04/09 02:40:32 ===> Epoch[170](50920/301): Loss 0.3033 LR: 6.084e-02 Score 90.495 Data time: 2.2367, Total iter time: 5.7068 +thomas 04/09 02:44:16 ===> Epoch[170](50960/301): Loss 0.2904 LR: 6.080e-02 Score 90.885 Data time: 2.1776, Total iter time: 5.5418 +thomas 04/09 02:48:22 ===> Epoch[170](51000/301): Loss 0.2697 LR: 6.077e-02 Score 91.267 Data time: 2.3863, Total iter time: 6.0717 +thomas 04/09 02:48:24 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 02:48:24 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 02:50:27 101/312: Data time: 0.0033, Iter time: 0.3961 Loss 0.779 (AVG: 0.597) Score 79.050 (AVG: 84.569) mIOU 58.856 mAP 72.037 mAcc 68.921 +IOU: 75.981 96.697 52.488 77.981 81.033 76.172 63.974 50.496 13.303 79.835 14.190 62.675 58.406 56.615 50.843 21.992 86.239 48.073 78.805 31.324 +mAP: 78.204 98.185 63.365 76.011 87.259 77.957 77.619 65.716 40.732 61.914 
47.570 62.165 70.413 72.952 71.652 82.347 96.940 76.485 83.543 49.704 +mAcc: 91.227 98.884 59.527 89.975 83.433 95.566 69.462 70.267 14.273 88.391 15.569 84.701 75.047 58.502 69.993 21.995 86.700 51.395 79.600 73.907 + +thomas 04/09 02:52:40 201/312: Data time: 0.0040, Iter time: 0.5534 Loss 2.597 (AVG: 0.627) Score 57.430 (AVG: 83.454) mIOU 57.858 mAP 70.759 mAcc 67.737 +IOU: 75.351 96.268 55.158 67.832 83.193 72.767 63.889 46.897 19.179 75.145 15.485 60.854 59.265 59.700 48.596 19.152 80.008 45.121 77.517 35.772 +mAP: 77.920 98.118 64.167 69.495 87.640 80.871 70.994 61.886 44.327 63.973 41.610 62.032 70.262 76.690 64.485 72.848 91.569 76.937 88.562 50.800 +mAcc: 90.688 98.814 61.667 80.529 85.678 94.742 69.512 67.100 19.960 88.977 16.480 84.517 74.880 61.758 67.415 19.630 80.965 48.335 78.142 64.947 + +thomas 04/09 02:54:55 301/312: Data time: 0.0027, Iter time: 1.3923 Loss 0.783 (AVG: 0.651) Score 78.645 (AVG: 83.099) mIOU 57.568 mAP 70.543 mAcc 67.420 +IOU: 74.730 96.101 54.780 68.456 84.198 72.502 64.701 43.128 21.433 71.042 10.270 60.726 57.247 57.767 46.303 24.466 75.122 51.980 79.221 37.196 +mAP: 77.469 97.906 62.621 72.063 88.127 80.946 71.743 60.387 43.159 62.926 37.508 63.865 69.375 75.964 62.367 75.922 90.176 80.849 87.335 50.145 +mAcc: 90.116 98.753 62.572 80.497 86.790 95.298 71.295 61.826 22.233 87.728 10.828 84.075 71.833 60.484 63.714 25.490 75.700 55.010 79.998 64.156 + +thomas 04/09 02:55:06 312/312: Data time: 0.0025, Iter time: 0.2584 Loss 0.027 (AVG: 0.656) Score 99.922 (AVG: 82.999) mIOU 57.007 mAP 70.285 mAcc 66.940 +IOU: 74.735 96.020 54.168 68.744 83.890 71.239 64.224 42.868 21.733 70.611 11.061 59.746 56.532 57.660 45.800 20.943 75.188 51.116 77.307 36.559 +mAP: 77.386 97.938 62.672 71.774 88.294 80.946 71.619 59.817 43.413 62.811 39.594 63.865 68.545 76.648 61.054 75.959 90.612 80.880 82.020 49.858 +mAcc: 90.134 98.720 62.240 80.668 86.419 95.298 70.692 61.639 22.610 86.375 11.692 84.075 71.067 60.377 63.245 21.673 75.754 53.924 78.040 
64.153 + +thomas 04/09 02:55:06 Finished test. Elapsed time: 402.3124 +thomas 04/09 02:55:06 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 02:59:15 ===> Epoch[170](51040/301): Loss 0.2957 LR: 6.074e-02 Score 90.461 Data time: 2.4795, Total iter time: 6.1580 +thomas 04/09 03:03:01 ===> Epoch[170](51080/301): Loss 0.2980 LR: 6.071e-02 Score 90.580 Data time: 2.1921, Total iter time: 5.5628 +thomas 04/09 03:06:55 ===> Epoch[170](51120/301): Loss 0.2766 LR: 6.068e-02 Score 91.131 Data time: 2.2841, Total iter time: 5.7849 +thomas 04/09 03:10:46 ===> Epoch[170](51160/301): Loss 0.2543 LR: 6.065e-02 Score 91.985 Data time: 2.2179, Total iter time: 5.7077 +thomas 04/09 03:14:58 ===> Epoch[171](51200/301): Loss 0.2739 LR: 6.061e-02 Score 91.356 Data time: 2.4346, Total iter time: 6.2000 +thomas 04/09 03:19:05 ===> Epoch[171](51240/301): Loss 0.2842 LR: 6.058e-02 Score 90.895 Data time: 2.4414, Total iter time: 6.1037 +thomas 04/09 03:23:15 ===> Epoch[171](51280/301): Loss 0.2886 LR: 6.055e-02 Score 90.607 Data time: 2.4745, Total iter time: 6.1937 +thomas 04/09 03:27:09 ===> Epoch[171](51320/301): Loss 0.2864 LR: 6.052e-02 Score 91.380 Data time: 2.2777, Total iter time: 5.7616 +thomas 04/09 03:31:13 ===> Epoch[171](51360/301): Loss 0.2607 LR: 6.049e-02 Score 91.639 Data time: 2.3509, Total iter time: 6.0348 +thomas 04/09 03:35:04 ===> Epoch[171](51400/301): Loss 0.2521 LR: 6.045e-02 Score 91.915 Data time: 2.2009, Total iter time: 5.6845 +thomas 04/09 03:39:11 ===> Epoch[171](51440/301): Loss 0.2928 LR: 6.042e-02 Score 90.902 Data time: 2.4049, Total iter time: 6.0908 +thomas 04/09 03:43:16 ===> Epoch[172](51480/301): Loss 0.3152 LR: 6.039e-02 Score 
90.199 Data time: 2.4511, Total iter time: 6.0613 +thomas 04/09 03:47:30 ===> Epoch[172](51520/301): Loss 0.2635 LR: 6.036e-02 Score 91.577 Data time: 2.4751, Total iter time: 6.2686 +thomas 04/09 03:51:16 ===> Epoch[172](51560/301): Loss 0.2808 LR: 6.033e-02 Score 91.096 Data time: 2.1677, Total iter time: 5.5904 +thomas 04/09 03:55:06 ===> Epoch[172](51600/301): Loss 0.2847 LR: 6.030e-02 Score 90.889 Data time: 2.1953, Total iter time: 5.6680 +thomas 04/09 03:58:41 ===> Epoch[172](51640/301): Loss 0.2568 LR: 6.026e-02 Score 91.868 Data time: 2.0976, Total iter time: 5.3199 +thomas 04/09 04:02:44 ===> Epoch[172](51680/301): Loss 0.2537 LR: 6.023e-02 Score 91.959 Data time: 2.3194, Total iter time: 5.9832 +thomas 04/09 04:06:50 ===> Epoch[172](51720/301): Loss 0.2693 LR: 6.020e-02 Score 91.495 Data time: 2.4395, Total iter time: 6.0822 +thomas 04/09 04:10:53 ===> Epoch[172](51760/301): Loss 0.2787 LR: 6.017e-02 Score 91.106 Data time: 2.3497, Total iter time: 5.9864 +thomas 04/09 04:14:42 ===> Epoch[173](51800/301): Loss 0.2544 LR: 6.014e-02 Score 91.718 Data time: 2.1912, Total iter time: 5.6339 +thomas 04/09 04:18:28 ===> Epoch[173](51840/301): Loss 0.2534 LR: 6.011e-02 Score 92.117 Data time: 2.2010, Total iter time: 5.5848 +thomas 04/09 04:22:21 ===> Epoch[173](51880/301): Loss 0.2943 LR: 6.007e-02 Score 90.820 Data time: 2.2434, Total iter time: 5.7552 +thomas 04/09 04:26:36 ===> Epoch[173](51920/301): Loss 0.3013 LR: 6.004e-02 Score 90.988 Data time: 2.4615, Total iter time: 6.3043 +thomas 04/09 04:30:41 ===> Epoch[173](51960/301): Loss 0.2759 LR: 6.001e-02 Score 91.207 Data time: 2.4401, Total iter time: 6.0369 +thomas 04/09 04:34:40 ===> Epoch[173](52000/301): Loss 0.2941 LR: 5.998e-02 Score 90.726 Data time: 2.3775, Total iter time: 5.9158 +thomas 04/09 04:34:42 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 04:34:42 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/09 04:36:41 101/312: Data time: 0.0028, Iter time: 0.4072 Loss 0.274 (AVG: 0.600) Score 88.096 (AVG: 84.187) mIOU 55.718 mAP 69.192 mAcc 68.397 +IOU: 77.469 95.646 54.748 66.622 85.429 73.983 67.019 43.429 37.192 70.137 20.966 60.724 53.553 59.738 34.487 43.987 45.444 54.808 26.699 42.273 +mAP: 78.706 96.162 58.306 74.886 86.745 84.509 76.628 60.293 48.271 77.444 40.342 65.123 71.410 76.467 68.028 67.558 73.168 82.907 41.334 55.556 +mAcc: 89.935 98.982 76.033 84.212 90.720 97.997 71.450 61.526 41.617 75.558 27.287 71.875 90.250 83.163 81.549 45.245 45.730 57.405 27.348 50.052 + +thomas 04/09 04:38:38 201/312: Data time: 0.0024, Iter time: 0.6045 Loss 0.576 (AVG: 0.585) Score 77.003 (AVG: 84.216) mIOU 57.963 mAP 71.051 mAcc 70.776 +IOU: 77.936 95.451 51.379 67.875 86.775 77.514 65.455 43.269 43.212 64.741 17.376 56.374 51.795 64.125 29.151 55.733 53.522 44.982 75.366 37.236 +mAP: 80.304 96.275 53.566 75.437 88.949 86.835 74.095 60.470 51.994 70.083 41.587 60.495 68.172 78.211 64.049 79.771 78.000 77.753 81.384 53.585 +mAcc: 89.415 98.851 72.299 84.808 92.120 97.286 70.557 60.993 48.928 71.277 22.975 69.850 85.157 84.889 79.026 60.631 53.871 47.284 80.168 45.142 + +thomas 04/09 04:40:34 301/312: Data time: 0.0028, Iter time: 0.4659 Loss 0.229 (AVG: 0.564) Score 91.243 (AVG: 84.627) mIOU 58.694 mAP 70.867 mAcc 71.228 +IOU: 78.024 95.584 52.977 68.922 87.598 77.005 67.374 43.583 45.676 66.558 15.065 55.646 55.758 61.119 32.844 55.373 51.980 49.331 74.597 38.856 +mAP: 79.815 95.932 58.377 74.291 89.265 85.371 73.459 58.987 49.951 71.134 39.104 59.312 68.035 76.695 62.925 80.231 78.637 79.927 82.719 53.177 +mAcc: 89.157 98.895 73.842 84.614 92.750 96.179 72.706 60.753 52.892 71.670 20.082 70.046 86.299 82.232 76.491 63.073 52.267 51.855 81.050 47.696 + +thomas 04/09 04:40:46 312/312: Data time: 0.0034, Iter time: 0.4530 Loss 0.703 (AVG: 0.565) Score 74.124 (AVG: 84.468) mIOU 58.792 mAP 70.884 mAcc 71.276 +IOU: 77.678 95.622 52.676 69.282 87.478 76.112 67.421 
43.373 45.149 66.465 15.439 55.928 56.262 62.677 33.043 55.339 52.724 49.844 74.488 38.849 +mAP: 79.337 95.944 58.332 73.263 89.332 85.423 73.388 59.451 49.488 71.134 38.996 59.972 68.137 77.547 62.983 80.231 79.053 80.535 82.684 52.441 +mAcc: 88.926 98.901 73.610 84.983 92.636 96.207 72.832 60.139 52.280 71.670 20.653 70.278 86.446 83.168 75.726 63.073 53.013 52.401 80.766 47.806 + +thomas 04/09 04:40:46 Finished test. Elapsed time: 364.5160 +thomas 04/09 04:40:46 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 04:44:31 ===> Epoch[173](52040/301): Loss 0.2567 LR: 5.995e-02 Score 91.915 Data time: 2.1641, Total iter time: 5.5501 +thomas 04/09 04:48:33 ===> Epoch[174](52080/301): Loss 0.2567 LR: 5.992e-02 Score 91.938 Data time: 2.3258, Total iter time: 5.9674 +thomas 04/09 04:52:52 ===> Epoch[174](52120/301): Loss 0.2606 LR: 5.988e-02 Score 91.690 Data time: 2.5199, Total iter time: 6.3927 +thomas 04/09 04:56:50 ===> Epoch[174](52160/301): Loss 0.2705 LR: 5.985e-02 Score 91.689 Data time: 2.3670, Total iter time: 5.8881 +thomas 04/09 05:00:33 ===> Epoch[174](52200/301): Loss 0.2930 LR: 5.982e-02 Score 90.711 Data time: 2.1666, Total iter time: 5.4971 +thomas 04/09 05:04:37 ===> Epoch[174](52240/301): Loss 0.3496 LR: 5.979e-02 Score 88.790 Data time: 2.3177, Total iter time: 6.0161 +thomas 04/09 05:08:19 ===> Epoch[174](52280/301): Loss 0.2889 LR: 5.976e-02 Score 90.650 Data time: 2.1292, Total iter time: 5.4985 +thomas 04/09 05:12:26 ===> Epoch[174](52320/301): Loss 0.2871 LR: 5.972e-02 Score 90.865 Data time: 2.3657, Total iter time: 6.0838 +thomas 04/09 05:16:26 ===> Epoch[174](52360/301): Loss 0.2780 LR: 5.969e-02 Score 91.311 Data time: 
2.3818, Total iter time: 5.9320 +thomas 04/09 05:20:41 ===> Epoch[175](52400/301): Loss 0.2945 LR: 5.966e-02 Score 90.798 Data time: 2.4871, Total iter time: 6.2963 +thomas 04/09 05:24:31 ===> Epoch[175](52440/301): Loss 0.2733 LR: 5.963e-02 Score 91.454 Data time: 2.2310, Total iter time: 5.6833 +thomas 04/09 05:28:28 ===> Epoch[175](52480/301): Loss 0.2961 LR: 5.960e-02 Score 90.775 Data time: 2.2765, Total iter time: 5.8496 +thomas 04/09 05:32:10 ===> Epoch[175](52520/301): Loss 0.3115 LR: 5.957e-02 Score 90.159 Data time: 2.1347, Total iter time: 5.4885 +thomas 04/09 05:36:06 ===> Epoch[175](52560/301): Loss 0.2872 LR: 5.953e-02 Score 90.644 Data time: 2.2762, Total iter time: 5.8149 +thomas 04/09 05:40:06 ===> Epoch[175](52600/301): Loss 0.2634 LR: 5.950e-02 Score 91.417 Data time: 2.3770, Total iter time: 5.9304 +thomas 04/09 05:44:10 ===> Epoch[175](52640/301): Loss 0.2553 LR: 5.947e-02 Score 91.774 Data time: 2.4378, Total iter time: 6.0154 +thomas 04/09 05:48:31 ===> Epoch[176](52680/301): Loss 0.3096 LR: 5.944e-02 Score 90.360 Data time: 2.5299, Total iter time: 6.4409 +thomas 04/09 05:52:24 ===> Epoch[176](52720/301): Loss 0.2947 LR: 5.941e-02 Score 90.554 Data time: 2.2011, Total iter time: 5.7480 +thomas 04/09 05:56:08 ===> Epoch[176](52760/301): Loss 0.2396 LR: 5.938e-02 Score 92.213 Data time: 2.1606, Total iter time: 5.5403 +thomas 04/09 06:00:01 ===> Epoch[176](52800/301): Loss 0.2794 LR: 5.934e-02 Score 91.293 Data time: 2.2501, Total iter time: 5.7540 +thomas 04/09 06:03:50 ===> Epoch[176](52840/301): Loss 0.2908 LR: 5.931e-02 Score 90.730 Data time: 2.2761, Total iter time: 5.6540 +thomas 04/09 06:07:48 ===> Epoch[176](52880/301): Loss 0.2653 LR: 5.928e-02 Score 91.521 Data time: 2.3373, Total iter time: 5.8653 +thomas 04/09 06:11:51 ===> Epoch[176](52920/301): Loss 0.2629 LR: 5.925e-02 Score 91.699 Data time: 2.3634, Total iter time: 6.0005 +thomas 04/09 06:15:49 ===> Epoch[176](52960/301): Loss 0.2441 LR: 5.922e-02 Score 92.193 Data time: 
2.2958, Total iter time: 5.8776 +thomas 04/09 06:19:31 ===> Epoch[177](53000/301): Loss 0.2895 LR: 5.918e-02 Score 90.739 Data time: 2.1147, Total iter time: 5.4608 +thomas 04/09 06:19:32 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 06:19:32 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 06:21:31 101/312: Data time: 0.0025, Iter time: 0.7994 Loss 0.163 (AVG: 0.652) Score 95.082 (AVG: 83.355) mIOU 58.151 mAP 69.711 mAcc 68.709 +IOU: 75.422 96.277 54.850 63.362 80.949 64.307 68.083 41.980 32.859 70.359 5.797 59.953 52.937 65.019 15.664 48.467 79.454 65.447 86.347 35.494 +mAP: 77.829 98.006 55.145 68.078 82.331 83.146 63.351 65.444 47.893 71.669 24.379 67.837 64.284 82.049 45.442 86.794 92.301 86.541 76.011 55.680 +mAcc: 92.901 98.445 61.309 78.821 83.078 97.010 73.298 60.210 34.724 86.791 5.918 89.736 66.746 73.328 17.116 58.402 85.404 67.139 93.890 49.915 + +thomas 04/09 06:23:34 201/312: Data time: 0.0024, Iter time: 0.6950 Loss 0.348 (AVG: 0.665) Score 89.736 (AVG: 82.983) mIOU 57.011 mAP 69.544 mAcc 67.075 +IOU: 74.943 96.005 52.607 65.237 80.374 65.941 67.231 41.130 26.514 69.653 11.300 53.822 56.331 66.377 21.073 42.747 79.947 55.725 76.921 36.352 +mAP: 77.845 97.432 54.330 72.738 83.782 81.179 67.534 63.685 48.505 67.237 37.718 64.767 60.679 80.500 44.373 81.269 93.257 81.742 76.325 55.992 +mAcc: 93.452 98.341 59.259 79.437 82.665 94.597 73.769 58.822 27.676 87.340 11.596 78.940 69.625 73.890 22.744 49.533 86.315 56.641 87.545 49.313 + +thomas 04/09 06:25:32 301/312: Data time: 0.0026, Iter time: 0.8522 Loss 0.346 (AVG: 0.660) Score 91.461 (AVG: 82.898) mIOU 56.490 mAP 70.014 mAcc 67.116 +IOU: 75.045 95.926 52.738 60.913 79.364 64.697 66.542 40.645 26.631 69.218 11.385 53.651 54.096 64.661 31.670 41.967 76.970 50.333 76.436 36.920 +mAP: 78.642 97.084 57.141 69.979 83.090 83.375 69.839 61.575 47.228 67.376 39.686 60.525 62.990 79.762 49.449 81.784 90.176 
82.639 82.336 55.610 +mAcc: 93.525 98.212 59.805 78.925 81.990 95.291 71.931 58.049 27.894 85.703 12.074 79.664 70.557 71.735 34.732 51.143 84.692 51.098 85.697 49.605 + +thomas 04/09 06:25:47 312/312: Data time: 0.0032, Iter time: 0.8232 Loss 1.120 (AVG: 0.664) Score 71.778 (AVG: 82.785) mIOU 56.407 mAP 69.844 mAcc 67.095 +IOU: 75.024 95.853 52.483 60.701 78.279 63.074 65.566 40.494 26.863 68.961 11.866 53.364 54.189 63.564 30.957 45.044 77.881 49.948 76.937 37.103 +mAP: 78.681 97.111 57.763 69.487 82.594 81.572 69.168 60.905 47.097 67.376 40.758 60.784 62.416 78.873 48.324 83.668 90.719 82.463 81.136 55.988 +mAcc: 93.501 98.160 59.791 78.698 80.873 94.690 71.180 57.920 28.144 85.703 12.541 79.864 70.499 70.389 33.912 54.126 85.303 50.826 85.662 50.120 + +thomas 04/09 06:25:47 Finished test. Elapsed time: 374.8573 +thomas 04/09 06:25:47 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 06:30:02 ===> Epoch[177](53040/301): Loss 0.3070 LR: 5.915e-02 Score 90.743 Data time: 2.4870, Total iter time: 6.2766 +thomas 04/09 06:34:10 ===> Epoch[177](53080/301): Loss 0.2689 LR: 5.912e-02 Score 91.420 Data time: 2.4321, Total iter time: 6.1312 +thomas 04/09 06:38:10 ===> Epoch[177](53120/301): Loss 0.2883 LR: 5.909e-02 Score 90.971 Data time: 2.3036, Total iter time: 5.9051 +thomas 04/09 06:42:01 ===> Epoch[177](53160/301): Loss 0.2451 LR: 5.906e-02 Score 92.229 Data time: 2.2320, Total iter time: 5.7201 +thomas 04/09 06:45:44 ===> Epoch[177](53200/301): Loss 0.2451 LR: 5.903e-02 Score 92.050 Data time: 2.1461, Total iter time: 5.4868 +thomas 04/09 06:49:42 ===> Epoch[177](53240/301): Loss 0.2584 LR: 5.899e-02 Score 91.643 Data time: 2.3019, Total iter time: 5.8726 +thomas 04/09 06:54:02 ===> Epoch[178](53280/301): Loss 0.2558 LR: 5.896e-02 Score 92.051 Data time: 2.5747, Total iter time: 6.4050 +thomas 04/09 06:58:05 ===> Epoch[178](53320/301): Loss 0.2709 LR: 5.893e-02 Score 91.581 Data time: 2.3910, Total iter time: 6.0061 +thomas 04/09 07:02:02 ===> Epoch[178](53360/301): Loss 0.2875 LR: 5.890e-02 Score 91.176 Data time: 2.2710, Total iter time: 5.8300 +thomas 04/09 07:05:52 ===> Epoch[178](53400/301): Loss 0.2872 LR: 5.887e-02 Score 90.850 Data time: 2.2367, Total iter time: 5.6760 +thomas 04/09 07:09:38 ===> Epoch[178](53440/301): Loss 0.2855 LR: 5.883e-02 Score 90.803 Data time: 2.1732, Total iter time: 5.5582 +thomas 04/09 07:13:49 ===> Epoch[178](53480/301): Loss 0.2550 LR: 5.880e-02 Score 91.871 Data time: 2.4566, Total iter time: 6.1940 +thomas 04/09 07:18:07 ===> Epoch[178](53520/301): Loss 0.2365 LR: 5.877e-02 Score 92.318 Data time: 2.5859, Total iter time: 6.3682 +thomas 04/09 07:22:05 ===> Epoch[178](53560/301): Loss 0.2573 LR: 5.874e-02 Score 91.577 Data time: 2.3487, Total iter time: 5.8819 +thomas 04/09 07:25:52 ===> Epoch[179](53600/301): Loss 
0.2899 LR: 5.871e-02 Score 90.963 Data time: 2.2324, Total iter time: 5.6134 +thomas 04/09 07:29:34 ===> Epoch[179](53640/301): Loss 0.2749 LR: 5.868e-02 Score 91.389 Data time: 2.1359, Total iter time: 5.4799 +thomas 04/09 07:33:36 ===> Epoch[179](53680/301): Loss 0.2708 LR: 5.864e-02 Score 91.496 Data time: 2.3547, Total iter time: 5.9812 +thomas 04/09 07:37:39 ===> Epoch[179](53720/301): Loss 0.2611 LR: 5.861e-02 Score 91.724 Data time: 2.4018, Total iter time: 6.0039 +thomas 04/09 07:41:44 ===> Epoch[179](53760/301): Loss 0.2883 LR: 5.858e-02 Score 90.980 Data time: 2.4477, Total iter time: 6.0539 +thomas 04/09 07:46:01 ===> Epoch[179](53800/301): Loss 0.2954 LR: 5.855e-02 Score 90.831 Data time: 2.5367, Total iter time: 6.3432 +thomas 04/09 07:49:56 ===> Epoch[179](53840/301): Loss 0.2707 LR: 5.852e-02 Score 91.337 Data time: 2.2941, Total iter time: 5.8072 +thomas 04/09 07:53:42 ===> Epoch[180](53880/301): Loss 0.3156 LR: 5.848e-02 Score 89.958 Data time: 2.1762, Total iter time: 5.5705 +thomas 04/09 07:57:37 ===> Epoch[180](53920/301): Loss 0.2771 LR: 5.845e-02 Score 91.695 Data time: 2.2711, Total iter time: 5.7907 +thomas 04/09 08:01:32 ===> Epoch[180](53960/301): Loss 0.2874 LR: 5.842e-02 Score 90.848 Data time: 2.2877, Total iter time: 5.8108 +thomas 04/09 08:05:49 ===> Epoch[180](54000/301): Loss 0.2788 LR: 5.839e-02 Score 91.180 Data time: 2.5179, Total iter time: 6.3429 +thomas 04/09 08:05:51 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 08:05:51 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 08:07:51 101/312: Data time: 0.0039, Iter time: 0.8462 Loss 0.617 (AVG: 0.582) Score 84.458 (AVG: 83.383) mIOU 59.914 mAP 71.050 mAcc 71.682 +IOU: 75.054 95.914 55.551 71.052 87.148 73.472 56.593 43.528 39.538 69.174 10.175 58.326 61.146 67.524 64.383 67.426 50.286 34.180 83.632 34.179 +mAP: 76.959 97.168 56.732 72.733 90.933 79.799 73.450 65.786 53.893 79.605 
39.782 56.390 74.053 76.391 58.401 94.078 78.254 71.728 82.590 42.279 +mAcc: 84.036 98.970 67.441 75.269 94.492 97.848 63.059 72.063 43.548 95.105 10.384 63.371 80.701 90.002 73.722 95.002 50.384 35.476 85.845 56.929 + +thomas 04/09 08:10:03 201/312: Data time: 0.0031, Iter time: 0.5741 Loss 0.812 (AVG: 0.644) Score 77.989 (AVG: 82.549) mIOU 57.553 mAP 68.861 mAcc 68.645 +IOU: 74.575 95.952 54.904 67.162 86.489 77.050 58.751 43.741 31.590 70.462 8.871 55.406 57.493 55.104 50.487 66.116 53.498 32.151 80.853 30.402 +mAP: 74.963 97.015 57.150 66.515 89.875 84.875 70.931 65.907 47.073 67.626 35.491 55.476 65.744 78.939 58.172 89.426 82.548 66.141 82.490 40.860 +mAcc: 84.127 98.827 67.491 72.418 93.638 97.899 66.989 67.452 34.709 92.580 8.958 63.362 74.611 89.891 55.122 79.202 53.660 33.355 82.967 55.648 + +thomas 04/09 08:12:02 301/312: Data time: 0.0023, Iter time: 0.7557 Loss 1.309 (AVG: 0.641) Score 70.156 (AVG: 82.599) mIOU 57.244 mAP 68.639 mAcc 67.854 +IOU: 74.584 96.046 55.219 68.266 85.976 78.495 60.758 43.222 35.450 69.095 7.242 58.243 58.448 52.197 42.217 61.946 50.891 39.907 76.010 30.675 +mAP: 75.564 97.111 59.115 68.372 89.206 82.491 69.439 63.380 50.104 65.300 36.413 58.506 67.109 77.241 50.820 89.097 84.439 72.404 75.486 41.179 +mAcc: 84.275 98.872 66.442 73.062 93.204 94.610 69.643 67.259 38.782 89.119 7.301 65.502 75.947 84.251 46.358 72.667 51.083 41.669 77.951 59.073 + +thomas 04/09 08:12:13 312/312: Data time: 0.0029, Iter time: 0.3173 Loss 0.245 (AVG: 0.640) Score 89.117 (AVG: 82.643) mIOU 57.369 mAP 68.715 mAcc 67.968 +IOU: 74.640 96.015 55.188 67.271 86.058 78.772 61.540 42.919 35.166 69.182 7.111 58.243 57.543 54.031 41.893 63.530 51.739 39.935 76.085 30.514 +mAP: 75.812 97.131 59.349 68.243 89.341 82.744 69.774 63.222 49.863 65.633 35.286 58.506 67.109 77.125 49.926 89.562 85.144 72.953 76.163 41.406 +mAcc: 84.239 98.877 66.415 71.917 93.285 94.773 70.115 67.186 38.634 88.576 7.172 65.502 75.947 85.406 45.975 74.694 51.925 41.734 78.022 58.973 
+ +thomas 04/09 08:12:13 Finished test. Elapsed time: 382.6353 +thomas 04/09 08:12:13 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 08:16:09 ===> Epoch[180](54040/301): Loss 0.2429 LR: 5.836e-02 Score 92.267 Data time: 2.2903, Total iter time: 5.8266 +thomas 04/09 08:19:57 ===> Epoch[180](54080/301): Loss 0.2657 LR: 5.833e-02 Score 91.457 Data time: 2.2096, Total iter time: 5.6175 +thomas 04/09 08:23:40 ===> Epoch[180](54120/301): Loss 0.2419 LR: 5.829e-02 Score 92.242 Data time: 2.1883, Total iter time: 5.5032 +thomas 04/09 08:27:41 ===> Epoch[180](54160/301): Loss 0.2779 LR: 5.826e-02 Score 90.901 Data time: 2.3399, Total iter time: 5.9350 +thomas 04/09 08:31:57 ===> Epoch[181](54200/301): Loss 0.2742 LR: 5.823e-02 Score 91.341 Data time: 2.4956, Total iter time: 6.3156 +thomas 04/09 08:36:01 ===> Epoch[181](54240/301): Loss 0.2626 LR: 5.820e-02 Score 91.676 Data time: 2.4044, Total iter time: 6.0384 +thomas 04/09 08:39:57 ===> Epoch[181](54280/301): Loss 0.2417 LR: 5.817e-02 Score 92.243 Data time: 2.2782, Total iter time: 5.8266 +thomas 04/09 08:43:53 ===> Epoch[181](54320/301): Loss 0.2629 LR: 5.813e-02 Score 92.007 Data time: 2.3055, Total iter time: 5.8169 +thomas 04/09 08:47:54 ===> Epoch[181](54360/301): Loss 0.2806 LR: 5.810e-02 Score 91.323 Data time: 2.3777, Total iter time: 5.9637 +thomas 04/09 08:52:02 ===> Epoch[181](54400/301): Loss 0.2761 LR: 5.807e-02 Score 91.240 Data time: 2.4392, Total iter time: 6.1265 +thomas 04/09 08:56:14 ===> Epoch[181](54440/301): Loss 0.2703 LR: 5.804e-02 Score 91.447 Data time: 2.4093, Total iter time: 6.2156 +thomas 04/09 09:00:12 ===> Epoch[181](54480/301): Loss 0.2865 LR: 5.801e-02 Score 90.978 
Data time: 2.3400, Total iter time: 5.8783 +thomas 04/09 09:04:12 ===> Epoch[182](54520/301): Loss 0.2740 LR: 5.797e-02 Score 91.389 Data time: 2.3250, Total iter time: 5.9223 +thomas 04/09 09:08:28 ===> Epoch[182](54560/301): Loss 0.2844 LR: 5.794e-02 Score 90.949 Data time: 2.4907, Total iter time: 6.3466 +thomas 04/09 09:12:33 ===> Epoch[182](54600/301): Loss 0.2663 LR: 5.791e-02 Score 91.517 Data time: 2.3988, Total iter time: 6.0389 +thomas 04/09 09:16:50 ===> Epoch[182](54640/301): Loss 0.3035 LR: 5.788e-02 Score 90.603 Data time: 2.5142, Total iter time: 6.3320 +thomas 04/09 09:21:03 ===> Epoch[182](54680/301): Loss 0.2520 LR: 5.785e-02 Score 92.029 Data time: 2.5086, Total iter time: 6.2526 +thomas 04/09 09:25:17 ===> Epoch[182](54720/301): Loss 0.2532 LR: 5.782e-02 Score 91.693 Data time: 2.4597, Total iter time: 6.2717 +thomas 04/09 09:29:17 ===> Epoch[182](54760/301): Loss 0.2755 LR: 5.778e-02 Score 91.250 Data time: 2.2967, Total iter time: 5.9209 +thomas 04/09 09:33:19 ===> Epoch[183](54800/301): Loss 0.2359 LR: 5.775e-02 Score 92.431 Data time: 2.3535, Total iter time: 5.9716 +thomas 04/09 09:37:11 ===> Epoch[183](54840/301): Loss 0.2708 LR: 5.772e-02 Score 91.225 Data time: 2.2758, Total iter time: 5.7187 +thomas 04/09 09:41:02 ===> Epoch[183](54880/301): Loss 0.2423 LR: 5.769e-02 Score 92.326 Data time: 2.2751, Total iter time: 5.7274 +thomas 04/09 09:45:05 ===> Epoch[183](54920/301): Loss 0.3106 LR: 5.766e-02 Score 90.592 Data time: 2.3722, Total iter time: 5.9995 +thomas 04/09 09:49:13 ===> Epoch[183](54960/301): Loss 0.2826 LR: 5.762e-02 Score 91.396 Data time: 2.4096, Total iter time: 6.1252 +thomas 04/09 09:53:25 ===> Epoch[183](55000/301): Loss 0.2957 LR: 5.759e-02 Score 90.926 Data time: 2.4702, Total iter time: 6.2272 +thomas 04/09 09:53:27 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 09:53:27 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 
04/09 09:55:40 101/312: Data time: 0.0027, Iter time: 0.6916 Loss 0.728 (AVG: 0.606) Score 85.780 (AVG: 83.852) mIOU 58.389 mAP 69.282 mAcc 68.944 +IOU: 75.979 96.286 54.595 58.508 88.720 69.695 73.037 42.592 28.105 62.027 6.207 47.487 56.814 45.236 49.641 53.967 90.471 53.983 74.795 39.626 +mAP: 77.232 96.747 56.423 71.120 87.414 81.325 74.860 61.337 48.080 60.188 25.048 56.145 54.084 74.623 65.553 91.537 97.219 73.713 79.843 53.153 +mAcc: 87.869 98.970 76.668 84.274 93.313 94.683 85.781 55.566 29.404 66.284 7.760 53.366 68.541 69.262 57.900 62.731 92.054 56.220 75.720 62.523 + +thomas 04/09 09:57:49 201/312: Data time: 0.2083, Iter time: 1.3872 Loss 0.545 (AVG: 0.600) Score 84.895 (AVG: 83.920) mIOU 58.280 mAP 70.346 mAcc 69.131 +IOU: 77.100 96.389 53.625 57.801 86.899 69.163 68.872 42.744 30.259 64.595 9.301 45.793 55.964 49.938 42.858 58.948 84.092 49.479 84.010 37.772 +mAP: 78.200 97.314 56.824 73.990 89.073 81.424 74.257 61.332 49.385 64.076 30.612 55.464 58.982 78.136 59.920 91.011 89.855 75.110 84.903 57.056 +mAcc: 88.998 98.879 76.308 82.500 91.553 95.359 81.602 53.442 31.111 71.182 12.726 51.188 67.678 79.801 49.924 68.376 85.129 51.163 84.943 60.763 + +thomas 04/09 10:00:02 301/312: Data time: 0.0966, Iter time: 0.7286 Loss 0.695 (AVG: 0.602) Score 78.841 (AVG: 83.722) mIOU 58.660 mAP 69.552 mAcc 68.944 +IOU: 76.288 96.277 53.328 60.716 88.108 70.779 69.745 40.648 27.464 67.493 10.942 46.982 56.611 56.076 45.024 59.468 83.562 44.673 80.804 38.217 +mAP: 76.591 96.888 56.806 74.675 89.590 80.214 73.923 57.049 46.261 66.130 33.902 54.186 59.581 74.937 59.819 89.317 89.274 74.446 81.283 56.164 +mAcc: 88.524 98.817 75.505 84.540 92.634 94.133 82.397 50.778 28.058 72.894 15.319 52.521 68.024 81.503 51.541 66.780 84.493 46.004 81.673 62.739 + +thomas 04/09 10:00:18 312/312: Data time: 0.0024, Iter time: 0.6753 Loss 0.586 (AVG: 0.612) Score 81.471 (AVG: 83.443) mIOU 58.489 mAP 69.434 mAcc 68.773 +IOU: 76.097 96.285 52.828 60.495 88.052 71.286 69.051 40.131 
27.138 67.384 11.060 45.966 56.243 56.979 44.581 59.464 83.759 44.576 80.776 37.627 +mAP: 76.278 96.850 55.967 73.635 89.257 80.522 73.420 56.841 45.411 66.498 34.227 54.913 59.674 75.738 59.189 89.317 89.468 74.599 81.579 55.295 +mAcc: 88.284 98.816 74.761 84.541 92.632 94.245 82.371 49.817 27.750 72.761 15.495 51.561 67.462 82.251 50.843 66.780 84.696 45.891 81.686 62.825 + +thomas 04/09 10:00:18 Finished test. Elapsed time: 411.1384 +thomas 04/09 10:00:18 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 10:04:32 ===> Epoch[183](55040/301): Loss 0.2712 LR: 5.756e-02 Score 91.370 Data time: 2.4538, Total iter time: 6.2659 +thomas 04/09 10:08:42 ===> Epoch[183](55080/301): Loss 0.2367 LR: 5.753e-02 Score 92.436 Data time: 2.4412, Total iter time: 6.1868 +thomas 04/09 10:12:58 ===> Epoch[184](55120/301): Loss 0.2642 LR: 5.750e-02 Score 91.520 Data time: 2.5289, Total iter time: 6.3332 +thomas 04/09 10:17:14 ===> Epoch[184](55160/301): Loss 0.2701 LR: 5.746e-02 Score 91.476 Data time: 2.4575, Total iter time: 6.3150 +thomas 04/09 10:21:39 ===> Epoch[184](55200/301): Loss 0.2759 LR: 5.743e-02 Score 91.051 Data time: 2.6039, Total iter time: 6.5529 +thomas 04/09 10:25:51 ===> Epoch[184](55240/301): Loss 0.2939 LR: 5.740e-02 Score 90.606 Data time: 2.4222, Total iter time: 6.2089 +thomas 04/09 10:30:02 ===> Epoch[184](55280/301): Loss 0.2652 LR: 5.737e-02 Score 91.586 Data time: 2.4262, Total iter time: 6.1983 +thomas 04/09 10:34:14 ===> Epoch[184](55320/301): Loss 0.2453 LR: 5.734e-02 Score 92.252 Data time: 2.4661, Total iter time: 6.2380 +thomas 04/09 10:38:36 ===> Epoch[184](55360/301): Loss 0.2533 LR: 5.730e-02 Score 91.872 Data time: 2.5187, Total 
iter time: 6.4717 +thomas 04/09 10:42:56 ===> Epoch[185](55400/301): Loss 0.2328 LR: 5.727e-02 Score 92.465 Data time: 2.5163, Total iter time: 6.4126 +thomas 04/09 10:47:08 ===> Epoch[185](55440/301): Loss 0.2666 LR: 5.724e-02 Score 91.289 Data time: 2.4634, Total iter time: 6.2189 +thomas 04/09 10:51:24 ===> Epoch[185](55480/301): Loss 0.2774 LR: 5.721e-02 Score 91.490 Data time: 2.5040, Total iter time: 6.3479 +thomas 04/09 10:55:37 ===> Epoch[185](55520/301): Loss 0.2678 LR: 5.718e-02 Score 91.479 Data time: 2.4532, Total iter time: 6.2382 +thomas 04/09 10:59:44 ===> Epoch[185](55560/301): Loss 0.2608 LR: 5.715e-02 Score 91.941 Data time: 2.4094, Total iter time: 6.0960 +thomas 04/09 11:04:12 ===> Epoch[185](55600/301): Loss 0.2356 LR: 5.711e-02 Score 92.519 Data time: 2.5443, Total iter time: 6.6158 +thomas 04/09 11:08:28 ===> Epoch[185](55640/301): Loss 0.2815 LR: 5.708e-02 Score 91.053 Data time: 2.5192, Total iter time: 6.3310 +thomas 04/09 11:12:34 ===> Epoch[185](55680/301): Loss 0.2753 LR: 5.705e-02 Score 91.220 Data time: 2.4258, Total iter time: 6.0789 +thomas 04/09 11:16:42 ===> Epoch[186](55720/301): Loss 0.2861 LR: 5.702e-02 Score 90.985 Data time: 2.3656, Total iter time: 6.1266 +thomas 04/09 11:20:56 ===> Epoch[186](55760/301): Loss 0.2923 LR: 5.699e-02 Score 90.711 Data time: 2.4898, Total iter time: 6.2750 +thomas 04/09 11:25:11 ===> Epoch[186](55800/301): Loss 0.2206 LR: 5.695e-02 Score 93.121 Data time: 2.4912, Total iter time: 6.3025 +thomas 04/09 11:29:29 ===> Epoch[186](55840/301): Loss 0.2421 LR: 5.692e-02 Score 92.302 Data time: 2.5193, Total iter time: 6.3650 +thomas 04/09 11:33:46 ===> Epoch[186](55880/301): Loss 0.2523 LR: 5.689e-02 Score 92.100 Data time: 2.5222, Total iter time: 6.3454 +thomas 04/09 11:37:50 ===> Epoch[186](55920/301): Loss 0.2417 LR: 5.686e-02 Score 92.394 Data time: 2.3592, Total iter time: 6.0135 +thomas 04/09 11:42:07 ===> Epoch[186](55960/301): Loss 0.2581 LR: 5.683e-02 Score 91.993 Data time: 2.4752, Total iter 
time: 6.3482 +thomas 04/09 11:46:31 ===> Epoch[187](56000/301): Loss 0.2626 LR: 5.679e-02 Score 91.447 Data time: 2.5650, Total iter time: 6.5021 +thomas 04/09 11:46:32 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 11:46:32 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 11:48:47 101/312: Data time: 0.0030, Iter time: 0.5725 Loss 1.606 (AVG: 0.636) Score 67.609 (AVG: 83.475) mIOU 60.756 mAP 70.940 mAcc 72.168 +IOU: 73.563 96.823 44.512 71.429 88.669 80.099 68.630 48.389 9.328 69.227 20.724 57.846 57.958 65.674 44.361 54.490 90.776 62.970 71.016 38.626 +mAP: 78.177 96.102 55.026 77.033 90.930 82.205 75.523 64.631 37.807 68.847 36.851 62.832 73.132 77.707 56.553 84.857 97.751 83.419 65.911 53.516 +mAcc: 81.753 98.865 81.293 85.970 94.019 86.083 74.362 70.294 9.408 81.241 24.867 85.682 80.812 72.220 49.789 57.229 91.993 66.368 72.346 78.758 + +thomas 04/09 11:51:08 201/312: Data time: 0.0028, Iter time: 0.4998 Loss 0.131 (AVG: 0.703) Score 96.102 (AVG: 81.988) mIOU 59.514 mAP 69.333 mAcc 71.053 +IOU: 72.706 96.667 38.698 71.643 89.354 80.908 68.272 45.548 12.599 61.803 12.043 56.936 59.355 54.773 46.737 50.919 84.254 63.375 83.958 39.734 +mAP: 77.345 96.878 49.232 71.130 90.230 83.185 73.279 63.076 38.455 66.927 33.591 60.877 68.242 68.503 52.935 82.771 94.998 85.296 77.977 51.733 +mAcc: 81.519 98.746 80.382 83.197 93.904 89.126 76.088 62.282 12.897 80.207 14.544 83.103 80.558 57.999 54.060 62.344 84.945 67.876 85.842 71.449 + +thomas 04/09 11:53:13 301/312: Data time: 0.0025, Iter time: 0.4859 Loss 1.126 (AVG: 0.706) Score 65.104 (AVG: 81.928) mIOU 59.431 mAP 69.251 mAcc 70.863 +IOU: 72.887 96.335 38.344 71.686 89.313 79.744 68.437 43.861 14.991 65.378 12.401 56.350 55.996 57.298 49.217 52.770 84.853 60.006 83.554 35.209 +mAP: 77.873 96.838 48.496 72.538 88.793 81.685 72.364 61.629 39.627 67.315 33.373 57.526 67.060 67.044 58.156 85.302 93.960 84.131 81.260 50.046 
+mAcc: 81.854 98.638 80.152 80.880 93.534 88.663 76.340 62.626 15.268 82.519 14.623 80.544 79.733 61.073 57.854 64.958 85.551 63.165 86.514 62.762 + +thomas 04/09 11:53:28 312/312: Data time: 0.0027, Iter time: 1.2908 Loss 0.786 (AVG: 0.706) Score 70.067 (AVG: 81.840) mIOU 59.436 mAP 69.368 mAcc 70.824 +IOU: 72.878 96.322 37.229 72.045 89.466 79.744 68.353 44.152 15.201 65.480 13.286 56.329 55.925 57.548 48.247 53.913 83.183 59.246 84.122 36.054 +mAP: 77.765 96.900 48.429 72.874 88.918 81.685 72.299 61.665 39.843 67.225 34.446 58.063 66.993 66.985 57.899 84.779 93.838 84.372 81.845 50.530 +mAcc: 81.833 98.635 80.063 81.670 93.613 88.663 76.466 63.112 15.471 81.961 15.603 80.704 79.163 61.378 56.852 65.994 83.952 62.168 86.968 62.216 + +thomas 04/09 11:53:28 Finished test. Elapsed time: 415.7978 +thomas 04/09 11:53:28 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 11:57:52 ===> Epoch[187](56040/301): Loss 0.2441 LR: 5.676e-02 Score 92.253 Data time: 2.5671, Total iter time: 6.5157 +thomas 04/09 12:01:48 ===> Epoch[187](56080/301): Loss 0.2660 LR: 5.673e-02 Score 91.922 Data time: 2.2809, Total iter time: 5.8200 +thomas 04/09 12:06:01 ===> Epoch[187](56120/301): Loss 0.2566 LR: 5.670e-02 Score 91.812 Data time: 2.4239, Total iter time: 6.2395 +thomas 04/09 12:10:22 ===> Epoch[187](56160/301): Loss 0.2550 LR: 5.667e-02 Score 91.740 Data time: 2.5833, Total iter time: 6.4455 +thomas 04/09 12:14:43 ===> Epoch[187](56200/301): Loss 0.2694 LR: 5.663e-02 Score 91.466 Data time: 2.5496, Total iter time: 6.4485 +thomas 04/09 12:19:14 ===> Epoch[187](56240/301): Loss 0.2388 LR: 5.660e-02 Score 92.361 Data time: 2.6601, Total iter time: 6.6860 +thomas 04/09 12:23:20 ===> Epoch[187](56280/301): Loss 0.2919 LR: 5.657e-02 Score 91.045 Data time: 2.4295, Total iter time: 6.0779 +thomas 04/09 12:27:32 ===> Epoch[188](56320/301): Loss 0.2774 LR: 5.654e-02 Score 91.138 Data time: 2.3881, Total iter time: 6.2252 +thomas 04/09 12:31:55 ===> Epoch[188](56360/301): Loss 0.2410 LR: 5.651e-02 Score 92.103 Data time: 2.5915, Total iter time: 6.5122 +thomas 04/09 12:35:52 ===> Epoch[188](56400/301): Loss 0.2176 LR: 5.647e-02 Score 93.115 Data time: 2.2969, Total iter time: 5.8450 +thomas 04/09 12:40:11 ===> Epoch[188](56440/301): Loss 0.2646 LR: 5.644e-02 Score 91.659 Data time: 2.5386, Total iter time: 6.3761 +thomas 04/09 12:44:17 ===> Epoch[188](56480/301): Loss 0.2451 LR: 5.641e-02 Score 92.131 Data time: 2.4066, Total iter time: 6.0732 +thomas 04/09 12:48:37 ===> Epoch[188](56520/301): Loss 0.2581 LR: 5.638e-02 Score 91.703 Data time: 2.5395, Total iter time: 6.4090 +thomas 04/09 12:52:26 ===> Epoch[188](56560/301): Loss 0.2553 LR: 5.635e-02 Score 91.902 Data time: 2.1978, Total iter time: 5.6743 +thomas 04/09 12:56:50 ===> Epoch[189](56600/301): Loss 
0.2612 LR: 5.631e-02 Score 91.360 Data time: 2.5574, Total iter time: 6.5087 +thomas 04/09 13:01:17 ===> Epoch[189](56640/301): Loss 0.2375 LR: 5.628e-02 Score 92.282 Data time: 2.5991, Total iter time: 6.6077 +thomas 04/09 13:05:29 ===> Epoch[189](56680/301): Loss 0.2876 LR: 5.625e-02 Score 91.420 Data time: 2.4884, Total iter time: 6.2200 +thomas 04/09 13:09:47 ===> Epoch[189](56720/301): Loss 0.2610 LR: 5.622e-02 Score 91.887 Data time: 2.5224, Total iter time: 6.3954 +thomas 04/09 13:14:05 ===> Epoch[189](56760/301): Loss 0.2472 LR: 5.619e-02 Score 92.136 Data time: 2.4526, Total iter time: 6.3657 +thomas 04/09 13:18:16 ===> Epoch[189](56800/301): Loss 0.2245 LR: 5.615e-02 Score 92.676 Data time: 2.3636, Total iter time: 6.2000 +thomas 04/09 13:22:40 ===> Epoch[189](56840/301): Loss 0.2699 LR: 5.612e-02 Score 91.334 Data time: 2.5692, Total iter time: 6.5081 +thomas 04/09 13:26:50 ===> Epoch[189](56880/301): Loss 0.2524 LR: 5.609e-02 Score 91.823 Data time: 2.4965, Total iter time: 6.1891 +thomas 04/09 13:31:37 ===> Epoch[190](56920/301): Loss 0.2478 LR: 5.606e-02 Score 91.944 Data time: 2.8178, Total iter time: 7.0778 +thomas 04/09 13:35:36 ===> Epoch[190](56960/301): Loss 0.2733 LR: 5.603e-02 Score 91.485 Data time: 2.3303, Total iter time: 5.9100 +thomas 04/09 13:39:38 ===> Epoch[190](57000/301): Loss 0.2541 LR: 5.599e-02 Score 92.055 Data time: 2.3376, Total iter time: 5.9673 +thomas 04/09 13:39:39 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 13:39:39 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 13:41:51 101/312: Data time: 0.0028, Iter time: 0.5927 Loss 0.384 (AVG: 0.568) Score 91.218 (AVG: 85.527) mIOU 59.441 mAP 69.498 mAcc 68.681 +IOU: 81.242 96.189 54.347 74.694 84.945 83.796 68.754 50.116 36.398 57.750 7.680 46.504 54.544 68.594 35.975 57.135 77.823 30.160 73.389 48.785 +mAP: 80.555 95.120 56.443 72.780 89.484 82.532 71.270 63.014 49.684 67.600 
27.775 52.021 68.211 82.222 62.415 75.948 89.033 65.784 82.019 56.042 +mAcc: 93.069 98.691 81.836 80.244 90.382 92.433 80.799 62.374 38.517 81.972 8.373 48.730 80.350 77.856 55.557 59.856 77.968 30.851 73.946 59.823 + +thomas 04/09 13:44:02 201/312: Data time: 0.1459, Iter time: 0.8371 Loss 0.191 (AVG: 0.564) Score 94.653 (AVG: 85.306) mIOU 59.334 mAP 69.385 mAcc 68.886 +IOU: 80.347 95.995 50.439 76.699 87.495 82.580 68.409 44.527 38.028 67.890 11.228 38.053 55.056 70.759 37.093 53.603 78.442 36.495 67.764 45.779 +mAP: 80.766 95.399 52.796 75.172 89.349 81.325 71.995 60.697 47.429 65.546 35.397 50.329 66.929 82.668 61.976 77.361 89.115 67.241 84.042 52.162 +mAcc: 92.461 98.501 77.501 83.682 92.415 94.436 79.865 55.059 40.386 90.539 12.754 39.910 82.769 77.133 58.644 59.987 78.890 37.424 68.121 57.234 + +thomas 04/09 13:46:12 301/312: Data time: 0.0024, Iter time: 0.3256 Loss 0.448 (AVG: 0.574) Score 85.118 (AVG: 85.285) mIOU 58.972 mAP 69.978 mAcc 68.380 +IOU: 80.023 95.852 50.381 74.553 89.027 82.130 67.835 43.799 36.506 70.145 11.432 41.273 53.266 68.968 37.768 47.664 80.781 34.722 70.612 42.709 +mAP: 80.762 94.897 53.740 73.990 90.378 83.173 72.594 60.259 48.603 69.021 37.695 52.314 66.181 79.846 63.529 80.152 88.892 66.002 83.230 54.293 +mAcc: 92.409 98.565 74.023 80.479 93.211 94.513 78.303 55.355 38.879 91.467 12.903 43.284 83.894 75.703 58.675 52.486 81.399 35.614 70.962 55.480 + +thomas 04/09 13:46:24 312/312: Data time: 0.0029, Iter time: 0.4751 Loss 0.530 (AVG: 0.585) Score 87.784 (AVG: 85.040) mIOU 58.418 mAP 69.502 mAcc 67.882 +IOU: 79.865 95.851 50.016 74.611 88.831 82.017 67.537 44.086 35.603 69.263 12.019 41.252 52.855 67.987 36.720 42.647 79.217 35.928 69.217 42.830 +mAP: 80.521 94.869 53.652 73.064 89.858 83.173 71.564 59.778 47.928 68.746 38.147 52.440 66.300 79.774 62.291 80.029 89.057 66.698 77.931 54.220 +mAcc: 92.298 98.581 73.631 80.490 93.229 94.513 78.109 55.987 37.838 91.468 13.544 43.273 82.692 76.022 58.098 45.848 79.786 36.793 69.609 
55.841 + +thomas 04/09 13:46:24 Finished test. Elapsed time: 404.9551 +thomas 04/09 13:46:24 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 13:50:57 ===> Epoch[190](57040/301): Loss 0.2776 LR: 5.596e-02 Score 91.432 Data time: 2.6524, Total iter time: 6.7314 +thomas 04/09 13:55:14 ===> Epoch[190](57080/301): Loss 0.2785 LR: 5.593e-02 Score 91.409 Data time: 2.5377, Total iter time: 6.3634 +thomas 04/09 13:59:39 ===> Epoch[190](57120/301): Loss 0.2523 LR: 5.590e-02 Score 91.899 Data time: 2.5591, Total iter time: 6.5305 +thomas 04/09 14:03:47 ===> Epoch[190](57160/301): Loss 0.2684 LR: 5.587e-02 Score 91.785 Data time: 2.3897, Total iter time: 6.1170 +thomas 04/09 14:07:50 ===> Epoch[191](57200/301): Loss 0.2626 LR: 5.583e-02 Score 91.523 Data time: 2.3494, Total iter time: 6.0024 +thomas 04/09 14:11:54 ===> Epoch[191](57240/301): Loss 0.2621 LR: 5.580e-02 Score 91.744 Data time: 2.3862, Total iter time: 6.0290 +thomas 04/09 14:16:24 ===> Epoch[191](57280/301): Loss 0.2621 LR: 5.577e-02 Score 91.720 Data time: 2.7359, Total iter time: 6.6733 +thomas 04/09 14:20:38 ===> Epoch[191](57320/301): Loss 0.2204 LR: 5.574e-02 Score 92.914 Data time: 2.4836, Total iter time: 6.2658 +thomas 04/09 14:24:52 ===> Epoch[191](57360/301): Loss 0.2463 LR: 5.571e-02 Score 92.251 Data time: 2.4487, Total iter time: 6.2704 +thomas 04/09 14:28:56 ===> Epoch[191](57400/301): Loss 0.2168 LR: 5.567e-02 Score 92.941 Data time: 2.3334, Total iter time: 6.0166 +thomas 04/09 14:33:10 ===> Epoch[191](57440/301): Loss 0.2485 LR: 5.564e-02 Score 92.052 Data time: 2.4291, Total iter time: 6.2755 +thomas 04/09 14:37:38 ===> Epoch[191](57480/301): Loss 0.2647 LR: 5.561e-02 Score 
91.838 Data time: 2.6317, Total iter time: 6.6284 +thomas 04/09 14:42:09 ===> Epoch[192](57520/301): Loss 0.2613 LR: 5.558e-02 Score 91.661 Data time: 2.7268, Total iter time: 6.6798 +thomas 04/09 14:46:15 ===> Epoch[192](57560/301): Loss 0.2399 LR: 5.555e-02 Score 92.398 Data time: 2.4139, Total iter time: 6.0781 +thomas 04/09 14:50:15 ===> Epoch[192](57600/301): Loss 0.2408 LR: 5.551e-02 Score 92.384 Data time: 2.2910, Total iter time: 5.9163 +thomas 04/09 14:54:30 ===> Epoch[192](57640/301): Loss 0.2591 LR: 5.548e-02 Score 91.597 Data time: 2.4587, Total iter time: 6.3024 +thomas 04/09 14:58:43 ===> Epoch[192](57680/301): Loss 0.2648 LR: 5.545e-02 Score 91.835 Data time: 2.4604, Total iter time: 6.2382 +thomas 04/09 15:03:10 ===> Epoch[192](57720/301): Loss 0.2508 LR: 5.542e-02 Score 92.215 Data time: 2.6073, Total iter time: 6.5723 +thomas 04/09 15:07:45 ===> Epoch[192](57760/301): Loss 0.2410 LR: 5.539e-02 Score 92.330 Data time: 2.7573, Total iter time: 6.7962 +thomas 04/09 15:12:04 ===> Epoch[193](57800/301): Loss 0.2203 LR: 5.535e-02 Score 92.809 Data time: 2.4874, Total iter time: 6.3993 +thomas 04/09 15:16:06 ===> Epoch[193](57840/301): Loss 0.2626 LR: 5.532e-02 Score 91.634 Data time: 2.3483, Total iter time: 5.9849 +thomas 04/09 15:19:48 ===> Epoch[193](57880/301): Loss 0.2257 LR: 5.529e-02 Score 92.694 Data time: 2.1150, Total iter time: 5.4723 +thomas 04/09 15:24:04 ===> Epoch[193](57920/301): Loss 0.2450 LR: 5.526e-02 Score 92.481 Data time: 2.5028, Total iter time: 6.3443 +thomas 04/09 15:28:30 ===> Epoch[193](57960/301): Loss 0.2647 LR: 5.523e-02 Score 91.752 Data time: 2.6661, Total iter time: 6.5565 +thomas 04/09 15:32:54 ===> Epoch[193](58000/301): Loss 0.2449 LR: 5.519e-02 Score 91.978 Data time: 2.6232, Total iter time: 6.5230 +thomas 04/09 15:32:55 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 15:32:56 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/09 15:35:07 101/312: Data time: 0.0031, Iter time: 0.5755 Loss 0.251 (AVG: 0.612) Score 91.858 (AVG: 83.153) mIOU 56.302 mAP 69.773 mAcc 66.642 +IOU: 74.424 96.396 47.054 69.341 91.822 78.291 63.577 43.166 45.543 77.511 12.874 46.993 56.241 49.200 14.405 10.396 84.423 54.243 70.247 39.885 +mAP: 77.760 97.917 55.482 69.709 90.169 79.669 77.938 68.971 52.538 75.364 35.577 59.393 58.696 67.052 30.726 73.463 95.801 82.353 86.961 59.925 +mAcc: 84.547 99.111 70.292 81.804 93.839 93.077 82.639 76.694 50.770 86.803 18.518 53.535 75.495 78.414 15.881 10.458 84.883 56.129 70.863 49.079 + +thomas 04/09 15:37:13 201/312: Data time: 0.0028, Iter time: 1.0008 Loss 0.899 (AVG: 0.604) Score 77.083 (AVG: 83.634) mIOU 57.927 mAP 70.909 mAcc 68.175 +IOU: 74.786 96.087 51.015 69.048 90.078 81.519 65.778 43.594 38.936 73.917 16.825 48.865 55.369 54.822 35.416 10.523 84.702 52.429 75.002 39.836 +mAP: 77.512 97.679 53.048 73.055 91.862 82.706 77.815 67.349 51.082 72.326 44.168 57.624 59.882 75.961 50.763 73.982 94.241 82.010 80.182 54.928 +mAcc: 85.065 98.989 73.066 81.834 92.446 93.760 83.217 78.217 42.213 83.612 24.081 53.261 77.590 81.948 37.844 10.786 85.096 54.545 75.735 50.192 + +thomas 04/09 15:39:22 301/312: Data time: 0.0025, Iter time: 0.4600 Loss 0.274 (AVG: 0.595) Score 91.675 (AVG: 83.610) mIOU 57.351 mAP 69.890 mAcc 67.565 +IOU: 75.089 96.126 51.528 67.666 89.447 79.859 66.378 41.415 38.867 69.217 13.802 52.591 57.024 54.506 34.684 10.055 79.769 51.710 77.124 40.171 +mAP: 77.711 97.411 52.230 71.539 91.398 85.118 72.909 63.099 49.932 72.143 36.912 57.201 62.523 75.125 50.802 72.943 91.211 81.298 82.170 54.123 +mAcc: 85.193 98.912 73.925 79.882 92.454 92.463 82.059 74.972 43.263 80.388 18.982 58.050 81.420 80.662 36.687 10.269 80.067 53.849 77.882 49.918 + +thomas 04/09 15:39:35 312/312: Data time: 0.0028, Iter time: 0.3291 Loss 0.231 (AVG: 0.588) Score 93.587 (AVG: 83.715) mIOU 57.364 mAP 69.624 mAcc 67.507 +IOU: 75.150 96.141 50.964 68.588 89.612 80.206 66.410 
41.471 38.352 69.815 13.823 52.247 57.334 54.500 35.263 10.055 78.365 51.691 77.124 40.181 +mAP: 77.649 97.444 51.830 71.475 91.240 84.311 72.549 63.032 49.735 71.929 36.595 57.261 62.500 73.987 51.449 72.943 89.697 81.298 82.170 53.385 +mAcc: 85.301 98.909 73.086 80.912 92.620 92.631 82.282 74.439 42.638 80.893 18.931 57.629 81.141 80.827 37.213 10.269 78.653 53.849 77.882 50.045 + +thomas 04/09 15:39:35 Finished test. Elapsed time: 399.0574 +thomas 04/09 15:39:35 Current best mIoU: 61.872 at iter 38000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 15:43:53 ===> Epoch[193](58040/301): Loss 0.2508 LR: 5.516e-02 Score 91.891 Data time: 2.5182, Total iter time: 6.3869 +thomas 04/09 15:48:00 ===> Epoch[193](58080/301): Loss 0.2436 LR: 5.513e-02 Score 92.191 Data time: 2.4093, Total iter time: 6.0973 +thomas 04/09 15:52:17 ===> Epoch[194](58120/301): Loss 0.2928 LR: 5.510e-02 Score 90.996 Data time: 2.5403, Total iter time: 6.3614 +thomas 04/09 15:55:54 ===> Epoch[194](58160/301): Loss 0.2557 LR: 5.507e-02 Score 91.881 Data time: 2.1237, Total iter time: 5.3401 +thomas 04/09 15:59:39 ===> Epoch[194](58200/301): Loss 0.2622 LR: 5.503e-02 Score 91.526 Data time: 2.1970, Total iter time: 5.5498 +thomas 04/09 16:03:21 ===> Epoch[194](58240/301): Loss 0.2927 LR: 5.500e-02 Score 90.914 Data time: 2.1702, Total iter time: 5.4912 +thomas 04/09 16:07:06 ===> Epoch[194](58280/301): Loss 0.2740 LR: 5.497e-02 Score 91.314 Data time: 2.2053, Total iter time: 5.5502 +thomas 04/09 16:10:52 ===> Epoch[194](58320/301): Loss 0.2547 LR: 5.494e-02 Score 91.875 Data time: 2.1815, Total iter time: 5.5750 +thomas 04/09 16:14:42 ===> Epoch[194](58360/301): Loss 0.2465 LR: 5.491e-02 Score 92.117 Data time: 
2.2630, Total iter time: 5.6600 +thomas 04/09 16:18:32 ===> Epoch[195](58400/301): Loss 0.2579 LR: 5.487e-02 Score 91.785 Data time: 2.2704, Total iter time: 5.6903 +thomas 04/09 16:22:24 ===> Epoch[195](58440/301): Loss 0.2456 LR: 5.484e-02 Score 92.130 Data time: 2.2591, Total iter time: 5.7074 +thomas 04/09 16:25:58 ===> Epoch[195](58480/301): Loss 0.2480 LR: 5.481e-02 Score 92.183 Data time: 2.1071, Total iter time: 5.2865 +thomas 04/09 16:30:48 ===> Epoch[195](58520/301): Loss 0.2698 LR: 5.478e-02 Score 91.304 Data time: 3.0345, Total iter time: 7.1781 +thomas 04/09 16:35:59 ===> Epoch[195](58560/301): Loss 0.2807 LR: 5.475e-02 Score 91.299 Data time: 3.3450, Total iter time: 7.6556 +thomas 04/09 16:41:19 ===> Epoch[195](58600/301): Loss 0.2518 LR: 5.471e-02 Score 92.046 Data time: 3.4792, Total iter time: 7.9182 +thomas 04/09 16:46:24 ===> Epoch[195](58640/301): Loss 0.2349 LR: 5.468e-02 Score 92.503 Data time: 3.2795, Total iter time: 7.5469 +thomas 04/09 16:51:13 ===> Epoch[195](58680/301): Loss 0.2272 LR: 5.465e-02 Score 92.523 Data time: 3.1277, Total iter time: 7.1485 +thomas 04/09 16:56:16 ===> Epoch[196](58720/301): Loss 0.2722 LR: 5.462e-02 Score 91.403 Data time: 3.2789, Total iter time: 7.4699 +thomas 04/09 17:01:23 ===> Epoch[196](58760/301): Loss 0.2768 LR: 5.458e-02 Score 91.157 Data time: 3.3223, Total iter time: 7.5877 +thomas 04/09 17:06:25 ===> Epoch[196](58800/301): Loss 0.2405 LR: 5.455e-02 Score 92.221 Data time: 3.2767, Total iter time: 7.4688 +thomas 04/09 17:11:08 ===> Epoch[196](58840/301): Loss 0.2269 LR: 5.452e-02 Score 92.858 Data time: 3.0386, Total iter time: 6.9973 +thomas 04/09 17:15:42 ===> Epoch[196](58880/301): Loss 0.2180 LR: 5.449e-02 Score 93.062 Data time: 2.9253, Total iter time: 6.7762 +thomas 04/09 17:20:36 ===> Epoch[196](58920/301): Loss 0.2336 LR: 5.446e-02 Score 92.522 Data time: 3.2041, Total iter time: 7.2818 +thomas 04/09 17:25:31 ===> Epoch[196](58960/301): Loss 0.2389 LR: 5.442e-02 Score 92.183 Data time: 
3.1962, Total iter time: 7.2828 +thomas 04/09 17:30:44 ===> Epoch[197](59000/301): Loss 0.2543 LR: 5.439e-02 Score 91.951 Data time: 3.3932, Total iter time: 7.7170 +thomas 04/09 17:30:46 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 17:30:46 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 17:33:35 101/312: Data time: 0.0025, Iter time: 0.4844 Loss 0.554 (AVG: 0.562) Score 81.955 (AVG: 84.761) mIOU 62.873 mAP 72.132 mAcc 71.991 +IOU: 77.064 95.421 53.708 76.117 88.715 75.467 67.914 43.373 47.679 74.265 15.158 48.653 63.214 76.326 54.874 48.292 75.442 59.822 79.151 36.806 +mAP: 74.617 97.463 63.748 70.910 90.022 80.750 72.924 58.542 51.531 65.234 50.156 56.980 69.481 84.700 74.922 86.171 88.868 87.269 68.967 49.376 +mAcc: 92.140 98.868 78.636 84.615 91.609 93.413 78.672 57.803 50.763 78.635 17.045 64.340 84.040 88.988 57.161 54.774 75.635 62.529 83.849 46.306 + +thomas 04/09 17:36:11 201/312: Data time: 0.0023, Iter time: 0.8019 Loss 0.311 (AVG: 0.532) Score 87.665 (AVG: 85.608) mIOU 62.533 mAP 70.960 mAcc 71.669 +IOU: 78.042 95.775 53.597 76.479 89.636 78.664 68.708 43.911 37.936 71.844 11.737 41.822 64.503 67.431 50.124 55.723 78.392 60.834 84.937 40.571 +mAP: 76.382 97.253 60.096 71.094 90.821 82.549 73.581 61.040 49.681 61.932 42.337 55.497 69.481 75.145 68.505 83.250 90.615 85.770 74.537 49.636 +mAcc: 92.666 98.900 77.909 83.448 92.967 94.272 78.572 57.865 40.366 76.146 13.843 62.168 83.116 80.557 52.727 64.479 78.531 64.207 89.139 51.513 + +thomas 04/09 17:38:43 301/312: Data time: 0.0024, Iter time: 1.6983 Loss 0.981 (AVG: 0.538) Score 68.231 (AVG: 85.410) mIOU 62.569 mAP 70.748 mAcc 71.921 +IOU: 78.318 95.818 56.584 75.350 88.663 79.377 68.225 43.136 39.280 71.268 11.032 50.955 61.474 60.252 49.761 55.656 81.629 58.381 85.645 40.579 +mAP: 76.830 97.185 59.471 72.953 90.378 83.169 71.276 60.393 49.804 64.877 38.233 57.637 69.090 70.241 65.940 83.583 92.028 
83.004 78.309 50.563 +mAcc: 92.377 98.951 78.572 82.248 92.099 93.990 77.462 57.233 42.124 76.569 13.468 67.208 82.467 77.454 53.070 66.656 81.746 62.361 90.152 52.207 + +thomas 04/09 17:38:59 312/312: Data time: 0.0049, Iter time: 0.4588 Loss 0.262 (AVG: 0.542) Score 93.009 (AVG: 85.282) mIOU 62.239 mAP 70.854 mAcc 71.654 +IOU: 78.303 95.842 56.517 75.208 88.656 79.194 67.943 42.983 39.394 70.779 10.838 51.524 60.602 60.995 46.260 56.348 80.314 58.403 85.075 39.596 +mAP: 77.071 97.259 60.010 73.352 90.272 83.153 70.156 60.292 49.392 64.882 37.833 59.597 69.216 70.885 65.244 84.125 92.027 83.072 78.926 50.317 +mAcc: 92.364 98.963 79.584 82.457 92.132 94.012 77.332 57.018 42.285 76.044 13.181 67.215 82.577 78.218 49.183 67.327 80.433 62.793 89.446 50.508 + +thomas 04/09 17:38:59 Finished test. Elapsed time: 493.3108 +thomas 04/09 17:39:01 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 17:39:01 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 17:43:46 ===> Epoch[197](59040/301): Loss 0.2683 LR: 5.436e-02 Score 91.555 Data time: 3.0487, Total iter time: 7.0419 +thomas 04/09 17:48:55 ===> Epoch[197](59080/301): Loss 0.2368 LR: 5.433e-02 Score 92.400 Data time: 3.3910, Total iter time: 7.6467 +thomas 04/09 17:54:00 ===> Epoch[197](59120/301): Loss 0.2415 LR: 5.430e-02 Score 92.117 Data time: 3.2703, Total iter time: 7.5300 +thomas 04/09 17:58:30 ===> Epoch[197](59160/301): Loss 0.2855 LR: 5.426e-02 Score 90.860 Data time: 2.9053, Total iter time: 6.6900 +thomas 04/09 18:03:36 ===> Epoch[197](59200/301): Loss 0.2303 LR: 5.423e-02 Score 92.595 Data time: 3.2900, Total iter time: 7.5528 +thomas 04/09 18:08:35 ===> Epoch[197](59240/301): Loss 0.2439 LR: 5.420e-02 Score 92.037 Data time: 3.2181, Total iter time: 7.4019 +thomas 04/09 18:13:44 ===> Epoch[197](59280/301): Loss 0.2424 LR: 5.417e-02 Score 92.230 Data time: 3.3654, Total iter time: 7.6282 +thomas 04/09 18:18:50 ===> Epoch[198](59320/301): Loss 0.2601 LR: 5.414e-02 Score 91.983 Data time: 3.3075, Total iter time: 7.5596 +thomas 04/09 18:23:50 ===> Epoch[198](59360/301): Loss 0.2562 LR: 5.410e-02 Score 91.806 Data time: 3.2104, Total iter time: 7.3900 +thomas 04/09 18:28:37 ===> Epoch[198](59400/301): Loss 0.2569 LR: 5.407e-02 Score 91.745 Data time: 3.0960, Total iter time: 7.1114 +thomas 04/09 18:34:02 ===> Epoch[198](59440/301): Loss 0.2758 LR: 5.404e-02 Score 91.397 Data time: 3.4591, Total iter time: 8.0316 +thomas 04/09 18:39:13 ===> Epoch[198](59480/301): Loss 0.2711 LR: 5.401e-02 Score 91.347 Data time: 3.3229, Total iter time: 7.6638 +thomas 04/09 18:44:11 ===> Epoch[198](59520/301): Loss 0.2375 LR: 5.397e-02 Score 92.474 Data time: 3.1434, Total iter time: 7.3657 +thomas 04/09 18:49:20 ===> Epoch[198](59560/301): Loss 0.2471 LR: 5.394e-02 Score 92.079 Data time: 3.2652, Total iter time: 7.6270 +thomas 04/09 18:54:02 ===> Epoch[199](59600/301): Loss 
0.2680 LR: 5.391e-02 Score 91.173 Data time: 3.0302, Total iter time: 6.9645 +thomas 04/09 18:58:55 ===> Epoch[199](59640/301): Loss 0.2782 LR: 5.388e-02 Score 90.990 Data time: 3.1905, Total iter time: 7.2385 +thomas 04/09 19:03:40 ===> Epoch[199](59680/301): Loss 0.2991 LR: 5.385e-02 Score 90.777 Data time: 3.1220, Total iter time: 7.0330 +thomas 04/09 19:08:28 ===> Epoch[199](59720/301): Loss 0.2579 LR: 5.381e-02 Score 91.729 Data time: 3.0792, Total iter time: 7.1275 +thomas 04/09 19:13:24 ===> Epoch[199](59760/301): Loss 0.2507 LR: 5.378e-02 Score 91.705 Data time: 3.2471, Total iter time: 7.3327 +thomas 04/09 19:18:20 ===> Epoch[199](59800/301): Loss 0.2378 LR: 5.375e-02 Score 92.394 Data time: 3.2209, Total iter time: 7.3119 +thomas 04/09 19:23:10 ===> Epoch[199](59840/301): Loss 0.2471 LR: 5.372e-02 Score 92.114 Data time: 3.1519, Total iter time: 7.1935 +thomas 04/09 19:27:29 ===> Epoch[199](59880/301): Loss 0.2598 LR: 5.369e-02 Score 91.446 Data time: 2.7763, Total iter time: 6.4022 +thomas 04/09 19:32:44 ===> Epoch[200](59920/301): Loss 0.2618 LR: 5.365e-02 Score 91.650 Data time: 3.3483, Total iter time: 7.7741 +thomas 04/09 19:37:36 ===> Epoch[200](59960/301): Loss 0.2478 LR: 5.362e-02 Score 92.208 Data time: 3.1081, Total iter time: 7.2029 +thomas 04/09 19:42:47 ===> Epoch[200](60000/301): Loss 0.2641 LR: 5.359e-02 Score 91.397 Data time: 3.3704, Total iter time: 7.6938 +thomas 04/09 19:42:48 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 19:42:48 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 19:45:34 101/312: Data time: 0.0100, Iter time: 0.5365 Loss 1.540 (AVG: 0.651) Score 49.458 (AVG: 83.002) mIOU 60.008 mAP 70.512 mAcc 70.632 +IOU: 77.195 96.346 45.055 68.157 82.332 71.145 70.119 43.395 41.871 74.474 19.405 48.872 63.028 39.068 45.451 41.178 90.369 57.155 90.835 34.716 +mAP: 79.361 96.120 59.592 60.019 85.074 79.296 69.165 59.595 49.817 73.982 
49.082 55.928 63.289 69.132 59.174 79.774 94.481 84.351 92.883 50.130 +mAcc: 87.316 98.608 77.960 78.514 90.184 94.112 81.181 65.172 55.457 80.711 22.512 66.361 79.196 39.517 59.482 44.653 91.909 61.432 92.761 45.602 + +thomas 04/09 19:48:12 201/312: Data time: 0.0031, Iter time: 1.5194 Loss 0.786 (AVG: 0.605) Score 80.684 (AVG: 83.921) mIOU 59.814 mAP 70.711 mAcc 69.987 +IOU: 77.514 96.377 45.090 73.518 85.820 74.427 70.869 44.791 39.250 74.708 18.037 54.545 60.758 35.751 44.729 40.592 89.862 49.294 85.650 34.704 +mAP: 79.054 96.319 56.457 71.726 88.005 78.999 68.101 59.119 53.009 68.643 44.035 56.307 68.870 70.581 53.509 84.513 94.816 79.826 90.251 52.073 +mAcc: 88.292 98.748 79.064 83.425 93.493 93.198 81.576 64.511 49.751 81.168 21.379 75.918 77.628 36.308 54.370 44.588 91.329 53.022 87.592 44.385 + +thomas 04/09 19:50:39 301/312: Data time: 0.0175, Iter time: 0.8674 Loss 0.654 (AVG: 0.617) Score 83.578 (AVG: 83.924) mIOU 59.393 mAP 70.392 mAcc 69.333 +IOU: 76.770 96.132 46.547 73.284 87.007 76.171 69.951 43.937 41.157 73.596 17.396 56.258 60.630 34.371 48.638 26.169 89.337 51.118 82.311 37.079 +mAP: 78.002 96.726 58.682 73.390 88.552 80.960 69.191 59.613 52.399 69.644 41.075 59.817 66.666 68.447 58.863 78.582 94.995 78.595 80.566 53.076 +mAcc: 88.313 98.665 78.981 84.107 93.969 93.039 80.924 65.039 50.609 81.378 20.558 76.434 77.280 35.087 58.718 27.403 90.936 54.790 84.330 46.101 + +thomas 04/09 19:50:54 312/312: Data time: 0.0024, Iter time: 0.3178 Loss 0.151 (AVG: 0.608) Score 94.306 (AVG: 84.087) mIOU 59.493 mAP 70.764 mAcc 69.381 +IOU: 76.964 96.184 46.212 73.488 87.095 76.117 70.371 44.548 41.468 73.520 18.084 56.159 60.422 34.995 46.509 26.670 89.418 51.471 82.679 37.476 +mAP: 78.481 96.791 58.384 73.802 88.807 80.960 70.181 60.415 52.743 69.644 42.364 60.318 66.608 69.018 59.968 78.947 94.617 78.586 81.017 53.620 +mAcc: 88.313 98.678 78.593 84.314 93.974 93.039 81.266 65.824 50.972 81.378 21.359 76.537 76.860 35.727 55.381 27.897 90.969 55.084 84.737 
46.728 + +thomas 04/09 19:50:54 Finished test. Elapsed time: 485.5568 +thomas 04/09 19:50:54 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 19:55:55 ===> Epoch[200](60040/301): Loss 0.2514 LR: 5.356e-02 Score 92.289 Data time: 3.2098, Total iter time: 7.4511 +thomas 04/09 20:00:46 ===> Epoch[200](60080/301): Loss 0.2471 LR: 5.352e-02 Score 92.146 Data time: 3.1495, Total iter time: 7.1818 +thomas 04/09 20:05:48 ===> Epoch[200](60120/301): Loss 0.2767 LR: 5.349e-02 Score 91.242 Data time: 3.2771, Total iter time: 7.4571 +thomas 04/09 20:10:34 ===> Epoch[200](60160/301): Loss 0.2532 LR: 5.346e-02 Score 92.083 Data time: 3.0948, Total iter time: 7.0748 +thomas 04/09 20:15:32 ===> Epoch[200](60200/301): Loss 0.2467 LR: 5.343e-02 Score 91.966 Data time: 3.1727, Total iter time: 7.3550 +thomas 04/09 20:20:25 ===> Epoch[201](60240/301): Loss 0.2548 LR: 5.340e-02 Score 91.959 Data time: 3.1831, Total iter time: 7.2388 +thomas 04/09 20:25:23 ===> Epoch[201](60280/301): Loss 0.2299 LR: 5.336e-02 Score 92.730 Data time: 3.2439, Total iter time: 7.3312 +thomas 04/09 20:30:03 ===> Epoch[201](60320/301): Loss 0.2348 LR: 5.333e-02 Score 92.511 Data time: 2.9939, Total iter time: 6.9283 +thomas 04/09 20:35:09 ===> Epoch[201](60360/301): Loss 0.2458 LR: 5.330e-02 Score 92.255 Data time: 3.2913, Total iter time: 7.5505 +thomas 04/09 20:40:02 ===> Epoch[201](60400/301): Loss 0.2358 LR: 5.327e-02 Score 92.473 Data time: 3.1975, Total iter time: 7.2409 +thomas 04/09 20:45:00 ===> Epoch[201](60440/301): Loss 0.2566 LR: 5.324e-02 Score 92.038 Data time: 3.2421, Total iter time: 7.3683 +thomas 04/09 20:49:52 ===> Epoch[201](60480/301): Loss 0.2508 LR: 5.320e-02 Score 
91.904 Data time: 3.1170, Total iter time: 7.2045 +thomas 04/09 20:54:26 ===> Epoch[202](60520/301): Loss 0.2428 LR: 5.317e-02 Score 92.119 Data time: 2.9613, Total iter time: 6.7838 +thomas 04/09 20:59:28 ===> Epoch[202](60560/301): Loss 0.2877 LR: 5.314e-02 Score 91.266 Data time: 3.2162, Total iter time: 7.4555 +thomas 04/09 21:04:19 ===> Epoch[202](60600/301): Loss 0.2512 LR: 5.311e-02 Score 92.009 Data time: 3.1556, Total iter time: 7.1704 +thomas 04/09 21:09:28 ===> Epoch[202](60640/301): Loss 0.2686 LR: 5.307e-02 Score 91.523 Data time: 3.3639, Total iter time: 7.6450 +thomas 04/09 21:13:56 ===> Epoch[202](60680/301): Loss 0.2215 LR: 5.304e-02 Score 92.706 Data time: 2.9025, Total iter time: 6.6331 +thomas 04/09 21:18:51 ===> Epoch[202](60720/301): Loss 0.2264 LR: 5.301e-02 Score 92.627 Data time: 3.1648, Total iter time: 7.2760 +thomas 04/09 21:23:43 ===> Epoch[202](60760/301): Loss 0.2365 LR: 5.298e-02 Score 92.565 Data time: 3.2049, Total iter time: 7.1928 +thomas 04/09 21:28:52 ===> Epoch[202](60800/301): Loss 0.2433 LR: 5.295e-02 Score 92.330 Data time: 3.3976, Total iter time: 7.6446 +thomas 04/09 21:33:54 ===> Epoch[203](60840/301): Loss 0.2351 LR: 5.291e-02 Score 92.468 Data time: 3.3071, Total iter time: 7.4670 +thomas 04/09 21:38:48 ===> Epoch[203](60880/301): Loss 0.2560 LR: 5.288e-02 Score 91.819 Data time: 3.1310, Total iter time: 7.2416 +thomas 04/09 21:43:52 ===> Epoch[203](60920/301): Loss 0.2798 LR: 5.285e-02 Score 91.115 Data time: 3.3235, Total iter time: 7.5092 +thomas 04/09 21:48:38 ===> Epoch[203](60960/301): Loss 0.2434 LR: 5.282e-02 Score 92.044 Data time: 3.1654, Total iter time: 7.0819 +thomas 04/09 21:53:40 ===> Epoch[203](61000/301): Loss 0.2276 LR: 5.278e-02 Score 92.595 Data time: 3.2207, Total iter time: 7.4493 +thomas 04/09 21:53:42 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 21:53:42 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/09 21:56:26 101/312: Data time: 0.0026, Iter time: 0.8534 Loss 0.104 (AVG: 0.588) Score 97.071 (AVG: 83.917) mIOU 58.652 mAP 70.895 mAcc 70.029 +IOU: 76.295 95.948 45.582 79.197 87.559 69.014 71.098 47.812 17.849 65.491 25.658 47.814 48.901 56.315 31.039 49.593 83.396 48.354 89.640 36.485 +mAP: 80.311 97.854 53.242 81.095 86.076 88.116 69.164 64.791 41.067 61.503 50.020 44.591 71.360 75.144 60.786 87.751 92.697 76.892 80.764 54.684 +mAcc: 87.974 98.854 74.265 83.454 91.954 95.247 79.039 66.873 18.210 75.776 30.941 56.520 81.639 78.919 41.948 57.688 88.532 50.987 90.745 51.014 + +thomas 04/09 21:58:59 201/312: Data time: 0.0046, Iter time: 0.8111 Loss 1.049 (AVG: 0.544) Score 78.164 (AVG: 84.930) mIOU 60.819 mAP 71.681 mAcc 71.508 +IOU: 77.498 96.105 45.210 81.146 88.060 76.539 69.989 47.329 22.122 68.970 23.560 44.785 59.332 67.687 32.223 51.652 82.980 54.906 81.715 44.571 +mAP: 80.149 97.760 55.009 83.165 88.143 83.184 69.619 62.214 46.297 66.517 43.182 49.567 74.106 80.522 59.844 86.926 91.669 80.503 81.983 53.264 +mAcc: 88.320 98.932 74.573 88.216 92.013 94.886 78.192 68.614 22.544 80.626 29.731 52.859 85.356 88.451 45.162 56.601 85.910 57.471 83.075 58.618 + +thomas 04/09 22:01:35 301/312: Data time: 0.0028, Iter time: 1.1940 Loss 1.039 (AVG: 0.567) Score 78.223 (AVG: 84.621) mIOU 61.047 mAP 71.962 mAcc 71.521 +IOU: 77.101 96.123 47.511 76.054 88.253 75.184 71.686 47.979 22.367 69.872 16.375 50.071 60.371 64.845 40.234 52.512 83.587 54.633 81.339 44.849 +mAP: 79.650 97.564 58.321 76.212 89.221 83.351 71.210 65.053 47.650 67.333 40.987 54.534 73.252 80.392 62.012 86.499 90.651 81.797 80.392 53.167 +mAcc: 88.272 98.982 78.581 82.440 92.129 93.955 80.571 68.057 23.010 81.519 19.795 57.772 85.035 85.959 53.626 56.529 85.842 57.485 82.632 58.218 + +thomas 04/09 22:01:57 312/312: Data time: 0.0030, Iter time: 0.6797 Loss 0.931 (AVG: 0.563) Score 83.436 (AVG: 84.724) mIOU 61.413 mAP 72.026 mAcc 71.880 +IOU: 77.118 96.094 48.875 75.877 88.373 75.401 71.674 
47.663 23.127 69.658 16.527 51.355 60.720 65.514 40.484 53.233 83.904 55.766 81.819 45.086 +mAP: 79.662 97.590 58.670 75.327 88.978 83.583 71.160 65.004 47.179 67.333 40.602 54.701 72.965 80.880 62.650 86.500 91.108 82.181 80.971 53.485 +mAcc: 88.284 98.951 79.418 82.175 92.257 94.113 80.710 67.729 23.757 81.519 19.901 59.008 84.913 86.486 54.474 57.462 86.153 58.570 83.140 58.573 + +thomas 04/09 22:01:57 Finished test. Elapsed time: 495.2446 +thomas 04/09 22:01:57 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 22:06:58 ===> Epoch[203](61040/301): Loss 0.2134 LR: 5.275e-02 Score 93.082 Data time: 3.2666, Total iter time: 7.4277 +thomas 04/09 22:12:10 ===> Epoch[203](61080/301): Loss 0.2480 LR: 5.272e-02 Score 91.946 Data time: 3.3969, Total iter time: 7.7173 +thomas 04/09 22:17:11 ===> Epoch[204](61120/301): Loss 0.2181 LR: 5.269e-02 Score 93.048 Data time: 3.2831, Total iter time: 7.4259 +thomas 04/09 22:22:06 ===> Epoch[204](61160/301): Loss 0.2321 LR: 5.266e-02 Score 92.587 Data time: 3.2279, Total iter time: 7.2670 +thomas 04/09 22:26:46 ===> Epoch[204](61200/301): Loss 0.2479 LR: 5.262e-02 Score 92.108 Data time: 3.0463, Total iter time: 6.9318 +thomas 04/09 22:31:46 ===> Epoch[204](61240/301): Loss 0.2682 LR: 5.259e-02 Score 91.662 Data time: 3.2216, Total iter time: 7.3995 +thomas 04/09 22:37:13 ===> Epoch[204](61280/301): Loss 0.2582 LR: 5.256e-02 Score 91.940 Data time: 3.5274, Total iter time: 8.0827 +thomas 04/09 22:42:06 ===> Epoch[204](61320/301): Loss 0.2511 LR: 5.253e-02 Score 91.989 Data time: 3.1559, Total iter time: 7.2509 +thomas 04/09 22:47:13 ===> Epoch[204](61360/301): Loss 0.2471 LR: 5.249e-02 Score 92.116 Data time: 
3.3057, Total iter time: 7.5688 +thomas 04/09 22:52:23 ===> Epoch[204](61400/301): Loss 0.2599 LR: 5.246e-02 Score 91.785 Data time: 3.3628, Total iter time: 7.6758 +thomas 04/09 22:57:17 ===> Epoch[205](61440/301): Loss 0.2541 LR: 5.243e-02 Score 92.037 Data time: 3.1950, Total iter time: 7.2663 +thomas 04/09 23:02:10 ===> Epoch[205](61480/301): Loss 0.2447 LR: 5.240e-02 Score 92.065 Data time: 3.1868, Total iter time: 7.2409 +thomas 04/09 23:06:57 ===> Epoch[205](61520/301): Loss 0.2709 LR: 5.237e-02 Score 91.652 Data time: 3.0986, Total iter time: 7.0675 +thomas 04/09 23:11:19 ===> Epoch[205](61560/301): Loss 0.2409 LR: 5.233e-02 Score 92.423 Data time: 2.8252, Total iter time: 6.4878 +thomas 04/09 23:16:09 ===> Epoch[205](61600/301): Loss 0.2423 LR: 5.230e-02 Score 92.185 Data time: 3.1364, Total iter time: 7.1427 +thomas 04/09 23:20:55 ===> Epoch[205](61640/301): Loss 0.2352 LR: 5.227e-02 Score 92.476 Data time: 3.1299, Total iter time: 7.0841 +thomas 04/09 23:25:45 ===> Epoch[205](61680/301): Loss 0.2665 LR: 5.224e-02 Score 91.466 Data time: 3.1227, Total iter time: 7.1573 +thomas 04/09 23:30:50 ===> Epoch[206](61720/301): Loss 0.2810 LR: 5.220e-02 Score 91.160 Data time: 3.3411, Total iter time: 7.5335 +thomas 04/09 23:35:53 ===> Epoch[206](61760/301): Loss 0.2662 LR: 5.217e-02 Score 91.660 Data time: 3.2944, Total iter time: 7.4818 +thomas 04/09 23:40:45 ===> Epoch[206](61800/301): Loss 0.2271 LR: 5.214e-02 Score 92.767 Data time: 3.1603, Total iter time: 7.2154 +thomas 04/09 23:46:00 ===> Epoch[206](61840/301): Loss 0.2523 LR: 5.211e-02 Score 92.167 Data time: 3.4256, Total iter time: 7.7830 +thomas 04/09 23:50:55 ===> Epoch[206](61880/301): Loss 0.2380 LR: 5.208e-02 Score 92.529 Data time: 3.2637, Total iter time: 7.3117 +thomas 04/09 23:55:37 ===> Epoch[206](61920/301): Loss 0.2391 LR: 5.204e-02 Score 92.465 Data time: 3.0278, Total iter time: 6.9599 +thomas 04/10 00:00:31 ===> Epoch[206](61960/301): Loss 0.2340 LR: 5.201e-02 Score 92.628 Data time: 
3.2048, Total iter time: 7.2538 +thomas 04/10 00:05:22 ===> Epoch[206](62000/301): Loss 0.2688 LR: 5.198e-02 Score 91.598 Data time: 3.1799, Total iter time: 7.1789 +thomas 04/10 00:05:23 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 00:05:23 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 00:07:58 101/312: Data time: 0.0050, Iter time: 0.5023 Loss 0.852 (AVG: 0.556) Score 75.862 (AVG: 85.257) mIOU 62.723 mAP 73.831 mAcc 73.421 +IOU: 78.783 96.298 47.979 62.308 89.078 79.667 71.679 47.378 30.029 71.504 15.509 60.135 61.465 61.744 47.820 55.388 86.801 55.294 90.301 45.305 +mAP: 78.625 97.059 61.985 63.625 90.492 84.720 73.835 65.983 49.773 69.386 52.796 58.449 63.118 91.499 55.550 97.178 90.768 82.851 91.866 57.067 +mAcc: 91.921 98.815 71.835 67.443 92.832 94.240 86.125 52.671 33.434 87.540 16.851 69.279 83.232 84.742 65.672 73.994 88.134 60.348 91.322 57.989 + +thomas 04/10 00:10:38 201/312: Data time: 0.0231, Iter time: 0.5404 Loss 0.397 (AVG: 0.541) Score 85.198 (AVG: 85.570) mIOU 62.402 mAP 72.356 mAcc 72.373 +IOU: 79.593 96.202 51.440 68.288 88.414 74.829 72.089 45.330 38.547 66.515 15.509 53.670 59.512 66.858 43.624 54.214 86.362 57.255 83.536 46.258 +mAP: 80.593 97.581 60.160 68.107 90.852 81.646 71.057 62.801 50.809 69.064 48.653 54.850 64.832 87.802 60.837 92.138 92.268 80.622 81.123 51.335 +mAcc: 92.063 98.798 76.533 76.977 92.597 92.862 84.908 50.351 42.528 85.148 16.497 62.924 78.721 87.223 54.440 60.113 87.285 63.832 84.595 59.068 + +thomas 04/10 00:13:18 301/312: Data time: 0.0139, Iter time: 0.4640 Loss 0.276 (AVG: 0.557) Score 92.484 (AVG: 85.451) mIOU 61.940 mAP 72.484 mAcc 71.807 +IOU: 79.325 96.103 51.312 71.003 88.397 74.526 71.819 44.303 39.192 65.224 13.439 56.000 58.365 64.422 49.700 44.634 85.800 55.903 84.496 44.844 +mAP: 80.469 97.267 61.482 72.368 90.538 81.249 72.698 61.158 49.498 66.877 44.138 57.329 65.054 85.086 67.202 90.918 93.033 
81.259 81.241 50.808 +mAcc: 92.160 98.734 76.810 80.224 93.112 91.145 83.510 49.153 42.660 85.963 14.226 63.491 79.348 85.859 60.378 48.141 86.798 61.891 85.582 56.946 + +thomas 04/10 00:13:39 312/312: Data time: 0.0033, Iter time: 1.0302 Loss 0.731 (AVG: 0.554) Score 84.721 (AVG: 85.499) mIOU 61.868 mAP 72.344 mAcc 71.668 +IOU: 79.384 96.152 51.158 71.892 88.034 74.876 71.618 44.235 38.529 65.198 13.172 55.999 59.052 63.418 49.009 44.634 85.775 55.903 84.496 44.833 +mAP: 80.354 97.284 61.487 71.740 90.135 81.766 72.578 60.710 49.455 67.107 43.639 57.329 65.851 83.659 66.741 90.918 93.033 81.259 81.241 50.591 +mAcc: 92.202 98.727 76.848 81.109 92.886 91.114 83.296 49.222 41.828 86.090 14.102 63.491 79.101 84.224 59.470 48.141 86.798 61.891 85.582 57.237 + +thomas 04/10 00:13:39 Finished test. Elapsed time: 495.6021 +thomas 04/10 00:13:39 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 00:18:27 ===> Epoch[207](62040/301): Loss 0.2524 LR: 5.195e-02 Score 91.855 Data time: 3.1169, Total iter time: 7.1270 +thomas 04/10 00:23:36 ===> Epoch[207](62080/301): Loss 0.2670 LR: 5.191e-02 Score 91.774 Data time: 3.3079, Total iter time: 7.6271 +thomas 04/10 00:28:39 ===> Epoch[207](62120/301): Loss 0.2733 LR: 5.188e-02 Score 91.376 Data time: 3.2932, Total iter time: 7.4938 +thomas 04/10 00:33:34 ===> Epoch[207](62160/301): Loss 0.2573 LR: 5.185e-02 Score 91.757 Data time: 3.1837, Total iter time: 7.2965 +thomas 04/10 00:38:06 ===> Epoch[207](62200/301): Loss 0.2141 LR: 5.182e-02 Score 92.920 Data time: 2.9419, Total iter time: 6.7172 +thomas 04/10 00:43:18 ===> Epoch[207](62240/301): Loss 0.2538 LR: 5.179e-02 Score 91.930 Data time: 3.3769, Total iter time: 7.7034 +thomas 04/10 00:48:23 ===> Epoch[207](62280/301): Loss 0.2570 LR: 5.175e-02 Score 92.007 Data time: 3.3190, Total iter time: 7.5433 +thomas 04/10 00:53:27 ===> Epoch[208](62320/301): Loss 0.2489 LR: 5.172e-02 Score 92.007 Data time: 3.3007, Total iter time: 7.4989 +thomas 04/10 00:58:26 ===> Epoch[208](62360/301): Loss 0.2443 LR: 5.169e-02 Score 92.206 Data time: 3.2785, Total iter time: 7.4045 +thomas 04/10 01:03:16 ===> Epoch[208](62400/301): Loss 0.2566 LR: 5.166e-02 Score 91.883 Data time: 3.1519, Total iter time: 7.1691 +thomas 04/10 01:08:08 ===> Epoch[208](62440/301): Loss 0.2488 LR: 5.162e-02 Score 92.184 Data time: 3.2046, Total iter time: 7.2211 +thomas 04/10 01:12:55 ===> Epoch[208](62480/301): Loss 0.2373 LR: 5.159e-02 Score 92.472 Data time: 3.1316, Total iter time: 7.0915 +thomas 04/10 01:17:56 ===> Epoch[208](62520/301): Loss 0.2422 LR: 5.156e-02 Score 92.292 Data time: 3.2556, Total iter time: 7.4506 +thomas 04/10 01:22:24 ===> Epoch[208](62560/301): Loss 0.2367 LR: 5.153e-02 Score 92.524 Data time: 2.8807, Total iter time: 6.6069 +thomas 04/10 01:27:41 ===> Epoch[208](62600/301): Loss 
0.2518 LR: 5.149e-02 Score 92.113 Data time: 3.4277, Total iter time: 7.8352 +thomas 04/10 01:32:42 ===> Epoch[209](62640/301): Loss 0.2729 LR: 5.146e-02 Score 91.274 Data time: 3.2553, Total iter time: 7.4526 +thomas 04/10 01:37:30 ===> Epoch[209](62680/301): Loss 0.2434 LR: 5.143e-02 Score 91.979 Data time: 3.1096, Total iter time: 7.1063 +thomas 04/10 01:42:29 ===> Epoch[209](62720/301): Loss 0.2487 LR: 5.140e-02 Score 92.152 Data time: 3.2676, Total iter time: 7.4044 +thomas 04/10 01:47:12 ===> Epoch[209](62760/301): Loss 0.2403 LR: 5.137e-02 Score 92.368 Data time: 3.0161, Total iter time: 6.9919 +thomas 04/10 01:51:39 ===> Epoch[209](62800/301): Loss 0.2507 LR: 5.133e-02 Score 92.059 Data time: 2.9106, Total iter time: 6.6026 +thomas 04/10 01:56:57 ===> Epoch[209](62840/301): Loss 0.2422 LR: 5.130e-02 Score 92.069 Data time: 3.4279, Total iter time: 7.8468 +thomas 04/10 02:02:07 ===> Epoch[209](62880/301): Loss 0.2363 LR: 5.127e-02 Score 92.480 Data time: 3.3745, Total iter time: 7.6721 +thomas 04/10 02:07:04 ===> Epoch[210](62920/301): Loss 0.2361 LR: 5.124e-02 Score 92.247 Data time: 3.1801, Total iter time: 7.3324 +thomas 04/10 02:12:01 ===> Epoch[210](62960/301): Loss 0.2234 LR: 5.120e-02 Score 92.903 Data time: 3.2110, Total iter time: 7.3229 +thomas 04/10 02:17:01 ===> Epoch[210](63000/301): Loss 0.2237 LR: 5.117e-02 Score 92.623 Data time: 3.2847, Total iter time: 7.4323 +thomas 04/10 02:17:03 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 02:17:03 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 02:19:40 101/312: Data time: 0.0033, Iter time: 0.7829 Loss 0.244 (AVG: 0.567) Score 92.740 (AVG: 84.600) mIOU 61.637 mAP 73.510 mAcc 72.400 +IOU: 78.674 96.084 56.060 69.114 85.016 77.614 62.708 40.660 43.238 71.574 25.441 51.951 49.698 69.658 39.647 47.721 81.059 57.953 85.250 43.617 +mAP: 80.953 97.450 59.033 73.552 88.001 84.742 67.437 59.564 55.552 80.330 
44.099 56.328 71.748 79.183 65.406 87.234 89.979 84.446 88.497 56.663 +mAcc: 91.515 98.764 73.882 87.066 88.509 96.591 68.577 49.745 51.129 95.471 35.144 63.186 90.505 73.274 46.684 51.124 81.435 63.243 88.232 53.918 + +thomas 04/10 02:22:32 201/312: Data time: 0.1071, Iter time: 1.7655 Loss 0.267 (AVG: 0.571) Score 91.193 (AVG: 84.791) mIOU 60.775 mAP 71.270 mAcc 71.178 +IOU: 77.266 96.348 56.090 67.221 88.279 79.244 64.485 41.916 45.730 71.695 16.658 50.395 47.623 61.458 36.820 51.844 76.455 54.623 84.697 46.649 +mAP: 77.976 98.008 58.696 70.571 88.303 83.626 69.050 57.416 53.951 71.976 36.756 54.546 69.335 72.059 61.212 87.760 88.900 82.734 85.561 56.969 +mAcc: 90.903 98.875 75.192 84.686 91.947 93.447 69.509 52.149 54.067 93.827 22.735 64.701 90.239 64.107 43.698 56.121 76.886 57.944 87.549 54.971 + +thomas 04/10 02:25:02 301/312: Data time: 0.0025, Iter time: 0.6295 Loss 0.433 (AVG: 0.562) Score 91.486 (AVG: 85.068) mIOU 61.329 mAP 71.648 mAcc 71.939 +IOU: 78.166 96.193 57.219 65.217 88.177 76.830 64.452 43.194 46.555 69.842 18.348 54.491 48.250 64.337 45.290 50.280 77.392 55.140 84.058 43.159 +mAP: 79.338 97.578 58.349 71.906 89.306 83.717 69.029 58.492 52.942 70.201 39.911 54.806 67.906 76.077 64.946 89.390 90.096 82.359 81.811 54.797 +mAcc: 90.850 98.754 76.666 84.460 91.499 93.487 69.335 54.693 54.407 93.276 25.081 67.771 90.680 68.331 53.704 52.902 77.752 58.381 86.516 50.238 + +thomas 04/10 02:25:17 312/312: Data time: 0.0025, Iter time: 0.9534 Loss 0.469 (AVG: 0.561) Score 86.032 (AVG: 85.130) mIOU 61.489 mAP 71.804 mAcc 71.967 +IOU: 78.138 96.219 56.736 66.036 88.258 76.775 64.704 42.813 46.292 70.085 18.752 54.895 49.216 63.421 46.045 50.280 77.600 55.962 84.058 43.499 +mAP: 79.122 97.623 57.921 72.550 89.541 83.346 69.285 58.221 53.100 70.690 40.399 54.801 68.400 76.370 65.314 89.390 90.288 82.571 81.811 55.348 +mAcc: 90.918 98.760 75.592 85.136 91.550 93.330 69.631 54.675 53.860 92.594 25.630 68.094 90.685 67.220 54.431 52.902 77.955 59.221 86.516 
50.641 + +thomas 04/10 02:25:17 Finished test. Elapsed time: 494.2968 +thomas 04/10 02:25:18 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 02:30:15 ===> Epoch[210](63040/301): Loss 0.2397 LR: 5.114e-02 Score 92.120 Data time: 3.2045, Total iter time: 7.3575 +thomas 04/10 02:35:23 ===> Epoch[210](63080/301): Loss 0.2433 LR: 5.111e-02 Score 92.284 Data time: 3.3774, Total iter time: 7.5942 +thomas 04/10 02:40:35 ===> Epoch[210](63120/301): Loss 0.2511 LR: 5.107e-02 Score 92.221 Data time: 3.3914, Total iter time: 7.7211 +thomas 04/10 02:45:38 ===> Epoch[210](63160/301): Loss 0.2183 LR: 5.104e-02 Score 93.115 Data time: 3.2639, Total iter time: 7.4854 +thomas 04/10 02:50:27 ===> Epoch[210](63200/301): Loss 0.2596 LR: 5.101e-02 Score 91.899 Data time: 3.1197, Total iter time: 7.1541 +thomas 04/10 02:55:34 ===> Epoch[211](63240/301): Loss 0.2764 LR: 5.098e-02 Score 90.872 Data time: 3.3619, Total iter time: 7.5932 +thomas 04/10 03:00:32 ===> Epoch[211](63280/301): Loss 0.2820 LR: 5.095e-02 Score 91.175 Data time: 3.2937, Total iter time: 7.3834 +thomas 04/10 03:05:42 ===> Epoch[211](63320/301): Loss 0.2427 LR: 5.091e-02 Score 92.421 Data time: 3.3059, Total iter time: 7.6533 +thomas 04/10 03:10:44 ===> Epoch[211](63360/301): Loss 0.2260 LR: 5.088e-02 Score 92.545 Data time: 3.3036, Total iter time: 7.4684 +thomas 04/10 03:15:59 ===> Epoch[211](63400/301): Loss 0.2059 LR: 5.085e-02 Score 93.264 Data time: 3.3967, Total iter time: 7.7983 +thomas 04/10 03:21:04 ===> Epoch[211](63440/301): Loss 0.2316 LR: 5.082e-02 Score 92.751 Data time: 3.2806, Total iter time: 7.5508 +thomas 04/10 03:26:38 ===> Epoch[211](63480/301): Loss 0.2227 LR: 5.078e-02 Score 
92.923 Data time: 3.5505, Total iter time: 8.2520 +thomas 04/10 03:31:25 ===> Epoch[212](63520/301): Loss 0.2278 LR: 5.075e-02 Score 92.898 Data time: 3.1241, Total iter time: 7.1215 +thomas 04/10 03:36:11 ===> Epoch[212](63560/301): Loss 0.2341 LR: 5.072e-02 Score 92.384 Data time: 3.0288, Total iter time: 7.0476 +thomas 04/10 03:41:20 ===> Epoch[212](63600/301): Loss 0.2599 LR: 5.069e-02 Score 91.470 Data time: 3.3033, Total iter time: 7.6427 +thomas 04/10 03:46:21 ===> Epoch[212](63640/301): Loss 0.2529 LR: 5.065e-02 Score 91.927 Data time: 3.2893, Total iter time: 7.4650 +thomas 04/10 03:51:21 ===> Epoch[212](63680/301): Loss 0.2308 LR: 5.062e-02 Score 92.727 Data time: 3.2316, Total iter time: 7.4210 +thomas 04/10 03:56:24 ===> Epoch[212](63720/301): Loss 0.2488 LR: 5.059e-02 Score 92.190 Data time: 3.3136, Total iter time: 7.4745 +thomas 04/10 04:01:25 ===> Epoch[212](63760/301): Loss 0.2346 LR: 5.056e-02 Score 92.501 Data time: 3.2141, Total iter time: 7.4563 +thomas 04/10 04:06:34 ===> Epoch[212](63800/301): Loss 0.2082 LR: 5.052e-02 Score 93.407 Data time: 3.3549, Total iter time: 7.6350 +thomas 04/10 04:11:53 ===> Epoch[213](63840/301): Loss 0.2171 LR: 5.049e-02 Score 92.962 Data time: 3.4373, Total iter time: 7.8872 +thomas 04/10 04:17:10 ===> Epoch[213](63880/301): Loss 0.2353 LR: 5.046e-02 Score 92.369 Data time: 3.4288, Total iter time: 7.8355 +thomas 04/10 04:21:56 ===> Epoch[213](63920/301): Loss 0.2691 LR: 5.043e-02 Score 91.574 Data time: 3.0750, Total iter time: 7.0648 +thomas 04/10 04:26:55 ===> Epoch[213](63960/301): Loss 0.2362 LR: 5.040e-02 Score 92.195 Data time: 3.1962, Total iter time: 7.3918 +thomas 04/10 04:31:37 ===> Epoch[213](64000/301): Loss 0.2228 LR: 5.036e-02 Score 92.898 Data time: 3.0574, Total iter time: 6.9639 +thomas 04/10 04:31:38 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 04:31:39 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
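A side note on reading the test summaries above: each block's mIOU is consistent with being the unweighted arithmetic mean of the 20 per-class IOU values printed beneath it (one per ScanNet evaluation class), not a point-weighted average. A minimal sketch checking this, with the per-class IOUs copied from the 312/312 summary logged at 04/09 19:50:54 (reported mIOU 59.493):

```python
# Per-class IOU values from the 312/312 test summary at 04/09 19:50:54.
# ScanNet evaluates 20 semantic classes, so there are 20 entries.
ious = [76.964, 96.184, 46.212, 73.488, 87.095, 76.117, 70.371, 44.548,
        41.468, 73.520, 18.084, 56.159, 60.422, 34.995, 46.509, 26.670,
        89.418, 51.471, 82.679, 37.476]

# Unweighted class mean; small rounding drift vs. the logged value is
# expected because the per-class IOUs are themselves rounded to 3 decimals.
miou = sum(ious) / len(ious)
print(f"mIOU {miou:.3f}")  # close to the logged "mIOU 59.493"
```

The same aggregation appears to apply to the mAP and mAcc rows, which are likewise followed by 20 per-class values in each summary.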
+thomas 04/10 04:34:25 101/312: Data time: 0.0503, Iter time: 1.3241 Loss 0.232 (AVG: 0.525) Score 94.386 (AVG: 86.398) mIOU 63.222 mAP 73.547 mAcc 73.325 +IOU: 80.601 95.893 59.720 73.263 88.285 73.511 64.821 37.941 35.167 74.711 15.977 56.404 54.764 69.069 57.303 42.025 84.718 58.057 86.379 55.843 +mAP: 82.348 97.358 58.188 78.826 88.856 79.582 73.141 57.440 56.445 69.905 41.531 50.425 61.181 84.833 74.760 90.803 93.761 86.628 87.266 57.652 +mAcc: 93.757 98.507 75.187 79.636 92.999 94.391 83.466 44.284 36.521 97.099 18.391 73.018 65.785 84.585 75.129 48.436 86.035 62.142 88.339 68.785 + +thomas 04/10 04:37:05 201/312: Data time: 0.0029, Iter time: 0.5870 Loss 0.547 (AVG: 0.543) Score 87.410 (AVG: 85.973) mIOU 62.034 mAP 73.315 mAcc 71.808 +IOU: 80.059 96.213 56.572 73.344 89.476 75.131 66.120 39.342 37.511 69.442 11.893 54.287 54.232 65.823 43.648 44.284 85.800 58.780 87.419 51.302 +mAP: 80.846 97.822 57.285 77.898 90.605 82.398 72.605 61.967 53.536 67.818 44.333 55.284 60.404 86.701 62.242 87.394 94.263 84.099 89.847 58.958 +mAcc: 93.404 98.723 71.274 81.620 94.067 95.131 85.291 45.323 39.072 91.494 13.958 69.200 63.660 87.580 52.526 49.021 87.659 63.765 89.004 64.399 + +thomas 04/10 04:39:58 301/312: Data time: 0.0036, Iter time: 1.0083 Loss 0.900 (AVG: 0.552) Score 77.854 (AVG: 85.866) mIOU 61.323 mAP 72.089 mAcc 71.011 +IOU: 79.902 96.285 57.426 71.889 88.311 71.477 66.001 41.881 36.614 73.266 14.628 55.145 54.247 62.427 44.058 34.826 85.120 58.617 84.544 49.788 +mAP: 79.813 97.754 58.725 74.572 90.442 83.308 69.624 59.828 51.441 67.411 43.597 55.629 60.674 83.582 61.403 86.573 94.176 83.852 83.554 55.827 +mAcc: 93.561 98.744 72.839 80.254 92.953 94.445 85.853 48.519 37.923 90.990 16.879 70.588 62.883 85.753 53.594 36.856 86.568 62.424 86.264 62.335 + +thomas 04/10 04:40:11 312/312: Data time: 0.0028, Iter time: 0.7417 Loss 0.504 (AVG: 0.550) Score 85.891 (AVG: 85.909) mIOU 61.335 mAP 72.229 mAcc 70.984 +IOU: 80.058 96.228 56.936 72.370 88.532 71.355 65.650 
42.022 36.344 72.758 15.485 54.934 53.662 62.172 43.104 35.162 85.736 58.615 85.873 49.708 +mAP: 80.126 97.728 58.408 75.033 90.553 83.192 69.992 60.465 51.698 67.411 44.071 55.647 60.590 83.095 60.704 87.071 94.400 83.852 84.520 56.028 +mAcc: 93.655 98.749 72.342 80.354 93.105 93.901 86.104 48.646 37.657 90.990 17.847 70.320 61.773 85.265 52.045 37.626 87.143 62.424 87.499 62.228 + +thomas 04/10 04:40:11 Finished test. Elapsed time: 512.9025 +thomas 04/10 04:40:11 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 04:45:29 ===> Epoch[213](64040/301): Loss 0.2434 LR: 5.033e-02 Score 92.056 Data time: 3.4127, Total iter time: 7.8355 +thomas 04/10 04:50:23 ===> Epoch[213](64080/301): Loss 0.2738 LR: 5.030e-02 Score 91.624 Data time: 3.1484, Total iter time: 7.2757 +thomas 04/10 04:55:37 ===> Epoch[214](64120/301): Loss 0.2415 LR: 5.027e-02 Score 92.171 Data time: 3.4347, Total iter time: 7.7676 +thomas 04/10 05:00:35 ===> Epoch[214](64160/301): Loss 0.2210 LR: 5.023e-02 Score 92.925 Data time: 3.1911, Total iter time: 7.3729 +thomas 04/10 05:05:19 ===> Epoch[214](64200/301): Loss 0.2340 LR: 5.020e-02 Score 92.260 Data time: 3.0470, Total iter time: 7.0347 +thomas 04/10 05:10:26 ===> Epoch[214](64240/301): Loss 0.2571 LR: 5.017e-02 Score 91.904 Data time: 3.3046, Total iter time: 7.5840 +thomas 04/10 05:15:32 ===> Epoch[214](64280/301): Loss 0.2390 LR: 5.014e-02 Score 92.472 Data time: 3.3032, Total iter time: 7.5763 +thomas 04/10 05:20:48 ===> Epoch[214](64320/301): Loss 0.2007 LR: 5.010e-02 Score 93.474 Data time: 3.3401, Total iter time: 7.8032 +thomas 04/10 05:25:46 ===> Epoch[214](64360/301): Loss 0.2311 LR: 5.007e-02 Score 92.738 Data time: 
3.1842, Total iter time: 7.3694 +thomas 04/10 05:30:48 ===> Epoch[214](64400/301): Loss 0.2466 LR: 5.004e-02 Score 91.967 Data time: 3.2947, Total iter time: 7.4732 +thomas 04/10 05:35:34 ===> Epoch[215](64440/301): Loss 0.2258 LR: 5.001e-02 Score 92.993 Data time: 3.0667, Total iter time: 7.0879 +thomas 04/10 05:40:23 ===> Epoch[215](64480/301): Loss 0.2383 LR: 4.997e-02 Score 92.949 Data time: 3.1584, Total iter time: 7.1495 +thomas 04/10 05:45:21 ===> Epoch[215](64520/301): Loss 0.2534 LR: 4.994e-02 Score 91.883 Data time: 3.1967, Total iter time: 7.3417 +thomas 04/10 05:50:06 ===> Epoch[215](64560/301): Loss 0.2345 LR: 4.991e-02 Score 92.494 Data time: 3.0575, Total iter time: 7.0529 +thomas 04/10 05:55:03 ===> Epoch[215](64600/301): Loss 0.2202 LR: 4.988e-02 Score 92.852 Data time: 3.2110, Total iter time: 7.3273 +thomas 04/10 06:00:12 ===> Epoch[215](64640/301): Loss 0.2133 LR: 4.984e-02 Score 93.138 Data time: 3.3912, Total iter time: 7.6548 +thomas 04/10 06:05:14 ===> Epoch[215](64680/301): Loss 0.2338 LR: 4.981e-02 Score 92.593 Data time: 3.2800, Total iter time: 7.4733 +thomas 04/10 06:10:33 ===> Epoch[216](64720/301): Loss 0.2005 LR: 4.978e-02 Score 93.619 Data time: 3.3813, Total iter time: 7.8742 +thomas 04/10 06:15:37 ===> Epoch[216](64760/301): Loss 0.2340 LR: 4.975e-02 Score 92.556 Data time: 3.3337, Total iter time: 7.5220 +thomas 04/10 06:20:21 ===> Epoch[216](64800/301): Loss 0.2317 LR: 4.971e-02 Score 92.563 Data time: 3.0945, Total iter time: 7.0180 +thomas 04/10 06:25:03 ===> Epoch[216](64840/301): Loss 0.2522 LR: 4.968e-02 Score 92.216 Data time: 3.0447, Total iter time: 6.9687 +thomas 04/10 06:30:07 ===> Epoch[216](64880/301): Loss 0.2823 LR: 4.965e-02 Score 91.263 Data time: 3.3231, Total iter time: 7.5049 +thomas 04/10 06:34:57 ===> Epoch[216](64920/301): Loss 0.2738 LR: 4.962e-02 Score 91.366 Data time: 3.1293, Total iter time: 7.1456 +thomas 04/10 06:40:00 ===> Epoch[216](64960/301): Loss 0.2580 LR: 4.959e-02 Score 91.765 Data time: 
3.2389, Total iter time: 7.4914 +thomas 04/10 06:44:52 ===> Epoch[216](65000/301): Loss 0.2631 LR: 4.955e-02 Score 91.778 Data time: 3.1941, Total iter time: 7.2174 +thomas 04/10 06:44:53 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 06:44:53 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 06:47:40 101/312: Data time: 0.0027, Iter time: 0.4425 Loss 1.382 (AVG: 0.750) Score 58.013 (AVG: 80.816) mIOU 49.859 mAP 66.358 mAcc 61.492 +IOU: 76.410 95.955 44.704 47.056 75.465 55.563 58.059 52.132 31.294 65.849 11.197 52.085 23.420 71.992 31.749 42.650 48.765 26.668 56.585 29.584 +mAP: 81.379 98.139 57.410 69.055 81.802 73.952 72.581 69.404 41.713 62.725 37.010 53.805 53.142 80.134 49.485 86.875 65.109 71.409 71.551 50.473 +mAcc: 87.512 98.181 73.673 81.494 78.330 96.254 70.787 66.348 33.535 89.388 13.400 71.927 24.576 74.145 41.639 47.832 48.889 26.789 56.936 48.200 + +thomas 04/10 06:50:15 201/312: Data time: 0.0030, Iter time: 0.6785 Loss 0.185 (AVG: 0.747) Score 94.654 (AVG: 81.557) mIOU 49.964 mAP 66.620 mAcc 61.517 +IOU: 77.846 95.532 48.664 51.876 79.413 55.180 60.806 46.137 26.893 67.571 12.550 56.471 24.158 68.189 40.534 18.901 54.089 31.247 55.060 28.163 +mAP: 80.910 97.564 55.314 70.358 85.148 75.222 72.667 62.906 37.917 70.935 34.868 57.051 54.733 77.892 56.417 79.478 77.486 74.963 62.553 48.018 +mAcc: 89.152 98.280 73.521 83.729 82.177 96.627 73.552 58.599 28.699 92.847 15.483 76.833 25.427 70.528 57.542 19.460 54.249 31.501 55.464 46.663 + +thomas 04/10 06:52:53 301/312: Data time: 0.0034, Iter time: 0.9214 Loss 0.103 (AVG: 0.761) Score 97.043 (AVG: 81.058) mIOU 49.546 mAP 66.704 mAcc 61.171 +IOU: 76.869 95.568 48.633 50.056 79.880 53.731 61.481 46.563 25.599 67.725 12.552 52.863 23.845 63.478 40.056 22.698 49.916 30.261 60.865 28.279 +mAP: 79.117 97.457 55.808 70.615 85.075 75.290 72.327 62.552 38.686 70.332 34.811 55.999 54.565 75.520 56.322 77.954 77.169 
75.119 70.912 48.457 +mAcc: 88.531 98.298 71.724 85.410 82.609 94.707 74.293 60.698 27.253 91.269 14.889 75.928 24.815 66.159 53.223 23.508 50.146 30.476 61.421 48.066 + +thomas 04/10 06:53:07 312/312: Data time: 0.0046, Iter time: 1.6137 Loss 0.734 (AVG: 0.759) Score 79.659 (AVG: 81.129) mIOU 49.687 mAP 66.841 mAcc 61.317 +IOU: 76.885 95.566 49.139 50.679 80.058 54.066 61.400 46.335 24.877 68.122 12.527 53.443 23.891 62.066 42.056 23.565 50.131 29.801 60.621 28.515 +mAP: 78.968 97.448 56.656 71.138 85.336 75.824 72.231 62.723 37.874 70.552 34.552 56.466 54.227 75.394 58.106 77.770 76.954 75.656 71.028 47.915 +mAcc: 88.549 98.287 72.253 85.925 82.852 94.802 73.904 60.411 26.620 91.389 14.996 76.742 24.992 64.642 55.524 24.394 50.351 30.007 61.168 48.532 + +thomas 04/10 06:53:07 Finished test. Elapsed time: 493.4850 +thomas 04/10 06:53:07 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 06:58:07 ===> Epoch[217](65040/301): Loss 0.2525 LR: 4.952e-02 Score 92.131 Data time: 3.2580, Total iter time: 7.4162 +thomas 04/10 07:03:08 ===> Epoch[217](65080/301): Loss 0.2316 LR: 4.949e-02 Score 92.269 Data time: 3.2712, Total iter time: 7.4384 +thomas 04/10 07:07:53 ===> Epoch[217](65120/301): Loss 0.2393 LR: 4.946e-02 Score 92.204 Data time: 3.1356, Total iter time: 7.0553 +thomas 04/10 07:12:58 ===> Epoch[217](65160/301): Loss 0.2226 LR: 4.942e-02 Score 92.985 Data time: 3.2982, Total iter time: 7.5307 +thomas 04/10 07:17:57 ===> Epoch[217](65200/301): Loss 0.2391 LR: 4.939e-02 Score 92.129 Data time: 3.1992, Total iter time: 7.3992 +thomas 04/10 07:23:12 ===> Epoch[217](65240/301): Loss 0.2164 LR: 4.936e-02 Score 92.853 Data time: 3.3777, Total iter time: 7.7821 +thomas 04/10 07:28:17 ===> Epoch[217](65280/301): Loss 0.2128 LR: 4.933e-02 Score 93.517 Data time: 3.3087, Total iter time: 7.5259 +thomas 04/10 07:32:58 ===> Epoch[218](65320/301): Loss 0.2177 LR: 4.929e-02 Score 93.050 Data time: 3.0682, Total iter time: 6.9535 +thomas 04/10 07:38:05 ===> Epoch[218](65360/301): Loss 0.2296 LR: 4.926e-02 Score 92.441 Data time: 3.3281, Total iter time: 7.6118 +thomas 04/10 07:42:50 ===> Epoch[218](65400/301): Loss 0.2827 LR: 4.923e-02 Score 91.405 Data time: 3.0681, Total iter time: 7.0333 +thomas 04/10 07:47:48 ===> Epoch[218](65440/301): Loss 0.2761 LR: 4.920e-02 Score 91.071 Data time: 3.2017, Total iter time: 7.3671 +thomas 04/10 07:53:06 ===> Epoch[218](65480/301): Loss 0.2515 LR: 4.916e-02 Score 92.159 Data time: 3.4484, Total iter time: 7.8513 +thomas 04/10 07:57:58 ===> Epoch[218](65520/301): Loss 0.2397 LR: 4.913e-02 Score 92.152 Data time: 3.1627, Total iter time: 7.2070 +thomas 04/10 08:02:34 ===> Epoch[218](65560/301): Loss 0.2571 LR: 4.910e-02 Score 91.880 Data time: 2.9622, Total iter time: 6.8171 +thomas 04/10 08:07:40 ===> Epoch[218](65600/301): Loss 
0.2504 LR: 4.907e-02 Score 92.252 Data time: 3.3224, Total iter time: 7.5707 +thomas 04/10 08:12:37 ===> Epoch[219](65640/301): Loss 0.2268 LR: 4.903e-02 Score 92.657 Data time: 3.2041, Total iter time: 7.3481 +thomas 04/10 08:17:39 ===> Epoch[219](65680/301): Loss 0.2095 LR: 4.900e-02 Score 92.999 Data time: 3.2767, Total iter time: 7.4850 +thomas 04/10 08:22:20 ===> Epoch[219](65720/301): Loss 0.2431 LR: 4.897e-02 Score 92.355 Data time: 3.0676, Total iter time: 6.9349 +thomas 04/10 08:27:28 ===> Epoch[219](65760/301): Loss 0.2304 LR: 4.894e-02 Score 92.664 Data time: 3.3129, Total iter time: 7.6201 +thomas 04/10 08:32:40 ===> Epoch[219](65800/301): Loss 0.2221 LR: 4.890e-02 Score 92.756 Data time: 3.3395, Total iter time: 7.7107 +thomas 04/10 08:37:38 ===> Epoch[219](65840/301): Loss 0.2304 LR: 4.887e-02 Score 92.609 Data time: 3.2441, Total iter time: 7.3646 +thomas 04/10 08:42:42 ===> Epoch[219](65880/301): Loss 0.2460 LR: 4.884e-02 Score 92.014 Data time: 3.3030, Total iter time: 7.5209 +thomas 04/10 08:47:44 ===> Epoch[220](65920/301): Loss 0.2461 LR: 4.881e-02 Score 92.106 Data time: 3.2675, Total iter time: 7.4606 +thomas 04/10 08:52:31 ===> Epoch[220](65960/301): Loss 0.2364 LR: 4.877e-02 Score 92.619 Data time: 3.1329, Total iter time: 7.0842 +thomas 04/10 08:57:34 ===> Epoch[220](66000/301): Loss 0.2494 LR: 4.874e-02 Score 91.842 Data time: 3.2456, Total iter time: 7.4760 +thomas 04/10 08:57:35 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 08:57:35 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 09:00:10 101/312: Data time: 0.0030, Iter time: 0.8922 Loss 0.458 (AVG: 0.569) Score 88.526 (AVG: 85.173) mIOU 63.038 mAP 71.995 mAcc 72.325 +IOU: 76.326 96.263 61.938 59.291 86.303 71.164 72.913 46.391 37.497 74.463 9.513 61.990 52.460 52.299 60.802 58.986 91.991 61.298 85.621 43.255 +mAP: 75.413 97.805 64.792 68.227 90.077 84.210 73.684 62.309 46.876 68.046 
33.213 53.152 61.131 74.689 57.086 92.587 97.924 86.811 91.509 60.368 +mAcc: 89.127 98.784 77.262 79.571 94.128 95.694 83.276 68.398 43.410 84.218 10.061 67.706 61.711 69.828 65.411 60.471 93.475 62.985 86.832 54.159 + +thomas 04/10 09:02:42 201/312: Data time: 0.0043, Iter time: 1.1507 Loss 0.902 (AVG: 0.576) Score 79.170 (AVG: 84.810) mIOU 60.794 mAP 70.498 mAcc 70.393 +IOU: 77.495 96.142 54.592 68.884 84.179 67.777 70.761 49.814 37.514 70.897 9.883 57.417 52.961 56.470 44.292 40.799 88.072 61.610 83.584 42.744 +mAP: 75.769 97.548 60.862 77.201 89.377 82.351 74.101 65.171 47.353 61.911 34.342 56.124 61.404 73.744 49.426 86.407 93.302 82.113 82.586 58.876 +mAcc: 88.863 98.688 68.311 85.479 91.776 94.935 82.406 71.563 43.758 81.572 11.087 65.609 62.496 72.561 47.590 43.280 89.367 63.510 84.664 60.336 + +thomas 04/10 09:05:34 301/312: Data time: 0.0029, Iter time: 0.4961 Loss 0.603 (AVG: 0.603) Score 86.993 (AVG: 84.435) mIOU 58.856 mAP 69.712 mAcc 68.674 +IOU: 77.219 96.271 55.138 66.239 84.565 67.132 68.218 47.605 36.348 69.272 11.899 56.132 54.854 56.899 43.555 30.729 84.728 49.781 77.933 42.614 +mAP: 75.756 97.689 58.567 75.971 88.662 82.653 72.426 63.258 48.728 66.192 36.900 53.716 64.889 75.289 47.770 85.160 91.367 74.236 78.573 56.442 +mAcc: 88.709 98.710 68.285 86.408 92.314 95.016 79.779 67.838 42.861 80.502 12.854 62.846 64.687 78.765 45.968 32.115 86.268 50.984 78.955 59.619 + +thomas 04/10 09:05:53 312/312: Data time: 0.0100, Iter time: 0.7106 Loss 1.335 (AVG: 0.605) Score 67.696 (AVG: 84.372) mIOU 59.127 mAP 69.842 mAcc 69.057 +IOU: 77.230 96.269 55.647 65.700 84.303 66.897 67.394 47.595 37.089 68.299 12.452 55.827 55.229 59.464 43.633 33.465 85.241 49.021 79.798 41.988 +mAP: 75.886 97.706 58.166 75.139 88.598 81.721 72.377 63.486 49.131 66.192 37.732 52.190 64.781 76.687 48.485 86.447 91.688 74.413 80.490 55.525 +mAcc: 88.533 98.691 68.773 86.585 92.469 94.378 79.112 67.451 43.799 80.502 13.432 62.915 64.968 80.525 47.158 34.875 86.855 50.204 80.978 
58.930 + +thomas 04/10 09:05:53 Finished test. Elapsed time: 497.9512 +thomas 04/10 09:05:53 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 09:10:35 ===> Epoch[220](66040/301): Loss 0.2533 LR: 4.871e-02 Score 91.898 Data time: 3.0663, Total iter time: 6.9566 +thomas 04/10 09:15:26 ===> Epoch[220](66080/301): Loss 0.2443 LR: 4.868e-02 Score 92.105 Data time: 3.1187, Total iter time: 7.2095 +thomas 04/10 09:20:37 ===> Epoch[220](66120/301): Loss 0.2435 LR: 4.864e-02 Score 92.228 Data time: 3.3615, Total iter time: 7.6671 +thomas 04/10 09:25:36 ===> Epoch[220](66160/301): Loss 0.2336 LR: 4.861e-02 Score 92.372 Data time: 3.2434, Total iter time: 7.4070 +thomas 04/10 09:30:30 ===> Epoch[220](66200/301): Loss 0.2106 LR: 4.858e-02 Score 93.040 Data time: 3.2339, Total iter time: 7.2556 +thomas 04/10 09:35:21 ===> Epoch[221](66240/301): Loss 0.2436 LR: 4.855e-02 Score 92.478 Data time: 3.1044, Total iter time: 7.1967 +thomas 04/10 09:40:23 ===> Epoch[221](66280/301): Loss 0.2239 LR: 4.851e-02 Score 92.820 Data time: 3.2687, Total iter time: 7.4479 +thomas 04/10 09:45:09 ===> Epoch[221](66320/301): Loss 0.2752 LR: 4.848e-02 Score 91.735 Data time: 3.0544, Total iter time: 7.0720 +thomas 04/10 09:50:12 ===> Epoch[221](66360/301): Loss 0.2363 LR: 4.845e-02 Score 92.592 Data time: 3.3435, Total iter time: 7.4747 +thomas 04/10 09:55:34 ===> Epoch[221](66400/301): Loss 0.2326 LR: 4.842e-02 Score 92.731 Data time: 3.5439, Total iter time: 7.9456 +thomas 04/10 10:00:21 ===> Epoch[221](66440/301): Loss 0.2275 LR: 4.838e-02 Score 92.569 Data time: 3.1419, Total iter time: 7.0852 +thomas 04/10 10:04:44 ===> Epoch[221](66480/301): Loss 0.2409 LR: 4.835e-02 Score 
92.218 Data time: 2.6320, Total iter time: 6.5016 +thomas 04/10 10:09:10 ===> Epoch[221](66520/301): Loss 0.2252 LR: 4.832e-02 Score 92.826 Data time: 2.7141, Total iter time: 6.5590 +thomas 04/10 10:13:41 ===> Epoch[222](66560/301): Loss 0.2051 LR: 4.829e-02 Score 93.245 Data time: 2.8345, Total iter time: 6.6810 +thomas 04/10 10:18:03 ===> Epoch[222](66600/301): Loss 0.2360 LR: 4.825e-02 Score 92.557 Data time: 2.7232, Total iter time: 6.4621 +thomas 04/10 10:22:47 ===> Epoch[222](66640/301): Loss 0.2322 LR: 4.822e-02 Score 92.554 Data time: 2.9241, Total iter time: 7.0028 +thomas 04/10 10:27:17 ===> Epoch[222](66680/301): Loss 0.2138 LR: 4.819e-02 Score 93.038 Data time: 2.8542, Total iter time: 6.6676 +thomas 04/10 10:31:51 ===> Epoch[222](66720/301): Loss 0.2354 LR: 4.816e-02 Score 92.371 Data time: 2.8418, Total iter time: 6.7575 +thomas 04/10 10:36:14 ===> Epoch[222](66760/301): Loss 0.2100 LR: 4.812e-02 Score 93.088 Data time: 2.7921, Total iter time: 6.4893 +thomas 04/10 10:40:36 ===> Epoch[222](66800/301): Loss 0.2135 LR: 4.809e-02 Score 93.075 Data time: 2.7089, Total iter time: 6.4724 +thomas 04/10 10:44:07 ===> Epoch[223](66840/301): Loss 0.2105 LR: 4.806e-02 Score 93.070 Data time: 2.0407, Total iter time: 5.1869 +thomas 04/10 10:47:42 ===> Epoch[223](66880/301): Loss 0.2230 LR: 4.803e-02 Score 92.642 Data time: 2.0917, Total iter time: 5.3145 +thomas 04/10 10:51:15 ===> Epoch[223](66920/301): Loss 0.2239 LR: 4.799e-02 Score 92.825 Data time: 2.0808, Total iter time: 5.2340 +thomas 04/10 10:54:59 ===> Epoch[223](66960/301): Loss 0.2403 LR: 4.796e-02 Score 92.153 Data time: 2.1624, Total iter time: 5.5244 +thomas 04/10 10:58:53 ===> Epoch[223](67000/301): Loss 0.2103 LR: 4.793e-02 Score 92.975 Data time: 2.2413, Total iter time: 5.7647 +thomas 04/10 10:58:54 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 10:58:54 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/10 11:00:45 101/312: Data time: 0.0036, Iter time: 0.6213 Loss 0.471 (AVG: 0.578) Score 86.438 (AVG: 85.007) mIOU 59.725 mAP 71.469 mAcc 69.857 +IOU: 77.756 96.106 52.861 60.771 86.908 66.777 70.637 45.165 38.526 64.627 9.194 36.624 65.972 62.071 36.016 57.834 72.958 58.167 85.041 50.497 +mAP: 78.359 96.980 64.043 63.769 88.164 79.382 73.553 61.210 51.389 64.175 35.505 52.187 76.748 80.046 46.377 93.319 85.226 87.899 96.661 54.380 +mAcc: 89.861 98.688 79.558 64.422 90.135 91.703 83.396 70.561 39.592 95.386 11.334 39.357 81.184 76.492 38.421 70.587 73.490 58.855 85.637 58.472 + +thomas 04/10 11:02:28 201/312: Data time: 0.0036, Iter time: 0.5166 Loss 0.681 (AVG: 0.572) Score 79.603 (AVG: 85.188) mIOU 60.903 mAP 71.544 mAcc 69.985 +IOU: 77.795 96.011 54.774 67.567 86.461 67.035 70.194 45.317 34.104 70.255 10.092 46.757 61.517 66.537 53.130 45.973 74.489 58.075 80.857 51.121 +mAP: 78.690 96.843 63.312 68.678 90.490 78.822 76.247 63.159 52.508 64.545 33.000 56.309 71.542 79.491 55.398 86.221 86.870 87.008 88.753 53.001 +mAcc: 91.103 98.569 75.785 71.870 89.036 94.186 83.754 68.801 35.263 89.833 13.119 49.575 76.539 79.335 56.766 50.190 74.872 59.322 81.372 60.410 + +thomas 04/10 11:04:11 301/312: Data time: 0.0036, Iter time: 0.9936 Loss 0.862 (AVG: 0.567) Score 72.445 (AVG: 85.508) mIOU 60.734 mAP 71.247 mAcc 69.590 +IOU: 78.492 96.246 55.174 72.244 87.105 71.513 70.207 48.764 31.651 70.451 12.907 50.436 59.257 69.904 44.211 41.525 76.335 52.386 76.595 49.280 +mAP: 79.723 97.005 62.327 68.925 90.537 80.298 75.186 64.749 49.785 65.773 34.850 58.840 70.827 78.887 54.919 84.313 90.409 82.866 79.740 54.986 +mAcc: 91.199 98.645 76.780 76.994 89.564 95.167 83.071 72.202 33.045 89.880 16.566 53.123 75.429 81.652 47.262 43.789 76.725 53.797 77.125 59.791 + +thomas 04/10 11:04:21 312/312: Data time: 0.0026, Iter time: 0.2529 Loss 0.365 (AVG: 0.565) Score 88.807 (AVG: 85.567) mIOU 60.812 mAP 71.353 mAcc 69.638 +IOU: 78.642 96.163 55.528 71.554 87.326 72.289 70.247 
49.167 31.200 69.953 13.325 51.021 59.253 70.089 43.676 41.038 76.837 52.386 77.648 48.908 +mAP: 79.888 97.022 62.466 68.901 90.489 80.570 75.422 65.135 49.975 65.773 35.552 59.516 70.673 79.639 52.740 84.313 90.595 82.866 80.391 55.134 +mAcc: 91.388 98.592 77.153 76.252 89.760 95.436 83.218 71.711 32.625 89.880 17.097 53.652 75.268 81.724 46.694 43.789 77.233 53.797 78.166 59.335 + +thomas 04/10 11:04:21 Finished test. Elapsed time: 326.5783 +thomas 04/10 11:04:21 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 11:07:44 ===> Epoch[223](67040/301): Loss 0.2244 LR: 4.790e-02 Score 92.724 Data time: 1.9223, Total iter time: 4.9939 +thomas 04/10 11:11:12 ===> Epoch[223](67080/301): Loss 0.2198 LR: 4.786e-02 Score 93.051 Data time: 1.9893, Total iter time: 5.1213 +thomas 04/10 11:14:46 ===> Epoch[223](67120/301): Loss 0.2307 LR: 4.783e-02 Score 92.871 Data time: 2.0367, Total iter time: 5.2610 +thomas 04/10 11:18:19 ===> Epoch[224](67160/301): Loss 0.2189 LR: 4.780e-02 Score 92.800 Data time: 2.0504, Total iter time: 5.2472 +thomas 04/10 11:21:49 ===> Epoch[224](67200/301): Loss 0.2530 LR: 4.777e-02 Score 91.817 Data time: 2.0134, Total iter time: 5.1921 +thomas 04/10 11:25:16 ===> Epoch[224](67240/301): Loss 0.2623 LR: 4.773e-02 Score 91.756 Data time: 1.9607, Total iter time: 5.1031 +thomas 04/10 11:28:48 ===> Epoch[224](67280/301): Loss 0.2330 LR: 4.770e-02 Score 92.581 Data time: 2.0165, Total iter time: 5.2376 +thomas 04/10 11:32:29 ===> Epoch[224](67320/301): Loss 0.2318 LR: 4.767e-02 Score 92.564 Data time: 2.1105, Total iter time: 5.4447 +thomas 04/10 11:36:18 ===> Epoch[224](67360/301): Loss 0.2199 LR: 4.763e-02 Score 93.003 Data time: 
2.2080, Total iter time: 5.6542 +thomas 04/10 11:39:48 ===> Epoch[224](67400/301): Loss 0.2085 LR: 4.760e-02 Score 93.600 Data time: 2.0161, Total iter time: 5.1668 +thomas 04/10 11:43:22 ===> Epoch[225](67440/301): Loss 0.2147 LR: 4.757e-02 Score 93.090 Data time: 2.0600, Total iter time: 5.2939 +thomas 04/10 11:46:46 ===> Epoch[225](67480/301): Loss 0.2178 LR: 4.754e-02 Score 92.922 Data time: 1.9901, Total iter time: 5.0496 +thomas 04/10 11:50:23 ===> Epoch[225](67520/301): Loss 0.2138 LR: 4.750e-02 Score 93.030 Data time: 2.0737, Total iter time: 5.3615 +thomas 04/10 11:53:56 ===> Epoch[225](67560/301): Loss 0.2354 LR: 4.747e-02 Score 92.298 Data time: 2.0594, Total iter time: 5.2661 +thomas 04/10 11:57:30 ===> Epoch[225](67600/301): Loss 0.2419 LR: 4.744e-02 Score 92.318 Data time: 2.0689, Total iter time: 5.2612 +thomas 04/10 12:00:59 ===> Epoch[225](67640/301): Loss 0.2291 LR: 4.741e-02 Score 92.625 Data time: 2.0336, Total iter time: 5.1736 +thomas 04/10 12:04:50 ===> Epoch[225](67680/301): Loss 0.2328 LR: 4.737e-02 Score 92.602 Data time: 2.2135, Total iter time: 5.6824 +thomas 04/10 12:08:35 ===> Epoch[225](67720/301): Loss 0.2229 LR: 4.734e-02 Score 92.749 Data time: 2.1779, Total iter time: 5.5704 +thomas 04/10 12:12:13 ===> Epoch[226](67760/301): Loss 0.2332 LR: 4.731e-02 Score 92.617 Data time: 2.0812, Total iter time: 5.3583 +thomas 04/10 12:15:47 ===> Epoch[226](67800/301): Loss 0.2107 LR: 4.728e-02 Score 93.056 Data time: 2.0667, Total iter time: 5.2867 +thomas 04/10 12:19:16 ===> Epoch[226](67840/301): Loss 0.2129 LR: 4.724e-02 Score 92.881 Data time: 2.0195, Total iter time: 5.1634 +thomas 04/10 12:23:05 ===> Epoch[226](67880/301): Loss 0.2502 LR: 4.721e-02 Score 92.062 Data time: 2.2010, Total iter time: 5.6487 +thomas 04/10 12:26:46 ===> Epoch[226](67920/301): Loss 0.2132 LR: 4.718e-02 Score 93.064 Data time: 2.1029, Total iter time: 5.4342 +thomas 04/10 12:30:39 ===> Epoch[226](67960/301): Loss 0.2154 LR: 4.715e-02 Score 93.156 Data time: 
2.2410, Total iter time: 5.7627 +thomas 04/10 12:34:08 ===> Epoch[226](68000/301): Loss 0.2147 LR: 4.711e-02 Score 92.713 Data time: 2.0186, Total iter time: 5.1544 +thomas 04/10 12:34:09 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 12:34:10 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 12:36:01 101/312: Data time: 0.0038, Iter time: 0.3230 Loss 1.351 (AVG: 0.634) Score 68.006 (AVG: 83.375) mIOU 58.881 mAP 71.524 mAcc 70.397 +IOU: 75.409 96.621 47.929 57.298 90.707 75.792 65.299 43.281 25.566 65.874 7.091 59.302 53.436 62.672 60.066 35.595 75.546 66.425 62.398 51.323 +mAP: 78.884 97.032 53.628 62.269 89.312 82.650 74.119 62.542 47.322 70.361 29.097 71.866 60.512 76.019 70.626 85.872 90.910 89.326 81.184 56.954 +mAcc: 86.425 98.684 76.558 70.289 95.340 94.804 75.357 58.143 28.288 90.749 14.319 85.013 87.123 64.054 70.131 36.792 76.070 72.717 62.988 64.095 + +thomas 04/10 12:37:49 201/312: Data time: 0.0025, Iter time: 0.3613 Loss 0.462 (AVG: 0.627) Score 82.183 (AVG: 84.022) mIOU 59.676 mAP 70.422 mAcc 70.538 +IOU: 76.153 96.304 51.031 70.262 89.897 75.947 70.167 43.607 28.592 68.136 12.113 57.579 57.369 68.736 50.413 27.021 79.174 59.878 65.635 45.513 +mAP: 77.371 96.622 52.405 69.360 90.096 79.181 74.419 63.464 45.154 70.233 36.454 63.698 63.902 81.568 63.761 76.320 89.316 85.971 75.254 53.897 +mAcc: 87.424 98.540 77.029 79.913 94.047 93.329 80.497 60.813 30.813 92.912 16.440 80.290 84.833 72.681 60.606 28.268 79.719 63.995 66.182 62.430 + +thomas 04/10 12:39:35 301/312: Data time: 0.0022, Iter time: 0.5623 Loss 0.382 (AVG: 0.612) Score 90.719 (AVG: 84.516) mIOU 59.929 mAP 70.768 mAcc 70.637 +IOU: 76.719 96.334 49.511 72.065 90.584 76.000 70.467 44.800 27.340 70.499 13.502 55.665 57.286 66.350 52.333 33.129 73.864 61.356 65.838 44.929 +mAP: 78.173 97.012 54.358 72.900 91.198 79.820 74.598 64.288 43.579 70.934 35.447 62.072 64.355 77.529 68.301 76.521 90.025 
85.858 76.114 52.283 +mAcc: 87.558 98.585 76.294 83.119 94.754 93.289 80.906 63.348 29.438 93.386 18.399 79.293 83.125 70.001 60.523 34.653 74.313 65.308 66.352 60.105 + +thomas 04/10 12:39:46 312/312: Data time: 0.0029, Iter time: 0.4208 Loss 1.988 (AVG: 0.620) Score 67.405 (AVG: 84.418) mIOU 59.765 mAP 70.669 mAcc 70.509 +IOU: 76.771 96.267 49.826 71.894 90.076 76.284 70.439 44.866 26.572 70.376 13.266 56.335 58.215 65.821 50.171 33.591 73.174 61.100 65.899 44.348 +mAP: 78.350 96.948 53.854 72.159 90.910 79.657 74.585 64.357 43.383 70.946 35.693 60.525 65.020 76.732 69.202 77.126 90.046 85.218 76.784 51.881 +mAcc: 87.524 98.586 77.013 82.560 94.225 93.354 80.957 63.961 28.579 93.439 17.991 78.992 83.454 69.339 60.481 35.121 73.611 65.068 66.411 59.523 + +thomas 04/10 12:39:46 Finished test. Elapsed time: 336.9201 +thomas 04/10 12:39:46 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 12:43:29 ===> Epoch[227](68040/301): Loss 0.2079 LR: 4.708e-02 Score 93.377 Data time: 2.1399, Total iter time: 5.4889 +thomas 04/10 12:46:58 ===> Epoch[227](68080/301): Loss 0.2355 LR: 4.705e-02 Score 92.526 Data time: 2.0134, Total iter time: 5.1442 +thomas 04/10 12:50:37 ===> Epoch[227](68120/301): Loss 0.2585 LR: 4.702e-02 Score 91.935 Data time: 2.1204, Total iter time: 5.4190 +thomas 04/10 12:54:02 ===> Epoch[227](68160/301): Loss 0.2309 LR: 4.698e-02 Score 92.771 Data time: 1.9804, Total iter time: 5.0334 +thomas 04/10 12:58:31 ===> Epoch[227](68200/301): Loss 0.2186 LR: 4.695e-02 Score 92.840 Data time: 2.8888, Total iter time: 6.6619 +thomas 04/10 13:03:07 ===> Epoch[227](68240/301): Loss 0.1980 LR: 4.692e-02 Score 93.657 Data time: 2.9478, Total iter time: 6.8236 +thomas 04/10 13:07:43 ===> Epoch[227](68280/301): Loss 0.2056 LR: 4.688e-02 Score 93.518 Data time: 3.0767, Total iter time: 6.8138 +thomas 04/10 13:12:35 ===> Epoch[227](68320/301): Loss 0.1969 LR: 4.685e-02 Score 93.581 Data time: 3.2174, Total iter time: 7.2114 +thomas 04/10 13:17:15 ===> Epoch[228](68360/301): Loss 0.2163 LR: 4.682e-02 Score 92.815 Data time: 3.0223, Total iter time: 6.9184 +thomas 04/10 13:21:34 ===> Epoch[228](68400/301): Loss 0.2011 LR: 4.679e-02 Score 93.539 Data time: 2.7151, Total iter time: 6.3798 +thomas 04/10 13:25:41 ===> Epoch[228](68440/301): Loss 0.2237 LR: 4.675e-02 Score 92.999 Data time: 2.3921, Total iter time: 6.0936 +thomas 04/10 13:29:43 ===> Epoch[228](68480/301): Loss 0.2158 LR: 4.672e-02 Score 93.141 Data time: 2.3110, Total iter time: 5.9870 +thomas 04/10 13:33:39 ===> Epoch[228](68520/301): Loss 0.2298 LR: 4.669e-02 Score 92.762 Data time: 2.2579, Total iter time: 5.8192 +thomas 04/10 13:37:29 ===> Epoch[228](68560/301): Loss 0.1988 LR: 4.666e-02 Score 93.536 Data time: 2.2127, Total iter time: 5.6606 +thomas 04/10 13:41:05 ===> Epoch[228](68600/301): Loss 
0.2367 LR: 4.662e-02 Score 92.754 Data time: 2.1228, Total iter time: 5.3480 +thomas 04/10 13:44:58 ===> Epoch[229](68640/301): Loss 0.2467 LR: 4.659e-02 Score 92.290 Data time: 2.2448, Total iter time: 5.7481 +thomas 04/10 13:48:45 ===> Epoch[229](68680/301): Loss 0.2346 LR: 4.656e-02 Score 92.663 Data time: 2.1821, Total iter time: 5.5966 +thomas 04/10 13:52:26 ===> Epoch[229](68720/301): Loss 0.2587 LR: 4.653e-02 Score 91.596 Data time: 2.1275, Total iter time: 5.4657 +thomas 04/10 13:56:19 ===> Epoch[229](68760/301): Loss 0.2633 LR: 4.649e-02 Score 91.622 Data time: 2.2173, Total iter time: 5.7515 +thomas 04/10 14:00:15 ===> Epoch[229](68800/301): Loss 0.2312 LR: 4.646e-02 Score 92.620 Data time: 2.2651, Total iter time: 5.8118 +thomas 04/10 14:04:03 ===> Epoch[229](68840/301): Loss 0.2267 LR: 4.643e-02 Score 92.860 Data time: 2.1986, Total iter time: 5.6280 +thomas 04/10 14:07:47 ===> Epoch[229](68880/301): Loss 0.2584 LR: 4.640e-02 Score 91.721 Data time: 2.1268, Total iter time: 5.5105 +thomas 04/10 14:11:24 ===> Epoch[229](68920/301): Loss 0.2738 LR: 4.636e-02 Score 91.412 Data time: 2.1020, Total iter time: 5.3511 +thomas 04/10 14:15:26 ===> Epoch[230](68960/301): Loss 0.2619 LR: 4.633e-02 Score 91.687 Data time: 2.3262, Total iter time: 5.9736 +thomas 04/10 14:19:15 ===> Epoch[230](69000/301): Loss 0.2774 LR: 4.630e-02 Score 91.484 Data time: 2.2178, Total iter time: 5.6560 +thomas 04/10 14:19:16 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 14:19:16 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 14:21:13 101/312: Data time: 0.1076, Iter time: 0.5082 Loss 1.794 (AVG: 0.573) Score 64.377 (AVG: 84.248) mIOU 55.645 mAP 67.759 mAcc 64.786 +IOU: 78.119 96.319 49.028 69.091 87.183 80.108 62.642 42.622 28.481 75.594 12.271 6.858 40.942 60.452 49.843 35.842 79.695 50.258 69.500 38.047 +mAP: 76.295 97.836 55.484 67.242 90.534 88.298 72.079 57.334 38.183 69.491 
33.419 55.012 62.435 77.082 50.241 83.263 86.648 79.458 64.996 49.846 +mAcc: 93.280 98.182 66.079 78.547 92.412 96.045 70.028 54.157 30.642 85.812 14.468 6.905 87.271 68.575 52.643 39.455 80.083 52.351 69.907 58.876 + +thomas 04/10 14:23:04 201/312: Data time: 0.0025, Iter time: 0.6566 Loss 0.408 (AVG: 0.590) Score 90.947 (AVG: 84.101) mIOU 56.204 mAP 68.468 mAcc 64.894 +IOU: 77.830 96.175 47.090 66.148 88.238 78.113 67.428 41.069 26.486 73.575 10.323 9.386 49.238 55.422 45.703 41.887 86.754 53.130 72.146 37.947 +mAP: 78.209 97.122 54.427 64.486 89.840 85.336 73.653 58.853 41.831 65.814 29.486 51.913 66.042 71.768 56.567 85.277 91.505 78.699 75.606 52.917 +mAcc: 94.193 98.240 63.882 76.714 93.369 94.911 75.462 52.305 28.294 84.589 12.727 9.566 88.810 62.647 48.162 45.583 87.581 55.862 72.614 52.373 + +thomas 04/10 14:24:57 301/312: Data time: 0.0029, Iter time: 0.9469 Loss 1.123 (AVG: 0.604) Score 72.593 (AVG: 84.071) mIOU 56.566 mAP 68.300 mAcc 65.050 +IOU: 77.696 96.167 49.307 64.952 88.608 75.736 68.427 41.781 27.302 67.980 10.829 7.615 53.954 59.058 44.098 42.685 86.638 56.198 74.354 37.935 +mAP: 78.117 97.379 55.182 62.725 90.106 82.048 71.608 60.425 41.253 64.782 31.569 51.278 66.090 73.344 53.503 85.872 91.288 79.530 77.229 52.673 +mAcc: 94.417 98.268 66.932 75.506 93.814 94.665 76.762 52.539 29.166 79.423 13.100 7.744 86.093 66.816 46.583 46.378 87.437 58.956 74.824 51.573 + +thomas 04/10 14:25:08 312/312: Data time: 0.0029, Iter time: 0.3885 Loss 0.823 (AVG: 0.605) Score 82.430 (AVG: 84.034) mIOU 56.642 mAP 68.470 mAcc 65.117 +IOU: 77.686 96.177 49.030 64.949 88.737 76.690 68.057 41.503 27.216 67.944 10.945 7.615 53.268 59.045 42.708 43.843 86.885 56.918 75.155 38.471 +mAP: 78.333 97.439 55.036 63.580 90.336 82.649 71.554 59.985 41.755 64.782 31.440 51.278 65.915 73.593 52.366 86.055 91.604 79.978 77.914 53.805 +mAcc: 94.437 98.292 66.315 76.371 93.936 94.921 76.370 52.066 29.008 79.423 13.234 7.744 84.884 66.846 45.554 47.745 87.705 59.644 75.620 52.228 + 
+thomas 04/10 14:25:08 Finished test. Elapsed time: 351.5044 +thomas 04/10 14:25:08 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 14:28:51 ===> Epoch[230](69040/301): Loss 0.2110 LR: 4.626e-02 Score 93.137 Data time: 2.1513, Total iter time: 5.5094 +thomas 04/10 14:32:42 ===> Epoch[230](69080/301): Loss 0.2267 LR: 4.623e-02 Score 92.798 Data time: 2.2477, Total iter time: 5.7078 +thomas 04/10 14:36:35 ===> Epoch[230](69120/301): Loss 0.2154 LR: 4.620e-02 Score 93.006 Data time: 2.2504, Total iter time: 5.7491 +thomas 04/10 14:40:14 ===> Epoch[230](69160/301): Loss 0.2319 LR: 4.617e-02 Score 92.643 Data time: 2.1061, Total iter time: 5.4027 +thomas 04/10 14:44:07 ===> Epoch[230](69200/301): Loss 0.2383 LR: 4.613e-02 Score 92.398 Data time: 2.2237, Total iter time: 5.7532 +thomas 04/10 14:47:54 ===> Epoch[231](69240/301): Loss 0.2136 LR: 4.610e-02 Score 93.263 Data time: 2.2098, Total iter time: 5.6016 +thomas 04/10 14:51:29 ===> Epoch[231](69280/301): Loss 0.2007 LR: 4.607e-02 Score 93.357 Data time: 2.0557, Total iter time: 5.2846 +thomas 04/10 14:55:11 ===> Epoch[231](69320/301): Loss 0.2511 LR: 4.604e-02 Score 92.061 Data time: 2.1279, Total iter time: 5.4862 +thomas 04/10 14:59:09 ===> Epoch[231](69360/301): Loss 0.2432 LR: 4.600e-02 Score 92.364 Data time: 2.3062, Total iter time: 5.8611 +thomas 04/10 15:02:43 ===> Epoch[231](69400/301): Loss 0.1994 LR: 4.597e-02 Score 93.683 Data time: 2.0737, Total iter time: 5.2635 +thomas 04/10 15:06:32 ===> Epoch[231](69440/301): Loss 0.2340 LR: 4.594e-02 Score 92.494 Data time: 2.2230, Total iter time: 5.6574 +thomas 04/10 15:10:22 ===> Epoch[231](69480/301): Loss 0.2381 LR: 4.590e-02 Score 92.335 Data 
time: 2.2379, Total iter time: 5.6874 +thomas 04/10 15:13:58 ===> Epoch[231](69520/301): Loss 0.2197 LR: 4.587e-02 Score 92.946 Data time: 2.0804, Total iter time: 5.3125 +thomas 04/10 15:18:03 ===> Epoch[232](69560/301): Loss 0.2276 LR: 4.584e-02 Score 92.787 Data time: 2.3526, Total iter time: 6.0637 +thomas 04/10 15:21:55 ===> Epoch[232](69600/301): Loss 0.2187 LR: 4.581e-02 Score 92.909 Data time: 2.2363, Total iter time: 5.7216 +thomas 04/10 15:25:42 ===> Epoch[232](69640/301): Loss 0.2245 LR: 4.577e-02 Score 92.858 Data time: 2.2216, Total iter time: 5.5964 +thomas 04/10 15:29:25 ===> Epoch[232](69680/301): Loss 0.2424 LR: 4.574e-02 Score 92.431 Data time: 2.1712, Total iter time: 5.5176 +thomas 04/10 15:33:02 ===> Epoch[232](69720/301): Loss 0.2390 LR: 4.571e-02 Score 92.503 Data time: 2.0824, Total iter time: 5.3586 +thomas 04/10 15:36:59 ===> Epoch[232](69760/301): Loss 0.2527 LR: 4.568e-02 Score 91.853 Data time: 2.2759, Total iter time: 5.8407 +thomas 04/10 15:40:50 ===> Epoch[232](69800/301): Loss 0.2155 LR: 4.564e-02 Score 92.836 Data time: 2.2457, Total iter time: 5.6939 +thomas 04/10 15:44:25 ===> Epoch[233](69840/301): Loss 0.2237 LR: 4.561e-02 Score 92.883 Data time: 2.0873, Total iter time: 5.3140 +thomas 04/10 15:48:28 ===> Epoch[233](69880/301): Loss 0.2153 LR: 4.558e-02 Score 92.967 Data time: 2.3311, Total iter time: 5.9965 +thomas 04/10 15:52:13 ===> Epoch[233](69920/301): Loss 0.2205 LR: 4.554e-02 Score 92.881 Data time: 2.1733, Total iter time: 5.5391 +thomas 04/10 15:56:06 ===> Epoch[233](69960/301): Loss 0.2429 LR: 4.551e-02 Score 92.404 Data time: 2.2448, Total iter time: 5.7403 +thomas 04/10 15:59:46 ===> Epoch[233](70000/301): Loss 0.2432 LR: 4.548e-02 Score 92.310 Data time: 2.1149, Total iter time: 5.4471 +thomas 04/10 15:59:48 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 15:59:48 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 
16:01:36 101/312: Data time: 0.0025, Iter time: 0.3418 Loss 0.446 (AVG: 0.662) Score 86.623 (AVG: 82.703) mIOU 59.453 mAP 70.353 mAcc 71.133 +IOU: 72.689 95.616 54.986 64.935 87.043 73.659 68.890 40.351 25.702 67.182 17.221 53.438 52.981 56.764 44.286 44.087 92.607 59.847 74.876 41.909 +mAP: 73.448 97.349 65.112 69.686 90.220 81.585 75.725 63.378 43.219 58.215 35.033 58.337 66.186 69.179 58.609 88.990 97.313 87.881 72.401 55.203 +mAcc: 86.648 97.873 64.456 81.250 90.015 92.613 86.124 65.394 27.393 89.862 22.935 75.102 77.280 64.155 58.153 44.641 93.970 72.101 78.126 54.566 + +thomas 04/10 16:03:32 201/312: Data time: 0.0024, Iter time: 0.3679 Loss 0.177 (AVG: 0.671) Score 93.201 (AVG: 82.486) mIOU 58.076 mAP 68.585 mAcc 68.967 +IOU: 72.428 95.782 52.564 62.904 87.328 72.883 68.133 41.193 26.251 65.823 8.567 52.257 56.588 60.978 38.649 43.426 78.454 59.132 77.229 40.953 +mAP: 74.456 97.417 60.233 61.879 89.488 81.237 74.490 60.462 42.568 60.554 29.223 58.829 65.826 71.622 52.638 84.455 89.396 85.632 75.502 55.801 +mAcc: 86.175 97.979 62.756 77.948 90.567 92.370 84.440 66.908 28.278 87.407 10.622 71.562 78.079 66.996 49.064 47.274 78.967 64.270 81.360 56.309 + +thomas 04/10 16:05:29 301/312: Data time: 0.0027, Iter time: 0.4873 Loss 0.480 (AVG: 0.648) Score 86.954 (AVG: 82.924) mIOU 58.806 mAP 68.973 mAcc 69.689 +IOU: 72.850 95.769 55.478 65.059 87.072 75.809 67.548 40.043 27.666 64.476 7.301 56.284 55.780 65.398 39.569 46.619 76.860 57.630 78.599 40.312 +mAP: 74.896 97.378 60.681 65.708 88.855 81.647 73.326 59.976 45.185 63.979 27.067 60.384 64.899 75.899 54.352 83.200 88.057 81.796 77.134 55.052 +mAcc: 85.734 97.849 65.471 78.271 90.708 92.722 82.813 69.159 29.653 86.190 8.563 74.129 78.399 72.085 51.928 51.053 77.429 62.512 82.015 57.096 + +thomas 04/10 16:05:40 312/312: Data time: 0.0030, Iter time: 0.4191 Loss 0.525 (AVG: 0.651) Score 82.717 (AVG: 82.901) mIOU 58.915 mAP 69.314 mAcc 69.947 +IOU: 72.972 95.733 55.801 66.304 87.234 74.674 67.719 39.878 28.217 
62.705 7.388 56.366 55.619 65.648 40.296 47.390 77.534 57.848 79.614 39.358 +mAP: 75.202 97.203 61.001 66.911 89.181 81.684 73.749 59.224 45.124 63.979 27.395 61.092 65.503 76.179 55.388 83.338 88.500 82.249 78.363 55.016 +mAcc: 85.670 97.839 65.580 79.388 90.880 92.771 83.028 69.222 30.202 86.190 8.667 74.525 78.195 72.416 53.125 52.147 78.119 62.787 82.890 55.301 + +thomas 04/10 16:05:40 Finished test. Elapsed time: 351.7977 +thomas 04/10 16:05:40 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 16:09:32 ===> Epoch[233](70040/301): Loss 0.2181 LR: 4.545e-02 Score 93.042 Data time: 2.2313, Total iter time: 5.7354 +thomas 04/10 16:13:23 ===> Epoch[233](70080/301): Loss 0.2285 LR: 4.541e-02 Score 92.863 Data time: 2.2313, Total iter time: 5.6974 +thomas 04/10 16:17:16 ===> Epoch[233](70120/301): Loss 0.2122 LR: 4.538e-02 Score 93.147 Data time: 2.2285, Total iter time: 5.7410 +thomas 04/10 16:20:54 ===> Epoch[234](70160/301): Loss 0.2126 LR: 4.535e-02 Score 93.066 Data time: 2.1147, Total iter time: 5.3801 +thomas 04/10 16:24:46 ===> Epoch[234](70200/301): Loss 0.2265 LR: 4.532e-02 Score 92.831 Data time: 2.2353, Total iter time: 5.7353 +thomas 04/10 16:28:40 ===> Epoch[234](70240/301): Loss 0.2118 LR: 4.528e-02 Score 93.191 Data time: 2.2629, Total iter time: 5.7723 +thomas 04/10 16:32:20 ===> Epoch[234](70280/301): Loss 0.2216 LR: 4.525e-02 Score 92.939 Data time: 2.1211, Total iter time: 5.4370 +thomas 04/10 16:36:25 ===> Epoch[234](70320/301): Loss 0.2063 LR: 4.522e-02 Score 93.073 Data time: 2.3276, Total iter time: 6.0466 +thomas 04/10 16:40:04 ===> Epoch[234](70360/301): Loss 0.2302 LR: 4.518e-02 Score 92.626 Data time: 2.1288, Total iter 
time: 5.4070 +thomas 04/10 16:43:53 ===> Epoch[234](70400/301): Loss 0.2232 LR: 4.515e-02 Score 92.934 Data time: 2.2016, Total iter time: 5.6569 +thomas 04/10 16:47:14 ===> Epoch[235](70440/301): Loss 0.2212 LR: 4.512e-02 Score 92.759 Data time: 1.9400, Total iter time: 4.9509 +thomas 04/10 16:50:49 ===> Epoch[235](70480/301): Loss 0.2323 LR: 4.509e-02 Score 92.527 Data time: 2.0711, Total iter time: 5.2879 +thomas 04/10 16:54:36 ===> Epoch[235](70520/301): Loss 0.2088 LR: 4.505e-02 Score 93.351 Data time: 2.1855, Total iter time: 5.6025 +thomas 04/10 16:58:11 ===> Epoch[235](70560/301): Loss 0.1975 LR: 4.502e-02 Score 93.671 Data time: 2.0790, Total iter time: 5.3093 +thomas 04/10 17:01:47 ===> Epoch[235](70600/301): Loss 0.1807 LR: 4.499e-02 Score 94.196 Data time: 2.0544, Total iter time: 5.3164 +thomas 04/10 17:05:48 ===> Epoch[235](70640/301): Loss 0.2265 LR: 4.496e-02 Score 92.824 Data time: 2.3444, Total iter time: 5.9355 +thomas 04/10 17:09:38 ===> Epoch[235](70680/301): Loss 0.2050 LR: 4.492e-02 Score 93.285 Data time: 2.2918, Total iter time: 5.6856 +thomas 04/10 17:13:08 ===> Epoch[235](70720/301): Loss 0.2013 LR: 4.489e-02 Score 93.327 Data time: 2.0420, Total iter time: 5.1892 +thomas 04/10 17:16:37 ===> Epoch[236](70760/301): Loss 0.2406 LR: 4.486e-02 Score 92.290 Data time: 1.9895, Total iter time: 5.1629 +thomas 04/10 17:20:09 ===> Epoch[236](70800/301): Loss 0.1997 LR: 4.482e-02 Score 93.548 Data time: 2.0215, Total iter time: 5.2251 +thomas 04/10 17:23:35 ===> Epoch[236](70840/301): Loss 0.2046 LR: 4.479e-02 Score 93.586 Data time: 2.0223, Total iter time: 5.0817 +thomas 04/10 17:27:07 ===> Epoch[236](70880/301): Loss 0.2251 LR: 4.476e-02 Score 92.680 Data time: 2.0388, Total iter time: 5.2263 +thomas 04/10 17:31:05 ===> Epoch[236](70920/301): Loss 0.2089 LR: 4.473e-02 Score 93.209 Data time: 2.2646, Total iter time: 5.8585 +thomas 04/10 17:34:41 ===> Epoch[236](70960/301): Loss 0.2217 LR: 4.469e-02 Score 92.884 Data time: 2.0600, Total iter 
time: 5.3255 +thomas 04/10 17:38:16 ===> Epoch[236](71000/301): Loss 0.2306 LR: 4.466e-02 Score 92.576 Data time: 2.0706, Total iter time: 5.2833 +thomas 04/10 17:38:17 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 17:38:17 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 17:40:07 101/312: Data time: 0.0273, Iter time: 0.5422 Loss 2.104 (AVG: 0.621) Score 71.658 (AVG: 85.178) mIOU 58.480 mAP 69.891 mAcc 68.160 +IOU: 79.500 96.380 56.978 64.727 81.778 52.339 71.243 43.150 42.207 73.872 10.564 34.373 56.514 72.711 25.159 53.178 77.898 50.435 84.461 42.121 +mAP: 79.358 97.134 61.167 68.506 86.735 84.813 74.444 55.047 43.075 65.191 35.801 48.454 71.794 84.073 37.107 84.737 85.158 86.197 94.421 54.614 +mAcc: 92.117 98.595 74.918 75.856 84.381 94.088 82.877 60.170 46.733 86.974 10.803 35.781 73.410 88.547 25.464 60.936 78.654 51.510 85.908 55.478 + +thomas 04/10 17:41:55 201/312: Data time: 0.0030, Iter time: 0.3683 Loss 0.342 (AVG: 0.594) Score 88.701 (AVG: 85.426) mIOU 59.128 mAP 70.326 mAcc 68.572 +IOU: 79.693 96.456 54.858 66.907 84.117 62.579 69.314 45.663 48.414 72.420 9.547 32.732 62.085 62.232 34.583 42.480 84.087 47.599 81.063 45.731 +mAP: 79.934 97.441 61.875 69.948 86.478 78.238 71.989 58.755 51.160 65.187 29.989 54.945 67.372 81.490 51.296 85.341 90.581 84.091 85.747 54.661 +mAcc: 92.236 98.531 76.320 80.606 86.872 95.769 81.610 59.597 55.420 84.672 9.882 33.997 77.188 84.502 35.349 45.408 84.994 48.753 82.609 57.134 + +thomas 04/10 17:43:53 301/312: Data time: 0.0028, Iter time: 0.6154 Loss 0.652 (AVG: 0.605) Score 81.175 (AVG: 85.168) mIOU 58.941 mAP 70.176 mAcc 68.166 +IOU: 78.734 96.223 56.436 68.913 84.621 66.413 70.397 44.054 42.228 72.469 8.239 36.744 59.428 63.003 33.186 38.667 84.007 50.318 79.715 45.032 +mAP: 79.161 97.445 58.756 70.107 87.372 80.478 72.491 58.762 49.884 71.825 28.466 52.088 66.241 80.427 53.710 83.147 91.273 83.521 81.591 56.779 
+mAcc: 91.819 98.358 76.130 81.705 87.673 96.283 83.430 59.085 47.513 84.808 8.447 38.483 72.373 85.660 33.975 40.679 84.895 51.903 81.386 58.718 + +thomas 04/10 17:44:06 312/312: Data time: 0.0034, Iter time: 1.1875 Loss 0.396 (AVG: 0.598) Score 89.712 (AVG: 85.321) mIOU 59.293 mAP 70.281 mAcc 68.539 +IOU: 78.937 96.286 56.520 69.457 84.909 67.633 70.889 45.019 41.649 72.321 8.158 36.307 59.628 62.011 36.022 38.045 83.954 53.431 79.715 44.968 +mAP: 78.874 97.480 58.721 71.143 87.585 80.859 72.804 58.763 50.018 71.571 28.240 52.533 66.151 80.226 53.857 82.128 91.735 84.096 81.591 57.239 +mAcc: 91.796 98.369 76.456 82.226 87.938 96.527 83.787 60.555 46.702 84.870 8.361 37.959 72.432 85.614 36.850 39.992 85.432 55.007 81.386 58.520 + +thomas 04/10 17:44:06 Finished test. Elapsed time: 349.3334 +thomas 04/10 17:44:06 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 17:47:41 ===> Epoch[237](71040/301): Loss 0.2111 LR: 4.463e-02 Score 93.213 Data time: 2.0779, Total iter time: 5.2838 +thomas 04/10 17:51:26 ===> Epoch[237](71080/301): Loss 0.2414 LR: 4.459e-02 Score 92.046 Data time: 2.1435, Total iter time: 5.5461 +thomas 04/10 17:55:05 ===> Epoch[237](71120/301): Loss 0.2436 LR: 4.456e-02 Score 92.135 Data time: 2.0838, Total iter time: 5.4174 +thomas 04/10 17:58:39 ===> Epoch[237](71160/301): Loss 0.2060 LR: 4.453e-02 Score 93.245 Data time: 2.0610, Total iter time: 5.2644 +thomas 04/10 18:02:17 ===> Epoch[237](71200/301): Loss 0.1951 LR: 4.450e-02 Score 93.630 Data time: 2.0796, Total iter time: 5.3940 +thomas 04/10 18:05:53 ===> Epoch[237](71240/301): Loss 0.1918 LR: 4.446e-02 Score 93.697 Data time: 2.0723, Total iter time: 5.3188 +thomas 04/10 18:09:29 ===> Epoch[237](71280/301): Loss 0.1988 LR: 4.443e-02 Score 93.264 Data time: 2.0665, Total iter time: 5.3339 +thomas 04/10 18:13:10 ===> Epoch[237](71320/301): Loss 0.2163 LR: 4.440e-02 Score 92.748 Data time: 2.1120, Total iter time: 5.4441 +thomas 04/10 18:16:39 ===> Epoch[238](71360/301): Loss 0.2111 LR: 4.436e-02 Score 93.516 Data time: 2.0184, Total iter time: 5.1454 +thomas 04/10 18:20:20 ===> Epoch[238](71400/301): Loss 0.2354 LR: 4.433e-02 Score 92.510 Data time: 2.1045, Total iter time: 5.4409 +thomas 04/10 18:24:00 ===> Epoch[238](71440/301): Loss 0.2699 LR: 4.430e-02 Score 91.414 Data time: 2.1116, Total iter time: 5.4464 +thomas 04/10 18:27:32 ===> Epoch[238](71480/301): Loss 0.2122 LR: 4.427e-02 Score 93.207 Data time: 2.0219, Total iter time: 5.2186 +thomas 04/10 18:30:55 ===> Epoch[238](71520/301): Loss 0.2062 LR: 4.423e-02 Score 93.505 Data time: 1.9675, Total iter time: 5.0040 +thomas 04/10 18:34:41 ===> Epoch[238](71560/301): Loss 0.1880 LR: 4.420e-02 Score 93.816 Data time: 2.1681, Total iter time: 5.5732 +thomas 04/10 18:38:13 ===> Epoch[238](71600/301): Loss 
0.1997 LR: 4.417e-02 Score 93.486 Data time: 2.0418, Total iter time: 5.2249 +thomas 04/10 18:41:54 ===> Epoch[239](71640/301): Loss 0.2195 LR: 4.413e-02 Score 92.742 Data time: 2.1062, Total iter time: 5.4345 +thomas 04/10 18:45:34 ===> Epoch[239](71680/301): Loss 0.2157 LR: 4.410e-02 Score 93.027 Data time: 2.0962, Total iter time: 5.4236 +thomas 04/10 18:49:14 ===> Epoch[239](71720/301): Loss 0.2215 LR: 4.407e-02 Score 92.978 Data time: 2.0916, Total iter time: 5.4294 +thomas 04/10 18:52:45 ===> Epoch[239](71760/301): Loss 0.2103 LR: 4.404e-02 Score 93.042 Data time: 2.0112, Total iter time: 5.1994 +thomas 04/10 18:56:29 ===> Epoch[239](71800/301): Loss 0.2040 LR: 4.400e-02 Score 93.412 Data time: 2.1044, Total iter time: 5.5209 +thomas 04/10 19:00:11 ===> Epoch[239](71840/301): Loss 0.1885 LR: 4.397e-02 Score 93.832 Data time: 2.1295, Total iter time: 5.4753 +thomas 04/10 19:03:39 ===> Epoch[239](71880/301): Loss 0.2165 LR: 4.394e-02 Score 93.125 Data time: 2.0107, Total iter time: 5.1293 +thomas 04/10 19:07:15 ===> Epoch[239](71920/301): Loss 0.2499 LR: 4.390e-02 Score 91.927 Data time: 2.0308, Total iter time: 5.3061 +thomas 04/10 19:11:00 ===> Epoch[240](71960/301): Loss 0.2270 LR: 4.387e-02 Score 92.716 Data time: 2.1549, Total iter time: 5.5613 +thomas 04/10 19:14:24 ===> Epoch[240](72000/301): Loss 0.2037 LR: 4.384e-02 Score 93.513 Data time: 1.9483, Total iter time: 5.0372 +thomas 04/10 19:14:26 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 19:14:26 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 19:16:14 101/312: Data time: 0.0023, Iter time: 0.4822 Loss 1.933 (AVG: 0.643) Score 57.884 (AVG: 84.038) mIOU 59.020 mAP 69.691 mAcc 69.853 +IOU: 75.542 96.701 51.671 51.828 88.244 62.419 73.000 43.975 26.447 75.744 11.110 60.779 52.042 52.095 48.924 36.325 86.575 53.304 92.019 41.647 +mAP: 75.412 97.951 60.160 51.044 86.834 87.193 69.129 64.147 43.322 69.116 
31.021 56.528 62.858 62.539 63.477 84.621 94.762 86.043 93.031 54.641 +mAcc: 87.793 98.519 60.694 66.532 96.417 83.822 78.040 59.831 27.371 97.174 15.351 73.958 74.980 66.877 65.070 39.749 87.226 55.076 94.803 67.782 + +thomas 04/10 19:18:07 201/312: Data time: 0.0028, Iter time: 0.5128 Loss 1.241 (AVG: 0.588) Score 73.804 (AVG: 85.097) mIOU 61.698 mAP 71.303 mAcc 72.447 +IOU: 77.829 96.631 54.308 69.528 88.146 72.652 68.675 48.945 31.817 70.376 13.732 63.348 58.186 65.100 42.436 39.418 87.511 56.780 85.805 42.745 +mAP: 78.656 97.305 59.094 70.752 89.430 82.249 70.281 65.016 43.121 68.432 36.502 60.356 68.781 76.656 54.430 86.933 92.647 85.277 85.517 54.622 +mAcc: 88.690 98.495 62.868 83.794 96.483 88.468 74.589 63.587 33.169 95.507 20.537 75.854 79.368 77.653 57.232 45.207 88.323 58.568 89.779 70.766 + +thomas 04/10 19:19:43 301/312: Data time: 0.0031, Iter time: 0.2692 Loss 0.442 (AVG: 0.618) Score 88.615 (AVG: 84.600) mIOU 61.066 mAP 71.399 mAcc 71.877 +IOU: 77.564 96.201 55.179 67.286 86.959 69.670 68.399 47.689 33.338 67.832 14.650 59.683 59.031 67.862 44.673 41.741 86.517 50.503 85.705 40.832 +mAP: 78.909 97.279 59.550 67.627 89.880 80.627 70.532 63.825 45.693 68.733 37.396 57.641 69.784 80.888 58.649 90.075 89.410 84.836 82.075 54.568 +mAcc: 88.552 98.450 64.966 80.032 95.374 90.071 74.587 61.830 34.944 94.319 20.072 72.686 80.228 81.961 60.308 44.983 87.296 51.843 89.232 65.798 + +thomas 04/10 19:19:53 312/312: Data time: 0.0028, Iter time: 0.6217 Loss 1.023 (AVG: 0.613) Score 81.824 (AVG: 84.737) mIOU 61.176 mAP 71.416 mAcc 72.006 +IOU: 77.812 96.234 55.688 67.107 87.067 69.588 68.528 47.956 34.606 67.804 14.229 60.675 59.462 67.834 45.190 40.870 86.956 49.071 85.650 41.181 +mAP: 79.080 97.153 59.883 67.627 90.037 80.736 70.659 63.481 46.035 69.371 36.616 56.421 69.820 80.888 59.407 89.226 89.902 84.566 82.432 54.979 +mAcc: 88.674 98.445 65.412 80.032 95.448 90.197 74.658 62.319 36.199 94.359 19.963 73.704 80.613 81.961 61.015 43.974 87.725 50.371 89.113 
65.945 + +thomas 04/10 19:19:53 Finished test. Elapsed time: 326.9093 +thomas 04/10 19:19:53 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 19:23:12 ===> Epoch[240](72040/301): Loss 0.2366 LR: 4.381e-02 Score 92.571 Data time: 1.8876, Total iter time: 4.9088 +thomas 04/10 19:26:42 ===> Epoch[240](72080/301): Loss 0.2329 LR: 4.377e-02 Score 92.761 Data time: 2.0020, Total iter time: 5.1761 +thomas 04/10 19:30:19 ===> Epoch[240](72120/301): Loss 0.1976 LR: 4.374e-02 Score 93.833 Data time: 2.0393, Total iter time: 5.3600 +thomas 04/10 19:33:51 ===> Epoch[240](72160/301): Loss 0.1998 LR: 4.371e-02 Score 93.376 Data time: 2.0215, Total iter time: 5.2304 +thomas 04/10 19:37:18 ===> Epoch[240](72200/301): Loss 0.2077 LR: 4.367e-02 Score 93.321 Data time: 1.9680, Total iter time: 5.1038 +thomas 04/10 19:40:51 ===> Epoch[240](72240/301): Loss 0.2236 LR: 4.364e-02 Score 92.695 Data time: 2.0328, Total iter time: 5.2476 +thomas 04/10 19:44:26 ===> Epoch[241](72280/301): Loss 0.2464 LR: 4.361e-02 Score 92.026 Data time: 2.0856, Total iter time: 5.3166 +thomas 04/10 19:48:12 ===> Epoch[241](72320/301): Loss 0.2269 LR: 4.358e-02 Score 92.700 Data time: 2.1633, Total iter time: 5.5913 +thomas 04/10 19:52:05 ===> Epoch[241](72360/301): Loss 0.2164 LR: 4.354e-02 Score 92.956 Data time: 2.2263, Total iter time: 5.7416 +thomas 04/10 19:55:59 ===> Epoch[241](72400/301): Loss 0.1867 LR: 4.351e-02 Score 93.875 Data time: 2.2230, Total iter time: 5.7837 +thomas 04/10 19:59:40 ===> Epoch[241](72440/301): Loss 0.2034 LR: 4.348e-02 Score 93.494 Data time: 2.1177, Total iter time: 5.4416 +thomas 04/10 20:03:27 ===> Epoch[241](72480/301): Loss 0.2072 LR: 4.344e-02 Score 
93.473 Data time: 2.1623, Total iter time: 5.6018 +thomas 04/10 20:07:06 ===> Epoch[241](72520/301): Loss 0.1923 LR: 4.341e-02 Score 93.741 Data time: 2.0827, Total iter time: 5.4026 +thomas 04/10 20:10:55 ===> Epoch[242](72560/301): Loss 0.2217 LR: 4.338e-02 Score 92.889 Data time: 2.1870, Total iter time: 5.6511 +thomas 04/10 20:14:44 ===> Epoch[242](72600/301): Loss 0.2057 LR: 4.335e-02 Score 93.438 Data time: 2.1782, Total iter time: 5.6609 +thomas 04/10 20:18:30 ===> Epoch[242](72640/301): Loss 0.2137 LR: 4.331e-02 Score 93.025 Data time: 2.1451, Total iter time: 5.5583 +thomas 04/10 20:21:54 ===> Epoch[242](72680/301): Loss 0.2054 LR: 4.328e-02 Score 93.440 Data time: 1.9549, Total iter time: 5.0292 +thomas 04/10 20:25:52 ===> Epoch[242](72720/301): Loss 0.2334 LR: 4.325e-02 Score 92.419 Data time: 2.2651, Total iter time: 5.8832 +thomas 04/10 20:29:35 ===> Epoch[242](72760/301): Loss 0.2784 LR: 4.321e-02 Score 91.403 Data time: 2.1277, Total iter time: 5.5051 +thomas 04/10 20:33:10 ===> Epoch[242](72800/301): Loss 0.3130 LR: 4.318e-02 Score 90.133 Data time: 2.0535, Total iter time: 5.3222 +thomas 04/10 20:37:05 ===> Epoch[242](72840/301): Loss 0.2601 LR: 4.315e-02 Score 92.092 Data time: 2.2340, Total iter time: 5.7872 +thomas 04/10 20:40:37 ===> Epoch[243](72880/301): Loss 0.2360 LR: 4.311e-02 Score 92.350 Data time: 2.0210, Total iter time: 5.2334 +thomas 04/10 20:44:22 ===> Epoch[243](72920/301): Loss 0.2534 LR: 4.308e-02 Score 92.203 Data time: 2.1313, Total iter time: 5.5378 +thomas 04/10 20:48:18 ===> Epoch[243](72960/301): Loss 0.2470 LR: 4.305e-02 Score 92.198 Data time: 2.2426, Total iter time: 5.8350 +thomas 04/10 20:52:12 ===> Epoch[243](73000/301): Loss 0.1974 LR: 4.302e-02 Score 93.606 Data time: 2.2239, Total iter time: 5.7614 +thomas 04/10 20:52:14 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 20:52:14 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/10 20:54:15 101/312: Data time: 0.0037, Iter time: 0.9659 Loss 0.525 (AVG: 0.558) Score 83.713 (AVG: 84.991) mIOU 59.493 mAP 69.709 mAcc 70.969 +IOU: 79.442 95.756 46.753 67.023 87.610 69.868 60.198 47.867 46.190 76.908 21.228 58.972 46.418 62.474 34.784 34.383 95.584 44.736 66.163 47.508 +mAP: 78.294 96.649 51.102 68.880 89.283 80.543 68.619 58.769 50.154 67.404 41.931 57.629 67.379 78.526 52.308 82.534 98.238 77.394 74.340 54.199 +mAcc: 87.710 98.857 71.145 76.157 96.065 95.337 64.501 79.389 51.246 90.947 25.694 66.142 80.054 74.123 50.604 35.302 96.651 46.333 74.707 58.418 + +thomas 04/10 20:56:05 201/312: Data time: 0.0029, Iter time: 0.3121 Loss 0.121 (AVG: 0.590) Score 96.823 (AVG: 84.188) mIOU 60.066 mAP 70.823 mAcc 71.495 +IOU: 76.153 96.127 49.729 69.397 86.628 68.853 63.975 44.396 38.011 76.271 16.025 61.897 52.617 57.992 40.904 37.427 92.846 51.920 73.473 46.674 +mAP: 76.910 97.013 56.596 72.611 89.012 79.625 68.029 62.492 46.940 72.369 36.978 64.014 69.368 71.494 54.993 84.714 97.139 78.294 82.880 54.995 +mAcc: 85.637 98.799 71.102 78.263 94.654 95.204 67.489 76.825 43.197 91.918 18.154 68.117 85.105 69.275 58.980 39.749 94.718 53.788 80.013 58.919 + +thomas 04/10 20:57:54 301/312: Data time: 0.0022, Iter time: 0.6331 Loss 0.719 (AVG: 0.567) Score 72.690 (AVG: 84.594) mIOU 61.119 mAP 71.310 mAcc 72.564 +IOU: 76.806 96.150 56.533 71.786 87.296 71.410 63.750 44.467 38.703 74.057 16.635 62.451 56.904 56.745 44.229 32.983 92.863 58.806 74.780 45.018 +mAP: 78.193 97.232 59.434 74.434 89.918 81.314 68.793 63.299 48.151 70.728 37.202 60.538 70.556 72.754 60.510 77.791 97.263 82.222 80.489 55.382 +mAcc: 85.646 98.819 74.940 81.010 95.289 95.154 67.530 76.558 45.160 90.843 19.290 72.653 84.950 70.782 63.973 35.347 94.542 61.268 79.983 57.537 + +thomas 04/10 20:58:01 312/312: Data time: 0.0031, Iter time: 0.2062 Loss 0.184 (AVG: 0.563) Score 92.603 (AVG: 84.651) mIOU 61.307 mAP 71.585 mAcc 72.767 +IOU: 76.864 96.152 57.499 71.771 87.550 71.295 63.736 
44.404 38.255 73.926 16.590 63.030 56.458 56.180 46.293 33.881 92.853 58.979 75.331 45.091 +mAP: 78.184 97.293 59.934 74.906 90.069 81.314 69.649 63.439 47.865 70.728 36.795 61.407 70.625 73.297 61.309 78.539 97.304 82.381 81.094 55.567 +mAcc: 85.643 98.829 75.726 81.279 95.364 95.154 67.463 76.248 44.639 90.843 19.230 72.780 84.684 71.135 65.989 36.213 94.513 61.507 80.426 57.668 + +thomas 04/10 20:58:01 Finished test. Elapsed time: 347.3332 +thomas 04/10 20:58:01 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 21:01:28 ===> Epoch[243](73040/301): Loss 0.2067 LR: 4.298e-02 Score 93.192 Data time: 2.0110, Total iter time: 5.1044 +thomas 04/10 21:05:19 ===> Epoch[243](73080/301): Loss 0.2388 LR: 4.295e-02 Score 92.311 Data time: 2.2398, Total iter time: 5.6925 +thomas 04/10 21:09:09 ===> Epoch[243](73120/301): Loss 0.3065 LR: 4.292e-02 Score 90.644 Data time: 2.1974, Total iter time: 5.6717 +thomas 04/10 21:12:55 ===> Epoch[244](73160/301): Loss 0.2788 LR: 4.288e-02 Score 91.215 Data time: 2.1475, Total iter time: 5.5744 +thomas 04/10 21:16:38 ===> Epoch[244](73200/301): Loss 0.2553 LR: 4.285e-02 Score 92.061 Data time: 2.1572, Total iter time: 5.5124 +thomas 04/10 21:20:40 ===> Epoch[244](73240/301): Loss 0.2578 LR: 4.282e-02 Score 91.817 Data time: 2.3079, Total iter time: 5.9564 +thomas 04/10 21:24:34 ===> Epoch[244](73280/301): Loss 0.2502 LR: 4.279e-02 Score 92.249 Data time: 2.2485, Total iter time: 5.7812 +thomas 04/10 21:28:41 ===> Epoch[244](73320/301): Loss 0.2285 LR: 4.275e-02 Score 92.658 Data time: 2.3143, Total iter time: 6.0967 +thomas 04/10 21:32:19 ===> Epoch[244](73360/301): Loss 0.2081 LR: 4.272e-02 Score 93.025 Data time: 
2.0710, Total iter time: 5.3747 +thomas 04/10 21:36:04 ===> Epoch[244](73400/301): Loss 0.2062 LR: 4.269e-02 Score 93.321 Data time: 2.1542, Total iter time: 5.5636 +thomas 04/10 21:39:39 ===> Epoch[244](73440/301): Loss 0.2099 LR: 4.265e-02 Score 93.356 Data time: 2.0264, Total iter time: 5.2915 +thomas 04/10 21:43:24 ===> Epoch[245](73480/301): Loss 0.2257 LR: 4.262e-02 Score 93.078 Data time: 2.1396, Total iter time: 5.5719 +thomas 04/10 21:47:02 ===> Epoch[245](73520/301): Loss 0.1908 LR: 4.259e-02 Score 93.835 Data time: 2.0698, Total iter time: 5.3759 +thomas 04/10 21:50:45 ===> Epoch[245](73560/301): Loss 0.2086 LR: 4.255e-02 Score 92.910 Data time: 2.1308, Total iter time: 5.4916 +thomas 04/10 21:54:28 ===> Epoch[245](73600/301): Loss 0.1899 LR: 4.252e-02 Score 93.756 Data time: 2.1286, Total iter time: 5.5091 +thomas 04/10 21:58:21 ===> Epoch[245](73640/301): Loss 0.2333 LR: 4.249e-02 Score 92.469 Data time: 2.2020, Total iter time: 5.7461 +thomas 04/10 22:02:05 ===> Epoch[245](73680/301): Loss 0.2034 LR: 4.246e-02 Score 93.410 Data time: 2.1415, Total iter time: 5.5279 +thomas 04/10 22:05:52 ===> Epoch[245](73720/301): Loss 0.2070 LR: 4.242e-02 Score 93.076 Data time: 2.1483, Total iter time: 5.5842 +thomas 04/10 22:09:48 ===> Epoch[246](73760/301): Loss 0.1937 LR: 4.239e-02 Score 93.637 Data time: 2.2549, Total iter time: 5.8427 +thomas 04/10 22:13:35 ===> Epoch[246](73800/301): Loss 0.2104 LR: 4.236e-02 Score 93.323 Data time: 2.1589, Total iter time: 5.5763 +thomas 04/10 22:17:13 ===> Epoch[246](73840/301): Loss 0.2097 LR: 4.232e-02 Score 93.347 Data time: 2.0849, Total iter time: 5.3901 +thomas 04/10 22:20:40 ===> Epoch[246](73880/301): Loss 0.2116 LR: 4.229e-02 Score 93.125 Data time: 2.0005, Total iter time: 5.0963 +thomas 04/10 22:24:13 ===> Epoch[246](73920/301): Loss 0.1944 LR: 4.226e-02 Score 93.692 Data time: 2.0406, Total iter time: 5.2441 +thomas 04/10 22:28:03 ===> Epoch[246](73960/301): Loss 0.2078 LR: 4.222e-02 Score 93.063 Data time: 
2.1995, Total iter time: 5.6858 +thomas 04/10 22:31:50 ===> Epoch[246](74000/301): Loss 0.2202 LR: 4.219e-02 Score 92.802 Data time: 2.1518, Total iter time: 5.5954 +thomas 04/10 22:31:52 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 22:31:52 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 22:33:48 101/312: Data time: 0.0027, Iter time: 0.4721 Loss 0.205 (AVG: 0.590) Score 91.744 (AVG: 84.705) mIOU 61.863 mAP 70.575 mAcc 73.445 +IOU: 76.799 95.931 54.587 80.064 87.288 69.038 68.713 44.369 32.384 82.266 4.956 60.020 58.651 68.657 43.243 57.437 85.221 41.455 80.086 46.099 +mAP: 76.598 97.422 54.028 79.440 90.436 80.372 70.702 61.709 48.192 69.475 26.498 61.890 63.521 80.595 66.656 93.276 90.540 69.252 75.969 54.941 +mAcc: 87.660 98.757 74.320 90.940 91.166 94.946 75.957 69.967 34.396 92.532 5.118 70.217 85.090 81.547 68.778 63.351 85.712 54.870 85.296 58.281 + +thomas 04/10 22:35:39 201/312: Data time: 0.0027, Iter time: 0.7781 Loss 0.641 (AVG: 0.572) Score 82.771 (AVG: 85.148) mIOU 62.887 mAP 71.594 mAcc 74.307 +IOU: 77.273 95.909 56.450 77.504 86.327 68.102 71.775 44.170 39.473 80.802 7.777 56.316 59.651 62.579 44.382 60.628 89.587 48.122 81.976 48.936 +mAP: 76.704 97.338 59.089 77.742 89.605 81.366 70.940 58.020 52.255 72.211 32.082 57.260 64.084 77.556 64.874 90.523 94.837 77.886 78.954 58.550 +mAcc: 88.206 98.680 72.875 88.723 89.982 94.845 78.736 68.738 41.864 91.609 8.124 66.014 85.764 77.449 66.943 64.085 90.345 63.388 86.028 63.746 + +thomas 04/10 22:37:25 301/312: Data time: 0.0030, Iter time: 0.3049 Loss 0.226 (AVG: 0.571) Score 90.777 (AVG: 85.168) mIOU 62.216 mAP 71.691 mAcc 73.788 +IOU: 77.705 96.094 54.873 74.739 86.220 67.386 70.966 44.749 37.912 76.905 9.195 54.812 55.952 68.066 45.035 57.880 84.631 49.011 83.854 48.325 +mAP: 76.525 97.260 56.884 77.313 89.988 81.878 72.093 59.721 49.799 72.146 33.829 59.572 64.888 82.167 65.056 89.974 89.623 
79.019 78.629 57.448 +mAcc: 88.776 98.748 71.182 85.445 89.842 95.396 78.607 67.602 39.961 89.200 9.565 63.675 85.023 81.712 69.340 62.930 85.438 63.058 87.916 62.337 + +thomas 04/10 22:37:35 312/312: Data time: 0.0025, Iter time: 0.4496 Loss 0.555 (AVG: 0.570) Score 86.870 (AVG: 85.186) mIOU 62.140 mAP 71.393 mAcc 73.623 +IOU: 77.712 96.130 54.279 74.726 86.384 68.059 71.234 45.452 37.435 77.259 9.078 54.702 56.401 67.421 45.168 56.979 83.455 49.070 83.854 47.998 +mAP: 76.621 97.308 56.230 77.709 89.950 82.098 72.078 60.569 48.836 71.637 33.098 59.895 64.474 80.141 65.175 87.527 89.901 79.285 78.629 56.706 +mAcc: 88.816 98.755 70.260 85.710 90.005 95.654 78.827 68.107 39.621 89.493 9.487 63.467 85.045 80.918 69.528 61.866 84.217 62.697 87.916 62.079 + +thomas 04/10 22:37:35 Finished test. Elapsed time: 342.9827 +thomas 04/10 22:37:35 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 22:41:19 ===> Epoch[246](74040/301): Loss 0.2203 LR: 4.216e-02 Score 93.026 Data time: 2.1458, Total iter time: 5.5344 +thomas 04/10 22:45:07 ===> Epoch[247](74080/301): Loss 0.2046 LR: 4.213e-02 Score 93.386 Data time: 2.1739, Total iter time: 5.6129 +thomas 04/10 22:49:03 ===> Epoch[247](74120/301): Loss 0.2351 LR: 4.209e-02 Score 92.387 Data time: 2.2543, Total iter time: 5.8104 +thomas 04/10 22:52:44 ===> Epoch[247](74160/301): Loss 0.2232 LR: 4.206e-02 Score 92.997 Data time: 2.1148, Total iter time: 5.4685 +thomas 04/10 22:56:29 ===> Epoch[247](74200/301): Loss 0.2171 LR: 4.203e-02 Score 93.033 Data time: 2.1580, Total iter time: 5.5458 +thomas 04/10 23:00:41 ===> Epoch[247](74240/301): Loss 0.2364 LR: 4.199e-02 Score 92.590 Data time: 2.4774, Total iter time: 6.2189 +thomas 04/10 23:04:04 ===> Epoch[247](74280/301): Loss 0.2528 LR: 4.196e-02 Score 91.917 Data time: 1.9527, Total iter time: 5.0308 +thomas 04/10 23:07:37 ===> Epoch[247](74320/301): Loss 0.2008 LR: 4.193e-02 Score 93.619 Data time: 2.0226, Total iter time: 5.2293 +thomas 04/10 23:11:04 ===> Epoch[248](74360/301): Loss 0.2144 LR: 4.189e-02 Score 93.009 Data time: 1.9540, Total iter time: 5.1182 +thomas 04/10 23:14:41 ===> Epoch[248](74400/301): Loss 0.2253 LR: 4.186e-02 Score 92.625 Data time: 2.0180, Total iter time: 5.3227 +thomas 04/10 23:18:08 ===> Epoch[248](74440/301): Loss 0.1962 LR: 4.183e-02 Score 93.602 Data time: 1.9771, Total iter time: 5.1087 +thomas 04/10 23:21:40 ===> Epoch[248](74480/301): Loss 0.1934 LR: 4.179e-02 Score 93.678 Data time: 1.9654, Total iter time: 5.2164 +thomas 04/10 23:25:10 ===> Epoch[248](74520/301): Loss 0.1915 LR: 4.176e-02 Score 93.726 Data time: 1.9794, Total iter time: 5.1935 +thomas 04/10 23:28:37 ===> Epoch[248](74560/301): Loss 0.1821 LR: 4.173e-02 Score 93.990 Data time: 1.9696, Total iter time: 5.0875 +thomas 04/10 23:32:13 ===> Epoch[248](74600/301): Loss 
0.1851 LR: 4.170e-02 Score 93.937 Data time: 2.0489, Total iter time: 5.3298 +thomas 04/10 23:35:40 ===> Epoch[248](74640/301): Loss 0.2178 LR: 4.166e-02 Score 93.013 Data time: 1.9493, Total iter time: 5.1006 +thomas 04/10 23:39:16 ===> Epoch[249](74680/301): Loss 0.1983 LR: 4.163e-02 Score 93.532 Data time: 2.0465, Total iter time: 5.3191 +thomas 04/10 23:42:41 ===> Epoch[249](74720/301): Loss 0.2173 LR: 4.160e-02 Score 93.269 Data time: 1.9269, Total iter time: 5.0501 +thomas 04/10 23:46:16 ===> Epoch[249](74760/301): Loss 0.2236 LR: 4.156e-02 Score 92.539 Data time: 2.0416, Total iter time: 5.3081 +thomas 04/10 23:49:48 ===> Epoch[249](74800/301): Loss 0.1947 LR: 4.153e-02 Score 93.552 Data time: 2.0099, Total iter time: 5.2398 +thomas 04/10 23:53:25 ===> Epoch[249](74840/301): Loss 0.1929 LR: 4.150e-02 Score 93.494 Data time: 2.0359, Total iter time: 5.3411 +thomas 04/10 23:57:01 ===> Epoch[249](74880/301): Loss 0.1905 LR: 4.146e-02 Score 93.710 Data time: 2.0519, Total iter time: 5.3255 +thomas 04/11 00:00:38 ===> Epoch[249](74920/301): Loss 0.1902 LR: 4.143e-02 Score 93.711 Data time: 2.0503, Total iter time: 5.3441 +thomas 04/11 00:04:05 ===> Epoch[250](74960/301): Loss 0.2349 LR: 4.140e-02 Score 92.740 Data time: 1.9532, Total iter time: 5.0911 +thomas 04/11 00:07:27 ===> Epoch[250](75000/301): Loss 0.2326 LR: 4.137e-02 Score 92.621 Data time: 1.9285, Total iter time: 4.9780 +thomas 04/11 00:07:28 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 00:07:28 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/11 00:09:15 101/312: Data time: 0.0025, Iter time: 0.3609 Loss 1.269 (AVG: 0.594) Score 66.670 (AVG: 85.830) mIOU 63.959 mAP 72.766 mAcc 74.383 +IOU: 78.476 95.525 62.693 71.569 88.588 78.978 70.212 46.290 33.513 76.213 23.049 60.518 59.955 61.616 43.705 64.711 92.625 53.215 76.006 41.726 +mAP: 77.780 97.075 66.679 75.555 92.152 77.550 75.305 66.868 49.395 52.555 
49.579 61.429 62.207 81.353 52.015 94.142 96.352 87.554 82.569 57.206 +mAcc: 92.929 98.954 75.054 77.760 93.404 94.050 77.565 68.549 34.433 90.154 26.428 93.567 83.584 77.945 54.287 71.820 94.323 53.660 82.818 46.374 + +thomas 04/11 00:11:02 201/312: Data time: 0.0023, Iter time: 0.5203 Loss 0.421 (AVG: 0.605) Score 91.062 (AVG: 85.049) mIOU 61.409 mAP 71.805 mAcc 71.848 +IOU: 76.990 95.828 56.241 74.740 88.071 78.110 69.013 45.995 29.993 75.133 16.738 59.600 59.770 57.378 35.761 53.863 88.165 45.507 80.179 41.111 +mAP: 75.365 97.192 64.070 78.346 90.561 81.206 73.965 66.525 44.672 60.854 43.609 57.562 60.949 79.620 55.332 91.519 93.033 85.111 80.268 56.349 +mAcc: 92.254 98.906 70.681 82.167 92.697 93.205 77.973 65.470 31.151 85.194 19.090 84.451 83.717 81.466 48.088 59.929 91.933 45.973 85.737 46.876 + +thomas 04/11 00:12:51 301/312: Data time: 0.0023, Iter time: 0.5830 Loss 0.724 (AVG: 0.596) Score 72.091 (AVG: 85.129) mIOU 61.327 mAP 71.503 mAcc 71.897 +IOU: 77.282 95.814 56.758 73.843 88.852 81.917 69.034 45.171 29.926 75.574 15.540 58.060 57.665 60.462 34.003 53.414 88.241 43.762 78.907 42.307 +mAP: 75.115 97.149 61.285 75.872 90.570 81.606 72.843 64.616 48.257 64.391 41.998 58.207 59.777 81.509 52.788 89.003 93.269 82.798 80.646 58.369 +mAcc: 92.023 98.950 70.751 79.849 92.997 94.456 77.935 64.100 31.149 86.375 17.402 84.867 82.933 85.186 46.326 60.292 92.363 44.126 87.222 48.642 + +thomas 04/11 00:13:02 312/312: Data time: 0.0025, Iter time: 0.4363 Loss 0.748 (AVG: 0.588) Score 81.073 (AVG: 85.303) mIOU 61.548 mAP 71.560 mAcc 72.120 +IOU: 77.408 95.864 57.519 73.762 89.122 81.592 69.833 45.181 29.697 75.468 15.539 58.588 57.149 60.876 36.658 53.414 88.239 43.888 78.907 42.253 +mAP: 75.149 97.209 60.769 75.105 90.494 82.064 73.071 64.571 48.323 65.059 41.998 58.430 59.777 81.564 53.769 89.003 93.269 82.842 80.646 58.081 +mAcc: 92.012 98.978 70.983 79.631 93.152 94.547 78.323 64.398 30.897 86.312 17.402 84.933 82.933 85.739 49.280 60.292 92.363 44.280 87.222 
48.716 + +thomas 04/11 00:13:02 Finished test. Elapsed time: 333.7607 +thomas 04/11 00:13:02 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 00:16:38 ===> Epoch[250](75040/301): Loss 0.2132 LR: 4.133e-02 Score 93.174 Data time: 2.0599, Total iter time: 5.3184 +thomas 04/11 00:20:01 ===> Epoch[250](75080/301): Loss 0.1996 LR: 4.130e-02 Score 93.484 Data time: 1.9219, Total iter time: 5.0148 +thomas 04/11 00:23:30 ===> Epoch[250](75120/301): Loss 0.2095 LR: 4.127e-02 Score 93.432 Data time: 1.9687, Total iter time: 5.1522 +thomas 04/11 00:27:02 ===> Epoch[250](75160/301): Loss 0.2338 LR: 4.123e-02 Score 92.634 Data time: 1.9934, Total iter time: 5.2167 +thomas 04/11 00:30:33 ===> Epoch[250](75200/301): Loss 0.2185 LR: 4.120e-02 Score 93.071 Data time: 2.0221, Total iter time: 5.1945 +thomas 04/11 00:34:19 ===> Epoch[250](75240/301): Loss 0.2446 LR: 4.117e-02 Score 92.522 Data time: 2.1656, Total iter time: 5.5816 +thomas 04/11 00:37:55 ===> Epoch[251](75280/301): Loss 0.1985 LR: 4.113e-02 Score 93.524 Data time: 2.0615, Total iter time: 5.3319 +thomas 04/11 00:41:28 ===> Epoch[251](75320/301): Loss 0.1883 LR: 4.110e-02 Score 93.585 Data time: 2.0225, Total iter time: 5.2434 +thomas 04/11 00:45:00 ===> Epoch[251](75360/301): Loss 0.1992 LR: 4.107e-02 Score 93.398 Data time: 2.0248, Total iter time: 5.2185 +thomas 04/11 00:48:35 ===> Epoch[251](75400/301): Loss 0.1862 LR: 4.103e-02 Score 94.081 Data time: 2.0429, Total iter time: 5.3048 +thomas 04/11 00:52:14 ===> Epoch[251](75440/301): Loss 0.1981 LR: 4.100e-02 Score 93.584 Data time: 2.0951, Total iter time: 5.4070 +thomas 04/11 00:55:45 ===> Epoch[251](75480/301): Loss 0.1824 LR: 4.097e-02 Score 
93.986 Data time: 2.0270, Total iter time: 5.1794 +thomas 04/11 00:59:11 ===> Epoch[251](75520/301): Loss 0.1918 LR: 4.093e-02 Score 93.589 Data time: 1.9616, Total iter time: 5.0765 +thomas 04/11 01:02:35 ===> Epoch[252](75560/301): Loss 0.2151 LR: 4.090e-02 Score 93.258 Data time: 1.9631, Total iter time: 5.0480 +thomas 04/11 01:06:02 ===> Epoch[252](75600/301): Loss 0.2221 LR: 4.087e-02 Score 92.845 Data time: 1.9571, Total iter time: 5.0800 +thomas 04/11 01:09:55 ===> Epoch[252](75640/301): Loss 0.2276 LR: 4.084e-02 Score 92.725 Data time: 2.2279, Total iter time: 5.7522 +thomas 04/11 01:13:18 ===> Epoch[252](75680/301): Loss 0.2269 LR: 4.080e-02 Score 92.755 Data time: 1.9450, Total iter time: 5.0132 +thomas 04/11 01:16:50 ===> Epoch[252](75720/301): Loss 0.1949 LR: 4.077e-02 Score 93.624 Data time: 2.0252, Total iter time: 5.2147 +thomas 04/11 01:20:16 ===> Epoch[252](75760/301): Loss 0.1879 LR: 4.074e-02 Score 93.723 Data time: 1.9927, Total iter time: 5.0889 +thomas 04/11 01:24:00 ===> Epoch[252](75800/301): Loss 0.2066 LR: 4.070e-02 Score 93.326 Data time: 2.1602, Total iter time: 5.5332 +thomas 04/11 01:27:23 ===> Epoch[252](75840/301): Loss 0.2253 LR: 4.067e-02 Score 92.847 Data time: 1.9507, Total iter time: 4.9992 +thomas 04/11 01:31:08 ===> Epoch[253](75880/301): Loss 0.2165 LR: 4.064e-02 Score 92.926 Data time: 2.1117, Total iter time: 5.5349 +thomas 04/11 01:34:38 ===> Epoch[253](75920/301): Loss 0.2146 LR: 4.060e-02 Score 93.056 Data time: 1.9897, Total iter time: 5.1617 +thomas 04/11 01:38:12 ===> Epoch[253](75960/301): Loss 0.2158 LR: 4.057e-02 Score 93.002 Data time: 2.0376, Total iter time: 5.2646 +thomas 04/11 01:41:48 ===> Epoch[253](76000/301): Loss 0.1851 LR: 4.054e-02 Score 93.950 Data time: 2.0551, Total iter time: 5.3302 +thomas 04/11 01:41:49 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 01:41:49 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/11 01:43:36 101/312: Data time: 0.0023, Iter time: 0.4961 Loss 0.881 (AVG: 0.613) Score 80.048 (AVG: 83.657) mIOU 58.923 mAP 71.021 mAcc 71.220 +IOU: 73.794 96.531 52.784 70.879 90.299 65.091 74.533 40.947 39.731 63.441 11.900 67.884 57.192 67.029 45.146 29.325 86.249 42.876 66.803 36.025 +mAP: 76.083 97.886 61.674 67.981 91.944 75.476 72.328 55.946 46.421 67.184 32.968 68.313 66.239 87.245 73.426 88.676 91.169 78.029 72.684 48.750 +mAcc: 83.062 99.062 69.180 74.961 94.786 92.370 83.896 51.578 47.112 95.002 12.808 88.413 85.189 82.944 79.233 29.537 86.966 43.259 67.961 57.071 + +thomas 04/11 01:45:22 201/312: Data time: 0.0025, Iter time: 0.6239 Loss 1.168 (AVG: 0.616) Score 75.154 (AVG: 83.578) mIOU 59.013 mAP 71.635 mAcc 71.917 +IOU: 75.075 96.373 50.730 66.823 88.328 65.649 70.116 42.518 41.385 57.394 14.989 59.356 55.019 66.120 39.936 45.066 81.588 44.818 78.083 40.899 +mAP: 77.050 97.720 62.715 69.874 90.523 74.020 71.259 57.651 50.122 68.702 38.973 62.464 66.804 81.087 66.431 87.127 90.785 80.828 84.366 54.193 +mAcc: 84.496 98.995 69.873 71.640 93.327 89.927 81.240 54.127 47.816 94.071 16.345 87.586 81.259 79.956 74.375 46.467 82.390 45.246 79.346 59.852 + +thomas 04/11 01:47:02 301/312: Data time: 0.0024, Iter time: 0.2638 Loss 0.300 (AVG: 0.613) Score 89.187 (AVG: 83.519) mIOU 58.203 mAP 71.617 mAcc 71.554 +IOU: 75.667 96.190 51.114 67.821 88.027 69.732 66.970 41.499 43.397 56.159 16.005 54.830 54.854 68.119 38.238 33.191 82.162 40.884 78.498 40.699 +mAP: 78.307 97.397 61.210 69.897 90.690 80.344 69.036 58.138 50.845 70.448 41.245 60.372 66.409 81.063 69.179 85.130 89.482 81.699 78.203 53.242 +mAcc: 85.434 98.874 69.932 72.282 92.948 92.540 77.590 52.708 49.079 94.498 18.107 88.667 83.845 80.138 77.694 34.751 82.987 41.330 79.654 58.015 + +thomas 04/11 01:47:14 312/312: Data time: 0.0024, Iter time: 0.2669 Loss 0.212 (AVG: 0.616) Score 92.652 (AVG: 83.506) mIOU 58.042 mAP 71.503 mAcc 71.258 +IOU: 75.527 96.235 51.021 67.820 88.118 70.141 66.668 
41.501 42.192 56.273 15.675 54.776 53.505 68.130 39.459 32.479 81.739 40.521 78.498 40.571 +mAP: 78.086 97.455 61.052 69.897 90.831 80.835 69.755 58.031 49.804 70.426 41.186 59.932 66.195 81.101 70.081 82.895 89.660 81.480 78.203 53.150 +mAcc: 85.417 98.888 69.865 72.282 93.057 92.804 77.047 52.656 47.608 94.739 17.686 87.107 83.667 80.071 77.624 33.972 82.558 40.950 79.654 57.502 + +thomas 04/11 01:47:14 Finished test. Elapsed time: 324.5964 +thomas 04/11 01:47:14 Current best mIoU: 62.239 at iter 59000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 01:50:38 ===> Epoch[253](76040/301): Loss 0.1963 LR: 4.050e-02 Score 93.799 Data time: 1.9488, Total iter time: 5.0238 +thomas 04/11 01:54:07 ===> Epoch[253](76080/301): Loss 0.2080 LR: 4.047e-02 Score 93.273 Data time: 2.0257, Total iter time: 5.1676 +thomas 04/11 01:58:00 ===> Epoch[253](76120/301): Loss 0.1941 LR: 4.044e-02 Score 93.553 Data time: 2.2450, Total iter time: 5.7339 +thomas 04/11 02:01:52 ===> Epoch[254](76160/301): Loss 0.1932 LR: 4.040e-02 Score 93.808 Data time: 2.2323, Total iter time: 5.7258 +thomas 04/11 02:05:37 ===> Epoch[254](76200/301): Loss 0.2140 LR: 4.037e-02 Score 93.104 Data time: 2.1313, Total iter time: 5.5347 +thomas 04/11 02:08:56 ===> Epoch[254](76240/301): Loss 0.2101 LR: 4.034e-02 Score 93.056 Data time: 1.9030, Total iter time: 4.9165 +thomas 04/11 02:12:13 ===> Epoch[254](76280/301): Loss 0.1872 LR: 4.030e-02 Score 93.891 Data time: 1.8895, Total iter time: 4.8575 +thomas 04/11 02:15:43 ===> Epoch[254](76320/301): Loss 0.1829 LR: 4.027e-02 Score 93.913 Data time: 1.9913, Total iter time: 5.1747 +thomas 04/11 02:19:15 ===> Epoch[254](76360/301): Loss 0.2012 LR: 4.024e-02 Score 93.719 Data time: 
1.9987, Total iter time: 5.2209 +thomas 04/11 02:22:44 ===> Epoch[254](76400/301): Loss 0.1922 LR: 4.021e-02 Score 93.788 Data time: 1.9798, Total iter time: 5.1503 +thomas 04/11 02:26:13 ===> Epoch[254](76440/301): Loss 0.1924 LR: 4.017e-02 Score 93.704 Data time: 1.9963, Total iter time: 5.1506 +thomas 04/11 02:29:50 ===> Epoch[255](76480/301): Loss 0.2190 LR: 4.014e-02 Score 92.914 Data time: 2.0928, Total iter time: 5.3533 +thomas 04/11 02:33:31 ===> Epoch[255](76520/301): Loss 0.1958 LR: 4.011e-02 Score 93.637 Data time: 2.1178, Total iter time: 5.4701 +thomas 04/11 02:37:12 ===> Epoch[255](76560/301): Loss 0.2099 LR: 4.007e-02 Score 93.404 Data time: 2.1167, Total iter time: 5.4459 +thomas 04/11 02:40:33 ===> Epoch[255](76600/301): Loss 0.1748 LR: 4.004e-02 Score 94.161 Data time: 1.9402, Total iter time: 4.9467 +thomas 04/11 02:44:06 ===> Epoch[255](76640/301): Loss 0.1875 LR: 4.001e-02 Score 93.889 Data time: 2.0633, Total iter time: 5.2632 +thomas 04/11 02:47:49 ===> Epoch[255](76680/301): Loss 0.1968 LR: 3.997e-02 Score 93.585 Data time: 2.1285, Total iter time: 5.4929 +thomas 04/11 02:51:37 ===> Epoch[255](76720/301): Loss 0.1878 LR: 3.994e-02 Score 93.736 Data time: 2.1664, Total iter time: 5.6187 +thomas 04/11 02:55:07 ===> Epoch[256](76760/301): Loss 0.1966 LR: 3.991e-02 Score 93.794 Data time: 2.0084, Total iter time: 5.1833 +thomas 04/11 02:58:46 ===> Epoch[256](76800/301): Loss 0.1838 LR: 3.987e-02 Score 93.937 Data time: 2.0941, Total iter time: 5.3845 +thomas 04/11 03:02:07 ===> Epoch[256](76840/301): Loss 0.2190 LR: 3.984e-02 Score 93.047 Data time: 1.9206, Total iter time: 4.9734 +thomas 04/11 03:05:16 ===> Epoch[256](76880/301): Loss 0.2205 LR: 3.981e-02 Score 92.908 Data time: 1.8162, Total iter time: 4.6469 +thomas 04/11 03:08:52 ===> Epoch[256](76920/301): Loss 0.2194 LR: 3.977e-02 Score 93.102 Data time: 2.0540, Total iter time: 5.3240 +thomas 04/11 03:12:24 ===> Epoch[256](76960/301): Loss 0.1876 LR: 3.974e-02 Score 93.937 Data time: 
2.0417, Total iter time: 5.2362 +thomas 04/11 03:15:42 ===> Epoch[256](77000/301): Loss 0.2007 LR: 3.971e-02 Score 93.423 Data time: 1.8971, Total iter time: 4.8859 +thomas 04/11 03:15:44 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 03:15:44 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/11 03:17:28 101/312: Data time: 0.0025, Iter time: 0.4016 Loss 0.803 (AVG: 0.581) Score 82.004 (AVG: 85.191) mIOU 62.539 mAP 72.429 mAcc 73.072 +IOU: 77.047 96.085 62.178 68.597 79.131 71.883 71.747 49.248 25.398 79.298 18.774 65.305 59.204 71.239 29.969 49.168 91.273 51.083 84.096 50.058 +mAP: 77.824 97.022 57.052 75.637 90.209 78.122 71.254 62.749 48.598 75.664 45.545 61.802 69.123 85.088 54.244 84.037 93.095 84.065 83.926 53.528 +mAcc: 87.623 98.275 76.128 84.585 85.080 95.422 83.914 73.426 26.716 94.669 29.858 77.575 73.424 90.599 36.086 51.583 92.524 52.019 85.652 66.281 + +thomas 04/11 03:19:13 201/312: Data time: 0.0028, Iter time: 0.3945 Loss 0.349 (AVG: 0.566) Score 88.972 (AVG: 85.317) mIOU 62.936 mAP 72.100 mAcc 73.320 +IOU: 77.843 96.263 61.768 68.685 85.446 72.250 70.481 48.546 31.148 76.540 19.394 65.492 59.313 66.867 37.823 53.580 88.399 45.560 85.611 47.709 +mAP: 78.914 97.574 57.107 76.730 91.761 79.443 69.079 64.043 48.815 72.752 45.905 59.494 66.903 81.976 55.949 88.856 90.239 77.682 81.299 57.475 +mAcc: 88.287 98.543 74.459 87.561 90.370 94.238 82.039 69.018 32.961 92.140 26.164 77.358 71.830 87.074 44.262 57.646 89.761 47.033 87.298 68.361 + +thomas 04/11 03:21:01 301/312: Data time: 0.0035, Iter time: 0.5929 Loss 0.155 (AVG: 0.546) Score 95.702 (AVG: 85.596) mIOU 62.665 mAP 72.385 mAcc 73.480 +IOU: 78.547 96.385 61.257 67.041 87.441 73.167 71.279 47.451 30.708 75.753 18.696 61.885 58.879 62.822 40.694 50.663 88.865 47.178 84.911 49.684 +mAP: 78.951 97.621 58.393 75.133 91.756 78.603 72.579 64.211 49.102 73.835 45.025 58.266 65.136 83.542 59.409 87.446 91.897 
79.148 81.242 56.413 +mAcc: 88.582 98.636 74.569 87.957 92.228 94.011 83.342 66.196 33.212 92.465 26.798 73.510 71.058 88.871 50.104 53.777 90.298 48.622 86.912 68.445 + +thomas 04/11 03:21:11 312/312: Data time: 0.0029, Iter time: 0.2816 Loss 0.873 (AVG: 0.544) Score 78.402 (AVG: 85.679) mIOU 62.662 mAP 72.363 mAcc 73.477 +IOU: 78.553 96.420 60.808 66.957 87.627 73.592 71.893 47.466 30.405 76.780 18.487 60.371 58.864 62.786 40.611 50.657 88.984 47.031 85.126 49.820 +mAP: 78.778 97.560 58.176 75.133 91.762 78.895 72.719 64.064 48.861 73.514 44.044 58.202 65.136 83.641 59.409 87.446 92.051 79.375 81.706 56.781 +mAcc: 88.692 98.652 73.772 87.957 92.364 94.133 83.672 66.077 32.821 92.909 26.654 73.509 71.058 88.721 50.104 53.777 90.429 48.467 87.115 68.666 + +thomas 04/11 03:21:11 Finished test. Elapsed time: 327.4835 +thomas 04/11 03:21:13 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/11 03:21:13 Current best mIoU: 62.662 at iter 77000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 03:24:42 ===> Epoch[256](77040/301): Loss 0.2056 LR: 3.967e-02 Score 93.053 Data time: 1.9800, Total iter time: 5.1492 +thomas 04/11 03:28:23 ===> Epoch[257](77080/301): Loss 0.2154 LR: 3.964e-02 Score 93.096 Data time: 2.1355, Total iter time: 5.4582 +thomas 04/11 03:32:01 ===> Epoch[257](77120/301): Loss 0.1986 LR: 3.961e-02 Score 93.351 Data time: 2.0734, Total iter time: 5.3646 +thomas 04/11 03:35:50 ===> Epoch[257](77160/301): Loss 0.2061 LR: 3.957e-02 Score 93.533 Data time: 2.1911, Total iter time: 5.6701 +thomas 04/11 03:39:33 ===> Epoch[257](77200/301): Loss 0.2069 LR: 3.954e-02 Score 93.500 Data time: 2.1413, Total iter time: 5.5032 +thomas 04/11 03:43:08 ===> Epoch[257](77240/301): Loss 0.1801 LR: 3.951e-02 Score 94.151 Data time: 2.0683, Total iter time: 5.2798 +thomas 04/11 03:46:47 ===> Epoch[257](77280/301): Loss 0.1847 LR: 3.947e-02 Score 93.746 Data time: 2.1208, Total iter time: 5.4059 +thomas 04/11 03:50:27 ===> Epoch[257](77320/301): Loss 0.1840 LR: 3.944e-02 Score 94.007 Data time: 2.1332, Total iter time: 5.4299 +thomas 04/11 03:54:04 ===> Epoch[258](77360/301): Loss 0.1995 LR: 3.941e-02 Score 93.649 Data time: 2.1112, Total iter time: 5.3577 +thomas 04/11 03:57:44 ===> Epoch[258](77400/301): Loss 0.2099 LR: 3.937e-02 Score 93.178 Data time: 2.1344, Total iter time: 5.4304 +thomas 04/11 04:01:17 ===> Epoch[258](77440/301): Loss 0.1925 LR: 3.934e-02 Score 93.725 Data time: 2.0600, Total iter time: 5.2490 +thomas 04/11 04:04:45 ===> Epoch[258](77480/301): Loss 0.2152 LR: 3.931e-02 Score 92.885 Data time: 1.9866, Total iter time: 5.1324 +thomas 04/11 04:08:18 ===> Epoch[258](77520/301): Loss 0.1967 LR: 3.927e-02 Score 93.569 Data time: 2.0582, Total iter time: 5.2700 +thomas 04/11 04:12:07 ===> Epoch[258](77560/301): Loss 0.1713 LR: 3.924e-02 Score 94.465 Data time: 2.2088, Total iter time: 5.6325 +thomas 04/11 04:15:48 ===> Epoch[258](77600/301): Loss 
0.1920 LR: 3.921e-02 Score 93.667 Data time: 2.1250, Total iter time: 5.4563 +thomas 04/11 04:19:02 ===> Epoch[258](77640/301): Loss 0.1814 LR: 3.917e-02 Score 94.054 Data time: 1.9041, Total iter time: 4.7887 +thomas 04/11 04:22:44 ===> Epoch[259](77680/301): Loss 0.2120 LR: 3.914e-02 Score 93.035 Data time: 2.1530, Total iter time: 5.4809 +thomas 04/11 04:26:32 ===> Epoch[259](77720/301): Loss 0.1964 LR: 3.911e-02 Score 93.760 Data time: 2.2308, Total iter time: 5.6320 +thomas 04/11 04:30:14 ===> Epoch[259](77760/301): Loss 0.1908 LR: 3.907e-02 Score 93.877 Data time: 2.1435, Total iter time: 5.4614 +thomas 04/11 04:33:47 ===> Epoch[259](77800/301): Loss 0.1780 LR: 3.904e-02 Score 94.205 Data time: 2.0505, Total iter time: 5.2476 +thomas 04/11 04:37:26 ===> Epoch[259](77840/301): Loss 0.1936 LR: 3.901e-02 Score 93.723 Data time: 2.1110, Total iter time: 5.4107 +thomas 04/11 04:41:09 ===> Epoch[259](77880/301): Loss 0.1810 LR: 3.897e-02 Score 93.893 Data time: 2.1582, Total iter time: 5.5041 +thomas 04/11 04:44:32 ===> Epoch[259](77920/301): Loss 0.2020 LR: 3.894e-02 Score 93.490 Data time: 1.9688, Total iter time: 4.9957 +thomas 04/11 04:48:05 ===> Epoch[260](77960/301): Loss 0.1931 LR: 3.891e-02 Score 93.591 Data time: 2.0680, Total iter time: 5.2613 +thomas 04/11 04:51:55 ===> Epoch[260](78000/301): Loss 0.1886 LR: 3.887e-02 Score 93.870 Data time: 2.2466, Total iter time: 5.6889 +thomas 04/11 04:51:57 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 04:51:57 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/11 04:53:47 101/312: Data time: 0.0023, Iter time: 0.3381 Loss 0.236 (AVG: 0.584) Score 91.336 (AVG: 85.744) mIOU 64.006 mAP 72.611 mAcc 73.790 +IOU: 78.722 96.244 59.336 74.190 86.846 71.624 74.663 46.965 36.457 63.040 22.364 61.272 70.211 69.865 45.955 58.488 81.698 53.390 87.130 41.658 +mAP: 81.154 97.233 52.537 69.103 88.524 88.803 69.856 63.324 52.369 72.739 
44.244 55.780 73.382 82.849 69.080 86.815 84.745 82.942 82.235 54.498 +mAcc: 92.350 98.864 72.666 78.746 92.312 92.923 83.271 57.290 41.599 74.134 33.216 74.087 82.199 87.506 63.268 62.484 83.823 58.595 89.877 56.601 + +thomas 04/11 04:55:39 201/312: Data time: 0.0026, Iter time: 0.6626 Loss 0.158 (AVG: 0.574) Score 95.084 (AVG: 85.929) mIOU 63.730 mAP 71.617 mAcc 73.234 +IOU: 79.114 96.199 54.583 72.979 88.320 76.852 72.810 46.371 36.282 68.843 17.214 56.940 63.352 67.274 37.130 63.128 84.644 60.939 85.476 46.152 +mAP: 80.146 97.161 50.814 71.295 89.157 81.282 72.629 59.919 51.292 75.735 39.269 57.290 67.249 80.121 54.594 88.877 88.717 83.858 85.903 57.031 +mAcc: 93.003 98.869 70.407 78.009 92.952 93.328 83.254 58.023 39.665 77.946 24.026 73.802 72.708 84.473 50.563 68.150 86.063 67.351 90.846 61.245 + +thomas 04/11 04:57:24 301/312: Data time: 0.0026, Iter time: 0.7239 Loss 0.302 (AVG: 0.572) Score 90.089 (AVG: 86.062) mIOU 63.929 mAP 71.899 mAcc 73.700 +IOU: 79.063 96.159 57.653 70.158 89.399 74.115 72.182 45.865 36.367 69.493 16.624 60.233 60.230 69.322 44.020 63.086 85.772 57.954 85.447 45.430 +mAP: 80.185 97.333 54.536 68.337 89.898 78.845 75.605 59.903 50.770 73.057 39.381 61.748 66.715 81.852 60.465 84.713 89.872 83.047 82.970 58.743 +mAcc: 92.991 98.920 74.274 75.304 94.164 93.320 82.205 58.154 39.223 79.845 23.664 77.552 71.322 87.721 59.677 66.801 87.356 64.863 89.760 56.893 + +thomas 04/11 04:57:37 312/312: Data time: 0.0034, Iter time: 0.3776 Loss 0.673 (AVG: 0.576) Score 85.878 (AVG: 85.982) mIOU 63.885 mAP 71.833 mAcc 73.672 +IOU: 78.895 96.183 57.521 71.123 89.231 74.229 71.692 46.359 36.081 69.097 16.891 59.379 59.604 68.383 44.731 63.086 85.772 57.503 85.447 46.488 +mAP: 80.293 97.381 55.047 68.954 89.428 79.155 75.080 60.314 50.986 71.855 39.677 61.417 65.959 80.643 61.884 84.713 89.872 82.485 82.970 58.542 +mAcc: 93.018 98.917 74.631 76.206 94.186 93.023 81.712 58.656 38.862 79.895 24.165 77.733 70.613 85.473 60.690 66.801 87.356 64.341 89.760 
57.400 + +thomas 04/11 04:57:37 Finished test. Elapsed time: 340.5245 +thomas 04/11 04:57:39 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/11 04:57:39 Current best mIoU: 63.885 at iter 78000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 05:01:19 ===> Epoch[260](78040/301): Loss 0.1892 LR: 3.884e-02 Score 93.961 Data time: 2.1229, Total iter time: 5.4414 +thomas 04/11 05:04:46 ===> Epoch[260](78080/301): Loss 0.2034 LR: 3.881e-02 Score 93.455 Data time: 1.9955, Total iter time: 5.1085 +thomas 04/11 05:08:10 ===> Epoch[260](78120/301): Loss 0.1905 LR: 3.877e-02 Score 93.858 Data time: 1.9609, Total iter time: 5.0224 +thomas 04/11 05:11:48 ===> Epoch[260](78160/301): Loss 0.2107 LR: 3.874e-02 Score 93.137 Data time: 2.0960, Total iter time: 5.3817 +thomas 04/11 05:15:30 ===> Epoch[260](78200/301): Loss 0.1979 LR: 3.871e-02 Score 93.618 Data time: 2.1400, Total iter time: 5.4587 +thomas 04/11 05:19:09 ===> Epoch[260](78240/301): Loss 0.1911 LR: 3.867e-02 Score 93.867 Data time: 2.1057, Total iter time: 5.4050 +thomas 04/11 05:22:28 ===> Epoch[261](78280/301): Loss 0.2034 LR: 3.864e-02 Score 93.528 Data time: 1.9046, Total iter time: 4.8987 +thomas 04/11 05:26:13 ===> Epoch[261](78320/301): Loss 0.2077 LR: 3.861e-02 Score 93.192 Data time: 2.1577, Total iter time: 5.5344 +thomas 04/11 05:29:48 ===> Epoch[261](78360/301): Loss 0.2063 LR: 3.857e-02 Score 93.399 Data time: 2.0720, Total iter time: 5.2974 +thomas 04/11 05:33:30 ===> Epoch[261](78400/301): Loss 0.1862 LR: 3.854e-02 Score 93.881 Data time: 2.1163, Total iter time: 5.4841 +thomas 04/11 05:37:07 ===> Epoch[261](78440/301): Loss 0.1933 LR: 3.851e-02 Score 93.567 
Data time: 2.0487, Total iter time: 5.3418 +thomas 04/11 05:40:26 ===> Epoch[261](78480/301): Loss 0.1772 LR: 3.847e-02 Score 94.197 Data time: 1.9123, Total iter time: 4.9020 +thomas 04/11 05:44:03 ===> Epoch[261](78520/301): Loss 0.2151 LR: 3.844e-02 Score 93.231 Data time: 2.0730, Total iter time: 5.3313 +thomas 04/11 05:47:31 ===> Epoch[261](78560/301): Loss 0.1887 LR: 3.841e-02 Score 93.665 Data time: 2.0017, Total iter time: 5.1334 +thomas 04/11 05:50:59 ===> Epoch[262](78600/301): Loss 0.1851 LR: 3.837e-02 Score 93.913 Data time: 1.9946, Total iter time: 5.1292 +thomas 04/11 05:54:39 ===> Epoch[262](78640/301): Loss 0.1967 LR: 3.834e-02 Score 93.696 Data time: 2.1113, Total iter time: 5.4191 +thomas 04/11 05:58:18 ===> Epoch[262](78680/301): Loss 0.2253 LR: 3.831e-02 Score 92.517 Data time: 2.1085, Total iter time: 5.4118 +thomas 04/11 06:01:43 ===> Epoch[262](78720/301): Loss 0.1983 LR: 3.827e-02 Score 93.579 Data time: 1.9724, Total iter time: 5.0407 +thomas 04/11 06:05:19 ===> Epoch[262](78760/301): Loss 0.1957 LR: 3.824e-02 Score 93.698 Data time: 2.0678, Total iter time: 5.3251 +thomas 04/11 06:08:42 ===> Epoch[262](78800/301): Loss 0.1989 LR: 3.821e-02 Score 93.683 Data time: 1.9449, Total iter time: 5.0102 +thomas 04/11 06:12:40 ===> Epoch[262](78840/301): Loss 0.1953 LR: 3.817e-02 Score 93.529 Data time: 2.2829, Total iter time: 5.8719 +thomas 04/11 06:16:12 ===> Epoch[263](78880/301): Loss 0.1745 LR: 3.814e-02 Score 94.346 Data time: 2.0428, Total iter time: 5.2256 +thomas 04/11 06:19:42 ===> Epoch[263](78920/301): Loss 0.1993 LR: 3.811e-02 Score 93.535 Data time: 2.0251, Total iter time: 5.1829 +thomas 04/11 06:23:18 ===> Epoch[263](78960/301): Loss 0.2263 LR: 3.807e-02 Score 92.635 Data time: 2.0600, Total iter time: 5.3200 +thomas 04/11 06:26:54 ===> Epoch[263](79000/301): Loss 0.1900 LR: 3.804e-02 Score 93.900 Data time: 2.0942, Total iter time: 5.3334 +thomas 04/11 06:26:55 Checkpoint saved to 
./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 06:26:55 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/11 06:28:46 101/312: Data time: 0.0158, Iter time: 0.5250 Loss 0.336 (AVG: 0.768) Score 89.820 (AVG: 82.238) mIOU 58.101 mAP 69.087 mAcc 69.369 +IOU: 74.592 94.875 42.882 72.463 86.491 56.222 65.166 42.733 33.196 55.963 23.327 51.405 49.995 68.125 36.283 62.446 73.648 63.045 77.319 31.841 +mAP: 74.898 97.123 58.697 77.190 88.855 77.062 67.610 59.004 41.902 68.800 33.083 62.724 65.050 70.239 61.952 83.514 83.326 85.539 77.222 47.937 +mAcc: 88.198 98.723 58.264 79.218 92.023 93.869 71.215 56.314 36.333 95.892 26.393 70.658 71.591 77.311 44.674 68.219 74.325 65.836 78.302 40.013 + +thomas 04/11 06:30:32 201/312: Data time: 0.0029, Iter time: 0.3229 Loss 0.206 (AVG: 0.767) Score 95.680 (AVG: 82.296) mIOU 56.547 mAP 69.623 mAcc 67.337 +IOU: 75.382 95.200 45.065 66.086 86.382 56.577 65.947 43.733 37.718 53.821 13.671 49.423 53.841 66.295 36.972 37.432 72.211 60.707 78.892 35.589 +mAP: 77.249 97.459 60.048 70.579 89.936 78.145 70.209 58.839 47.361 65.916 32.403 59.463 64.372 76.438 63.971 84.643 84.888 87.669 71.793 51.078 +mAcc: 88.848 98.776 62.656 74.699 92.031 95.826 74.375 55.328 42.444 92.486 14.655 62.639 69.171 80.879 42.163 38.817 72.633 62.893 80.168 45.258 + +thomas 04/11 06:32:17 301/312: Data time: 0.0024, Iter time: 0.2769 Loss 0.153 (AVG: 0.742) Score 96.140 (AVG: 82.753) mIOU 56.711 mAP 69.418 mAcc 67.326 +IOU: 75.696 95.501 46.380 63.530 85.530 60.104 66.219 44.247 36.512 53.823 11.722 50.510 51.199 63.912 38.256 41.550 74.216 57.013 81.666 36.638 +mAP: 77.795 97.305 57.807 70.114 89.139 79.494 71.448 60.160 46.108 67.068 29.924 57.738 61.376 76.165 62.387 85.231 85.080 83.988 78.992 51.046 +mAcc: 89.144 98.878 64.478 72.710 90.520 95.232 74.852 54.567 40.841 92.666 12.408 63.168 67.291 78.548 44.069 44.579 74.613 58.940 82.857 46.163 + +thomas 04/11 06:32:27 312/312: Data 
time: 0.0026, Iter time: 0.2794 Loss 0.203 (AVG: 0.737) Score 94.016 (AVG: 82.838) mIOU 57.151 mAP 69.682 mAcc 67.778 +IOU: 75.852 95.500 48.272 63.623 85.634 59.870 65.959 44.750 36.004 53.566 11.656 52.228 51.016 64.888 42.433 42.623 74.287 57.700 80.485 36.674 +mAP: 77.924 97.158 58.135 69.736 89.176 79.172 71.460 60.657 46.053 66.786 29.337 58.250 61.693 77.071 65.281 85.665 85.574 83.975 79.461 51.071 +mAcc: 89.206 98.869 66.029 72.508 90.641 95.256 74.533 55.142 40.245 92.620 12.335 64.200 67.742 79.462 48.332 45.697 74.669 59.928 81.711 46.432 + +thomas 04/11 06:32:27 Finished test. Elapsed time: 331.9839 +thomas 04/11 06:32:27 Current best mIoU: 63.885 at iter 78000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 06:36:19 ===> Epoch[263](79040/301): Loss 0.1950 LR: 3.801e-02 Score 93.688 Data time: 2.2157, Total iter time: 5.7018 +thomas 04/11 06:39:32 ===> Epoch[263](79080/301): Loss 0.2593 LR: 3.797e-02 Score 91.960 Data time: 1.8645, Total iter time: 4.7838 +thomas 04/11 06:43:00 ===> Epoch[263](79120/301): Loss 0.2228 LR: 3.794e-02 Score 92.704 Data time: 1.9932, Total iter time: 5.1169 +thomas 04/11 06:46:37 ===> Epoch[263](79160/301): Loss 0.2175 LR: 3.791e-02 Score 92.961 Data time: 2.0879, Total iter time: 5.3641 +thomas 04/11 06:50:08 ===> Epoch[264](79200/301): Loss 0.2190 LR: 3.787e-02 Score 92.762 Data time: 2.0311, Total iter time: 5.2007 +thomas 04/11 06:53:48 ===> Epoch[264](79240/301): Loss 0.1853 LR: 3.784e-02 Score 94.010 Data time: 2.1142, Total iter time: 5.4205 +thomas 04/11 06:57:28 ===> Epoch[264](79280/301): Loss 0.1967 LR: 3.781e-02 Score 93.609 Data time: 2.1233, Total iter time: 5.4384 +thomas 04/11 07:01:08 ===> Epoch[264](79320/301): Loss 0.1844 LR: 
3.777e-02 Score 93.920 Data time: 2.1120, Total iter time: 5.4101 +thomas 04/11 07:04:44 ===> Epoch[264](79360/301): Loss 0.1902 LR: 3.774e-02 Score 93.820 Data time: 2.1018, Total iter time: 5.3467 +thomas 04/11 07:09:44 ===> Epoch[264](79400/301): Loss 0.2040 LR: 3.771e-02 Score 93.172 Data time: 3.1570, Total iter time: 7.4034 +thomas 04/11 07:13:59 ===> Epoch[264](79440/301): Loss 0.2234 LR: 3.767e-02 Score 92.971 Data time: 2.6416, Total iter time: 6.2847 +thomas 04/11 07:18:21 ===> Epoch[265](79480/301): Loss 0.1960 LR: 3.764e-02 Score 93.393 Data time: 2.7051, Total iter time: 6.4725 +thomas 04/11 07:23:31 ===> Epoch[265](79520/301): Loss 0.1821 LR: 3.761e-02 Score 94.063 Data time: 3.3008, Total iter time: 7.6531 +thomas 04/11 07:28:00 ===> Epoch[265](79560/301): Loss 0.1936 LR: 3.757e-02 Score 93.608 Data time: 2.8353, Total iter time: 6.6467 +thomas 04/11 07:32:09 ===> Epoch[265](79600/301): Loss 0.1813 LR: 3.754e-02 Score 94.113 Data time: 2.4874, Total iter time: 6.1504 +thomas 04/11 07:36:35 ===> Epoch[265](79640/301): Loss 0.1725 LR: 3.751e-02 Score 94.097 Data time: 2.7615, Total iter time: 6.5756 +thomas 04/11 07:41:37 ===> Epoch[265](79680/301): Loss 0.1932 LR: 3.747e-02 Score 93.412 Data time: 3.1966, Total iter time: 7.4771 +thomas 04/11 07:46:06 ===> Epoch[265](79720/301): Loss 0.1888 LR: 3.744e-02 Score 93.755 Data time: 2.7520, Total iter time: 6.6431 +thomas 04/11 07:50:14 ===> Epoch[265](79760/301): Loss 0.1618 LR: 3.741e-02 Score 94.768 Data time: 2.4552, Total iter time: 6.1189 +thomas 04/11 07:54:59 ===> Epoch[266](79800/301): Loss 0.2146 LR: 3.737e-02 Score 93.131 Data time: 3.0408, Total iter time: 7.0302 +thomas 04/11 07:59:27 ===> Epoch[266](79840/301): Loss 0.2113 LR: 3.734e-02 Score 93.275 Data time: 2.8182, Total iter time: 6.6041 +thomas 04/11 08:03:54 ===> Epoch[266](79880/301): Loss 0.2134 LR: 3.731e-02 Score 93.332 Data time: 2.6673, Total iter time: 6.6057 +thomas 04/11 08:08:03 ===> Epoch[266](79920/301): Loss 0.2093 LR: 
3.727e-02 Score 93.252 Data time: 2.4833, Total iter time: 6.1379 +thomas 04/11 08:12:45 ===> Epoch[266](79960/301): Loss 0.1945 LR: 3.724e-02 Score 93.524 Data time: 3.0209, Total iter time: 6.9777 +thomas 04/11 08:17:27 ===> Epoch[266](80000/301): Loss 0.1956 LR: 3.720e-02 Score 93.716 Data time: 2.9665, Total iter time: 6.9483 +thomas 04/11 08:17:28 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 08:17:28 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/11 08:19:31 101/312: Data time: 0.0036, Iter time: 0.6152 Loss 0.324 (AVG: 0.672) Score 87.481 (AVG: 83.603) mIOU 56.185 mAP 69.923 mAcc 66.599 +IOU: 76.517 95.782 53.985 67.972 81.618 57.838 71.750 48.308 22.467 71.207 20.483 48.755 56.410 63.076 36.666 17.081 69.084 48.234 81.889 34.585 +mAP: 78.766 97.120 52.387 66.268 87.883 73.171 75.696 60.428 40.137 64.613 42.327 62.403 70.670 74.424 55.396 84.029 94.719 83.938 80.173 53.903 +mAcc: 93.473 98.682 70.956 79.733 84.627 93.851 80.617 59.908 25.260 81.762 27.076 78.035 67.908 79.369 45.261 17.559 69.164 50.194 82.329 46.216 + +thomas 04/11 08:21:40 201/312: Data time: 0.0024, Iter time: 0.5389 Loss 0.555 (AVG: 0.638) Score 77.190 (AVG: 84.544) mIOU 58.200 mAP 70.762 mAcc 67.774 +IOU: 78.225 96.030 55.902 68.942 82.710 65.288 69.013 44.474 29.949 71.677 15.277 54.805 58.656 65.390 41.364 26.828 65.975 50.557 80.461 42.477 +mAP: 81.599 97.490 55.553 68.469 89.209 77.976 73.675 59.456 48.179 70.724 38.490 63.387 68.311 75.812 55.798 80.820 89.473 83.414 83.288 54.116 +mAcc: 94.076 98.690 71.205 80.669 85.920 95.040 81.451 54.517 33.047 84.829 18.912 79.100 68.431 78.792 50.446 27.373 66.129 52.521 80.994 53.345 + +thomas 04/11 08:23:50 301/312: Data time: 0.0027, Iter time: 0.4311 Loss 0.434 (AVG: 0.628) Score 84.648 (AVG: 84.973) mIOU 58.443 mAP 70.829 mAcc 68.159 +IOU: 78.578 96.006 57.282 67.802 84.533 64.034 71.016 46.960 27.551 70.161 15.010 51.634 55.901 67.486 
44.308 26.330 67.125 51.815 78.857 46.480 +mAP: 80.723 97.606 57.243 69.286 88.986 79.751 76.465 62.341 46.007 72.917 37.658 59.229 65.553 78.694 60.563 79.300 87.708 82.713 78.259 55.582 +mAcc: 94.048 98.649 73.495 79.740 87.692 95.432 82.108 56.883 29.880 83.391 19.291 75.017 65.471 80.682 53.647 27.530 67.277 54.708 79.410 58.832 + +thomas 04/11 08:24:07 312/312: Data time: 0.0025, Iter time: 0.7672 Loss 0.238 (AVG: 0.622) Score 93.909 (AVG: 85.112) mIOU 58.727 mAP 70.799 mAcc 68.351 +IOU: 78.758 95.965 58.276 67.882 84.511 64.382 70.798 47.075 26.760 70.220 14.515 53.024 57.111 67.382 44.061 28.756 67.541 52.104 79.000 46.423 +mAP: 80.947 97.606 57.594 69.388 89.019 79.924 75.770 61.945 44.588 72.935 37.144 59.024 65.515 78.549 59.465 80.748 88.063 82.588 79.496 55.666 +mAcc: 94.102 98.645 74.540 79.885 88.053 94.958 81.148 57.048 29.066 83.587 18.749 75.653 66.634 80.597 53.416 30.002 67.689 54.938 79.540 58.770 + +thomas 04/11 08:24:07 Finished test. Elapsed time: 398.7426 +thomas 04/11 08:24:07 Current best mIoU: 63.885 at iter 78000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/11 08:29:09 ===> Epoch[266](80040/301): Loss 0.1953 LR: 3.717e-02 Score 93.764 Data time: 3.1928, Total iter time: 7.4652
+thomas 04/11 08:33:36 ===> Epoch[267](80080/301): Loss 0.1791 LR: 3.714e-02 Score 94.248 Data time: 2.7732, Total iter time: 6.5799
+thomas 04/11 08:38:14 ===> Epoch[267](80120/301): Loss 0.2089 LR: 3.710e-02 Score 93.142 Data time: 2.8053, Total iter time: 6.8650
+thomas 04/11 08:42:36 ===> Epoch[267](80160/301): Loss 0.2047 LR: 3.707e-02 Score 93.187 Data time: 2.6361, Total iter time: 6.4590
+thomas 04/11 08:47:33 ===> Epoch[267](80200/301): Loss 0.2016 LR: 3.704e-02 Score 93.421 Data time: 3.0874, Total iter time: 7.3442
+thomas 04/11 08:52:23 ===> Epoch[267](80240/301): Loss 0.1976 LR: 3.700e-02 Score 93.681 Data time: 2.9679, Total iter time: 7.1629
+thomas 04/11 08:56:25 ===> Epoch[267](80280/301): Loss 0.2051 LR: 3.697e-02 Score 93.415 Data time: 2.4125, Total iter time: 5.9916
+thomas 04/11 09:01:06 ===> Epoch[267](80320/301): Loss 0.2050 LR: 3.694e-02 Score 93.380 Data time: 2.9720, Total iter time: 6.9504
+thomas 04/11 09:05:52 ===> Epoch[267](80360/301): Loss 0.2166 LR: 3.690e-02 Score 93.157 Data time: 3.1784, Total iter time: 7.0685
+thomas 04/11 09:10:29 ===> Epoch[268](80400/301): Loss 0.2231 LR: 3.687e-02 Score 92.977 Data time: 2.9565, Total iter time: 6.8482
+thomas 04/11 09:15:08 ===> Epoch[268](80440/301): Loss 0.1995 LR: 3.684e-02 Score 93.713 Data time: 2.8012, Total iter time: 6.9020
+thomas 04/11 09:20:53 ===> Epoch[268](80480/301): Loss 0.2000 LR: 3.680e-02 Score 93.431 Data time: 3.8734, Total iter time: 8.5182
+thomas 04/11 09:26:07 ===> Epoch[268](80520/301): Loss 0.1844 LR: 3.677e-02 Score 93.958 Data time: 3.4963, Total iter time: 7.7527
+thomas 04/11 09:30:30 ===> Epoch[268](80560/301): Loss 0.1903 LR: 3.674e-02 Score 93.751 Data time: 2.7233, Total iter time: 6.4671
+thomas 04/11 09:34:53 ===> Epoch[268](80600/301): Loss 0.1980 LR: 3.670e-02 Score 93.508 Data time: 2.6915, Total iter time: 6.5009
+thomas 04/11 09:39:48 ===> Epoch[268](80640/301): Loss 0.1811 LR: 3.667e-02 Score 94.007 Data time: 3.1817, Total iter time: 7.3046
+thomas 04/11 09:44:44 ===> Epoch[269](80680/301): Loss 0.1777 LR: 3.663e-02 Score 94.104 Data time: 3.0683, Total iter time: 7.3013
+thomas 04/11 09:49:02 ===> Epoch[269](80720/301): Loss 0.1692 LR: 3.660e-02 Score 94.363 Data time: 2.5440, Total iter time: 6.3851
+thomas 04/11 09:53:39 ===> Epoch[269](80760/301): Loss 0.1766 LR: 3.657e-02 Score 94.202 Data time: 2.8604, Total iter time: 6.8408
+thomas 04/11 09:58:29 ===> Epoch[269](80800/301): Loss 0.1893 LR: 3.653e-02 Score 93.763 Data time: 3.0775, Total iter time: 7.1641
+thomas 04/11 10:03:14 ===> Epoch[269](80840/301): Loss 0.1815 LR: 3.650e-02 Score 93.919 Data time: 2.9187, Total iter time: 7.0370
+thomas 04/11 10:07:30 ===> Epoch[269](80880/301): Loss 0.1737 LR: 3.647e-02 Score 94.121 Data time: 2.5510, Total iter time: 6.3427
+thomas 04/11 10:12:26 ===> Epoch[269](80920/301): Loss 0.2015 LR: 3.643e-02 Score 93.568 Data time: 3.1611, Total iter time: 7.3109
+thomas 04/11 10:16:55 ===> Epoch[269](80960/301): Loss 0.1979 LR: 3.640e-02 Score 93.515 Data time: 2.8773, Total iter time: 6.6343
+thomas 04/11 10:21:25 ===> Epoch[270](81000/301): Loss 0.1919 LR: 3.637e-02 Score 93.682 Data time: 2.7528, Total iter time: 6.6902
+thomas 04/11 10:21:27 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/11 10:21:27 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/11 10:23:33 101/312: Data time: 0.0027, Iter time: 1.3886 Loss 1.183 (AVG: 0.582) Score 67.907 (AVG: 85.080) mIOU 59.748 mAP 69.670 mAcc 69.593
+IOU: 79.283 96.316 59.056 73.170 90.154 70.239 67.726 45.738 28.726 69.756 12.478 50.910 59.857 69.243 53.797 18.707 74.625 65.603 72.806 36.766
+mAP: 79.060 97.866 61.115 76.063 91.132 84.873 73.542 64.016 43.375 63.092 28.537 61.259 60.788 73.958 67.585 66.129 87.551 84.100 69.220 60.129
+mAcc: 89.841 98.514 72.608 81.782 95.041 85.724 83.446 67.570 31.352 90.594 16.549 65.054 82.027 75.554 62.640 19.451 75.199 69.570 74.125 55.219
+
+thomas 04/11 10:25:36 201/312: Data time: 0.0032, Iter time: 0.6396 Loss 0.369 (AVG: 0.584) Score 90.114 (AVG: 85.550) mIOU 59.758 mAP 70.021 mAcc 69.344
+IOU: 79.599 96.261 55.585 70.469 90.621 73.179 71.306 47.816 27.262 68.169 16.546 51.488 60.667 76.353 48.288 23.023 66.707 60.932 69.245 41.653
+mAP: 79.093 97.515 60.982 71.007 91.733 82.410 76.067 65.591 41.411 67.285 39.935 55.392 63.086 80.945 62.989 65.522 83.698 85.644 72.745 57.375
+mAcc: 89.110 98.697 73.111 79.473 95.008 88.963 85.094 72.357 29.930 89.429 20.253 60.518 79.887 81.569 55.434 24.477 67.176 65.133 70.186 61.068
+
+thomas 04/11 10:28:30 301/312: Data time: 0.0040, Iter time: 0.9783 Loss 0.594 (AVG: 0.594) Score 82.080 (AVG: 85.310) mIOU 60.541 mAP 70.046 mAcc 69.860
+IOU: 78.431 96.413 55.102 70.510 90.389 77.046 70.607 44.113 26.654 72.493 13.975 53.072 59.266 70.223 45.447 32.308 73.285 56.946 76.308 48.229
+mAP: 79.128 97.761 59.522 70.671 90.953 83.465 73.427 63.084 42.402 66.932 38.205 57.128 62.422 75.526 61.713 74.571 86.844 82.730 76.938 57.500
+mAcc: 88.796 98.752 72.698 79.868 94.790 88.259 84.166 71.105 29.100 90.550 16.118 61.758 77.811 75.182 53.805 34.449 73.876 61.261 77.405 67.458
+
+thomas 04/11 10:28:47 312/312: Data time: 0.0037, Iter time: 0.7616 Loss 0.454 (AVG: 0.601) Score 87.516 (AVG: 85.216) mIOU 60.500 mAP 69.914 mAcc 69.867
+IOU: 78.493 96.293 55.489 71.190 89.901 77.055 70.600 43.532 26.642 71.341 14.461 54.650 60.054 70.971 43.514 31.620 72.693 57.370 76.300 47.832
+mAP: 79.000 97.489 59.560 71.665 90.859 83.397 72.901 62.854 42.564 66.130 38.753 57.869 63.277 76.019 60.459 72.301 87.281 82.096 76.938 56.867
+mAcc: 88.784 98.755 73.138 79.956 94.230 88.311 84.216 70.960 29.015 90.544 16.617 62.894 78.420 76.081 52.500 33.668 73.259 61.821 77.405 66.760
+
+thomas 04/11 10:28:47 Finished test. Elapsed time: 439.7340
+thomas 04/11 10:28:47 Current best mIoU: 63.885 at iter 78000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+ warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/11 10:33:29 ===> Epoch[270](81040/301): Loss 0.1820 LR: 3.633e-02 Score 94.081 Data time: 2.9403, Total iter time: 6.9787
+thomas 04/11 10:36:58 ===> Epoch[270](81080/301): Loss 0.1756 LR: 3.630e-02 Score 94.104 Data time: 2.0447, Total iter time: 5.1609
+thomas 04/11 10:42:11 ===> Epoch[270](81120/301): Loss 0.1764 LR: 3.627e-02 Score 94.186 Data time: 3.4253, Total iter time: 7.7456
+thomas 04/11 10:47:03 ===> Epoch[270](81160/301): Loss 0.1757 LR: 3.623e-02 Score 94.452 Data time: 3.0659, Total iter time: 7.2204
+thomas 04/11 10:51:09 ===> Epoch[270](81200/301): Loss 0.2041 LR: 3.620e-02 Score 93.284 Data time: 2.4548, Total iter time: 6.0734
+thomas 04/11 10:55:01 ===> Epoch[270](81240/301): Loss 0.1788 LR: 3.617e-02 Score 94.077 Data time: 2.3280, Total iter time: 5.7159
+thomas 04/11 11:00:16 ===> Epoch[271](81280/301): Loss 0.1969 LR: 3.613e-02 Score 93.518 Data time: 3.3660, Total iter time: 7.8014
+thomas 04/11 11:05:02 ===> Epoch[271](81320/301): Loss 0.1738 LR: 3.610e-02 Score 94.302 Data time: 2.9610, Total iter time: 7.0496
+thomas 04/11 11:09:15 ===> Epoch[271](81360/301): Loss 0.1917 LR: 3.606e-02 Score 93.743 Data time: 2.5314, Total iter time: 6.2527
+thomas 04/11 11:13:50 ===> Epoch[271](81400/301): Loss 0.1953 LR: 3.603e-02 Score 93.600 Data time: 2.8685, Total iter time: 6.7753
+thomas 04/11 11:18:44 ===> Epoch[271](81440/301): Loss 0.1886 LR: 3.600e-02 Score 93.818 Data time: 3.1671, Total iter time: 7.2870
+thomas 04/11 11:23:10 ===> Epoch[271](81480/301): Loss 0.1642 LR: 3.596e-02 Score 94.581 Data time: 2.7563, Total iter time: 6.5627
+thomas 04/11 11:27:28 ===> Epoch[271](81520/301): Loss 0.1721 LR: 3.593e-02 Score 94.405 Data time: 2.5658, Total iter time: 6.3680
+thomas 04/11 11:32:36 ===> Epoch[271](81560/301): Loss 0.1942 LR: 3.590e-02 Score 93.737 Data time: 3.2717, Total iter time: 7.6209
+thomas 04/11 11:37:28 ===> Epoch[272](81600/301): Loss 0.1988 LR: 3.586e-02 Score 93.551 Data time: 3.1228, Total iter time: 7.2093
+thomas 04/11 11:41:38 ===> Epoch[272](81640/301): Loss 0.1984 LR: 3.583e-02 Score 93.673 Data time: 2.4762, Total iter time: 6.1584
+thomas 04/11 11:45:29 ===> Epoch[272](81680/301): Loss 0.2097 LR: 3.580e-02 Score 93.149 Data time: 2.3262, Total iter time: 5.7158
+thomas 04/11 11:50:57 ===> Epoch[272](81720/301): Loss 0.1904 LR: 3.576e-02 Score 93.827 Data time: 3.5514, Total iter time: 8.0996
+thomas 04/11 11:55:41 ===> Epoch[272](81760/301): Loss 0.1657 LR: 3.573e-02 Score 94.567 Data time: 2.8544, Total iter time: 6.9821
+thomas 04/11 11:59:58 ===> Epoch[272](81800/301): Loss 0.1700 LR: 3.569e-02 Score 94.201 Data time: 2.5789, Total iter time: 6.3484
+thomas 04/11 12:04:41 ===> Epoch[272](81840/301): Loss 0.1823 LR: 3.566e-02 Score 94.008 Data time: 2.9764, Total iter time: 7.0030
+thomas 04/11 12:09:32 ===> Epoch[273](81880/301): Loss 0.1886 LR: 3.563e-02 Score 93.929 Data time: 3.1061, Total iter time: 7.1999
+thomas 04/11 12:13:59 ===> Epoch[273](81920/301): Loss 0.1862 LR: 3.559e-02 Score 93.864 Data time: 2.7799, Total iter time: 6.6225
+thomas 04/11 12:17:52 ===> Epoch[273](81960/301): Loss 0.2191 LR: 3.556e-02 Score 92.902 Data time: 2.3089, Total iter time: 5.7538
+thomas 04/11 12:22:28 ===> Epoch[273](82000/301): Loss 0.2075 LR: 3.553e-02 Score 93.059 Data time: 2.9243, Total iter time: 6.8108
+thomas 04/11 12:22:29 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/11 12:22:29 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/11 12:25:10 101/312: Data time: 0.1669, Iter time: 0.7045 Loss 0.357 (AVG: 0.664) Score 84.352 (AVG: 84.215) mIOU 57.430 mAP 68.836 mAcc 66.527
+IOU: 76.486 96.643 52.162 77.414 87.744 69.167 73.009 40.601 30.317 63.926 10.251 48.700 57.634 33.060 43.625 27.895 88.911 47.718 79.151 44.177
+mAP: 78.500 98.128 52.257 75.829 89.043 76.435 74.043 60.723 44.838 71.586 25.388 51.865 69.774 62.710 64.615 71.667 95.921 84.019 77.498 51.880
+mAcc: 93.585 98.808 73.466 89.645 90.382 96.206 86.452 55.496 32.464 72.890 10.320 66.778 66.059 34.433 61.676 27.919 89.520 49.145 79.911 55.390
+
+thomas 04/11 12:27:50 201/312: Data time: 0.0025, Iter time: 0.3406 Loss 0.407 (AVG: 0.711) Score 87.745 (AVG: 83.240) mIOU 56.954 mAP 68.454 mAcc 65.995
+IOU: 75.054 96.431 52.155 67.540 84.720 63.802 70.473 39.581 31.769 64.485 4.629 55.885 57.767 30.641 41.380 41.490 90.670 46.119 84.522 39.966
+mAP: 78.667 98.095 56.255 70.163 87.627 78.674 70.342 58.425 44.876 68.221 21.480 57.117 67.424 64.358 58.936 76.486 95.922 78.475 84.553 52.987
+mAcc: 93.490 98.607 78.402 82.495 87.048 95.871 83.115 51.851 33.772 73.308 4.644 69.121 66.950 31.721 53.586 42.578 91.169 47.480 85.632 49.072
+
+thomas 04/11 12:30:02 301/312: Data time: 0.0377, Iter time: 0.6907 Loss 0.675 (AVG: 0.680) Score 80.353 (AVG: 83.787) mIOU 57.409 mAP 69.043 mAcc 66.123
+IOU: 75.898 96.288 51.329 70.509 85.876 67.228 70.974 38.408 31.108 74.142 4.303 57.906 55.860 34.356 42.128 32.115 84.350 51.307 84.325 39.776
+mAP: 78.741 97.894 56.034 73.353 89.474 80.428 71.749 57.565 46.991 71.321 20.130 57.862 66.487 66.085 59.120 76.354 89.981 81.383 85.133 54.776
+mAcc: 93.444 98.642 75.913 83.195 88.009 96.264 83.003 51.984 32.784 82.187 4.317 70.544 65.812 35.422 54.190 33.485 84.862 52.809 86.210 49.379
+
+thomas 04/11 12:30:17 312/312: Data time: 0.0042, Iter time: 0.4673 Loss 0.176 (AVG: 0.680) Score 93.680 (AVG: 83.883) mIOU 57.183 mAP 69.107 mAcc 66.005
+IOU: 76.131 96.316 51.065 70.712 85.333 66.582 71.455 39.161 30.772 73.934 4.940 57.678 55.784 34.355 41.764 28.089 84.567 51.805 83.578 39.641
+mAP: 78.816 97.909 56.868 73.913 89.466 80.107 72.556 57.743 47.383 70.875 21.788 58.510 66.712 66.085 58.824 75.379 90.163 81.769 82.413 54.861
+mAcc: 93.427 98.654 75.816 83.757 87.422 96.208 83.500 53.057 32.367 82.363 4.957 70.676 65.484 35.422 54.518 29.131 85.076 53.396 85.449 49.421
+
+thomas 04/11 12:30:17 Finished test. Elapsed time: 467.6924
+thomas 04/11 12:30:17 Current best mIoU: 63.885 at iter 78000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+ warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/11 12:34:20 ===> Epoch[273](82040/301): Loss 0.2026 LR: 3.549e-02 Score 93.447 Data time: 2.4299, Total iter time: 5.9948
+thomas 04/11 12:38:59 ===> Epoch[273](82080/301): Loss 0.2194 LR: 3.546e-02 Score 93.112 Data time: 2.9327, Total iter time: 6.8815
+thomas 04/11 12:44:00 ===> Epoch[273](82120/301): Loss 0.2114 LR: 3.543e-02 Score 93.132 Data time: 3.1769, Total iter time: 7.4445
+thomas 04/11 12:48:44 ===> Epoch[273](82160/301): Loss 0.1842 LR: 3.539e-02 Score 93.803 Data time: 2.8753, Total iter time: 7.0113
+thomas 04/11 12:52:37 ===> Epoch[274](82200/301): Loss 0.1759 LR: 3.536e-02 Score 94.095 Data time: 2.3046, Total iter time: 5.7767
+thomas 04/11 12:57:35 ===> Epoch[274](82240/301): Loss 0.1772 LR: 3.532e-02 Score 94.218 Data time: 3.1388, Total iter time: 7.3690
+thomas 04/11 13:02:19 ===> Epoch[274](82280/301): Loss 0.1873 LR: 3.529e-02 Score 94.019 Data time: 3.0043, Total iter time: 7.0123
+thomas 04/11 13:06:35 ===> Epoch[274](82320/301): Loss 0.1926 LR: 3.526e-02 Score 93.914 Data time: 2.6267, Total iter time: 6.3277
+thomas 04/11 13:11:04 ===> Epoch[274](82360/301): Loss 0.1882 LR: 3.522e-02 Score 93.985 Data time: 2.6886, Total iter time: 6.6278
+thomas 04/11 13:15:46 ===> Epoch[274](82400/301): Loss 0.1806 LR: 3.519e-02 Score 93.957 Data time: 3.0121, Total iter time: 6.9496
+thomas 04/11 13:20:40 ===> Epoch[274](82440/301): Loss 0.1916 LR: 3.516e-02 Score 93.742 Data time: 3.1199, Total iter time: 7.2773
+thomas 04/11 13:25:02 ===> Epoch[275](82480/301): Loss 0.2224 LR: 3.512e-02 Score 92.764 Data time: 2.6115, Total iter time: 6.4539
+thomas 04/11 13:29:27 ===> Epoch[275](82520/301): Loss 0.2318 LR: 3.509e-02 Score 92.684 Data time: 2.7740, Total iter time: 6.5667
+thomas 04/11 13:34:27 ===> Epoch[275](82560/301): Loss 0.2044 LR: 3.505e-02 Score 93.139 Data time: 3.1817, Total iter time: 7.4142
+thomas 04/11 13:38:52 ===> Epoch[275](82600/301): Loss 0.1902 LR: 3.502e-02 Score 93.774 Data time: 2.7202, Total iter time: 6.5421
+thomas 04/11 13:43:15 ===> Epoch[275](82640/301): Loss 0.1943 LR: 3.499e-02 Score 93.649 Data time: 2.6161, Total iter time: 6.4964
+thomas 04/11 13:48:13 ===> Epoch[275](82680/301): Loss 0.1925 LR: 3.495e-02 Score 93.718 Data time: 3.2170, Total iter time: 7.3620
+thomas 04/11 13:53:07 ===> Epoch[275](82720/301): Loss 0.1763 LR: 3.492e-02 Score 94.233 Data time: 3.0596, Total iter time: 7.2610
+thomas 04/11 13:57:13 ===> Epoch[275](82760/301): Loss 0.1957 LR: 3.489e-02 Score 93.518 Data time: 2.4472, Total iter time: 6.0939
+thomas 04/11 14:01:26 ===> Epoch[276](82800/301): Loss 0.1916 LR: 3.485e-02 Score 93.613 Data time: 2.5409, Total iter time: 6.2507
+thomas 04/11 14:06:13 ===> Epoch[276](82840/301): Loss 0.1806 LR: 3.482e-02 Score 94.094 Data time: 3.0487, Total iter time: 7.0786
+thomas 04/11 14:10:41 ===> Epoch[276](82880/301): Loss 0.1705 LR: 3.478e-02 Score 94.342 Data time: 2.8527, Total iter time: 6.6057
+thomas 04/11 14:14:43 ===> Epoch[276](82920/301): Loss 0.1788 LR: 3.475e-02 Score 94.185 Data time: 2.4093, Total iter time: 5.9681
+thomas 04/11 14:19:47 ===> Epoch[276](82960/301): Loss 0.1911 LR: 3.472e-02 Score 93.854 Data time: 3.1840, Total iter time: 7.5205
+thomas 04/11 14:24:42 ===> Epoch[276](83000/301): Loss 0.1806 LR: 3.468e-02 Score 93.759 Data time: 3.0611, Total iter time: 7.2858
+thomas 04/11 14:24:43 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/11 14:24:43 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/11 14:27:08 101/312: Data time: 0.0039, Iter time: 0.8604 Loss 0.113 (AVG: 0.558) Score 97.353 (AVG: 86.890) mIOU 64.167 mAP 72.483 mAcc 72.441
+IOU: 79.841 96.615 62.583 77.420 91.772 92.294 71.157 42.942 23.593 79.983 15.345 66.117 58.260 75.824 47.571 36.934 72.450 61.787 84.098 46.753
+mAP: 78.926 98.056 59.849 85.809 90.242 93.682 76.142 58.839 44.245 78.991 35.395 66.719 69.004 77.527 60.037 75.010 87.678 80.394 69.983 63.142
+mAcc: 93.989 98.488 78.151 90.936 95.827 98.398 78.895 54.807 24.536 95.767 16.388 77.098 80.850 84.903 57.678 37.117 73.105 64.940 85.504 61.452
+
+thomas 04/11 14:29:20 201/312: Data time: 0.0024, Iter time: 0.6839 Loss 0.327 (AVG: 0.594) Score 88.411 (AVG: 86.485) mIOU 63.230 mAP 71.739 mAcc 71.512
+IOU: 79.465 96.272 61.590 71.422 91.153 90.220 71.352 45.229 31.673 79.088 15.573 61.655 57.176 71.302 46.657 42.574 77.196 52.034 80.488 42.489
+mAP: 79.496 97.680 62.456 77.534 91.497 89.605 75.718 62.664 48.230 73.384 32.708 63.818 68.250 76.333 59.765 74.368 87.920 78.377 75.768 59.217
+mAcc: 94.074 98.481 78.887 87.253 95.713 97.426 80.618 57.910 33.253 94.457 16.821 73.768 77.880 80.297 55.678 42.864 77.959 54.621 81.413 50.861
+
+thomas 04/11 14:31:22 301/312: Data time: 0.0028, Iter time: 0.6634 Loss 0.763 (AVG: 0.629) Score 85.793 (AVG: 85.904) mIOU 62.212 mAP 70.909 mAcc 70.781
+IOU: 78.988 95.993 58.474 71.400 89.498 86.850 69.647 45.021 31.722 72.781 14.719 60.772 57.066 70.555 44.135 43.054 80.238 49.108 83.078 41.139
+mAP: 78.966 97.537 60.829 76.409 90.562 85.067 75.053 61.577 47.486 71.410 32.813 60.687 66.776 74.858 54.487 79.746 89.408 78.158 79.309 57.045
+mAcc: 93.565 98.403 77.930 85.673 94.440 95.810 78.655 56.930 33.139 88.684 16.346 73.391 78.128 79.314 53.403 45.230 80.896 51.083 84.048 50.546
+
+thomas 04/11 14:31:40 312/312: Data time: 0.0026, Iter time: 0.7140 Loss 0.962 (AVG: 0.640) Score 77.348 (AVG: 85.730) mIOU 62.038 mAP 70.814 mAcc 70.691
+IOU: 78.457 95.993 57.658 69.497 89.868 86.935 70.267 45.035 32.099 73.210 12.701 60.203 57.455 70.832 43.681 43.054 79.993 49.428 83.127 41.257
+mAP: 78.740 97.532 61.226 76.463 90.621 84.682 75.445 61.466 47.262 70.312 32.315 60.687 66.084 75.428 53.461 79.746 89.595 78.535 79.766 56.918
+mAcc: 93.575 98.376 78.025 85.733 94.629 95.864 79.107 56.726 33.572 88.860 13.904 73.391 78.658 79.570 52.930 45.230 80.629 51.344 84.078 49.613
+
+thomas 04/11 14:31:40 Finished test. Elapsed time: 416.3014
+thomas 04/11 14:31:40 Current best mIoU: 63.885 at iter 78000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+ warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/11 14:35:46 ===> Epoch[276](83040/301): Loss 0.1972 LR: 3.465e-02 Score 93.616 Data time: 2.5439, Total iter time: 6.0752
+thomas 04/11 14:40:59 ===> Epoch[277](83080/301): Loss 0.1781 LR: 3.462e-02 Score 94.117 Data time: 3.3221, Total iter time: 7.7484
+thomas 04/11 14:45:09 ===> Epoch[277](83120/301): Loss 0.1929 LR: 3.458e-02 Score 93.663 Data time: 2.5664, Total iter time: 6.1757
+thomas 04/11 14:49:16 ===> Epoch[277](83160/301): Loss 0.2166 LR: 3.455e-02 Score 92.961 Data time: 2.4631, Total iter time: 6.1067
+thomas 04/11 14:53:49 ===> Epoch[277](83200/301): Loss 0.2010 LR: 3.451e-02 Score 93.467 Data time: 2.8701, Total iter time: 6.7160
+thomas 04/11 14:58:46 ===> Epoch[277](83240/301): Loss 0.1765 LR: 3.448e-02 Score 94.161 Data time: 3.2243, Total iter time: 7.3421
+thomas 04/11 15:02:55 ===> Epoch[277](83280/301): Loss 0.2073 LR: 3.445e-02 Score 93.253 Data time: 2.5501, Total iter time: 6.1447
+thomas 04/11 15:07:09 ===> Epoch[277](83320/301): Loss 0.2312 LR: 3.441e-02 Score 92.575 Data time: 2.5594, Total iter time: 6.2726
+thomas 04/11 15:11:37 ===> Epoch[277](83360/301): Loss 0.1967 LR: 3.438e-02 Score 93.591 Data time: 2.8282, Total iter time: 6.5917
+thomas 04/11 15:16:35 ===> Epoch[278](83400/301): Loss 0.2001 LR: 3.435e-02 Score 93.381 Data time: 3.0858, Total iter time: 7.3669
+thomas 04/11 15:21:01 ===> Epoch[278](83440/301): Loss 0.1943 LR: 3.431e-02 Score 93.625 Data time: 2.6757, Total iter time: 6.5423
+thomas 04/11 15:25:13 ===> Epoch[278](83480/301): Loss 0.1807 LR: 3.428e-02 Score 93.951 Data time: 2.4954, Total iter time: 6.2349
+thomas 04/11 15:30:17 ===> Epoch[278](83520/301): Loss 0.1779 LR: 3.424e-02 Score 94.093 Data time: 3.3010, Total iter time: 7.4924
+thomas 04/11 15:34:58 ===> Epoch[278](83560/301): Loss 0.1903 LR: 3.421e-02 Score 93.783 Data time: 2.9552, Total iter time: 6.9516
+thomas 04/11 15:39:03 ===> Epoch[278](83600/301): Loss 0.2061 LR: 3.418e-02 Score 93.226 Data time: 2.4477, Total iter time: 6.0428
+thomas 04/11 15:43:40 ===> Epoch[278](83640/301): Loss 0.2040 LR: 3.414e-02 Score 93.410 Data time: 2.8694, Total iter time: 6.8530
+thomas 04/11 15:48:25 ===> Epoch[279](83680/301): Loss 0.2247 LR: 3.411e-02 Score 92.818 Data time: 3.0672, Total iter time: 7.0442
+thomas 04/11 15:52:50 ===> Epoch[279](83720/301): Loss 0.1772 LR: 3.408e-02 Score 94.193 Data time: 2.7421, Total iter time: 6.5661
+thomas 04/11 15:56:41 ===> Epoch[279](83760/301): Loss 0.1645 LR: 3.404e-02 Score 94.473 Data time: 2.3323, Total iter time: 5.7228
+thomas 04/11 16:01:10 ===> Epoch[279](83800/301): Loss 0.1563 LR: 3.401e-02 Score 94.785 Data time: 2.8697, Total iter time: 6.6344
+thomas 04/11 16:05:56 ===> Epoch[279](83840/301): Loss 0.1793 LR: 3.397e-02 Score 94.053 Data time: 3.0068, Total iter time: 7.0630
+thomas 04/11 16:10:13 ===> Epoch[279](83880/301): Loss 0.1739 LR: 3.394e-02 Score 94.262 Data time: 2.6215, Total iter time: 6.3661
+thomas 04/11 16:14:11 ===> Epoch[279](83920/301): Loss 0.1973 LR: 3.391e-02 Score 93.654 Data time: 2.3444, Total iter time: 5.8578
+thomas 04/11 16:18:56 ===> Epoch[279](83960/301): Loss 0.1842 LR: 3.387e-02 Score 94.111 Data time: 3.0129, Total iter time: 7.0601
+thomas 04/11 16:23:33 ===> Epoch[280](84000/301): Loss 0.1914 LR: 3.384e-02 Score 93.929 Data time: 2.8815, Total iter time: 6.8362
+thomas 04/11 16:23:35 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/11 16:23:35 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/11 16:26:04 101/312: Data time: 0.0028, Iter time: 0.8824 Loss 1.332 (AVG: 0.594) Score 71.337 (AVG: 84.350) mIOU 62.216 mAP 72.209 mAcc 74.138
+IOU: 76.469 96.546 50.257 67.423 88.823 75.088 68.908 47.129 42.248 66.116 17.091 56.277 56.740 64.202 47.704 49.591 91.245 56.547 79.152 46.761
+mAP: 79.131 97.054 63.169 72.153 89.485 81.075 68.475 64.458 50.499 70.437 39.131 49.105 62.121 78.585 63.077 93.791 95.865 81.893 85.336 59.333
+mAcc: 85.784 98.160 83.640 86.344 92.535 90.968 87.367 67.036 49.987 94.115 23.681 82.080 73.859 72.576 54.526 50.335 92.294 61.915 80.551 55.002
+
+thomas 04/11 16:28:08 201/312: Data time: 0.0053, Iter time: 0.6450 Loss 0.228 (AVG: 0.585) Score 91.500 (AVG: 84.805) mIOU 61.997 mAP 71.874 mAcc 73.130
+IOU: 76.868 96.411 47.247 72.147 88.615 79.977 69.906 46.730 43.888 71.439 15.972 58.369 57.507 65.972 48.744 30.730 87.379 56.638 80.531 44.874
+mAP: 79.896 96.837 61.381 73.982 91.399 85.271 72.592 62.998 52.635 69.091 36.791 54.911 63.848 78.827 62.002 88.769 90.893 78.535 78.019 58.797
+mAcc: 86.369 98.272 81.492 86.513 92.391 92.934 89.265 68.016 51.658 92.591 21.820 80.210 72.200 73.345 58.966 32.712 88.259 61.766 81.776 52.048
+
+thomas 04/11 16:30:05 301/312: Data time: 0.0027, Iter time: 0.6586 Loss 0.688 (AVG: 0.575) Score 81.910 (AVG: 85.246) mIOU 63.112 mAP 72.020 mAcc 74.029
+IOU: 77.413 96.320 49.661 72.932 89.360 81.063 70.166 47.783 41.897 71.402 15.795 59.281 57.442 65.766 55.246 38.153 86.214 59.103 82.732 44.520
+mAP: 79.915 96.918 57.829 73.961 90.749 85.231 74.522 61.978 52.481 71.217 35.962 57.418 61.584 77.962 64.511 85.825 91.510 81.887 80.633 58.311
+mAcc: 86.725 98.232 80.897 85.863 93.139 93.487 90.267 67.756 48.969 92.810 22.865 81.987 68.568 74.194 65.766 40.004 87.118 65.168 83.968 52.803
+
+thomas 04/11 16:30:16 312/312: Data time: 0.0024, Iter time: 0.2877 Loss 0.258 (AVG: 0.575) Score 91.414 (AVG: 85.248) mIOU 63.022 mAP 71.970 mAcc 73.774
+IOU: 77.412 96.343 49.786 72.877 89.082 80.955 70.496 47.991 41.452 71.552 15.725 59.625 57.317 65.745 53.548 38.153 86.214 58.898 82.732 44.529
+mAP: 80.060 97.009 57.052 73.961 90.692 84.193 74.594 62.099 52.213 70.921 36.129 57.457 61.584 77.962 65.552 85.825 91.510 81.482 80.633 58.461
+mAcc: 86.803 98.236 81.258 85.863 92.971 93.454 90.379 67.931 48.407 92.949 22.257 80.790 68.568 74.194 62.923 40.004 87.118 64.717 83.968 52.692
+
+thomas 04/11 16:30:16 Finished test. Elapsed time: 401.1708
+thomas 04/11 16:30:16 Current best mIoU: 63.885 at iter 78000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+ warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/11 16:35:18 ===> Epoch[280](84040/301): Loss 0.1746 LR: 3.381e-02 Score 94.120 Data time: 3.1516, Total iter time: 7.4511
+thomas 04/11 16:40:10 ===> Epoch[280](84080/301): Loss 0.1957 LR: 3.377e-02 Score 93.488 Data time: 3.0768, Total iter time: 7.2123
+thomas 04/11 16:44:41 ===> Epoch[280](84120/301): Loss 0.1768 LR: 3.374e-02 Score 94.170 Data time: 2.7645, Total iter time: 6.7002
+thomas 04/11 16:48:46 ===> Epoch[280](84160/301): Loss 0.1623 LR: 3.370e-02 Score 94.717 Data time: 2.4656, Total iter time: 6.0598
+thomas 04/11 16:53:32 ===> Epoch[280](84200/301): Loss 0.1802 LR: 3.367e-02 Score 93.828 Data time: 3.1297, Total iter time: 7.0539
+thomas 04/11 16:58:18 ===> Epoch[280](84240/301): Loss 0.1963 LR: 3.364e-02 Score 93.453 Data time: 3.0320, Total iter time: 7.0822
+thomas 04/11 17:02:23 ===> Epoch[280](84280/301): Loss 0.2051 LR: 3.360e-02 Score 93.357 Data time: 2.4221, Total iter time: 6.0353
+thomas 04/11 17:06:33 ===> Epoch[281](84320/301): Loss 0.2027 LR: 3.357e-02 Score 93.433 Data time: 2.5543, Total iter time: 6.1839
+thomas 04/11 17:11:27 ===> Epoch[281](84360/301): Loss 0.2310 LR: 3.353e-02 Score 92.676 Data time: 3.1353, Total iter time: 7.2564
+thomas 04/11 17:16:02 ===> Epoch[281](84400/301): Loss 0.1938 LR: 3.350e-02 Score 93.817 Data time: 2.8229, Total iter time: 6.7830
+thomas 04/11 17:20:08 ===> Epoch[281](84440/301): Loss 0.2067 LR: 3.347e-02 Score 93.325 Data time: 2.4428, Total iter time: 6.0441
+thomas 04/11 17:24:46 ===> Epoch[281](84480/301): Loss 0.2058 LR: 3.343e-02 Score 93.432 Data time: 2.8914, Total iter time: 6.8699
+thomas 04/11 17:29:27 ===> Epoch[281](84520/301): Loss 0.1993 LR: 3.340e-02 Score 93.482 Data time: 2.9512, Total iter time: 6.9347
+thomas 04/11 17:33:50 ===> Epoch[281](84560/301): Loss 0.1968 LR: 3.336e-02 Score 93.677 Data time: 2.7071, Total iter time: 6.4860
+thomas 04/11 17:38:09 ===> Epoch[282](84600/301): Loss 0.2106 LR: 3.333e-02 Score 93.186 Data time: 2.5670, Total iter time: 6.3988
+thomas 04/11 17:42:42 ===> Epoch[282](84640/301): Loss 0.1880 LR: 3.330e-02 Score 93.939 Data time: 2.8849, Total iter time: 6.7195
+thomas 04/11 17:47:30 ===> Epoch[282](84680/301): Loss 0.1924 LR: 3.326e-02 Score 93.847 Data time: 3.0602, Total iter time: 7.1328
+thomas 04/11 17:51:52 ===> Epoch[282](84720/301): Loss 0.1721 LR: 3.323e-02 Score 94.279 Data time: 2.6353, Total iter time: 6.4474
+thomas 04/11 17:55:58 ===> Epoch[282](84760/301): Loss 0.1656 LR: 3.320e-02 Score 94.368 Data time: 2.4735, Total iter time: 6.0663
+thomas 04/11 18:01:01 ===> Epoch[282](84800/301): Loss 0.1620 LR: 3.316e-02 Score 94.702 Data time: 3.2072, Total iter time: 7.4787
+thomas 04/11 18:05:24 ===> Epoch[282](84840/301): Loss 0.1774 LR: 3.313e-02 Score 94.160 Data time: 2.7846, Total iter time: 6.4963
+thomas 04/11 18:09:28 ===> Epoch[282](84880/301): Loss 0.1766 LR: 3.309e-02 Score 94.321 Data time: 2.4721, Total iter time: 6.0326
+thomas 04/11 18:14:05 ===> Epoch[283](84920/301): Loss 0.1920 LR: 3.306e-02 Score 93.863 Data time: 2.8794, Total iter time: 6.8265
+thomas 04/11 18:18:47 ===> Epoch[283](84960/301): Loss 0.1837 LR: 3.303e-02 Score 94.026 Data time: 2.9772, Total iter time: 6.9765
+thomas 04/11 18:23:13 ===> Epoch[283](85000/301): Loss 0.1799 LR: 3.299e-02 Score 94.087 Data time: 2.7481, Total iter time: 6.5555
+thomas 04/11 18:23:14 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/11 18:23:14 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/11 18:25:26 101/312: Data time: 0.0032, Iter time: 1.0586 Loss 0.806 (AVG: 0.624) Score 83.800 (AVG: 84.841) mIOU 61.435 mAP 72.438 mAcc 73.113
+IOU: 77.909 96.212 53.983 69.942 90.954 81.878 69.427 46.125 36.418 50.195 14.612 49.539 46.417 70.246 43.986 63.287 83.803 61.104 83.951 38.707
+mAP: 78.726 97.290 56.795 69.225 90.855 85.289 76.104 62.313 56.182 63.985 37.875 51.349 65.157 84.752 56.218 96.254 87.670 83.492 91.372 57.848
+mAcc: 89.530 98.964 72.910 78.763 96.092 95.083 76.563 59.378 38.026 91.473 16.072 73.979 77.219 86.922 49.992 76.788 85.223 66.027 84.758 48.505
+
+thomas 04/11 18:27:26 201/312: Data time: 0.0025, Iter time: 0.3932 Loss 0.177 (AVG: 0.640) Score 93.402 (AVG: 84.681) mIOU 61.048 mAP 71.703 mAcc 71.838
+IOU: 77.109 96.373 54.603 72.266 89.396 78.818 68.119 46.900 30.309 66.949 11.783 55.549 52.027 67.551 51.592 37.383 84.640 58.613 79.018 41.970
+mAP: 77.255 97.231 61.050 75.377 89.135 85.144 72.694 59.174 51.325 67.849 33.051 55.330 67.819 80.057 62.254 86.849 89.345 83.208 81.356 58.555
+mAcc: 90.208 98.899 73.417 84.879 95.407 91.233 75.726 62.939 31.444 91.895 12.756 81.025 79.741 87.132 61.163 40.530 86.610 61.545 79.780 50.427
+
+thomas 04/11 18:29:28 301/312: Data time: 0.0041, Iter time: 0.5105 Loss 0.460 (AVG: 0.626) Score 83.295 (AVG: 85.177) mIOU 61.377 mAP 71.865 mAcc 72.323
+IOU: 78.444 96.153 55.356 70.766 88.961 79.788 68.660 48.324 30.108 62.756 10.192 56.089 54.148 66.702 52.413 41.726 85.141 60.090 79.232 42.483
+mAP: 78.827 97.201 63.597 68.576 89.607 81.646 72.306 61.779 50.908 68.203 34.577 58.016 69.556 82.151 62.721 86.616 90.457 82.825 79.466 58.258
+mAcc: 90.581 98.854 75.359 81.101 94.093 92.443 76.480 64.943 31.528 89.338 10.879 83.173 81.604 86.890 62.933 44.221 86.705 63.880 80.098 51.366
+
+thomas 04/11 18:29:46 312/312: Data time: 0.0037, Iter time: 0.9136 Loss 1.324 (AVG: 0.630) Score 74.188 (AVG: 85.125) mIOU 61.074 mAP 71.870 mAcc 72.032
+IOU: 78.287 96.132 53.748 71.738 88.917 79.257 69.209 48.437 29.824 63.072 11.439 55.401 54.486 66.872 51.344 40.454 85.338 58.320 76.957 42.246
+mAP: 78.902 97.255 63.220 69.790 89.610 81.533 72.902 61.960 51.123 67.783 35.490 58.016 70.917 80.409 62.318 84.929 90.477 82.401 79.933 58.422
+mAcc: 90.308 98.853 75.108 81.766 94.089 91.883 76.962 65.136 31.193 89.443 12.199 83.173 82.035 85.872 62.102 42.794 86.871 61.855 77.761 51.239
+
+thomas 04/11 18:29:46 Finished test. Elapsed time: 391.5528
+thomas 04/11 18:29:46 Current best mIoU: 63.885 at iter 78000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+ warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/11 18:34:35 ===> Epoch[283](85040/301): Loss 0.1831 LR: 3.296e-02 Score 93.935 Data time: 3.0792, Total iter time: 7.1451
+thomas 04/11 18:39:16 ===> Epoch[283](85080/301): Loss 0.1908 LR: 3.292e-02 Score 93.700 Data time: 2.9043, Total iter time: 6.9519
+thomas 04/11 18:43:15 ===> Epoch[283](85120/301): Loss 0.1832 LR: 3.289e-02 Score 93.905 Data time: 2.3876, Total iter time: 5.9052
+thomas 04/11 18:47:31 ===> Epoch[283](85160/301): Loss 0.1829 LR: 3.286e-02 Score 94.007 Data time: 2.6490, Total iter time: 6.3039
+thomas 04/11 18:52:04 ===> Epoch[284](85200/301): Loss 0.1942 LR: 3.282e-02 Score 93.612 Data time: 2.8729, Total iter time: 6.7460
+thomas 04/11 18:56:40 ===> Epoch[284](85240/301): Loss 0.1998 LR: 3.279e-02 Score 93.451 Data time: 2.8677, Total iter time: 6.8308
+thomas 04/11 19:00:54 ===> Epoch[284](85280/301): Loss 0.1787 LR: 3.275e-02 Score 94.277 Data time: 2.5220, Total iter time: 6.2721
+thomas 04/11 19:05:48 ===> Epoch[284](85320/301): Loss 0.1643 LR: 3.272e-02 Score 94.344 Data time: 3.0826, Total iter time: 7.2533
+thomas 04/11 19:10:27 ===> Epoch[284](85360/301): Loss 0.1627 LR: 3.269e-02 Score 94.730 Data time: 2.9484, Total iter time: 6.8852
+thomas 04/11 19:14:50 ===> Epoch[284](85400/301): Loss 0.1821 LR: 3.265e-02 Score 93.911 Data time: 2.6872, Total iter time: 6.4821
+thomas 04/11 19:18:49 ===> Epoch[284](85440/301): Loss 0.1699 LR: 3.262e-02 Score 94.261 Data time: 2.3717, Total iter time: 5.8941
+thomas 04/11 19:23:59 ===> Epoch[284](85480/301): Loss 0.1755 LR: 3.258e-02 Score 94.215 Data time: 3.3452, Total iter time: 7.6532
+thomas 04/11 19:28:57 ===> Epoch[285](85520/301): Loss 0.1633 LR: 3.255e-02 Score 94.687 Data time: 3.1322, Total iter time: 7.3782
+thomas 04/11 19:32:50 ===> Epoch[285](85560/301): Loss 0.1755 LR: 3.252e-02 Score 94.032 Data time: 2.3279, Total iter time: 5.7437
+thomas 04/11 19:36:51 ===> Epoch[285](85600/301): Loss 0.1633 LR: 3.248e-02 Score 94.627 Data time: 2.3972, Total iter time: 5.9397
+thomas 04/11 19:42:05 ===> Epoch[285](85640/301): Loss 0.1731 LR: 3.245e-02 Score 94.144 Data time: 3.3783, Total iter time: 7.7588
+thomas 04/11 19:46:46 ===> Epoch[285](85680/301): Loss 0.1875 LR: 3.241e-02 Score 93.770 Data time: 2.9559, Total iter time: 6.9422
+thomas 04/11 19:51:02 ===> Epoch[285](85720/301): Loss 0.1981 LR: 3.238e-02 Score 93.682 Data time: 2.5860, Total iter time: 6.3230
+thomas 04/11 19:55:12 ===> Epoch[285](85760/301): Loss 0.1896 LR: 3.235e-02 Score 93.966 Data time: 2.5970, Total iter time: 6.1732
+thomas 04/11 20:00:15 ===> Epoch[286](85800/301): Loss 0.1855 LR: 3.231e-02 Score 93.920 Data time: 3.2455, Total iter time: 7.4834
+thomas 04/11 20:04:42 ===> Epoch[286](85840/301): Loss 0.2061 LR: 3.228e-02 Score 93.485 Data time: 2.7569, Total iter time: 6.6074
+thomas 04/11 20:08:50 ===> Epoch[286](85880/301): Loss 0.2012 LR: 3.224e-02 Score 93.634 Data time: 2.4674, Total iter time: 6.1065
+thomas 04/11 20:13:19 ===> Epoch[286](85920/301): Loss 0.1814 LR: 3.221e-02 Score 93.953 Data time: 2.8385, Total iter time: 6.6526
+thomas 04/11 20:18:29 ===> Epoch[286](85960/301): Loss 0.1858 LR: 3.218e-02 Score 93.923 Data time: 3.3073, Total iter time: 7.6642
+thomas 04/11 20:22:51 ===> Epoch[286](86000/301): Loss 0.1557 LR: 3.214e-02 Score 94.895 Data time: 2.7000, Total iter time: 6.4646
+thomas 04/11 20:22:52 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth
+thomas 04/11 20:22:52 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/11 20:25:01 101/312: Data time: 0.0024, Iter time: 0.6928 Loss 0.591 (AVG: 0.565) Score 85.265 (AVG: 86.532) mIOU 66.195 mAP 73.160 mAcc 75.840
+IOU: 79.040 95.959 59.640 69.668 91.062 78.417 70.621 42.843 38.351 79.606 15.633 63.891 65.189 63.840 56.336 76.918 81.225 57.445 91.394 46.828
+mAP: 78.978 97.697 53.679 66.393 91.210 75.908 73.954 55.846 54.861 74.642 37.614 54.579 66.464 87.138 69.500 94.567 92.373 85.418 96.160 56.227
+mAcc: 92.608 98.788 78.647 73.417 95.369 95.781 78.298 51.837 41.947 92.674 20.501 78.058 76.403 91.431 71.553 88.960 81.917 58.770 92.603 57.230
+
+thomas 04/11 20:26:57 201/312: Data time: 0.0026, Iter time: 0.5830 Loss 0.522 (AVG: 0.576) Score 86.444 (AVG: 86.401) mIOU 64.759 mAP 73.290 mAcc 74.454
+IOU: 79.737 96.174 61.560 74.818 90.600 73.474 67.294 47.883 34.309 74.844 14.816 67.399 60.041 68.813 54.493 60.921 85.940 55.087 83.573 43.398
+mAP: 79.125 97.978 59.757 73.900 90.884 79.273 73.059 63.021 52.692 73.007 37.213 59.116 63.677 85.504 69.182 87.096 94.640 82.761 87.129 56.787
+mAcc: 92.661 98.820 80.686 79.606 95.167 96.753 75.130 55.607 36.978 89.936 22.313 82.364 72.667 89.459 68.260 67.467 86.916 56.693 84.653 56.938
+
+thomas 04/11 20:29:34 301/312: Data time: 0.0038, Iter time: 0.7098 Loss 0.289 (AVG: 0.576) Score 89.000 (AVG: 86.289) mIOU 64.237 mAP 73.485 mAcc 73.803
+IOU: 79.431 96.369 61.481 74.206 89.802 71.981 70.569 49.012 34.887 73.542 12.836 63.175 61.514 67.481 52.256 56.102 88.806 53.398 82.747 45.137
+mAP: 79.249 97.642 61.528 72.626 90.231 79.801 74.120 65.239 52.122 73.981 38.561 61.113 64.887 84.939 70.449 88.886 96.012
81.085 79.500 57.728 +mAcc: 93.031 98.807 81.766 79.714 94.090 95.656 78.509 56.492 37.417 87.471 17.551 77.513 74.570 89.273 67.806 60.760 89.554 55.016 83.897 57.162 + +thomas 04/11 20:29:50 312/312: Data time: 0.0038, Iter time: 1.5234 Loss 0.472 (AVG: 0.579) Score 87.931 (AVG: 86.272) mIOU 64.095 mAP 73.235 mAcc 73.652 +IOU: 79.294 96.378 61.260 74.138 89.645 71.404 71.113 48.599 34.004 73.505 12.906 63.152 61.428 65.845 51.108 56.643 88.960 53.644 83.433 45.433 +mAP: 78.824 97.668 61.104 72.626 89.911 79.800 74.054 65.031 51.756 72.633 38.018 61.527 64.887 83.303 68.974 88.932 96.089 81.468 80.158 57.931 +mAcc: 93.062 98.800 81.367 79.714 94.150 94.997 78.957 56.042 36.663 87.473 17.579 77.501 74.570 87.009 66.866 61.285 89.703 55.312 84.567 57.422 + +thomas 04/11 20:29:50 Finished test. Elapsed time: 417.5216 +thomas 04/11 20:29:52 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/11 20:29:52 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 20:34:23 ===> Epoch[286](86040/301): Loss 0.1720 LR: 3.211e-02 Score 94.448 Data time: 2.8915, Total iter time: 6.6978 +thomas 04/11 20:38:48 ===> Epoch[286](86080/301): Loss 0.1751 LR: 3.207e-02 Score 94.187 Data time: 2.7316, Total iter time: 6.5428 +thomas 04/11 20:43:01 ===> Epoch[287](86120/301): Loss 0.1711 LR: 3.204e-02 Score 94.210 Data time: 2.5483, Total iter time: 6.2647 +thomas 04/11 20:47:41 ===> Epoch[287](86160/301): Loss 0.1672 LR: 3.201e-02 Score 94.482 Data time: 2.9753, Total iter time: 6.9048 +thomas 04/11 20:52:22 ===> Epoch[287](86200/301): Loss 0.1623 LR: 3.197e-02 Score 94.700 Data time: 2.9977, Total iter time: 6.9504 +thomas 04/11 20:57:00 ===> Epoch[287](86240/301): Loss 0.1675 LR: 3.194e-02 Score 94.367 Data time: 2.8484, Total iter time: 6.8594 +thomas 04/11 21:00:49 ===> Epoch[287](86280/301): Loss 0.1586 LR: 3.190e-02 Score 94.727 Data time: 2.3070, Total iter time: 5.6614 +thomas 04/11 21:06:12 ===> Epoch[287](86320/301): Loss 0.1626 LR: 3.187e-02 Score 94.662 Data time: 3.6625, Total iter time: 7.9940 +thomas 04/11 21:10:59 ===> Epoch[287](86360/301): Loss 0.1516 LR: 3.184e-02 Score 94.869 Data time: 3.0028, Total iter time: 7.1008 +thomas 04/11 21:15:19 ===> Epoch[288](86400/301): Loss 0.1660 LR: 3.180e-02 Score 94.441 Data time: 2.6032, Total iter time: 6.4067 +thomas 04/11 21:19:41 ===> Epoch[288](86440/301): Loss 0.1589 LR: 3.177e-02 Score 94.786 Data time: 2.6899, Total iter time: 6.4686 +thomas 04/11 21:24:30 ===> Epoch[288](86480/301): Loss 0.1826 LR: 3.173e-02 Score 93.797 Data time: 3.1118, Total iter time: 7.1476 +thomas 04/11 21:28:58 ===> Epoch[288](86520/301): Loss 0.1801 LR: 3.170e-02 Score 93.965 Data time: 2.8280, Total iter time: 6.6230 +thomas 04/11 21:33:07 ===> Epoch[288](86560/301): Loss 0.1707 LR: 3.167e-02 Score 94.455 Data time: 2.4677, Total iter time: 6.1459 +thomas 04/11 21:37:32 ===> Epoch[288](86600/301): Loss 
0.2573 LR: 3.163e-02 Score 91.947 Data time: 2.7542, Total iter time: 6.5603 +thomas 04/11 21:42:26 ===> Epoch[288](86640/301): Loss 0.2899 LR: 3.160e-02 Score 91.024 Data time: 3.2725, Total iter time: 7.2424 +thomas 04/11 21:47:05 ===> Epoch[288](86680/301): Loss 0.1997 LR: 3.156e-02 Score 93.559 Data time: 2.9015, Total iter time: 6.9016 +thomas 04/11 21:51:25 ===> Epoch[289](86720/301): Loss 0.2127 LR: 3.153e-02 Score 93.279 Data time: 2.6009, Total iter time: 6.4166 +thomas 04/11 21:56:25 ===> Epoch[289](86760/301): Loss 0.2076 LR: 3.149e-02 Score 93.248 Data time: 3.1966, Total iter time: 7.4191 +thomas 04/11 22:01:08 ===> Epoch[289](86800/301): Loss 0.1860 LR: 3.146e-02 Score 94.076 Data time: 3.0281, Total iter time: 7.0032 +thomas 04/11 22:05:30 ===> Epoch[289](86840/301): Loss 0.1584 LR: 3.143e-02 Score 94.633 Data time: 2.6727, Total iter time: 6.4884 +thomas 04/11 22:09:26 ===> Epoch[289](86880/301): Loss 0.1736 LR: 3.139e-02 Score 94.318 Data time: 2.3603, Total iter time: 5.8073 +thomas 04/11 22:14:32 ===> Epoch[289](86920/301): Loss 0.1933 LR: 3.136e-02 Score 93.549 Data time: 3.3006, Total iter time: 7.5671 +thomas 04/11 22:19:31 ===> Epoch[289](86960/301): Loss 0.1683 LR: 3.132e-02 Score 94.434 Data time: 3.1526, Total iter time: 7.4034 +thomas 04/11 22:24:00 ===> Epoch[290](87000/301): Loss 0.1728 LR: 3.129e-02 Score 94.317 Data time: 2.7230, Total iter time: 6.6466 +thomas 04/11 22:24:02 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/11 22:24:02 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/11 22:26:06 101/312: Data time: 0.0041, Iter time: 0.9134 Loss 0.853 (AVG: 0.605) Score 76.647 (AVG: 84.926) mIOU 62.630 mAP 73.232 mAcc 73.991 +IOU: 78.796 96.625 52.263 73.006 91.397 73.237 68.343 45.840 31.128 80.869 18.011 44.860 55.536 51.342 50.002 51.991 88.617 65.995 88.264 46.483 +mAP: 79.843 97.342 64.925 76.233 89.040 83.008 72.740 61.877 45.520 72.490 
39.469 67.063 60.866 87.624 58.393 91.112 91.166 88.784 77.043 60.101 +mAcc: 91.365 98.765 71.020 83.630 95.375 90.266 88.406 57.037 32.662 91.520 21.937 80.955 66.359 90.214 60.799 52.088 89.519 69.830 90.265 57.808 + +thomas 04/11 22:28:23 201/312: Data time: 0.1271, Iter time: 0.5478 Loss 0.043 (AVG: 0.561) Score 98.990 (AVG: 86.088) mIOU 64.240 mAP 72.981 mAcc 74.866 +IOU: 79.282 96.250 58.157 74.463 90.246 77.722 72.089 45.515 30.942 79.448 15.081 50.012 59.934 55.627 49.975 57.993 87.952 64.123 91.573 48.412 +mAP: 79.463 96.918 61.617 73.903 89.916 84.491 72.153 60.427 45.395 71.972 38.329 62.281 63.638 83.305 61.694 91.629 92.657 86.425 84.146 59.263 +mAcc: 91.669 98.678 75.121 84.796 94.223 93.057 86.593 58.253 32.497 91.558 18.596 86.128 70.315 86.082 60.935 59.727 88.538 67.163 93.201 60.183 + +thomas 04/11 22:31:13 301/312: Data time: 0.0031, Iter time: 0.4956 Loss 0.200 (AVG: 0.566) Score 93.668 (AVG: 85.909) mIOU 62.668 mAP 72.199 mAcc 73.549 +IOU: 79.301 96.412 56.176 71.302 89.999 79.515 72.589 47.295 31.792 74.309 15.910 50.477 60.284 56.314 43.881 47.826 86.566 58.334 88.161 46.915 +mAP: 79.959 97.240 56.843 69.228 90.475 83.079 73.080 64.065 46.960 71.135 39.420 63.181 63.948 82.947 55.717 90.159 91.613 85.127 81.257 58.540 +mAcc: 91.358 98.713 71.558 82.692 94.182 94.533 86.929 60.843 33.336 87.944 20.364 87.090 69.460 86.649 54.119 50.257 87.195 61.769 90.225 61.765 + +thomas 04/11 22:31:33 312/312: Data time: 0.0040, Iter time: 0.9913 Loss 0.223 (AVG: 0.563) Score 90.463 (AVG: 85.946) mIOU 62.990 mAP 72.141 mAcc 73.777 +IOU: 79.323 96.402 57.172 70.911 90.022 79.469 72.756 47.066 31.719 75.138 16.079 50.856 59.799 58.550 45.367 48.588 86.377 59.333 88.158 46.721 +mAP: 79.858 97.128 56.361 68.708 90.550 83.326 73.193 63.806 46.635 71.388 39.546 61.947 63.790 83.814 56.310 90.136 91.786 85.254 81.257 58.032 +mAcc: 91.270 98.710 72.521 81.654 94.138 94.754 87.109 60.793 33.255 88.067 20.394 87.121 69.344 87.848 55.661 51.177 86.983 62.814 90.225 
61.702 + +thomas 04/11 22:31:33 Finished test. Elapsed time: 451.4857 +thomas 04/11 22:31:33 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/11 22:36:19 ===> Epoch[290](87040/301): Loss 0.1925 LR: 3.126e-02 Score 93.714 Data time: 3.0477, Total iter time: 7.0810 +thomas 04/11 22:40:21 ===> Epoch[290](87080/301): Loss 0.1803 LR: 3.122e-02 Score 94.035 Data time: 2.4757, Total iter time: 5.9728 +thomas 04/11 22:44:19 ===> Epoch[290](87120/301): Loss 0.1672 LR: 3.119e-02 Score 94.598 Data time: 2.3530, Total iter time: 5.8693 +thomas 04/11 22:49:01 ===> Epoch[290](87160/301): Loss 0.1837 LR: 3.115e-02 Score 94.079 Data time: 2.9734, Total iter time: 6.9488 +thomas 04/11 22:53:56 ===> Epoch[290](87200/301): Loss 0.1810 LR: 3.112e-02 Score 93.995 Data time: 3.1135, Total iter time: 7.2983 +thomas 04/11 22:58:17 ===> Epoch[290](87240/301): Loss 0.1758 LR: 3.109e-02 Score 94.160 Data time: 2.5761, Total iter time: 6.4279 +thomas 04/11 23:02:39 ===> Epoch[290](87280/301): Loss 0.1658 LR: 3.105e-02 Score 94.373 Data time: 2.7232, Total iter time: 6.4822 +thomas 04/11 23:07:21 ===> Epoch[291](87320/301): Loss 0.1891 LR: 3.102e-02 Score 93.813 Data time: 3.0005, Total iter time: 6.9579 +thomas 04/11 23:11:29 ===> Epoch[291](87360/301): Loss 0.1707 LR: 3.098e-02 Score 94.382 Data time: 2.5923, Total iter time: 6.1219 +thomas 04/11 23:15:35 ===> Epoch[291](87400/301): Loss 0.1698 LR: 3.095e-02 Score 94.650 Data time: 2.4530, Total iter time: 6.0744 +thomas 04/11 23:19:48 ===> Epoch[291](87440/301): Loss 0.1623 LR: 3.091e-02 Score 94.704 Data time: 2.6165, Total iter time: 6.2512 +thomas 04/11 23:24:54 ===> Epoch[291](87480/301): Loss 0.1703 LR: 3.088e-02 Score 
94.366 Data time: 3.2869, Total iter time: 7.5580 +thomas 04/11 23:29:02 ===> Epoch[291](87520/301): Loss 0.1647 LR: 3.085e-02 Score 94.474 Data time: 2.5751, Total iter time: 6.1448 +thomas 04/11 23:33:15 ===> Epoch[291](87560/301): Loss 0.1943 LR: 3.081e-02 Score 93.617 Data time: 2.4969, Total iter time: 6.2500 +thomas 04/11 23:37:42 ===> Epoch[292](87600/301): Loss 0.1794 LR: 3.078e-02 Score 93.937 Data time: 2.8665, Total iter time: 6.5998 +thomas 04/11 23:42:35 ===> Epoch[292](87640/301): Loss 0.1717 LR: 3.074e-02 Score 94.279 Data time: 3.1153, Total iter time: 7.2255 +thomas 04/11 23:46:52 ===> Epoch[292](87680/301): Loss 0.1719 LR: 3.071e-02 Score 94.224 Data time: 2.6285, Total iter time: 6.3597 +thomas 04/11 23:50:59 ===> Epoch[292](87720/301): Loss 0.1496 LR: 3.068e-02 Score 94.931 Data time: 2.4281, Total iter time: 6.0765 +thomas 04/11 23:55:43 ===> Epoch[292](87760/301): Loss 0.1794 LR: 3.064e-02 Score 94.048 Data time: 3.0611, Total iter time: 7.0135 +thomas 04/12 00:00:20 ===> Epoch[292](87800/301): Loss 0.1764 LR: 3.061e-02 Score 94.070 Data time: 2.9299, Total iter time: 6.8504 +thomas 04/12 00:04:31 ===> Epoch[292](87840/301): Loss 0.1810 LR: 3.057e-02 Score 94.213 Data time: 2.4883, Total iter time: 6.1912 +thomas 04/12 00:08:21 ===> Epoch[292](87880/301): Loss 0.1706 LR: 3.054e-02 Score 94.285 Data time: 2.3098, Total iter time: 5.6788 +thomas 04/12 00:13:26 ===> Epoch[293](87920/301): Loss 0.1726 LR: 3.050e-02 Score 94.531 Data time: 3.2612, Total iter time: 7.5554 +thomas 04/12 00:17:50 ===> Epoch[293](87960/301): Loss 0.1805 LR: 3.047e-02 Score 94.193 Data time: 2.7589, Total iter time: 6.5168 +thomas 04/12 00:22:02 ===> Epoch[293](88000/301): Loss 0.1771 LR: 3.044e-02 Score 94.158 Data time: 2.4919, Total iter time: 6.2041 +thomas 04/12 00:22:03 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 00:22:03 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+thomas 04/12 00:24:06 101/312: Data time: 0.0025, Iter time: 0.6308 Loss 0.221 (AVG: 0.572) Score 92.715 (AVG: 85.942) mIOU 65.480 mAP 74.085 mAcc 75.086 +IOU: 78.879 96.665 57.919 67.726 89.931 81.370 66.669 47.606 22.829 74.714 18.247 58.672 61.279 84.033 55.156 69.659 83.534 54.253 88.749 51.718 +mAP: 78.627 98.018 56.986 65.663 90.436 86.713 71.056 59.245 46.054 79.951 43.291 62.996 71.419 89.409 70.922 93.024 84.876 83.592 95.273 54.159 +mAcc: 91.173 98.418 80.293 83.130 94.667 89.601 81.069 62.687 24.387 88.983 19.479 82.096 80.414 85.952 59.702 82.623 85.383 55.199 90.616 65.857 + +thomas 04/12 00:26:45 201/312: Data time: 0.0038, Iter time: 0.6399 Loss 0.274 (AVG: 0.546) Score 91.796 (AVG: 86.881) mIOU 65.428 mAP 73.320 mAcc 74.701 +IOU: 79.317 96.719 58.068 73.371 90.481 79.807 73.870 48.080 25.995 77.762 16.677 59.895 62.867 75.177 54.276 61.861 85.107 59.131 78.205 51.888 +mAP: 79.399 97.965 59.619 70.477 92.016 83.669 76.062 59.666 44.266 72.386 40.809 63.510 71.653 84.491 72.977 86.583 91.628 83.688 77.517 58.025 +mAcc: 91.618 98.473 77.300 88.328 94.524 89.532 85.489 64.544 27.309 91.311 20.873 84.795 79.012 79.114 60.910 66.781 86.484 60.137 80.272 67.204 + +thomas 04/12 00:29:17 301/312: Data time: 0.0033, Iter time: 0.3833 Loss 0.105 (AVG: 0.582) Score 96.210 (AVG: 86.228) mIOU 63.950 mAP 72.344 mAcc 73.591 +IOU: 79.520 96.486 56.352 72.000 89.405 78.885 70.785 48.749 27.190 74.232 16.386 56.929 59.896 71.839 48.644 62.406 85.248 52.595 80.977 50.481 +mAP: 79.075 97.678 60.390 71.396 90.999 83.608 71.327 58.179 43.589 74.839 41.864 62.331 70.241 79.793 65.463 89.183 91.521 79.188 79.356 56.858 +mAcc: 92.157 98.399 76.945 85.237 93.300 88.862 83.959 63.264 28.777 90.633 20.751 83.773 77.660 76.604 58.069 66.322 86.443 53.515 82.911 64.246 + +thomas 04/12 00:29:31 312/312: Data time: 0.0030, Iter time: 0.6506 Loss 2.335 (AVG: 0.591) Score 68.134 (AVG: 86.159) mIOU 63.901 mAP 72.312 mAcc 73.490 +IOU: 79.459 96.451 56.114 71.986 89.399 78.724 70.758 
49.201 26.848 74.069 16.509 57.037 60.172 73.039 48.371 61.087 85.383 52.349 80.977 50.086 +mAP: 79.223 97.662 60.518 71.296 91.072 83.375 71.736 58.789 43.536 74.803 42.468 62.197 69.417 80.157 65.636 88.137 91.680 79.028 79.356 56.154 +mAcc: 92.189 98.421 76.462 84.893 93.347 88.853 84.065 63.801 28.364 90.836 21.064 83.660 77.431 77.716 57.336 64.834 86.573 53.273 82.911 63.764 + +thomas 04/12 00:29:31 Finished test. Elapsed time: 447.7503 +thomas 04/12 00:29:31 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 00:33:15 ===> Epoch[293](88040/301): Loss 0.1465 LR: 3.040e-02 Score 95.134 Data time: 2.1904, Total iter time: 5.5346 +thomas 04/12 00:36:49 ===> Epoch[293](88080/301): Loss 0.1539 LR: 3.037e-02 Score 95.037 Data time: 2.0671, Total iter time: 5.2635 +thomas 04/12 00:40:17 ===> Epoch[293](88120/301): Loss 0.1622 LR: 3.033e-02 Score 94.588 Data time: 2.0010, Total iter time: 5.1246 +thomas 04/12 00:43:38 ===> Epoch[293](88160/301): Loss 0.1705 LR: 3.030e-02 Score 94.381 Data time: 1.9289, Total iter time: 4.9420 +thomas 04/12 00:47:06 ===> Epoch[294](88200/301): Loss 0.1682 LR: 3.026e-02 Score 94.338 Data time: 2.0027, Total iter time: 5.1378 +thomas 04/12 00:50:25 ===> Epoch[294](88240/301): Loss 0.1561 LR: 3.023e-02 Score 94.963 Data time: 1.9098, Total iter time: 4.8819 +thomas 04/12 00:53:55 ===> Epoch[294](88280/301): Loss 0.1550 LR: 3.020e-02 Score 94.896 Data time: 2.0127, Total iter time: 5.1688 +thomas 04/12 00:57:23 ===> Epoch[294](88320/301): Loss 0.1565 LR: 3.016e-02 Score 94.854 Data time: 2.0219, Total iter time: 5.1488 +thomas 04/12 01:00:53 ===> Epoch[294](88360/301): Loss 0.1667 LR: 3.013e-02 Score 94.291 Data time: 
2.0028, Total iter time: 5.1670 +thomas 04/12 01:04:32 ===> Epoch[294](88400/301): Loss 0.1745 LR: 3.009e-02 Score 94.309 Data time: 2.0959, Total iter time: 5.4084 +thomas 04/12 01:08:04 ===> Epoch[294](88440/301): Loss 0.1804 LR: 3.006e-02 Score 94.080 Data time: 2.0358, Total iter time: 5.2169 +thomas 04/12 01:11:39 ===> Epoch[294](88480/301): Loss 0.2101 LR: 3.002e-02 Score 93.273 Data time: 2.0759, Total iter time: 5.2920 +thomas 04/12 01:14:49 ===> Epoch[295](88520/301): Loss 0.1930 LR: 2.999e-02 Score 93.824 Data time: 1.8149, Total iter time: 4.6887 +thomas 04/12 01:18:14 ===> Epoch[295](88560/301): Loss 0.1785 LR: 2.996e-02 Score 94.250 Data time: 1.9474, Total iter time: 5.0359 +thomas 04/12 01:21:33 ===> Epoch[295](88600/301): Loss 0.2075 LR: 2.992e-02 Score 93.289 Data time: 1.9109, Total iter time: 4.8978 +thomas 04/12 01:25:03 ===> Epoch[295](88640/301): Loss 0.1615 LR: 2.989e-02 Score 94.671 Data time: 2.0205, Total iter time: 5.1751 +thomas 04/12 01:28:40 ===> Epoch[295](88680/301): Loss 0.1743 LR: 2.985e-02 Score 94.486 Data time: 2.0880, Total iter time: 5.3351 +thomas 04/12 01:32:13 ===> Epoch[295](88720/301): Loss 0.1701 LR: 2.982e-02 Score 94.207 Data time: 2.0405, Total iter time: 5.2467 +thomas 04/12 01:35:42 ===> Epoch[295](88760/301): Loss 0.1741 LR: 2.978e-02 Score 94.165 Data time: 2.0263, Total iter time: 5.1540 +thomas 04/12 01:39:10 ===> Epoch[296](88800/301): Loss 0.1785 LR: 2.975e-02 Score 94.064 Data time: 1.9942, Total iter time: 5.1367 +thomas 04/12 01:42:53 ===> Epoch[296](88840/301): Loss 0.1870 LR: 2.972e-02 Score 93.862 Data time: 2.1439, Total iter time: 5.4945 +thomas 04/12 01:46:24 ===> Epoch[296](88880/301): Loss 0.1454 LR: 2.968e-02 Score 95.196 Data time: 2.0396, Total iter time: 5.2009 +thomas 04/12 01:50:02 ===> Epoch[296](88920/301): Loss 0.1442 LR: 2.965e-02 Score 95.241 Data time: 2.1150, Total iter time: 5.3749 +thomas 04/12 01:53:32 ===> Epoch[296](88960/301): Loss 0.1679 LR: 2.961e-02 Score 94.508 Data time: 
2.0261, Total iter time: 5.2031 +thomas 04/12 01:57:11 ===> Epoch[296](89000/301): Loss 0.1697 LR: 2.958e-02 Score 94.137 Data time: 2.0855, Total iter time: 5.3881 +thomas 04/12 01:57:12 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 01:57:12 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 01:59:02 101/312: Data time: 0.0034, Iter time: 0.4705 Loss 1.101 (AVG: 0.621) Score 77.502 (AVG: 85.254) mIOU 61.854 mAP 70.858 mAcc 71.537 +IOU: 78.930 96.332 60.908 74.680 90.139 70.658 70.778 42.668 26.733 71.943 11.070 47.053 59.191 64.469 36.401 66.904 87.182 51.238 86.036 43.772 +mAP: 79.445 97.416 67.846 77.488 90.642 77.639 72.867 59.953 45.126 57.011 38.081 52.310 69.746 85.395 38.312 91.993 88.480 80.406 88.593 58.403 +mAcc: 93.308 98.700 79.658 85.630 93.478 86.360 80.539 52.759 27.884 94.288 11.923 60.585 79.639 84.163 40.842 73.592 88.428 52.372 87.649 58.954 + +thomas 04/12 02:00:44 201/312: Data time: 0.0029, Iter time: 0.4580 Loss 0.161 (AVG: 0.604) Score 94.817 (AVG: 85.804) mIOU 62.483 mAP 72.224 mAcc 72.174 +IOU: 78.994 96.293 59.779 78.192 90.493 78.158 71.990 46.019 25.075 71.625 10.974 56.746 54.846 64.807 35.215 55.277 88.183 55.207 86.639 45.158 +mAP: 77.959 97.636 65.995 79.837 91.776 81.052 74.205 62.854 45.603 71.049 32.950 59.587 67.350 86.161 46.027 87.535 91.289 81.477 86.559 57.577 +mAcc: 92.669 98.628 76.633 89.215 93.432 91.745 80.962 57.675 25.888 90.364 12.346 67.883 76.224 87.019 45.218 59.859 89.958 56.655 89.499 61.606 + +thomas 04/12 02:02:28 301/312: Data time: 0.0032, Iter time: 0.3791 Loss 0.201 (AVG: 0.595) Score 94.969 (AVG: 86.170) mIOU 63.696 mAP 72.633 mAcc 73.066 +IOU: 79.209 96.244 61.239 77.016 89.682 78.858 71.891 47.192 27.605 73.824 12.009 59.115 59.378 67.301 44.362 56.780 86.442 56.974 83.915 44.883 +mAP: 78.351 97.621 64.763 78.017 91.114 82.353 73.918 63.233 47.048 70.988 34.631 59.041 69.535 83.896 53.571 88.161 90.893 
84.259 83.715 57.543 +mAcc: 92.601 98.617 77.669 86.546 92.691 91.584 81.984 59.609 28.446 92.238 14.419 69.553 78.999 87.132 55.540 60.372 87.793 58.261 87.648 59.626 + +thomas 04/12 02:02:40 312/312: Data time: 0.0027, Iter time: 0.8685 Loss 1.509 (AVG: 0.604) Score 69.497 (AVG: 85.993) mIOU 63.173 mAP 72.580 mAcc 72.634 +IOU: 78.926 96.267 60.729 76.022 89.118 78.727 71.964 46.931 27.170 73.439 12.122 59.220 58.805 66.455 44.438 53.456 86.609 54.959 82.689 45.408 +mAP: 78.096 97.632 65.166 77.255 91.118 82.640 74.435 63.413 47.230 70.601 34.942 59.700 69.291 84.028 54.441 88.001 90.876 84.059 81.098 57.576 +mAcc: 92.690 98.634 77.892 85.367 92.086 91.622 82.151 58.769 28.141 92.257 14.538 70.023 79.088 87.231 55.933 56.442 87.943 56.132 86.619 59.123 + +thomas 04/12 02:02:40 Finished test. Elapsed time: 327.5651 +thomas 04/12 02:02:40 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 02:06:05 ===> Epoch[296](89040/301): Loss 0.2005 LR: 2.954e-02 Score 93.369 Data time: 1.9757, Total iter time: 5.0454 +thomas 04/12 02:09:35 ===> Epoch[296](89080/301): Loss 0.1940 LR: 2.951e-02 Score 93.677 Data time: 2.0307, Total iter time: 5.1920 +thomas 04/12 02:13:08 ===> Epoch[297](89120/301): Loss 0.1781 LR: 2.948e-02 Score 93.937 Data time: 2.0306, Total iter time: 5.2322 +thomas 04/12 02:16:28 ===> Epoch[297](89160/301): Loss 0.1717 LR: 2.944e-02 Score 94.420 Data time: 1.9426, Total iter time: 4.9336 +thomas 04/12 02:20:01 ===> Epoch[297](89200/301): Loss 0.1823 LR: 2.941e-02 Score 94.099 Data time: 2.0406, Total iter time: 5.2472 +thomas 04/12 02:23:44 ===> Epoch[297](89240/301): Loss 0.1549 LR: 2.937e-02 Score 94.773 Data time: 2.1520, Total iter time: 5.4982 +thomas 04/12 02:27:17 ===> Epoch[297](89280/301): Loss 0.1560 LR: 2.934e-02 Score 94.775 Data time: 2.0532, Total iter time: 5.2408 +thomas 04/12 02:30:50 ===> Epoch[297](89320/301): Loss 0.1710 LR: 2.930e-02 Score 94.427 Data time: 2.0673, Total iter time: 5.2479 +thomas 04/12 02:34:24 ===> Epoch[297](89360/301): Loss 0.1933 LR: 2.927e-02 Score 93.718 Data time: 2.0535, Total iter time: 5.2921 +thomas 04/12 02:37:59 ===> Epoch[298](89400/301): Loss 0.1533 LR: 2.923e-02 Score 95.119 Data time: 2.0707, Total iter time: 5.2812 +thomas 04/12 02:41:15 ===> Epoch[298](89440/301): Loss 0.1561 LR: 2.920e-02 Score 94.804 Data time: 1.8983, Total iter time: 4.8241 +thomas 04/12 02:44:51 ===> Epoch[298](89480/301): Loss 0.1681 LR: 2.917e-02 Score 94.466 Data time: 2.0966, Total iter time: 5.3185 +thomas 04/12 02:48:15 ===> Epoch[298](89520/301): Loss 0.1563 LR: 2.913e-02 Score 94.789 Data time: 1.9844, Total iter time: 5.0403 +thomas 04/12 02:51:51 ===> Epoch[298](89560/301): Loss 0.1575 LR: 2.910e-02 Score 94.845 Data time: 2.0699, Total iter time: 5.3323 +thomas 04/12 02:55:20 ===> Epoch[298](89600/301): Loss 
0.1902 LR: 2.906e-02 Score 93.871 Data time: 2.0345, Total iter time: 5.1471 +thomas 04/12 02:58:36 ===> Epoch[298](89640/301): Loss 0.1761 LR: 2.903e-02 Score 94.093 Data time: 1.8994, Total iter time: 4.8518 +thomas 04/12 03:02:09 ===> Epoch[298](89680/301): Loss 0.1592 LR: 2.899e-02 Score 94.696 Data time: 2.0586, Total iter time: 5.2404 +thomas 04/12 03:05:47 ===> Epoch[299](89720/301): Loss 0.1838 LR: 2.896e-02 Score 93.873 Data time: 2.0985, Total iter time: 5.3774 +thomas 04/12 03:09:28 ===> Epoch[299](89760/301): Loss 0.1678 LR: 2.892e-02 Score 94.388 Data time: 2.1545, Total iter time: 5.4480 +thomas 04/12 03:12:53 ===> Epoch[299](89800/301): Loss 0.1542 LR: 2.889e-02 Score 94.840 Data time: 1.9972, Total iter time: 5.0586 +thomas 04/12 03:16:18 ===> Epoch[299](89840/301): Loss 0.1605 LR: 2.886e-02 Score 94.670 Data time: 1.9696, Total iter time: 5.0453 +thomas 04/12 03:19:56 ===> Epoch[299](89880/301): Loss 0.1662 LR: 2.882e-02 Score 94.479 Data time: 2.1057, Total iter time: 5.3654 +thomas 04/12 03:23:27 ===> Epoch[299](89920/301): Loss 0.1615 LR: 2.879e-02 Score 94.626 Data time: 2.0557, Total iter time: 5.2240 +thomas 04/12 03:27:01 ===> Epoch[299](89960/301): Loss 0.1477 LR: 2.875e-02 Score 95.026 Data time: 2.0619, Total iter time: 5.2625 +thomas 04/12 03:30:35 ===> Epoch[300](90000/301): Loss 0.1584 LR: 2.872e-02 Score 94.758 Data time: 2.0511, Total iter time: 5.2733 +thomas 04/12 03:30:36 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 03:30:36 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 03:32:28 101/312: Data time: 0.0026, Iter time: 0.3729 Loss 0.329 (AVG: 0.736) Score 90.086 (AVG: 84.122) mIOU 60.853 mAP 69.009 mAcc 69.398 +IOU: 74.734 96.369 59.098 58.983 90.140 82.967 72.270 43.165 26.677 74.478 4.291 55.798 54.593 59.461 50.078 42.988 89.378 60.714 77.449 43.433 +mAP: 74.861 97.822 48.928 63.675 87.018 78.608 75.635 60.003 46.188 70.441 
27.099 57.625 63.001 73.136 57.938 93.766 95.758 81.060 70.645 56.977 +mAcc: 95.163 98.608 67.557 72.781 94.299 94.502 86.645 51.297 28.817 87.941 4.636 80.267 67.397 70.087 55.747 43.090 91.142 64.276 78.990 54.724 + +thomas 04/12 03:34:07 201/312: Data time: 0.0025, Iter time: 0.2553 Loss 0.080 (AVG: 0.691) Score 98.008 (AVG: 85.140) mIOU 62.043 mAP 70.315 mAcc 70.361 +IOU: 77.062 96.264 58.427 66.212 90.199 81.872 71.496 43.922 24.764 75.551 7.702 54.402 59.521 70.587 48.891 44.240 87.492 56.114 82.210 43.930 +mAP: 77.785 97.729 54.367 68.654 89.995 82.201 73.957 58.588 45.103 72.809 29.564 59.322 67.066 77.576 58.515 83.496 91.673 82.175 78.017 57.701 +mAcc: 95.270 98.661 67.478 79.066 94.079 94.175 85.069 52.889 26.558 90.039 8.441 76.814 72.453 79.459 54.031 46.654 88.672 58.983 83.318 55.112 + +thomas 04/12 03:35:50 301/312: Data time: 0.0025, Iter time: 0.9732 Loss 0.211 (AVG: 0.649) Score 93.466 (AVG: 85.929) mIOU 63.003 mAP 70.536 mAcc 71.103 +IOU: 78.354 96.353 58.646 70.273 90.289 84.249 71.517 44.186 24.764 75.672 8.215 57.250 55.393 72.229 50.582 46.904 85.376 59.249 84.575 45.980 +mAP: 77.324 97.662 55.677 71.695 90.340 82.441 74.752 59.564 46.303 71.737 26.118 61.531 63.277 78.542 58.803 82.319 90.715 83.032 81.123 57.775 +mAcc: 95.197 98.671 67.275 83.038 94.129 95.215 84.822 54.076 26.966 88.069 9.124 79.040 69.654 80.124 55.299 49.252 86.440 62.483 85.604 57.592 + +thomas 04/12 03:36:04 312/312: Data time: 0.0031, Iter time: 0.4787 Loss 0.858 (AVG: 0.648) Score 80.457 (AVG: 85.976) mIOU 63.044 mAP 70.592 mAcc 71.168 +IOU: 78.531 96.362 59.133 70.527 90.148 83.004 71.567 44.618 25.650 76.643 8.765 56.988 56.245 72.395 48.476 48.123 85.518 58.386 84.986 44.809 +mAP: 77.551 97.696 55.658 72.084 90.473 82.363 74.658 59.960 46.757 71.167 27.565 61.514 63.604 78.987 56.661 82.879 90.578 82.692 81.525 57.468 +mAcc: 95.260 98.685 67.676 83.369 94.139 94.688 84.841 54.604 27.844 89.385 9.722 79.193 70.158 80.456 52.792 50.474 86.572 61.500 86.000 56.008 + 
+thomas 04/12 03:36:04 Finished test. Elapsed time: 328.0745 +thomas 04/12 03:36:04 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 03:39:30 ===> Epoch[300](90040/301): Loss 0.1452 LR: 2.868e-02 Score 95.031 Data time: 1.9855, Total iter time: 5.0756 +thomas 04/12 03:43:17 ===> Epoch[300](90080/301): Loss 0.1521 LR: 2.865e-02 Score 94.923 Data time: 2.2045, Total iter time: 5.5724 +thomas 04/12 03:46:52 ===> Epoch[300](90120/301): Loss 0.1506 LR: 2.861e-02 Score 95.029 Data time: 2.0644, Total iter time: 5.2863 +thomas 04/12 03:50:16 ===> Epoch[300](90160/301): Loss 0.1527 LR: 2.858e-02 Score 94.884 Data time: 1.9878, Total iter time: 5.0387 +thomas 04/12 03:53:57 ===> Epoch[300](90200/301): Loss 0.1711 LR: 2.855e-02 Score 94.384 Data time: 2.1082, Total iter time: 5.4318 +thomas 04/12 03:57:33 ===> Epoch[300](90240/301): Loss 0.1392 LR: 2.851e-02 Score 95.348 Data time: 2.0836, Total iter time: 5.3442 +thomas 04/12 04:01:06 ===> Epoch[300](90280/301): Loss 0.1680 LR: 2.848e-02 Score 94.440 Data time: 2.0530, Total iter time: 5.2464 +thomas 04/12 04:04:26 ===> Epoch[301](90320/301): Loss 0.1585 LR: 2.844e-02 Score 94.881 Data time: 1.9171, Total iter time: 4.9142 +thomas 04/12 04:07:40 ===> Epoch[301](90360/301): Loss 0.1530 LR: 2.841e-02 Score 94.791 Data time: 1.8622, Total iter time: 4.7715 +thomas 04/12 04:11:17 ===> Epoch[301](90400/301): Loss 0.1513 LR: 2.837e-02 Score 94.793 Data time: 2.0873, Total iter time: 5.3569 +thomas 04/12 04:14:48 ===> Epoch[301](90440/301): Loss 0.1699 LR: 2.834e-02 Score 94.397 Data time: 2.0184, Total iter time: 5.2180 +thomas 04/12 04:18:16 ===> Epoch[301](90480/301): Loss 0.1481 LR: 2.830e-02 Score 95.151 Data 
time: 1.9682, Total iter time: 5.1211 +thomas 04/12 04:21:35 ===> Epoch[301](90520/301): Loss 0.1599 LR: 2.827e-02 Score 94.746 Data time: 1.8949, Total iter time: 4.8987 +thomas 04/12 04:25:09 ===> Epoch[301](90560/301): Loss 0.1519 LR: 2.824e-02 Score 94.811 Data time: 2.0410, Total iter time: 5.2823 +thomas 04/12 04:28:38 ===> Epoch[301](90600/301): Loss 0.1667 LR: 2.820e-02 Score 94.508 Data time: 1.9865, Total iter time: 5.1365 +thomas 04/12 04:32:03 ===> Epoch[302](90640/301): Loss 0.1917 LR: 2.817e-02 Score 93.670 Data time: 1.9659, Total iter time: 5.0660 +thomas 04/12 04:35:29 ===> Epoch[302](90680/301): Loss 0.1908 LR: 2.813e-02 Score 94.019 Data time: 1.9644, Total iter time: 5.0689 +thomas 04/12 04:39:08 ===> Epoch[302](90720/301): Loss 0.1521 LR: 2.810e-02 Score 94.961 Data time: 2.0860, Total iter time: 5.4013 +thomas 04/12 04:42:38 ===> Epoch[302](90760/301): Loss 0.1436 LR: 2.806e-02 Score 95.287 Data time: 1.9900, Total iter time: 5.1724 +thomas 04/12 04:46:06 ===> Epoch[302](90800/301): Loss 0.1622 LR: 2.803e-02 Score 94.609 Data time: 1.9934, Total iter time: 5.1271 +thomas 04/12 04:49:40 ===> Epoch[302](90840/301): Loss 0.1620 LR: 2.799e-02 Score 94.457 Data time: 2.0645, Total iter time: 5.2873 +thomas 04/12 04:52:56 ===> Epoch[302](90880/301): Loss 0.1661 LR: 2.796e-02 Score 94.458 Data time: 1.8977, Total iter time: 4.8194 +thomas 04/12 04:56:41 ===> Epoch[303](90920/301): Loss 0.1659 LR: 2.792e-02 Score 94.413 Data time: 2.1369, Total iter time: 5.5340 +thomas 04/12 04:59:59 ===> Epoch[303](90960/301): Loss 0.1659 LR: 2.789e-02 Score 94.453 Data time: 1.8844, Total iter time: 4.9000 +thomas 04/12 05:03:31 ===> Epoch[303](91000/301): Loss 0.1610 LR: 2.786e-02 Score 94.554 Data time: 2.0416, Total iter time: 5.2153 +thomas 04/12 05:03:32 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 05:03:32 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 
05:05:28 101/312: Data time: 0.0030, Iter time: 0.5286 Loss 1.016 (AVG: 0.799) Score 83.985 (AVG: 84.212) mIOU 61.776 mAP 70.895 mAcc 69.521 +IOU: 74.793 95.771 64.021 71.377 89.714 89.204 63.609 48.995 19.020 73.014 10.827 57.808 60.713 69.552 40.168 42.588 77.343 57.587 84.568 44.859 +mAP: 75.230 97.078 58.175 85.308 92.503 83.927 62.498 61.126 43.828 69.945 27.075 51.999 67.054 83.742 60.707 86.961 82.664 81.434 88.438 58.201 +mAcc: 95.336 98.677 72.861 86.293 95.345 94.164 76.553 63.043 19.555 91.632 12.212 65.724 79.728 75.654 42.287 46.582 78.065 59.348 88.490 48.874 + +thomas 04/12 05:07:07 201/312: Data time: 0.0027, Iter time: 0.9220 Loss 0.464 (AVG: 0.720) Score 93.088 (AVG: 84.971) mIOU 61.325 mAP 70.492 mAcc 69.256 +IOU: 76.804 96.273 61.763 68.364 89.699 87.273 65.029 46.240 20.866 72.349 10.457 59.089 56.478 62.113 48.159 41.515 82.019 53.764 83.887 44.351 +mAP: 76.488 97.781 61.994 79.433 91.119 84.143 68.704 59.897 42.522 71.070 28.351 54.741 63.755 75.306 57.421 89.270 87.174 80.489 82.468 57.710 +mAcc: 95.683 98.670 70.375 83.326 94.659 93.193 77.623 57.256 21.512 92.613 11.996 69.153 78.289 69.736 50.540 43.757 82.949 56.119 87.796 49.867 + +thomas 04/12 05:08:48 301/312: Data time: 0.0023, Iter time: 0.5315 Loss 0.093 (AVG: 0.681) Score 97.221 (AVG: 85.709) mIOU 60.960 mAP 69.952 mAcc 68.900 +IOU: 78.012 96.320 61.542 70.236 90.915 85.250 69.583 46.897 18.183 71.876 11.805 58.320 59.099 65.036 34.847 41.275 81.367 54.349 82.532 41.749 +mAP: 78.044 97.510 62.878 75.555 91.663 81.846 72.387 60.272 41.442 71.510 27.887 57.112 66.249 75.131 48.610 83.707 88.359 80.491 80.694 57.693 +mAcc: 95.768 98.633 71.211 83.652 95.469 93.904 81.433 57.791 18.760 93.396 14.053 69.462 79.518 72.026 36.541 43.444 82.421 57.243 85.716 47.569 + +thomas 04/12 05:08:59 312/312: Data time: 0.0025, Iter time: 0.4837 Loss 0.469 (AVG: 0.681) Score 86.263 (AVG: 85.765) mIOU 61.357 mAP 70.273 mAcc 69.293 +IOU: 77.969 96.362 62.661 70.356 90.487 84.722 69.940 47.212 18.581 
71.745 11.088 58.819 59.484 64.347 41.148 41.275 81.100 55.150 82.522 42.171 +mAP: 78.118 97.564 63.566 76.011 91.425 80.976 73.102 60.943 41.795 71.510 28.179 57.868 66.176 74.334 51.893 83.707 88.486 80.938 80.694 58.183 +mAcc: 95.738 98.642 72.485 83.955 94.926 93.287 81.833 58.123 19.164 93.396 12.932 69.565 79.738 71.490 43.094 43.444 82.122 58.079 85.716 48.126 + +thomas 04/12 05:08:59 Finished test. Elapsed time: 326.8688 +thomas 04/12 05:08:59 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 05:12:33 ===> Epoch[303](91040/301): Loss 0.1734 LR: 2.782e-02 Score 94.319 Data time: 2.0475, Total iter time: 5.2644 +thomas 04/12 05:16:05 ===> Epoch[303](91080/301): Loss 0.1587 LR: 2.779e-02 Score 94.661 Data time: 2.0379, Total iter time: 5.2343 +thomas 04/12 05:19:25 ===> Epoch[303](91120/301): Loss 0.1493 LR: 2.775e-02 Score 95.188 Data time: 1.9067, Total iter time: 4.9324 +thomas 04/12 05:22:48 ===> Epoch[303](91160/301): Loss 0.1526 LR: 2.772e-02 Score 94.856 Data time: 1.9396, Total iter time: 5.0008 +thomas 04/12 05:26:23 ===> Epoch[303](91200/301): Loss 0.1513 LR: 2.768e-02 Score 95.041 Data time: 2.0753, Total iter time: 5.3025 +thomas 04/12 05:29:56 ===> Epoch[304](91240/301): Loss 0.1457 LR: 2.765e-02 Score 95.022 Data time: 2.0274, Total iter time: 5.2406 +thomas 04/12 05:33:22 ===> Epoch[304](91280/301): Loss 0.1510 LR: 2.761e-02 Score 94.997 Data time: 1.9423, Total iter time: 5.0727 +thomas 04/12 05:36:54 ===> Epoch[304](91320/301): Loss 0.1394 LR: 2.758e-02 Score 95.233 Data time: 2.0236, Total iter time: 5.2401 +thomas 04/12 05:40:24 ===> Epoch[304](91360/301): Loss 0.1464 LR: 2.754e-02 Score 95.159 Data time: 2.0015, Total iter 
time: 5.1588 +thomas 04/12 05:43:53 ===> Epoch[304](91400/301): Loss 0.1405 LR: 2.751e-02 Score 95.410 Data time: 2.0008, Total iter time: 5.1503 +thomas 04/12 05:47:25 ===> Epoch[304](91440/301): Loss 0.1507 LR: 2.747e-02 Score 94.856 Data time: 2.0106, Total iter time: 5.2257 +thomas 04/12 05:50:55 ===> Epoch[304](91480/301): Loss 0.1615 LR: 2.744e-02 Score 94.591 Data time: 2.0069, Total iter time: 5.1691 +thomas 04/12 05:54:24 ===> Epoch[305](91520/301): Loss 0.1542 LR: 2.741e-02 Score 94.978 Data time: 2.0147, Total iter time: 5.1566 +thomas 04/12 05:58:01 ===> Epoch[305](91560/301): Loss 0.1554 LR: 2.737e-02 Score 94.966 Data time: 2.0546, Total iter time: 5.3520 +thomas 04/12 06:01:19 ===> Epoch[305](91600/301): Loss 0.1435 LR: 2.734e-02 Score 95.189 Data time: 1.8901, Total iter time: 4.8797 +thomas 04/12 06:04:50 ===> Epoch[305](91640/301): Loss 0.1617 LR: 2.730e-02 Score 94.535 Data time: 2.0183, Total iter time: 5.2211 +thomas 04/12 06:08:17 ===> Epoch[305](91680/301): Loss 0.1539 LR: 2.727e-02 Score 94.866 Data time: 1.9773, Total iter time: 5.0967 +thomas 04/12 06:11:37 ===> Epoch[305](91720/301): Loss 0.1397 LR: 2.723e-02 Score 95.406 Data time: 1.9037, Total iter time: 4.9326 +thomas 04/12 06:14:54 ===> Epoch[305](91760/301): Loss 0.1395 LR: 2.720e-02 Score 95.219 Data time: 1.8808, Total iter time: 4.8438 +thomas 04/12 06:18:36 ===> Epoch[305](91800/301): Loss 0.1608 LR: 2.716e-02 Score 94.624 Data time: 2.1170, Total iter time: 5.4837 +thomas 04/12 06:22:02 ===> Epoch[306](91840/301): Loss 0.1460 LR: 2.713e-02 Score 95.052 Data time: 1.9770, Total iter time: 5.0764 +thomas 04/12 06:25:52 ===> Epoch[306](91880/301): Loss 0.1774 LR: 2.709e-02 Score 93.914 Data time: 2.1795, Total iter time: 5.6605 +thomas 04/12 06:29:24 ===> Epoch[306](91920/301): Loss 0.1412 LR: 2.706e-02 Score 95.266 Data time: 2.0264, Total iter time: 5.2253 +thomas 04/12 06:32:52 ===> Epoch[306](91960/301): Loss 0.1647 LR: 2.702e-02 Score 94.404 Data time: 2.0123, Total iter 
time: 5.1356 +thomas 04/12 06:36:32 ===> Epoch[306](92000/301): Loss 0.1610 LR: 2.699e-02 Score 94.626 Data time: 2.1812, Total iter time: 5.4216 +thomas 04/12 06:36:33 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 06:36:33 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 06:38:08 101/312: Data time: 0.0025, Iter time: 0.3408 Loss 0.079 (AVG: 0.538) Score 97.531 (AVG: 86.255) mIOU 65.457 mAP 73.658 mAcc 74.238 +IOU: 80.190 95.635 60.766 66.173 89.451 79.041 69.060 46.299 40.018 79.659 16.300 51.954 51.009 56.009 43.161 94.592 88.485 61.870 89.979 49.493 +mAP: 80.424 97.215 66.403 59.583 91.559 86.487 78.976 62.316 47.127 68.235 36.333 60.246 60.342 86.889 67.805 98.959 95.273 83.242 85.745 60.007 +mAcc: 91.976 99.283 77.453 76.864 92.329 89.946 87.121 58.747 46.471 87.424 18.768 58.948 61.316 85.157 53.389 96.068 89.131 62.964 90.713 60.683 + +thomas 04/12 06:39:53 201/312: Data time: 0.0024, Iter time: 0.5145 Loss 0.249 (AVG: 0.583) Score 89.475 (AVG: 85.834) mIOU 62.947 mAP 72.261 mAcc 72.055 +IOU: 78.413 95.396 59.793 70.838 90.076 84.305 69.479 46.203 40.423 78.394 8.496 58.752 51.266 58.940 52.451 51.014 84.173 57.113 79.486 43.931 +mAP: 79.238 96.838 63.798 67.831 90.783 84.607 76.262 62.046 50.707 75.530 30.928 58.433 59.523 84.224 71.342 82.272 89.927 81.650 83.623 55.651 +mAcc: 90.860 99.165 75.567 80.634 93.319 93.037 86.969 61.036 46.465 89.422 9.198 68.439 62.477 89.411 64.820 52.242 85.283 59.235 80.278 53.238 + +thomas 04/12 06:41:48 301/312: Data time: 0.0032, Iter time: 1.2231 Loss 0.394 (AVG: 0.610) Score 89.870 (AVG: 85.371) mIOU 61.460 mAP 71.454 mAcc 70.788 +IOU: 78.257 95.325 58.326 72.682 89.131 80.526 68.639 47.196 40.305 75.340 9.425 57.333 49.799 57.715 46.659 42.982 83.548 55.414 78.567 42.027 +mAP: 78.863 97.081 63.280 70.887 89.771 79.576 74.463 63.319 51.202 72.821 34.090 56.807 60.243 82.168 66.517 83.387 88.784 82.768 78.743 54.316 
+mAcc: 90.562 99.154 74.596 82.148 93.570 89.083 88.033 64.668 46.107 89.965 10.283 66.493 58.216 85.176 60.700 45.605 84.871 57.233 79.263 50.035 + +thomas 04/12 06:41:59 312/312: Data time: 0.0025, Iter time: 0.5179 Loss 0.400 (AVG: 0.605) Score 86.827 (AVG: 85.458) mIOU 61.466 mAP 71.291 mAcc 70.791 +IOU: 78.455 95.348 58.597 72.397 89.078 81.094 68.733 46.995 39.989 75.761 9.379 56.924 50.278 57.432 46.386 42.982 83.540 55.230 78.567 42.165 +mAP: 78.852 97.125 62.797 70.432 89.833 80.502 73.437 62.967 51.586 72.995 33.369 56.807 60.490 81.719 65.124 83.387 88.784 82.768 78.743 54.095 +mAcc: 90.588 99.163 74.833 81.932 93.653 89.237 88.052 63.979 46.302 90.292 10.227 66.493 58.841 84.763 60.416 45.605 84.871 57.233 79.263 50.075 + +thomas 04/12 06:41:59 Finished test. Elapsed time: 325.8540 +thomas 04/12 06:41:59 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 06:45:28 ===> Epoch[306](92040/301): Loss 0.1632 LR: 2.695e-02 Score 94.550 Data time: 1.9855, Total iter time: 5.1481 +thomas 04/12 06:48:56 ===> Epoch[306](92080/301): Loss 0.1540 LR: 2.692e-02 Score 94.848 Data time: 1.9904, Total iter time: 5.1187 +thomas 04/12 06:52:14 ===> Epoch[307](92120/301): Loss 0.1519 LR: 2.689e-02 Score 95.015 Data time: 1.9167, Total iter time: 4.8816 +thomas 04/12 06:55:41 ===> Epoch[307](92160/301): Loss 0.1627 LR: 2.685e-02 Score 94.562 Data time: 1.9803, Total iter time: 5.1209 +thomas 04/12 06:59:10 ===> Epoch[307](92200/301): Loss 0.1651 LR: 2.682e-02 Score 94.651 Data time: 2.0124, Total iter time: 5.1542 +thomas 04/12 07:02:54 ===> Epoch[307](92240/301): Loss 0.1410 LR: 2.678e-02 Score 95.293 Data time: 2.1554, Total iter time: 5.5073 +thomas 04/12 07:06:37 ===> Epoch[307](92280/301): Loss 0.1620 LR: 2.675e-02 Score 94.520 Data time: 2.1159, Total iter time: 5.4837 +thomas 04/12 07:10:07 ===> Epoch[307](92320/301): Loss 0.1606 LR: 2.671e-02 Score 94.535 Data time: 2.0165, Total iter time: 5.1718 +thomas 04/12 07:13:44 ===> Epoch[307](92360/301): Loss 0.1547 LR: 2.668e-02 Score 94.760 Data time: 2.0949, Total iter time: 5.3622 +thomas 04/12 07:17:22 ===> Epoch[307](92400/301): Loss 0.1451 LR: 2.664e-02 Score 95.057 Data time: 2.0790, Total iter time: 5.3766 +thomas 04/12 07:20:28 ===> Epoch[308](92440/301): Loss 0.1410 LR: 2.661e-02 Score 95.351 Data time: 1.7986, Total iter time: 4.5920 +thomas 04/12 07:23:49 ===> Epoch[308](92480/301): Loss 0.1504 LR: 2.657e-02 Score 94.960 Data time: 1.9222, Total iter time: 4.9449 +thomas 04/12 07:27:16 ===> Epoch[308](92520/301): Loss 0.1527 LR: 2.654e-02 Score 94.847 Data time: 1.9753, Total iter time: 5.0833 +thomas 04/12 07:30:37 ===> Epoch[308](92560/301): Loss 0.1406 LR: 2.650e-02 Score 95.222 Data time: 1.9335, Total iter time: 4.9585 +thomas 04/12 07:34:10 ===> Epoch[308](92600/301): Loss 
0.1636 LR: 2.647e-02 Score 94.607 Data time: 2.0446, Total iter time: 5.2460 +thomas 04/12 07:37:46 ===> Epoch[308](92640/301): Loss 0.1493 LR: 2.643e-02 Score 95.078 Data time: 2.0795, Total iter time: 5.3229 +thomas 04/12 07:41:10 ===> Epoch[308](92680/301): Loss 0.1499 LR: 2.640e-02 Score 94.943 Data time: 1.9702, Total iter time: 5.0212 +thomas 04/12 07:44:53 ===> Epoch[309](92720/301): Loss 0.1599 LR: 2.636e-02 Score 94.723 Data time: 2.1360, Total iter time: 5.5052 +thomas 04/12 07:48:28 ===> Epoch[309](92760/301): Loss 0.1330 LR: 2.633e-02 Score 95.485 Data time: 2.0579, Total iter time: 5.2888 +thomas 04/12 07:51:56 ===> Epoch[309](92800/301): Loss 0.1509 LR: 2.629e-02 Score 94.991 Data time: 2.0037, Total iter time: 5.1464 +thomas 04/12 07:55:32 ===> Epoch[309](92840/301): Loss 0.1419 LR: 2.626e-02 Score 95.074 Data time: 2.0614, Total iter time: 5.3153 +thomas 04/12 07:59:04 ===> Epoch[309](92880/301): Loss 0.1503 LR: 2.622e-02 Score 94.966 Data time: 2.0436, Total iter time: 5.2098 +thomas 04/12 08:02:45 ===> Epoch[309](92920/301): Loss 0.1618 LR: 2.619e-02 Score 94.721 Data time: 2.1347, Total iter time: 5.4627 +thomas 04/12 08:06:18 ===> Epoch[309](92960/301): Loss 0.1470 LR: 2.615e-02 Score 95.063 Data time: 2.0593, Total iter time: 5.2366 +thomas 04/12 08:09:45 ===> Epoch[309](93000/301): Loss 0.1488 LR: 2.612e-02 Score 94.996 Data time: 2.0074, Total iter time: 5.1215 +thomas 04/12 08:09:47 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 08:09:47 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 08:11:28 101/312: Data time: 0.0027, Iter time: 0.3554 Loss 0.252 (AVG: 0.751) Score 91.777 (AVG: 84.842) mIOU 61.027 mAP 71.109 mAcc 69.709 +IOU: 78.540 95.954 58.871 69.738 90.836 69.676 65.439 43.135 27.450 68.298 9.554 54.781 52.679 76.135 53.828 40.268 85.816 60.219 81.152 38.168 +mAP: 79.586 97.681 63.645 64.353 92.521 76.479 75.256 60.858 39.894 67.064 
37.148 63.021 62.371 88.230 57.385 89.331 92.778 82.143 77.099 55.329 +mAcc: 95.497 98.810 74.447 74.077 95.567 88.303 89.364 51.515 29.813 81.875 9.628 80.321 60.795 80.608 59.098 42.305 89.743 62.738 82.085 47.589 + +thomas 04/12 08:13:22 201/312: Data time: 0.0029, Iter time: 0.7810 Loss 0.527 (AVG: 0.712) Score 87.755 (AVG: 85.446) mIOU 61.124 mAP 70.856 mAcc 69.569 +IOU: 78.022 96.034 56.044 75.057 88.939 73.245 69.290 44.984 25.290 69.538 8.117 55.708 53.299 73.756 45.069 46.458 86.929 59.167 75.255 42.284 +mAP: 78.266 97.562 63.193 69.755 89.547 77.731 76.534 61.609 41.807 70.168 28.921 62.503 63.029 84.457 53.999 87.833 92.754 82.501 79.856 55.086 +mAcc: 94.634 98.702 72.194 79.400 92.815 91.010 88.472 56.084 26.509 82.568 8.162 81.495 62.874 77.535 48.994 48.445 91.045 61.655 75.951 52.840 + +thomas 04/12 08:15:06 301/312: Data time: 0.0024, Iter time: 0.4414 Loss 0.371 (AVG: 0.732) Score 89.494 (AVG: 85.278) mIOU 61.151 mAP 70.491 mAcc 69.398 +IOU: 77.830 96.213 55.918 75.604 89.131 74.808 68.050 45.797 23.082 71.866 9.682 56.082 50.796 69.654 43.704 47.899 87.714 58.552 78.069 42.579 +mAP: 76.977 97.750 61.229 71.843 90.027 78.467 75.617 61.792 40.773 70.418 30.477 61.065 60.911 77.105 54.129 88.371 94.323 82.526 78.646 57.384 +mAcc: 94.886 98.707 69.918 80.708 92.934 91.357 88.656 56.389 23.980 84.876 9.845 79.397 59.033 73.896 47.789 49.977 91.986 61.163 78.831 53.632 + +thomas 04/12 08:15:19 312/312: Data time: 0.0026, Iter time: 0.1709 Loss 0.236 (AVG: 0.732) Score 93.312 (AVG: 85.365) mIOU 61.264 mAP 70.341 mAcc 69.541 +IOU: 77.958 96.255 56.691 76.317 89.182 74.238 68.127 46.264 22.542 72.202 9.418 56.277 50.847 68.022 43.085 50.312 87.964 58.351 78.436 42.796 +mAP: 76.971 97.782 61.858 71.514 90.216 78.859 74.807 62.449 40.483 71.056 29.228 60.572 61.873 76.921 54.357 88.308 94.493 81.593 76.335 57.152 +mAcc: 94.961 98.710 70.506 81.488 93.007 91.555 88.375 56.953 23.455 85.211 9.568 79.913 59.203 72.186 47.073 52.436 92.115 60.980 79.234 53.881 + 
+thomas 04/12 08:15:19 Finished test. Elapsed time: 332.2539 +thomas 04/12 08:15:19 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 08:19:10 ===> Epoch[310](93040/301): Loss 0.1562 LR: 2.609e-02 Score 94.596 Data time: 2.2190, Total iter time: 5.6995 +thomas 04/12 08:22:31 ===> Epoch[310](93080/301): Loss 0.1459 LR: 2.605e-02 Score 94.950 Data time: 1.9242, Total iter time: 4.9434 +thomas 04/12 08:26:16 ===> Epoch[310](93120/301): Loss 0.1485 LR: 2.602e-02 Score 94.940 Data time: 2.1619, Total iter time: 5.5331 +thomas 04/12 08:29:53 ===> Epoch[310](93160/301): Loss 0.1297 LR: 2.598e-02 Score 95.726 Data time: 2.1027, Total iter time: 5.3683 +thomas 04/12 08:33:17 ===> Epoch[310](93200/301): Loss 0.1414 LR: 2.595e-02 Score 95.130 Data time: 1.9603, Total iter time: 5.0245 +thomas 04/12 08:36:47 ===> Epoch[310](93240/301): Loss 0.1789 LR: 2.591e-02 Score 93.871 Data time: 2.0498, Total iter time: 5.1905 +thomas 04/12 08:40:33 ===> Epoch[310](93280/301): Loss 0.1787 LR: 2.588e-02 Score 94.109 Data time: 2.1965, Total iter time: 5.5620 +thomas 04/12 08:44:10 ===> Epoch[311](93320/301): Loss 0.1462 LR: 2.584e-02 Score 95.177 Data time: 2.0980, Total iter time: 5.3446 +thomas 04/12 08:47:26 ===> Epoch[311](93360/301): Loss 0.1563 LR: 2.581e-02 Score 94.782 Data time: 1.9166, Total iter time: 4.8490 +thomas 04/12 08:50:54 ===> Epoch[311](93400/301): Loss 0.1546 LR: 2.577e-02 Score 94.927 Data time: 2.0157, Total iter time: 5.1224 +thomas 04/12 08:54:21 ===> Epoch[311](93440/301): Loss 0.1478 LR: 2.574e-02 Score 95.124 Data time: 1.9787, Total iter time: 5.1077 +thomas 04/12 08:57:58 ===> Epoch[311](93480/301): Loss 0.1656 LR: 2.570e-02 Score 94.679 Data 
time: 2.0918, Total iter time: 5.3412 +thomas 04/12 09:01:28 ===> Epoch[311](93520/301): Loss 0.1631 LR: 2.567e-02 Score 94.642 Data time: 2.0228, Total iter time: 5.1615 +thomas 04/12 09:05:03 ===> Epoch[311](93560/301): Loss 0.1485 LR: 2.563e-02 Score 94.893 Data time: 2.0646, Total iter time: 5.3012 +thomas 04/12 09:08:40 ===> Epoch[311](93600/301): Loss 0.1510 LR: 2.560e-02 Score 94.928 Data time: 2.0928, Total iter time: 5.3588 +thomas 04/12 09:12:20 ===> Epoch[312](93640/301): Loss 0.1643 LR: 2.556e-02 Score 94.422 Data time: 2.1257, Total iter time: 5.4260 +thomas 04/12 09:15:49 ===> Epoch[312](93680/301): Loss 0.1663 LR: 2.553e-02 Score 94.502 Data time: 1.9843, Total iter time: 5.1437 +thomas 04/12 09:19:15 ===> Epoch[312](93720/301): Loss 0.1371 LR: 2.549e-02 Score 95.412 Data time: 1.9885, Total iter time: 5.0845 +thomas 04/12 09:22:45 ===> Epoch[312](93760/301): Loss 0.1394 LR: 2.546e-02 Score 95.293 Data time: 2.0174, Total iter time: 5.1992 +thomas 04/12 09:26:23 ===> Epoch[312](93800/301): Loss 0.1473 LR: 2.542e-02 Score 94.976 Data time: 2.0928, Total iter time: 5.3539 +thomas 04/12 09:29:46 ===> Epoch[312](93840/301): Loss 0.1482 LR: 2.539e-02 Score 95.045 Data time: 1.9598, Total iter time: 5.0025 +thomas 04/12 09:33:14 ===> Epoch[312](93880/301): Loss 0.1439 LR: 2.535e-02 Score 95.121 Data time: 1.9930, Total iter time: 5.1411 +thomas 04/12 09:36:43 ===> Epoch[313](93920/301): Loss 0.1434 LR: 2.532e-02 Score 95.201 Data time: 2.0050, Total iter time: 5.1381 +thomas 04/12 09:40:26 ===> Epoch[313](93960/301): Loss 0.1601 LR: 2.528e-02 Score 94.656 Data time: 2.1408, Total iter time: 5.5004 +thomas 04/12 09:44:08 ===> Epoch[313](94000/301): Loss 0.1432 LR: 2.525e-02 Score 95.175 Data time: 2.1276, Total iter time: 5.4672 +thomas 04/12 09:44:10 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 09:44:10 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 
09:45:55 101/312: Data time: 0.0026, Iter time: 0.4981 Loss 1.530 (AVG: 0.606) Score 71.377 (AVG: 86.765) mIOU 64.080 mAP 72.869 mAcc 73.115 +IOU: 80.199 95.748 63.587 77.993 89.253 77.417 69.111 44.037 23.183 75.536 17.430 64.088 50.740 71.832 59.416 55.908 84.461 53.111 85.834 42.708 +mAP: 80.235 95.495 72.269 78.885 88.096 77.658 71.628 60.367 47.148 76.606 41.001 56.651 66.879 78.728 61.509 79.219 92.340 81.457 95.153 56.060 +mAcc: 91.658 98.774 84.369 84.973 95.068 94.114 82.864 62.717 24.644 85.869 21.096 78.011 70.253 83.220 65.103 55.950 85.447 57.846 86.593 53.737 + +thomas 04/12 09:47:39 201/312: Data time: 0.0031, Iter time: 0.4990 Loss 0.325 (AVG: 0.619) Score 88.683 (AVG: 86.138) mIOU 62.733 mAP 72.244 mAcc 71.964 +IOU: 78.725 96.091 58.512 78.060 89.539 79.290 70.513 48.830 25.920 75.661 18.722 60.619 54.578 66.815 48.121 38.538 82.771 56.007 82.287 45.063 +mAP: 78.524 96.277 66.645 81.002 90.024 79.471 75.123 65.166 45.938 69.855 41.592 52.962 66.362 78.983 63.120 83.797 92.119 83.936 74.169 59.826 +mAcc: 90.978 98.729 80.457 86.535 94.812 94.278 85.378 66.383 27.538 85.679 22.813 73.068 70.549 79.144 56.228 39.823 83.619 60.082 85.270 57.912 + +thomas 04/12 09:49:27 301/312: Data time: 0.0029, Iter time: 0.7071 Loss 0.101 (AVG: 0.610) Score 96.657 (AVG: 85.982) mIOU 63.788 mAP 72.237 mAcc 73.109 +IOU: 79.043 96.114 56.037 76.463 89.135 77.912 68.658 49.241 28.726 74.968 21.162 57.743 54.783 73.118 48.458 51.321 82.357 57.937 87.198 45.388 +mAP: 78.261 95.972 61.668 74.647 90.246 81.880 72.483 64.978 46.186 69.492 42.157 54.969 61.873 82.240 64.677 87.875 90.229 85.121 82.064 57.722 +mAcc: 91.117 98.706 76.071 85.238 94.197 94.676 84.266 68.521 30.378 83.703 25.233 72.703 70.246 83.540 56.930 53.760 83.271 61.272 90.130 58.225 + +thomas 04/12 09:49:40 312/312: Data time: 0.0025, Iter time: 0.3021 Loss 0.190 (AVG: 0.618) Score 93.281 (AVG: 85.795) mIOU 63.521 mAP 72.133 mAcc 72.849 +IOU: 79.039 96.064 55.431 75.714 89.096 77.806 68.084 49.356 29.146 
74.372 18.849 57.791 54.454 73.033 46.805 51.321 82.926 58.420 87.198 45.513 +mAP: 78.219 95.909 61.107 74.090 90.137 81.599 72.267 65.176 46.576 69.492 42.572 55.520 61.558 82.240 63.261 87.875 90.269 84.442 82.064 58.278 +mAcc: 91.119 98.714 75.346 84.011 94.207 94.462 83.816 68.417 31.014 83.703 21.967 73.111 69.833 83.540 55.644 53.760 84.017 61.616 90.130 58.549 + +thomas 04/12 09:49:40 Finished test. Elapsed time: 330.0536 +thomas 04/12 09:49:40 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 09:53:10 ===> Epoch[313](94040/301): Loss 0.1483 LR: 2.521e-02 Score 95.169 Data time: 2.0371, Total iter time: 5.1714 +thomas 04/12 09:56:55 ===> Epoch[313](94080/301): Loss 0.1531 LR: 2.518e-02 Score 94.896 Data time: 2.1551, Total iter time: 5.5419 +thomas 04/12 10:00:24 ===> Epoch[313](94120/301): Loss 0.1700 LR: 2.514e-02 Score 94.346 Data time: 2.0186, Total iter time: 5.1642 +thomas 04/12 10:03:52 ===> Epoch[313](94160/301): Loss 0.1526 LR: 2.511e-02 Score 94.787 Data time: 2.0101, Total iter time: 5.1055 +thomas 04/12 10:07:28 ===> Epoch[313](94200/301): Loss 0.1572 LR: 2.507e-02 Score 94.722 Data time: 2.0892, Total iter time: 5.3215 +thomas 04/12 10:10:56 ===> Epoch[314](94240/301): Loss 0.1387 LR: 2.504e-02 Score 95.390 Data time: 2.0072, Total iter time: 5.1211 +thomas 04/12 10:14:37 ===> Epoch[314](94280/301): Loss 0.1540 LR: 2.500e-02 Score 94.884 Data time: 2.1377, Total iter time: 5.4537 +thomas 04/12 10:18:24 ===> Epoch[314](94320/301): Loss 0.1525 LR: 2.497e-02 Score 94.926 Data time: 2.2236, Total iter time: 5.6090 +thomas 04/12 10:22:12 ===> Epoch[314](94360/301): Loss 0.1419 LR: 2.493e-02 Score 95.210 Data time: 2.2136, Total iter 
time: 5.6143 +thomas 04/12 10:25:46 ===> Epoch[314](94400/301): Loss 0.1488 LR: 2.490e-02 Score 95.098 Data time: 2.0503, Total iter time: 5.2648 +thomas 04/12 10:29:22 ===> Epoch[314](94440/301): Loss 0.1416 LR: 2.486e-02 Score 95.346 Data time: 2.0897, Total iter time: 5.3208 +thomas 04/12 10:32:43 ===> Epoch[314](94480/301): Loss 0.1380 LR: 2.483e-02 Score 95.308 Data time: 1.9480, Total iter time: 4.9647 +thomas 04/12 10:36:36 ===> Epoch[315](94520/301): Loss 0.1482 LR: 2.479e-02 Score 94.968 Data time: 2.2958, Total iter time: 5.7463 +thomas 04/12 10:40:10 ===> Epoch[315](94560/301): Loss 0.1639 LR: 2.476e-02 Score 94.498 Data time: 2.0605, Total iter time: 5.2637 +thomas 04/12 10:43:44 ===> Epoch[315](94600/301): Loss 0.1509 LR: 2.472e-02 Score 95.044 Data time: 2.0593, Total iter time: 5.2880 +thomas 04/12 10:47:23 ===> Epoch[315](94640/301): Loss 0.1610 LR: 2.469e-02 Score 94.477 Data time: 2.1494, Total iter time: 5.4043 +thomas 04/12 10:51:13 ===> Epoch[315](94680/301): Loss 0.1379 LR: 2.465e-02 Score 95.397 Data time: 2.2275, Total iter time: 5.6723 +thomas 04/12 10:54:51 ===> Epoch[315](94720/301): Loss 0.1434 LR: 2.462e-02 Score 95.183 Data time: 2.1058, Total iter time: 5.3694 +thomas 04/12 10:58:22 ===> Epoch[315](94760/301): Loss 0.1422 LR: 2.458e-02 Score 95.138 Data time: 2.0286, Total iter time: 5.1907 +thomas 04/12 11:02:00 ===> Epoch[315](94800/301): Loss 0.1338 LR: 2.455e-02 Score 95.520 Data time: 2.1061, Total iter time: 5.3768 +thomas 04/12 11:05:43 ===> Epoch[316](94840/301): Loss 0.1458 LR: 2.451e-02 Score 95.162 Data time: 2.2112, Total iter time: 5.5046 +thomas 04/12 11:09:18 ===> Epoch[316](94880/301): Loss 0.1394 LR: 2.448e-02 Score 95.231 Data time: 2.0560, Total iter time: 5.2913 +thomas 04/12 11:12:47 ===> Epoch[316](94920/301): Loss 0.1355 LR: 2.444e-02 Score 95.302 Data time: 2.0312, Total iter time: 5.1566 +thomas 04/12 11:16:26 ===> Epoch[316](94960/301): Loss 0.1476 LR: 2.441e-02 Score 95.029 Data time: 2.0941, Total iter 
time: 5.3985 +thomas 04/12 11:19:53 ===> Epoch[316](95000/301): Loss 0.1450 LR: 2.437e-02 Score 95.199 Data time: 1.9578, Total iter time: 5.1094 +thomas 04/12 11:19:55 Checkpoint saved to ./outputs/ScanNet-default/2020-04-05_08-43-59/checkpoint_NoneRes16UNet34C.pth +thomas 04/12 11:19:55 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/12 11:21:47 101/312: Data time: 0.0027, Iter time: 0.5435 Loss 0.573 (AVG: 0.631) Score 84.120 (AVG: 85.508) mIOU 62.009 mAP 69.912 mAcc 71.615 +IOU: 77.097 96.101 59.345 72.161 89.380 70.878 73.893 49.467 35.878 72.321 10.312 62.089 65.382 62.103 35.727 54.556 87.501 45.522 85.957 34.517 +mAP: 78.130 97.496 59.661 68.529 91.947 80.139 74.268 65.959 47.455 67.267 28.438 44.357 64.394 83.666 55.665 85.989 87.172 73.501 87.569 56.642 +mAcc: 88.578 98.947 75.404 76.812 92.709 95.195 87.334 73.711 38.262 92.197 11.951 75.863 76.378 81.745 42.624 57.854 88.314 46.934 87.674 43.819 + +thomas 04/12 11:23:34 201/312: Data time: 0.0024, Iter time: 0.2636 Loss 0.148 (AVG: 0.653) Score 95.419 (AVG: 85.012) mIOU 60.720 mAP 69.834 mAcc 70.536 +IOU: 77.267 96.176 56.774 71.401 89.293 72.949 70.070 45.809 33.283 70.539 10.090 57.909 58.555 63.480 37.898 41.055 86.148 50.501 82.370 42.826 +mAP: 77.883 97.970 57.906 71.590 90.921 81.645 71.489 63.450 47.992 69.319 31.374 54.422 59.599 77.888 51.078 82.167 88.171 78.580 83.226 60.011 +mAcc: 88.422 98.819 75.537 75.894 92.816 95.939 86.357 71.628 36.066 91.835 12.182 70.854 70.792 79.291 42.870 44.707 87.276 52.088 83.804 53.544 + +thomas 04/12 11:25:13 301/312: Data time: 0.0029, Iter time: 0.5490 Loss 0.321 (AVG: 0.633) Score 91.109 (AVG: 85.266) mIOU 61.146 mAP 70.317 mAcc 70.837 +IOU: 77.201 96.134 55.123 72.055 90.222 71.154 72.152 44.468 34.481 75.044 10.714 56.639 58.014 68.648 40.671 38.113 86.710 49.439 81.165 44.768 +mAP: 78.736 97.564 56.506 70.365 91.787 82.360 73.433 61.499 48.420 70.134 33.019 56.088 61.790 81.460 53.959 81.913 90.075 79.046 79.756 
58.436 +mAcc: 88.375 98.799 73.673 76.860 93.693 96.652 87.002 71.097 37.264 91.238 12.646 71.269 70.671 85.594 45.143 39.976 87.759 51.021 82.582 55.426 + +thomas 04/12 11:25:26 312/312: Data time: 0.0030, Iter time: 1.1892 Loss 0.616 (AVG: 0.636) Score 86.992 (AVG: 85.213) mIOU 61.281 mAP 70.442 mAcc 71.016 +IOU: 77.141 96.175 54.945 72.373 90.225 71.566 71.972 44.449 34.617 74.284 10.371 57.415 58.655 67.362 39.893 39.460 86.735 50.580 81.633 45.770 +mAP: 78.718 97.614 56.677 71.551 91.732 82.498 72.670 61.705 48.608 71.283 33.469 55.805 62.090 81.500 52.798 82.582 90.015 78.661 80.392 58.470 +mAcc: 88.433 98.813 74.234 77.370 93.768 96.718 86.829 70.127 37.471 91.126 12.356 72.223 70.998 85.530 44.215 41.345 87.759 52.182 83.037 55.781 + +thomas 04/12 11:25:26 Finished test. Elapsed time: 331.6059 +thomas 04/12 11:25:26 Current best mIoU: 64.095 at iter 86000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/12 11:28:43 ===> Epoch[316](95040/301): Loss 0.1375 LR: 2.434e-02 Score 95.428 Data time: 1.9131, Total iter time: 4.8573 +thomas 04/12 11:32:38 ===> Epoch[316](95080/301): Loss 0.1512 LR: 2.430e-02 Score 94.931 Data time: 2.2681, Total iter time: 5.7911 +thomas 04/12 11:36:07 ===> Epoch[317](95120/301): Loss 0.1528 LR: 2.427e-02 Score 94.713 Data time: 2.0007, Total iter time: 5.1522 +thomas 04/12 11:39:42 ===> Epoch[317](95160/301): Loss 0.1581 LR: 2.423e-02 Score 94.682 Data time: 2.0849, Total iter time: 5.2940 +thomas 04/12 11:43:03 ===> Epoch[317](95200/301): Loss 0.1501 LR: 2.420e-02 Score 95.291 Data time: 1.9539, Total iter time: 4.9789 +thomas 04/12 11:46:29 ===> Epoch[317](95240/301): Loss 0.1458 LR: 2.416e-02 Score 95.363 Data time: 2.0192, Total iter time: 5.0625 +thomas 04/12 11:50:15 ===> Epoch[317](95280/301): Loss 0.1259 LR: 2.413e-02 Score 95.620 Data time: 2.2096, Total iter time: 5.5728 +thomas 04/12 11:54:12 ===> Epoch[317](95320/301): Loss 0.1654 LR: 2.409e-02 Score 94.510 Data time: 2.3292, Total iter time: 5.8637 +thomas 04/12 11:57:54 ===> Epoch[317](95360/301): Loss 0.1428 LR: 2.406e-02 Score 95.158 Data time: 2.1981, Total iter time: 5.4570 +thomas 04/12 12:01:24 ===> Epoch[317](95400/301): Loss 0.1370 LR: 2.402e-02 Score 95.370 Data time: 2.0706, Total iter time: 5.1978 +thomas 04/12 12:04:59 ===> Epoch[318](95440/301): Loss 0.1340 LR: 2.399e-02 Score 95.438 Data time: 2.0639, Total iter time: 5.2779 +thomas 04/12 12:08:46 ===> Epoch[318](95480/301): Loss 0.1505 LR: 2.395e-02 Score 94.898 Data time: 2.1872, Total iter time: 5.6098 +thomas 04/12 12:12:04 ===> Epoch[318](95520/301): Loss 0.1408 LR: 2.392e-02 Score 95.212 Data time: 1.9299, Total iter time: 4.8951 +thomas 04/12 12:15:32 ===> Epoch[318](95560/301): Loss 0.1450 LR: 2.388e-02 Score 94.943 Data time: 2.0057, Total iter time: 5.1382 +thomas 04/12 12:19:17 ===> Epoch[318](95600/301): Loss 
0.1374 LR: 2.385e-02 Score 95.400 Data time: 2.1784, Total iter time: 5.5522 +thomas 04/12 12:22:55 ===> Epoch[318](95640/301): Loss 0.1502 LR: 2.381e-02 Score 95.027 Data time: 2.0950, Total iter time: 5.3785 +``` \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/scannet5cmRes16UNet34C.txt b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/scannet5cmRes16UNet34C.txt new file mode 100644 index 00000000..0ae61142 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/SpatioTemporalSegmentation/scannet5cmRes16UNet34C.txt @@ -0,0 +1,1602 @@ +``` +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 16:27:17 ===> Configurations +thomas 04/09 16:27:17 model: Res16UNet34C +thomas 04/09 16:27:17 conv1_kernel_size: 3 +thomas 04/09 16:27:17 weights: None +thomas 04/09 16:27:17 weights_for_inner_model: False +thomas 04/09 16:27:17 dilations: [1, 1, 1, 1] +thomas 04/09 16:27:17 wrapper_type: None +thomas 04/09 16:27:17 wrapper_region_type: 1 +thomas 04/09 16:27:17 wrapper_kernel_size: 3 +thomas 04/09 16:27:17 wrapper_lr: 0.1 +thomas 04/09 16:27:17 meanfield_iterations: 10 +thomas 04/09 16:27:17 crf_spatial_sigma: 1 +thomas 04/09 16:27:17 crf_chromatic_sigma: 12 +thomas 04/09 16:27:17 optimizer: SGD +thomas 04/09 16:27:17 lr: 0.1 +thomas 04/09 16:27:17 sgd_momentum: 0.9 +thomas 04/09 16:27:17 sgd_dampening: 0.1 +thomas 04/09 16:27:17 adam_beta1: 0.9 +thomas 04/09 16:27:17 adam_beta2: 0.999 +thomas 04/09 16:27:17 weight_decay: 0.0001 +thomas 04/09 16:27:17 param_histogram_freq: 100 +thomas 04/09 16:27:17 save_param_histogram: False +thomas 04/09 16:27:17 iter_size: 1 +thomas 04/09 16:27:17 bn_momentum: 0.02 +thomas 04/09 16:27:17 scheduler: PolyLR +thomas 04/09 16:27:17 max_iter: 120000 +thomas 04/09 16:27:17 step_size: 20000.0 +thomas 04/09 16:27:17 step_gamma: 0.1 +thomas 04/09 16:27:17 
poly_power: 0.9 +thomas 04/09 16:27:17 exp_gamma: 0.95 +thomas 04/09 16:27:17 exp_step_size: 445 +thomas 04/09 16:27:17 log_dir: ./outputs/ScanNet/2020-04-09_16-27-14 +thomas 04/09 16:27:17 data_dir: data +thomas 04/09 16:27:17 dataset: ScannetVoxelizationDataset +thomas 04/09 16:27:17 temporal_dilation: 30 +thomas 04/09 16:27:17 temporal_numseq: 3 +thomas 04/09 16:27:17 point_lim: -1 +thomas 04/09 16:27:17 pre_point_lim: -1 +thomas 04/09 16:27:17 batch_size: 8 +thomas 04/09 16:27:17 val_batch_size: 1 +thomas 04/09 16:27:17 test_batch_size: 1 +thomas 04/09 16:27:17 cache_data: False +thomas 04/09 16:27:17 num_workers: 4 +thomas 04/09 16:27:17 num_val_workers: 1 +thomas 04/09 16:27:17 ignore_label: 255 +thomas 04/09 16:27:17 return_transformation: False +thomas 04/09 16:27:17 ignore_duplicate_class: False +thomas 04/09 16:27:17 partial_crop: 0.0 +thomas 04/09 16:27:17 train_limit_numpoints: 120000000 +thomas 04/09 16:27:17 synthia_path: /home/chrischoy/datasets/Synthia/Synthia4D +thomas 04/09 16:27:17 synthia_camera_path: /home/chrischoy/datasets/Synthia/%s/CameraParams/ +thomas 04/09 16:27:17 synthia_camera_intrinsic_file: intrinsics.txt +thomas 04/09 16:27:17 synthia_camera_extrinsics_file: Stereo_Right/Omni_F/%s.txt +thomas 04/09 16:27:17 temporal_rand_dilation: False +thomas 04/09 16:27:17 temporal_rand_numseq: False +thomas 04/09 16:27:17 scannet_path: /home/tcn02/SpatioTemporalSegmentation/data/scannet/processed/train +thomas 04/09 16:27:17 stanford3d_path: /home/chrischoy/datasets/Stanford3D +thomas 04/09 16:27:17 is_train: True +thomas 04/09 16:27:17 stat_freq: 40 +thomas 04/09 16:27:17 test_stat_freq: 100 +thomas 04/09 16:27:17 save_freq: 1000 +thomas 04/09 16:27:17 val_freq: 1000 +thomas 04/09 16:27:17 empty_cache_freq: 1 +thomas 04/09 16:27:17 train_phase: train +thomas 04/09 16:27:17 val_phase: val +thomas 04/09 16:27:17 overwrite_weights: True +thomas 04/09 16:27:17 resume: None +thomas 04/09 16:27:17 resume_optimizer: True +thomas 04/09 16:27:17 
eval_upsample: False +thomas 04/09 16:27:17 lenient_weight_loading: False +thomas 04/09 16:27:17 use_feat_aug: True +thomas 04/09 16:27:17 data_aug_color_trans_ratio: 0.1 +thomas 04/09 16:27:17 data_aug_color_jitter_std: 0.05 +thomas 04/09 16:27:17 normalize_color: True +thomas 04/09 16:27:17 data_aug_scale_min: 0.9 +thomas 04/09 16:27:17 data_aug_scale_max: 1.1 +thomas 04/09 16:27:17 data_aug_hue_max: 0.5 +thomas 04/09 16:27:17 data_aug_saturation_max: 0.2 +thomas 04/09 16:27:17 visualize: False +thomas 04/09 16:27:17 test_temporal_average: False +thomas 04/09 16:27:17 visualize_path: outputs/visualize +thomas 04/09 16:27:17 save_prediction: False +thomas 04/09 16:27:17 save_pred_dir: outputs/pred +thomas 04/09 16:27:17 test_phase: test +thomas 04/09 16:27:17 evaluate_original_pointcloud: False +thomas 04/09 16:27:17 test_original_pointcloud: False +thomas 04/09 16:27:17 is_cuda: True +thomas 04/09 16:27:17 load_path: +thomas 04/09 16:27:17 log_step: 50 +thomas 04/09 16:27:17 log_level: INFO +thomas 04/09 16:27:17 num_gpu: 1 +thomas 04/09 16:27:17 seed: 123 +thomas 04/09 16:27:17 ===> Initializing dataloader +thomas 04/09 16:27:17 Loading ScannetVoxelizationDataset: scannetv2_train.txt +thomas 04/09 16:27:17 Loading ScannetVoxelizationDataset: scannetv2_val.txt +thomas 04/09 16:27:17 ===> Building model +thomas 04/09 16:27:17 ===> Number of trainable parameters: Res16UNet34C: 37846644 +thomas 04/09 16:27:17 Res16UNet34C( + (conv0p1s1): MinkowskiConvolution(in=3, out=32, region_type=RegionType.HYPERCUBE, kernel_size=[3, 3, 3], stride=[1, 1, 1], dilation=[1, 1, 1]) + (bn0): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (conv1p1s2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn1): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block1): Sequential( + (0): BasicBlock( + (conv1): 
MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (conv2p2s2): MinkowskiConvolution(in=32, out=32, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn2): MinkowskiBatchNorm(32, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block2): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=32, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=32, out=64, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.02, 
affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (2): BasicBlock( + (conv1): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (conv3p4s2): MinkowskiConvolution(in=64, out=64, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn3): MinkowskiBatchNorm(64, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block3): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=64, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=64, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 
1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (2): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (3): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (conv4p8s2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bn4): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block4): Sequential( + (0): BasicBlock( + (conv1): 
MinkowskiConvolution(in=128, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=128, out=256, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (2): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (3): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (4): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + (5): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr4p16s2): MinkowskiConvolutionTranspose(in=256, out=256, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr4): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block5): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=384, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, 
stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=384, out=256, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=256, out=256, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr5p8s2): MinkowskiConvolutionTranspose(in=256, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr5): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block6): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=192, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=192, out=128, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.02, affine=True, 
track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=128, out=128, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr6p4s2): MinkowskiConvolutionTranspose(in=128, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr6): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block7): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 
1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (convtr7p2s2): MinkowskiConvolutionTranspose(in=96, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[2, 2, 2], stride=[2, 2, 2], dilation=[1, 1, 1]) + (bntr7): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + (block8): Sequential( + (0): BasicBlock( + (conv1): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + (downsample): Sequential( + (0): MinkowskiConvolution(in=128, out=96, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.02, affine=True, track_running_stats=True) + ) + ) + (1): BasicBlock( + (conv1): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm1): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (conv2): MinkowskiConvolution(in=96, out=96, region_type=RegionType.HYBRID, kernel_volume=27, stride=[1, 1, 1], dilation=[1, 1, 1]) + (norm2): MinkowskiBatchNorm(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (relu): MinkowskiReLU() + ) + ) + (final): MinkowskiConvolution(in=96, out=20, region_type=RegionType.HYPERCUBE, kernel_size=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1]) + (relu): MinkowskiReLU() +) +thomas 04/09 16:27:20 ===> Start training +/home/tcn02/SpatioTemporalSegmentation/lib/datasets 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 16:27:30 ===> Epoch[1](1/151): Loss 3.0608 LR: 1.000e-01 Score 5.415 Data time: 6.7538, Total iter time: 9.0705 +thomas 04/09 16:29:18 ===> Epoch[1](40/151): Loss 1.6836 LR: 9.997e-02 Score 57.488 Data time: 0.2522, Total iter time: 2.7288 +thomas 04/09 16:31:03 ===> Epoch[1](80/151): Loss 1.3631 LR: 9.994e-02 Score 63.212 Data time: 0.2210, Total iter time: 2.5687 +thomas 04/09 16:32:51 ===> Epoch[1](120/151): Loss 1.2555 LR: 9.991e-02 Score 65.215 Data time: 0.2471, Total iter time: 2.6253 +thomas 04/09 16:34:39 ===> Epoch[2](160/151): Loss 1.2095 LR: 9.988e-02 Score 65.911 Data time: 0.2292, Total iter time: 2.6691 +thomas 04/09 16:36:27 ===> Epoch[2](200/151): Loss 1.1866 LR: 9.985e-02 Score 65.864 Data time: 0.2383, Total iter time: 2.6325 +thomas 04/09 16:38:10 ===> Epoch[2](240/151): Loss 1.1562 LR: 9.982e-02 Score 67.593 Data time: 0.2216, Total iter time: 2.5219 +thomas 04/09 16:39:59 ===> Epoch[2](280/151): Loss 1.1593 LR: 9.979e-02 Score 67.024 Data time: 0.2331, Total iter time: 2.6797 +thomas 04/09 16:41:44 ===> Epoch[3](320/151): Loss 1.1379 LR: 9.976e-02 Score 66.245 Data time: 0.2334, Total iter time: 2.5691 +thomas 04/09 16:43:26 ===> Epoch[3](360/151): Loss 1.0875 LR: 9.973e-02 Score 67.858 Data time: 0.2220, Total iter time: 2.4861 +thomas 04/09 16:45:11 ===> Epoch[3](400/151): Loss 1.0406 LR: 9.970e-02 Score 69.480 Data time: 0.2273, Total iter time: 2.5631 +thomas 04/09 16:46:54 ===> Epoch[3](440/151): Loss 1.1035 LR: 9.967e-02 Score 67.746 Data time: 0.2403, 
Total iter time: 2.5240 +thomas 04/09 16:48:37 ===> Epoch[4](480/151): Loss 1.0934 LR: 9.964e-02 Score 67.677 Data time: 0.2405, Total iter time: 2.5050 +thomas 04/09 16:50:18 ===> Epoch[4](520/151): Loss 1.0448 LR: 9.961e-02 Score 68.949 Data time: 0.2138, Total iter time: 2.4852 +thomas 04/09 16:51:59 ===> Epoch[4](560/151): Loss 0.9993 LR: 9.958e-02 Score 70.001 Data time: 0.2300, Total iter time: 2.4794 +thomas 04/09 16:53:41 ===> Epoch[4](600/151): Loss 0.9702 LR: 9.955e-02 Score 71.358 Data time: 0.2375, Total iter time: 2.4831 +thomas 04/09 16:55:24 ===> Epoch[5](640/151): Loss 1.0020 LR: 9.952e-02 Score 69.651 Data time: 0.2266, Total iter time: 2.5081 +thomas 04/09 16:57:06 ===> Epoch[5](680/151): Loss 0.9361 LR: 9.949e-02 Score 72.049 Data time: 0.2341, Total iter time: 2.5108 +thomas 04/09 16:58:46 ===> Epoch[5](720/151): Loss 0.9645 LR: 9.946e-02 Score 70.988 Data time: 0.2400, Total iter time: 2.4440 +thomas 04/09 17:00:30 ===> Epoch[6](760/151): Loss 0.9414 LR: 9.943e-02 Score 71.333 Data time: 0.2486, Total iter time: 2.5273 +thomas 04/09 17:02:10 ===> Epoch[6](800/151): Loss 0.9005 LR: 9.940e-02 Score 72.731 Data time: 0.2254, Total iter time: 2.4628 +thomas 04/09 17:03:49 ===> Epoch[6](840/151): Loss 0.9013 LR: 9.937e-02 Score 72.816 Data time: 0.2193, Total iter time: 2.4278 +thomas 04/09 17:05:30 ===> Epoch[6](880/151): Loss 0.9540 LR: 9.934e-02 Score 71.038 Data time: 0.2493, Total iter time: 2.4647 +thomas 04/09 17:07:17 ===> Epoch[7](920/151): Loss 0.8273 LR: 9.931e-02 Score 74.257 Data time: 0.2275, Total iter time: 2.6120 +thomas 04/09 17:08:58 ===> Epoch[7](960/151): Loss 0.8765 LR: 9.928e-02 Score 73.439 Data time: 0.2456, Total iter time: 2.4567 +thomas 04/09 17:10:40 ===> Epoch[7](1000/151): Loss 0.8388 LR: 9.925e-02 Score 74.005 Data time: 0.2180, Total iter time: 2.5059 +thomas 04/09 17:10:40 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 17:10:40 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 17:11:22 101/312: Data time: 0.0021, Iter time: 0.2311 Loss 0.839 (AVG: 1.055) Score 76.579 (AVG: 69.568) mIOU 21.556 mAP 42.482 mAcc 30.756 +IOU: 63.230 95.480 29.143 40.481 64.270 32.700 46.326 13.472 0.043 0.888 0.000 14.125 4.874 8.821 0.000 0.000 0.000 0.000 0.000 17.264 +mAP: 66.449 96.138 35.376 72.164 76.174 64.424 57.354 26.044 23.054 29.999 9.684 40.333 40.464 38.649 21.100 28.490 44.340 32.327 25.479 21.590 +mAcc: 83.512 98.503 42.385 77.754 77.951 88.452 51.326 24.989 0.043 0.894 0.000 18.856 4.960 13.747 0.000 0.000 0.000 0.000 0.000 31.738 + +thomas 04/09 17:12:06 201/312: Data time: 0.0022, Iter time: 0.2375 Loss 1.199 (AVG: 1.088) Score 67.008 (AVG: 68.793) mIOU 21.353 mAP 41.190 mAcc 30.699 +IOU: 62.628 96.076 28.692 36.119 64.472 31.791 48.487 16.661 0.037 0.970 0.000 11.424 4.860 12.082 0.000 0.000 0.000 0.000 0.000 12.753 +mAP: 66.078 95.896 38.982 68.848 76.492 61.957 56.778 28.575 20.620 29.205 9.945 36.450 36.877 36.080 16.936 34.962 37.635 27.515 24.198 19.765 +mAcc: 82.667 98.732 43.007 75.293 78.943 87.294 53.460 31.532 0.037 0.975 0.000 14.719 4.997 20.366 0.000 0.000 0.000 0.000 0.000 21.967 + +thomas 04/09 17:12:46 301/312: Data time: 0.0026, Iter time: 0.3853 Loss 0.864 (AVG: 1.070) Score 76.138 (AVG: 69.240) mIOU 21.682 mAP 42.526 mAcc 31.114 +IOU: 63.219 95.885 32.489 32.123 64.483 30.481 49.071 17.019 0.026 0.959 0.000 18.421 5.537 11.257 0.000 0.000 0.000 0.000 0.000 12.666 +mAP: 67.570 96.175 41.346 64.802 76.476 61.610 59.172 27.914 21.989 28.689 11.031 43.706 39.248 36.889 20.448 38.395 38.330 28.314 28.517 19.898 +mAcc: 82.985 98.614 46.473 72.092 78.098 88.757 53.803 32.066 0.026 0.964 0.000 21.665 5.674 18.854 0.000 0.000 0.000 0.000 0.000 22.217 + +thomas 04/09 17:12:51 312/312: Data time: 0.0023, Iter time: 0.3222 Loss 1.357 (AVG: 1.073) Score 58.057 (AVG: 69.181) mIOU 21.752 mAP 42.205 mAcc 31.108 +IOU: 63.172 95.895 32.824 31.865 64.919 30.998 49.459 
16.837 0.025 0.959 0.000 18.730 5.426 11.396 0.000 0.000 0.000 0.000 0.000 12.531 +mAP: 67.607 96.238 42.042 62.540 76.565 61.639 58.875 28.062 21.889 28.689 10.885 42.984 38.433 36.616 20.695 36.569 37.904 28.438 27.706 19.726 +mAcc: 82.884 98.631 46.656 71.188 78.505 88.911 54.151 31.682 0.025 0.964 0.000 21.986 5.555 18.750 0.000 0.000 0.000 0.000 0.000 22.276 + +thomas 04/09 17:12:51 Finished test. Elapsed time: 130.5012 +thomas 04/09 17:12:51 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 17:12:51 Current best mIoU: 21.752 at iter 1000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 17:14:29 ===> Epoch[7](1040/151): Loss 0.8600 LR: 9.922e-02 Score 73.501 Data time: 0.2353, Total iter time: 2.3995 +thomas 04/09 17:16:11 ===> Epoch[8](1080/151): Loss 0.8592 LR: 9.919e-02 Score 73.802 Data time: 0.2271, Total iter time: 2.4920 +thomas 04/09 17:17:55 ===> Epoch[8](1120/151): Loss 0.8133 LR: 9.916e-02 Score 74.354 Data time: 0.2115, Total iter time: 2.5319 +thomas 04/09 17:19:35 ===> Epoch[8](1160/151): Loss 0.8096 LR: 9.913e-02 Score 74.978 Data time: 0.2294, Total iter time: 2.4463 +thomas 04/09 17:21:18 ===> Epoch[8](1200/151): Loss 0.8081 LR: 9.910e-02 Score 75.334 Data time: 0.2225, Total iter time: 2.5132 +thomas 04/09 17:22:56 ===> Epoch[9](1240/151): Loss 0.8345 LR: 9.907e-02 Score 74.246 Data time: 0.2182, Total iter time: 2.3973 +thomas 04/09 17:24:39 ===> Epoch[9](1280/151): Loss 0.7724 LR: 9.904e-02 Score 75.696 Data time: 0.2219, Total iter time: 2.5164 +thomas 04/09 17:26:21 ===> Epoch[9](1320/151): Loss 0.7874 LR: 9.901e-02 Score 75.842 Data time: 0.2342, Total iter time: 2.4900 +thomas 04/09 17:28:09 ===> 
Epoch[10](1360/151): Loss 0.7665 LR: 9.898e-02 Score 76.181 Data time: 0.2351, Total iter time: 2.6441 +thomas 04/09 17:29:54 ===> Epoch[10](1400/151): Loss 0.7755 LR: 9.895e-02 Score 75.801 Data time: 0.2331, Total iter time: 2.5628 +thomas 04/09 17:31:37 ===> Epoch[10](1440/151): Loss 0.7671 LR: 9.892e-02 Score 75.929 Data time: 0.2365, Total iter time: 2.5195 +thomas 04/09 17:33:21 ===> Epoch[10](1480/151): Loss 0.7461 LR: 9.889e-02 Score 76.996 Data time: 0.2245, Total iter time: 2.5501 +thomas 04/09 17:35:05 ===> Epoch[11](1520/151): Loss 0.7371 LR: 9.886e-02 Score 77.407 Data time: 0.2503, Total iter time: 2.5410 +thomas 04/09 17:36:51 ===> Epoch[11](1560/151): Loss 0.7208 LR: 9.883e-02 Score 77.265 Data time: 0.2530, Total iter time: 2.5831 +thomas 04/09 17:38:40 ===> Epoch[11](1600/151): Loss 0.7237 LR: 9.880e-02 Score 77.434 Data time: 0.2372, Total iter time: 2.6689 +thomas 04/09 17:40:19 ===> Epoch[11](1640/151): Loss 0.7360 LR: 9.877e-02 Score 77.984 Data time: 0.2461, Total iter time: 2.4291 +thomas 04/09 17:42:05 ===> Epoch[12](1680/151): Loss 0.6632 LR: 9.874e-02 Score 79.137 Data time: 0.2441, Total iter time: 2.5891 +thomas 04/09 17:43:47 ===> Epoch[12](1720/151): Loss 0.6913 LR: 9.871e-02 Score 78.510 Data time: 0.2325, Total iter time: 2.4762 +thomas 04/09 17:45:26 ===> Epoch[12](1760/151): Loss 0.7127 LR: 9.868e-02 Score 77.780 Data time: 0.2329, Total iter time: 2.4476 +thomas 04/09 17:47:10 ===> Epoch[12](1800/151): Loss 0.7244 LR: 9.865e-02 Score 77.142 Data time: 0.2432, Total iter time: 2.5184 +thomas 04/09 17:48:51 ===> Epoch[13](1840/151): Loss 0.6606 LR: 9.862e-02 Score 78.887 Data time: 0.2211, Total iter time: 2.4749 +thomas 04/09 17:50:38 ===> Epoch[13](1880/151): Loss 0.6714 LR: 9.859e-02 Score 79.286 Data time: 0.2185, Total iter time: 2.6157 +thomas 04/09 17:52:17 ===> Epoch[13](1920/151): Loss 0.6962 LR: 9.856e-02 Score 78.114 Data time: 0.2148, Total iter time: 2.4268 +thomas 04/09 17:53:56 ===> Epoch[13](1960/151): Loss 0.6848 
LR: 9.853e-02 Score 78.278 Data time: 0.2252, Total iter time: 2.4172 +thomas 04/09 17:55:35 ===> Epoch[14](2000/151): Loss 0.6431 LR: 9.850e-02 Score 79.816 Data time: 0.2379, Total iter time: 2.4075 +thomas 04/09 17:55:36 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 17:55:36 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 17:56:23 101/312: Data time: 0.0035, Iter time: 0.4126 Loss 1.318 (AVG: 0.887) Score 60.403 (AVG: 69.392) mIOU 30.832 mAP 54.122 mAcc 41.013 +IOU: 52.430 96.729 36.155 48.075 75.854 69.193 60.414 18.877 20.106 25.242 1.341 36.931 26.078 11.945 10.687 0.458 6.979 0.482 8.639 10.021 +mAP: 69.186 96.227 52.700 57.395 80.818 80.949 62.721 33.133 34.342 36.765 13.971 52.969 45.484 50.458 45.695 51.177 62.607 56.380 74.008 25.453 +mAcc: 60.123 98.974 72.478 66.791 91.622 77.009 65.265 72.824 22.863 29.569 1.854 51.119 44.144 23.765 12.008 0.460 6.983 0.482 8.639 13.284 + +thomas 04/09 17:57:01 201/312: Data time: 0.0025, Iter time: 0.2519 Loss 0.690 (AVG: 0.892) Score 79.169 (AVG: 69.278) mIOU 31.248 mAP 54.157 mAcc 41.581 +IOU: 52.093 96.426 34.139 56.161 72.345 68.769 58.631 19.085 18.553 24.268 1.505 37.528 26.421 13.248 14.936 2.060 7.618 0.480 8.715 11.983 +mAP: 69.388 96.447 51.889 63.276 81.277 78.332 65.626 30.812 32.124 39.192 13.455 50.318 47.492 46.960 44.855 49.522 72.517 58.837 64.874 25.944 +mAcc: 59.439 98.774 71.454 73.230 89.792 77.940 63.052 75.801 20.903 26.855 1.892 52.343 41.587 26.317 16.517 2.063 7.626 0.480 8.716 16.850 + +thomas 04/09 17:57:43 301/312: Data time: 0.0022, Iter time: 0.1855 Loss 0.104 (AVG: 0.924) Score 96.852 (AVG: 68.424) mIOU 31.099 mAP 53.276 mAcc 41.545 +IOU: 51.978 96.438 34.697 53.838 71.216 65.064 54.393 18.934 18.635 24.024 2.389 36.424 29.261 14.413 13.453 1.553 11.548 0.436 10.048 13.232 +mAP: 68.370 96.187 50.775 63.467 79.821 73.196 61.571 29.092 30.663 42.843 13.171 49.629 47.174 46.117 38.342 48.713 
72.976 58.428 68.116 26.867 +mAcc: 59.721 98.819 72.898 71.408 88.832 74.888 60.229 75.570 20.882 26.109 2.783 49.903 43.573 28.066 14.769 1.560 11.555 0.436 10.049 18.859 + +thomas 04/09 17:57:47 312/312: Data time: 0.0025, Iter time: 0.1939 Loss 0.765 (AVG: 0.924) Score 75.016 (AVG: 68.352) mIOU 31.058 mAP 53.263 mAcc 41.494 +IOU: 51.897 96.471 34.226 54.587 70.796 65.511 54.124 18.724 18.294 24.115 2.325 36.090 29.222 14.300 13.443 1.553 12.002 0.426 10.048 12.998 +mAP: 68.080 96.219 50.317 64.251 79.730 72.711 61.660 28.844 30.341 43.398 13.092 48.658 47.267 46.420 38.342 48.713 73.835 58.804 68.116 26.457 +mAcc: 59.678 98.837 72.177 72.147 88.759 75.394 59.884 75.213 20.457 26.123 2.699 49.771 43.490 27.699 14.769 1.560 12.010 0.426 10.049 18.739 + +thomas 04/09 17:57:47 Finished test. Elapsed time: 130.8309 +thomas 04/09 17:57:49 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 17:57:49 Current best mIoU: 31.058 at iter 2000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 17:59:29 ===> Epoch[14](2040/151): Loss 0.7056 LR: 9.847e-02 Score 77.892 Data time: 0.2405, Total iter time: 2.4592 +thomas 04/09 18:01:11 ===> Epoch[14](2080/151): Loss 0.6493 LR: 9.844e-02 Score 79.299 Data time: 0.2430, Total iter time: 2.4867 +thomas 04/09 18:02:54 ===> Epoch[15](2120/151): Loss 0.6329 LR: 9.841e-02 Score 79.900 Data time: 0.2352, Total iter time: 2.5141 +thomas 04/09 18:04:38 ===> Epoch[15](2160/151): Loss 0.6406 LR: 9.838e-02 Score 79.794 Data time: 0.2277, Total iter time: 2.5556 +thomas 04/09 18:06:23 ===> Epoch[15](2200/151): Loss 0.6555 LR: 9.835e-02 Score 79.493 Data time: 0.2263, Total iter time: 2.5677 +thomas 04/09 18:08:05 ===> Epoch[15](2240/151): Loss 0.6474 LR: 9.832e-02 Score 79.586 Data time: 0.2438, Total iter time: 2.4865 +thomas 04/09 18:09:47 ===> Epoch[16](2280/151): Loss 0.6040 LR: 9.829e-02 Score 80.693 Data time: 0.2401, Total iter time: 2.5033 +thomas 04/09 18:11:30 ===> Epoch[16](2320/151): Loss 0.6390 LR: 9.826e-02 Score 79.915 Data time: 0.2218, Total iter time: 2.5050 +thomas 04/09 18:13:10 ===> Epoch[16](2360/151): Loss 0.6288 LR: 9.823e-02 Score 80.280 Data time: 0.2232, Total iter time: 2.4455 +thomas 04/09 18:14:54 ===> Epoch[16](2400/151): Loss 0.6402 LR: 9.820e-02 Score 79.471 Data time: 0.2379, Total iter time: 2.5572 +thomas 04/09 18:16:35 ===> Epoch[17](2440/151): Loss 0.6104 LR: 9.817e-02 Score 80.674 Data time: 0.2248, Total iter time: 2.4550 +thomas 04/09 18:18:19 ===> Epoch[17](2480/151): Loss 0.6054 LR: 9.814e-02 Score 80.716 Data time: 0.2355, Total iter time: 2.5366 +thomas 04/09 18:20:00 ===> Epoch[17](2520/151): Loss 0.6352 LR: 9.811e-02 Score 79.969 Data time: 0.2231, Total iter time: 2.4704 +thomas 04/09 18:21:41 ===> Epoch[17](2560/151): Loss 0.6086 LR: 9.808e-02 Score 80.702 Data time: 0.2130, Total iter time: 2.4698 +thomas 04/09 18:23:27 ===> Epoch[18](2600/151): Loss 0.5999 LR: 9.805e-02 Score 81.031 
Data time: 0.2306, Total iter time: 2.5960 +thomas 04/09 18:25:09 ===> Epoch[18](2640/151): Loss 0.6297 LR: 9.802e-02 Score 79.907 Data time: 0.2242, Total iter time: 2.4948 +thomas 04/09 18:26:52 ===> Epoch[18](2680/151): Loss 0.5876 LR: 9.799e-02 Score 81.217 Data time: 0.2321, Total iter time: 2.5388 +thomas 04/09 18:28:34 ===> Epoch[19](2720/151): Loss 0.6103 LR: 9.796e-02 Score 80.629 Data time: 0.2152, Total iter time: 2.4825 +thomas 04/09 18:30:18 ===> Epoch[19](2760/151): Loss 0.6069 LR: 9.793e-02 Score 81.417 Data time: 0.2333, Total iter time: 2.5445 +thomas 04/09 18:31:58 ===> Epoch[19](2800/151): Loss 0.5918 LR: 9.790e-02 Score 81.326 Data time: 0.2313, Total iter time: 2.4414 +thomas 04/09 18:33:40 ===> Epoch[19](2840/151): Loss 0.5986 LR: 9.787e-02 Score 80.911 Data time: 0.2417, Total iter time: 2.5028 +thomas 04/09 18:35:26 ===> Epoch[20](2880/151): Loss 0.5742 LR: 9.784e-02 Score 81.470 Data time: 0.2209, Total iter time: 2.5958 +thomas 04/09 18:37:04 ===> Epoch[20](2920/151): Loss 0.5718 LR: 9.781e-02 Score 81.348 Data time: 0.2342, Total iter time: 2.4037 +thomas 04/09 18:38:43 ===> Epoch[20](2960/151): Loss 0.5805 LR: 9.778e-02 Score 81.087 Data time: 0.2205, Total iter time: 2.4171 +thomas 04/09 18:40:28 ===> Epoch[20](3000/151): Loss 0.5966 LR: 9.775e-02 Score 81.587 Data time: 0.2236, Total iter time: 2.5501 +thomas 04/09 18:40:29 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 18:40:29 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 18:41:12 101/312: Data time: 0.0029, Iter time: 0.2045 Loss 1.466 (AVG: 0.851) Score 56.353 (AVG: 73.162) mIOU 38.523 mAP 56.796 mAcc 49.466 +IOU: 62.742 95.904 43.384 54.561 72.543 49.496 51.840 20.342 13.141 53.187 1.341 45.982 38.516 25.996 18.018 2.241 50.446 10.927 38.487 21.374 +mAP: 67.695 96.484 55.792 55.531 82.317 78.155 62.043 32.342 37.872 42.810 13.404 60.676 51.444 49.341 39.408 57.257 81.758 71.496 67.902 
32.196 +mAcc: 80.584 98.348 50.751 61.381 78.878 95.785 59.377 54.145 13.295 61.021 1.436 72.219 73.920 30.648 18.666 2.252 50.568 10.944 38.687 36.423 + +thomas 04/09 18:41:54 201/312: Data time: 0.0026, Iter time: 0.3223 Loss 0.622 (AVG: 0.807) Score 78.566 (AVG: 74.615) mIOU 39.636 mAP 57.469 mAcc 50.148 +IOU: 64.768 96.462 41.302 55.933 70.962 53.445 57.935 21.357 12.597 47.901 0.861 41.605 40.732 23.758 11.367 8.487 60.133 12.817 47.008 23.286 +mAP: 69.269 97.214 55.195 58.286 79.469 76.647 61.386 33.804 34.775 45.794 15.530 58.311 52.646 50.545 33.579 61.855 85.551 74.396 71.887 33.243 +mAcc: 81.459 98.579 48.759 64.606 77.916 95.312 65.247 54.646 12.709 54.992 0.906 70.937 70.658 27.241 11.734 8.701 60.762 12.830 47.296 37.679 + +thomas 04/09 18:42:35 301/312: Data time: 0.0021, Iter time: 0.3116 Loss 0.679 (AVG: 0.781) Score 83.257 (AVG: 75.380) mIOU 39.589 mAP 57.618 mAcc 50.097 +IOU: 65.676 96.577 40.921 59.344 71.893 52.417 59.068 22.744 11.567 51.324 1.415 39.168 41.125 21.340 10.776 9.038 57.374 11.091 45.146 23.777 +mAP: 70.656 97.218 53.624 61.444 80.017 75.448 61.384 33.667 34.304 50.613 16.679 53.208 52.296 47.972 31.516 62.743 86.733 73.736 74.441 34.652 +mAcc: 81.781 98.649 49.344 67.074 79.515 95.331 65.024 55.521 11.795 59.138 1.583 69.213 68.092 25.018 11.045 9.249 57.927 11.101 45.417 40.126 + +thomas 04/09 18:42:39 312/312: Data time: 0.0029, Iter time: 0.1899 Loss 1.600 (AVG: 0.792) Score 54.209 (AVG: 75.047) mIOU 39.261 mAP 57.434 mAcc 49.779 +IOU: 65.230 96.579 40.493 58.747 71.405 51.422 58.878 22.658 11.205 51.324 1.359 39.825 41.252 20.312 11.468 8.501 55.372 10.524 45.146 23.511 +mAP: 70.306 97.259 52.787 61.181 79.649 75.465 60.901 33.755 34.672 53.500 16.305 51.999 52.522 46.400 32.052 61.006 86.441 73.485 74.441 34.550 +mAcc: 81.627 98.653 48.721 66.243 78.932 95.333 64.778 56.015 11.416 58.355 1.516 69.295 68.644 23.797 11.781 8.689 55.873 10.533 45.417 39.966 + +thomas 04/09 18:42:39 Finished test. 
Elapsed time: 129.4097 +thomas 04/09 18:42:40 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 18:42:40 Current best mIoU: 39.261 at iter 3000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 18:44:24 ===> Epoch[21](3040/151): Loss 0.5902 LR: 9.772e-02 Score 81.106 Data time: 0.2151, Total iter time: 2.5438 +thomas 04/09 18:46:01 ===> Epoch[21](3080/151): Loss 0.5834 LR: 9.769e-02 Score 81.618 Data time: 0.2172, Total iter time: 2.3694 +thomas 04/09 18:47:39 ===> Epoch[21](3120/151): Loss 0.5506 LR: 9.766e-02 Score 82.407 Data time: 0.2143, Total iter time: 2.3953 +thomas 04/09 18:49:17 ===> Epoch[21](3160/151): Loss 0.5693 LR: 9.763e-02 Score 81.718 Data time: 0.2208, Total iter time: 2.3966 +thomas 04/09 18:50:58 ===> Epoch[22](3200/151): Loss 0.5531 LR: 9.760e-02 Score 82.392 Data time: 0.2344, Total iter time: 2.4642 +thomas 04/09 18:52:39 ===> Epoch[22](3240/151): Loss 0.5636 LR: 9.757e-02 Score 82.049 Data time: 0.2481, Total iter time: 2.4636 +thomas 04/09 18:54:18 ===> Epoch[22](3280/151): Loss 0.5540 LR: 9.754e-02 Score 82.230 Data time: 0.2274, Total iter time: 2.4399 +thomas 04/09 18:55:57 ===> Epoch[22](3320/151): Loss 0.5631 LR: 9.751e-02 Score 82.079 Data time: 0.2145, Total iter time: 2.4018 +thomas 04/09 18:57:37 ===> Epoch[23](3360/151): Loss 0.5580 LR: 9.748e-02 Score 81.947 Data time: 0.2454, Total iter time: 2.4478 +thomas 04/09 18:59:21 ===> Epoch[23](3400/151): Loss 0.5443 LR: 9.745e-02 Score 82.623 Data time: 0.2326, Total iter time: 2.5555 +thomas 04/09 19:00:56 ===> Epoch[23](3440/151): Loss 0.5416 LR: 9.742e-02 Score 82.750 Data time: 0.2144, Total iter time: 2.3181 +thomas 04/09 19:02:38 ===> 
Epoch[24](3480/151): Loss 0.5439 LR: 9.739e-02 Score 82.385 Data time: 0.2242, Total iter time: 2.4946 +thomas 04/09 19:04:14 ===> Epoch[24](3520/151): Loss 0.5554 LR: 9.736e-02 Score 82.351 Data time: 0.2169, Total iter time: 2.3329 +thomas 04/09 19:05:52 ===> Epoch[24](3560/151): Loss 0.5522 LR: 9.733e-02 Score 82.248 Data time: 0.2196, Total iter time: 2.3949 +thomas 04/09 19:07:34 ===> Epoch[24](3600/151): Loss 0.5668 LR: 9.730e-02 Score 82.161 Data time: 0.2176, Total iter time: 2.4986 +thomas 04/09 19:09:14 ===> Epoch[25](3640/151): Loss 0.5344 LR: 9.727e-02 Score 82.918 Data time: 0.2412, Total iter time: 2.4377 +thomas 04/09 19:10:50 ===> Epoch[25](3680/151): Loss 0.5581 LR: 9.724e-02 Score 82.315 Data time: 0.2296, Total iter time: 2.3549 +thomas 04/09 19:12:33 ===> Epoch[25](3720/151): Loss 0.5006 LR: 9.721e-02 Score 83.796 Data time: 0.2404, Total iter time: 2.5105 +thomas 04/09 19:14:16 ===> Epoch[25](3760/151): Loss 0.5436 LR: 9.718e-02 Score 82.496 Data time: 0.2323, Total iter time: 2.5142 +thomas 04/09 19:15:55 ===> Epoch[26](3800/151): Loss 0.5115 LR: 9.715e-02 Score 83.496 Data time: 0.2304, Total iter time: 2.4249 +thomas 04/09 19:17:37 ===> Epoch[26](3840/151): Loss 0.5241 LR: 9.712e-02 Score 83.054 Data time: 0.2514, Total iter time: 2.4894 +thomas 04/09 19:19:18 ===> Epoch[26](3880/151): Loss 0.5326 LR: 9.709e-02 Score 83.287 Data time: 0.2319, Total iter time: 2.4877 +thomas 04/09 19:21:01 ===> Epoch[26](3920/151): Loss 0.5442 LR: 9.706e-02 Score 83.048 Data time: 0.2309, Total iter time: 2.5058 +thomas 04/09 19:22:41 ===> Epoch[27](3960/151): Loss 0.5415 LR: 9.703e-02 Score 82.815 Data time: 0.2154, Total iter time: 2.4350 +thomas 04/09 19:24:25 ===> Epoch[27](4000/151): Loss 0.5313 LR: 9.699e-02 Score 83.199 Data time: 0.2435, Total iter time: 2.5409 +thomas 04/09 19:24:26 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 19:24:26 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 19:25:11 101/312: Data time: 0.0024, Iter time: 0.2443 Loss 0.358 (AVG: 0.777) Score 84.642 (AVG: 77.077) mIOU 39.298 mAP 58.660 mAcc 49.372 +IOU: 68.913 96.884 38.687 49.930 79.764 66.176 57.242 27.428 8.280 40.973 0.000 36.491 49.198 17.612 25.548 27.140 39.804 17.315 15.560 23.012 +mAP: 71.445 97.059 41.703 55.726 81.123 81.371 61.836 38.149 33.496 60.597 12.931 59.748 56.691 47.691 35.435 64.714 84.093 75.771 75.481 38.152 +mAcc: 86.110 98.943 60.215 73.733 83.646 69.028 68.572 55.723 8.300 44.640 0.000 56.699 80.326 20.537 42.934 29.168 39.818 17.456 15.560 36.028 + +thomas 04/09 19:25:50 201/312: Data time: 0.0031, Iter time: 0.2667 Loss 0.181 (AVG: 0.781) Score 96.788 (AVG: 76.817) mIOU 38.283 mAP 58.565 mAcc 49.104 +IOU: 69.178 96.659 41.955 52.249 78.586 70.356 55.610 26.610 7.826 29.812 0.032 39.289 44.213 15.341 27.981 17.479 37.889 20.466 13.715 20.410 +mAP: 72.789 96.956 46.439 63.287 82.493 79.559 61.893 41.409 32.573 54.350 14.036 52.226 53.263 43.806 45.120 70.377 84.016 71.539 71.426 33.742 +mAcc: 86.393 98.816 60.975 81.482 82.904 75.486 67.575 52.017 7.858 33.035 0.033 68.693 73.544 18.328 53.304 17.982 37.903 20.597 13.720 31.441 + +thomas 04/09 19:26:31 301/312: Data time: 0.0046, Iter time: 0.2452 Loss 0.528 (AVG: 0.787) Score 81.770 (AVG: 76.502) mIOU 38.742 mAP 58.706 mAcc 49.653 +IOU: 68.228 96.469 42.825 51.026 79.713 69.357 56.603 25.197 8.250 37.962 0.016 43.495 43.753 15.479 28.578 14.408 37.339 20.478 14.318 21.340 +mAP: 71.835 96.994 48.067 65.841 84.068 78.438 60.342 39.133 33.281 54.454 15.529 52.311 56.829 44.291 43.646 72.554 82.650 71.529 69.854 32.463 +mAcc: 86.030 98.815 61.816 80.735 84.080 75.902 67.438 51.915 8.290 40.272 0.017 73.094 73.612 18.178 53.160 14.830 37.389 20.645 14.321 32.515 + +thomas 04/09 19:26:36 312/312: Data time: 0.0023, Iter time: 0.2581 Loss 0.810 (AVG: 0.782) Score 76.702 (AVG: 76.632) mIOU 38.739 mAP 58.837 mAcc 49.683 +IOU: 68.199 
96.523 42.405 50.755 79.881 69.204 57.368 24.784 8.602 38.156 0.015 43.044 43.754 15.090 28.577 14.408 37.339 20.580 14.318 21.781 +mAP: 71.766 96.994 48.113 65.959 84.209 78.522 61.470 38.452 33.863 54.832 15.193 52.311 56.538 45.389 43.646 72.554 82.650 71.858 69.854 32.572 +mAcc: 86.031 98.826 61.288 80.562 84.074 75.776 68.523 51.370 8.648 40.437 0.016 73.094 73.828 17.625 53.160 14.830 37.389 20.746 14.321 33.120 + +thomas 04/09 19:26:36 Finished test. Elapsed time: 129.3577 +thomas 04/09 19:26:36 Current best mIoU: 39.261 at iter 3000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 19:28:12 ===> Epoch[27](4040/151): Loss 0.5317 LR: 9.696e-02 Score 83.318 Data time: 0.2163, Total iter time: 2.3489 +thomas 04/09 19:29:57 ===> Epoch[28](4080/151): Loss 0.5043 LR: 9.693e-02 Score 83.752 Data time: 0.2500, Total iter time: 2.5809 +thomas 04/09 19:31:38 ===> Epoch[28](4120/151): Loss 0.5211 LR: 9.690e-02 Score 83.277 Data time: 0.2293, Total iter time: 2.4599 +thomas 04/09 19:33:20 ===> Epoch[28](4160/151): Loss 0.5322 LR: 9.687e-02 Score 83.250 Data time: 0.2262, Total iter time: 2.4977 +thomas 04/09 19:34:57 ===> Epoch[28](4200/151): Loss 0.5132 LR: 9.684e-02 Score 83.298 Data time: 0.2231, Total iter time: 2.3721 +thomas 04/09 19:36:36 ===> Epoch[29](4240/151): Loss 0.5178 LR: 9.681e-02 Score 83.491 Data time: 0.2237, Total iter time: 2.4305 +thomas 04/09 19:38:23 ===> Epoch[29](4280/151): Loss 0.5294 LR: 9.678e-02 Score 83.054 Data time: 0.2152, Total iter time: 2.6101 +thomas 04/09 19:40:00 ===> Epoch[29](4320/151): Loss 0.4829 LR: 9.675e-02 Score 84.485 Data time: 0.2305, Total iter time: 2.3641 +thomas 04/09 19:41:37 ===> Epoch[29](4360/151): Loss 0.5303 LR: 9.672e-02 Score 
83.230 Data time: 0.2169, Total iter time: 2.3739 +thomas 04/09 19:43:14 ===> Epoch[30](4400/151): Loss 0.5102 LR: 9.669e-02 Score 83.811 Data time: 0.2159, Total iter time: 2.3670 +thomas 04/09 19:45:00 ===> Epoch[30](4440/151): Loss 0.4879 LR: 9.666e-02 Score 84.587 Data time: 0.2578, Total iter time: 2.5990 +thomas 04/09 19:46:47 ===> Epoch[30](4480/151): Loss 0.5137 LR: 9.663e-02 Score 83.693 Data time: 0.2460, Total iter time: 2.6309 +thomas 04/09 19:48:34 ===> Epoch[30](4520/151): Loss 0.5317 LR: 9.660e-02 Score 83.093 Data time: 0.2462, Total iter time: 2.6178 +thomas 04/09 19:50:20 ===> Epoch[31](4560/151): Loss 0.4781 LR: 9.657e-02 Score 84.583 Data time: 0.2645, Total iter time: 2.5938 +thomas 04/09 19:52:01 ===> Epoch[31](4600/151): Loss 0.5286 LR: 9.654e-02 Score 83.162 Data time: 0.2418, Total iter time: 2.4666 +thomas 04/09 19:53:41 ===> Epoch[31](4640/151): Loss 0.4897 LR: 9.651e-02 Score 84.227 Data time: 0.2343, Total iter time: 2.4355 +thomas 04/09 19:55:20 ===> Epoch[31](4680/151): Loss 0.4960 LR: 9.648e-02 Score 84.269 Data time: 0.2066, Total iter time: 2.4203 +thomas 04/09 19:56:55 ===> Epoch[32](4720/151): Loss 0.4569 LR: 9.645e-02 Score 85.345 Data time: 0.2025, Total iter time: 2.3289 +thomas 04/09 19:58:39 ===> Epoch[32](4760/151): Loss 0.5093 LR: 9.642e-02 Score 83.811 Data time: 0.2324, Total iter time: 2.5374 +thomas 04/09 20:00:21 ===> Epoch[32](4800/151): Loss 0.5353 LR: 9.639e-02 Score 83.349 Data time: 0.2466, Total iter time: 2.4884 +thomas 04/09 20:02:02 ===> Epoch[33](4840/151): Loss 0.4593 LR: 9.636e-02 Score 85.313 Data time: 0.2327, Total iter time: 2.4838 +thomas 04/09 20:03:40 ===> Epoch[33](4880/151): Loss 0.5060 LR: 9.633e-02 Score 83.892 Data time: 0.2281, Total iter time: 2.3754 +thomas 04/09 20:05:22 ===> Epoch[33](4920/151): Loss 0.4842 LR: 9.630e-02 Score 84.740 Data time: 0.2253, Total iter time: 2.5066 +thomas 04/09 20:07:05 ===> Epoch[33](4960/151): Loss 0.4939 LR: 9.627e-02 Score 84.262 Data time: 0.2274, Total 
iter time: 2.5146 +thomas 04/09 20:08:44 ===> Epoch[34](5000/151): Loss 0.4921 LR: 9.624e-02 Score 84.509 Data time: 0.2234, Total iter time: 2.4029 +thomas 04/09 20:08:45 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 20:08:45 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 20:09:33 101/312: Data time: 0.0025, Iter time: 0.4066 Loss 0.965 (AVG: 0.813) Score 63.472 (AVG: 75.721) mIOU 38.764 mAP 56.461 mAcc 51.671 +IOU: 65.253 96.500 39.432 47.140 67.499 50.859 54.347 28.152 29.337 42.990 0.300 55.745 39.125 23.524 14.428 35.168 9.218 2.311 49.118 24.840 +mAP: 67.607 96.506 44.211 61.269 76.360 83.274 56.806 38.840 43.037 43.673 12.620 48.915 57.980 55.419 30.905 81.204 76.542 47.035 67.279 39.734 +mAcc: 77.895 99.102 48.921 72.885 88.359 98.738 56.130 57.376 33.107 45.439 0.305 67.490 57.841 66.149 28.161 35.238 9.218 2.311 56.104 32.642 + +thomas 04/09 20:10:11 201/312: Data time: 0.0024, Iter time: 0.3370 Loss 0.711 (AVG: 0.775) Score 81.994 (AVG: 76.494) mIOU 42.286 mAP 60.234 mAcc 55.579 +IOU: 66.928 96.554 45.368 49.950 66.446 47.266 52.950 28.114 29.116 46.941 0.694 53.727 41.442 32.497 26.286 46.610 16.632 4.380 69.589 24.232 +mAP: 70.329 96.846 50.437 54.768 77.610 79.549 61.569 40.386 42.987 49.899 16.889 52.717 61.594 65.008 43.580 82.326 78.219 58.155 83.405 38.404 +mAcc: 78.440 99.048 55.065 70.000 87.478 98.240 56.365 56.379 32.914 51.452 0.718 63.780 59.036 78.200 46.275 51.194 16.635 4.388 73.206 32.766 + +thomas 04/09 20:10:51 301/312: Data time: 0.0029, Iter time: 0.3344 Loss 1.264 (AVG: 0.797) Score 55.492 (AVG: 75.965) mIOU 41.926 mAP 59.708 mAcc 54.940 +IOU: 66.153 96.440 45.748 52.290 65.688 45.255 52.805 27.716 29.915 54.029 0.458 50.822 42.004 30.390 29.520 36.764 15.967 6.495 66.540 23.513 +mAP: 69.975 97.015 51.988 57.407 77.218 80.005 61.154 40.793 42.433 52.318 18.162 50.229 60.066 61.316 48.257 76.597 78.574 58.675 74.535 37.435 +mAcc: 78.020 
99.014 56.229 71.705 87.356 97.857 56.187 54.351 34.788 58.896 0.476 59.867 56.811 73.503 50.031 40.096 15.979 6.504 69.699 31.441 + +thomas 04/09 20:10:54 312/312: Data time: 0.0044, Iter time: 0.1671 Loss 1.195 (AVG: 0.796) Score 57.127 (AVG: 75.989) mIOU 41.843 mAP 59.723 mAcc 54.932 +IOU: 66.286 96.396 45.988 52.235 65.945 44.720 53.416 27.365 29.886 53.845 0.450 49.594 41.574 30.670 28.894 36.699 16.283 6.281 66.907 23.423 +mAP: 70.141 96.873 52.696 57.272 77.112 80.005 61.750 40.657 41.791 52.318 17.623 50.229 59.976 61.325 47.587 76.597 78.951 58.633 75.478 37.441 +mAcc: 78.027 99.022 56.405 70.753 87.596 97.857 56.706 53.951 34.774 58.896 0.467 59.867 56.936 73.807 49.630 40.096 16.298 6.290 69.946 31.317 + +thomas 04/09 20:10:54 Finished test. Elapsed time: 129.1112 +thomas 04/09 20:10:56 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 20:10:56 Current best mIoU: 41.843 at iter 5000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 20:12:38 ===> Epoch[34](5040/151): Loss 0.4872 LR: 9.621e-02 Score 84.604 Data time: 0.2487, Total iter time: 2.5040 +thomas 04/09 20:14:21 ===> Epoch[34](5080/151): Loss 0.4893 LR: 9.618e-02 Score 84.313 Data time: 0.2437, Total iter time: 2.5057 +thomas 04/09 20:15:58 ===> Epoch[34](5120/151): Loss 0.5062 LR: 9.615e-02 Score 83.684 Data time: 0.1989, Total iter time: 2.3683 +thomas 04/09 20:17:41 ===> Epoch[35](5160/151): Loss 0.4966 LR: 9.612e-02 Score 84.262 Data time: 0.2250, Total iter time: 2.5303 +thomas 04/09 20:19:23 ===> Epoch[35](5200/151): Loss 0.4664 LR: 9.609e-02 Score 84.911 Data time: 0.2245, Total iter time: 2.4775 +thomas 04/09 20:20:59 ===> Epoch[35](5240/151): Loss 0.4587 LR: 9.606e-02 Score 85.349 Data time: 0.2102, Total iter time: 2.3647 +thomas 04/09 20:22:42 ===> Epoch[35](5280/151): Loss 0.4652 LR: 9.603e-02 Score 85.290 Data time: 0.2317, Total iter time: 2.5017 +thomas 04/09 20:24:22 ===> Epoch[36](5320/151): Loss 0.4607 LR: 9.600e-02 Score 85.147 Data time: 0.2315, Total iter time: 2.4476 +thomas 04/09 20:26:01 ===> Epoch[36](5360/151): Loss 0.4780 LR: 9.597e-02 Score 84.854 Data time: 0.2199, Total iter time: 2.4249 +thomas 04/09 20:27:44 ===> Epoch[36](5400/151): Loss 0.4870 LR: 9.594e-02 Score 84.660 Data time: 0.2246, Total iter time: 2.5196 +thomas 04/09 20:29:25 ===> Epoch[37](5440/151): Loss 0.4581 LR: 9.591e-02 Score 85.454 Data time: 0.2265, Total iter time: 2.4672 +thomas 04/09 20:31:07 ===> Epoch[37](5480/151): Loss 0.4606 LR: 9.588e-02 Score 85.334 Data time: 0.2292, Total iter time: 2.5023 +thomas 04/09 20:32:52 ===> Epoch[37](5520/151): Loss 0.4800 LR: 9.585e-02 Score 84.478 Data time: 0.2319, Total iter time: 2.5495 +thomas 04/09 20:34:27 ===> Epoch[37](5560/151): Loss 0.4967 LR: 9.582e-02 Score 84.550 Data time: 0.2132, Total iter time: 2.3252 +thomas 04/09 20:36:04 ===> Epoch[38](5600/151): Loss 0.4362 LR: 9.579e-02 Score 86.101 
Data time: 0.2187, Total iter time: 2.3798 +thomas 04/09 20:37:48 ===> Epoch[38](5640/151): Loss 0.4956 LR: 9.576e-02 Score 84.279 Data time: 0.2322, Total iter time: 2.5383 +thomas 04/09 20:39:31 ===> Epoch[38](5680/151): Loss 0.4632 LR: 9.573e-02 Score 84.944 Data time: 0.2257, Total iter time: 2.5260 +thomas 04/09 20:41:11 ===> Epoch[38](5720/151): Loss 0.4435 LR: 9.570e-02 Score 85.725 Data time: 0.2339, Total iter time: 2.4381 +thomas 04/09 20:42:48 ===> Epoch[39](5760/151): Loss 0.4730 LR: 9.567e-02 Score 85.155 Data time: 0.2113, Total iter time: 2.3828 +thomas 04/09 20:44:33 ===> Epoch[39](5800/151): Loss 0.4545 LR: 9.564e-02 Score 85.715 Data time: 0.2261, Total iter time: 2.5522 +thomas 04/09 20:46:16 ===> Epoch[39](5840/151): Loss 0.4592 LR: 9.561e-02 Score 85.363 Data time: 0.2210, Total iter time: 2.5201 +thomas 04/09 20:47:56 ===> Epoch[39](5880/151): Loss 0.4531 LR: 9.558e-02 Score 85.334 Data time: 0.2461, Total iter time: 2.4421 +thomas 04/09 20:49:40 ===> Epoch[40](5920/151): Loss 0.4583 LR: 9.555e-02 Score 85.203 Data time: 0.2221, Total iter time: 2.5365 +thomas 04/09 20:51:18 ===> Epoch[40](5960/151): Loss 0.4396 LR: 9.552e-02 Score 85.940 Data time: 0.2254, Total iter time: 2.4077 +thomas 04/09 20:52:58 ===> Epoch[40](6000/151): Loss 0.4561 LR: 9.549e-02 Score 85.589 Data time: 0.2237, Total iter time: 2.4400 +thomas 04/09 20:52:59 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 20:52:59 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 20:53:43 101/312: Data time: 0.0039, Iter time: 0.2370 Loss 0.419 (AVG: 0.718) Score 83.659 (AVG: 77.426) mIOU 45.957 mAP 60.082 mAcc 56.719 +IOU: 65.629 96.545 52.128 74.317 78.344 75.130 59.095 24.974 9.934 52.873 0.186 53.640 50.607 38.404 31.811 25.413 50.515 16.076 44.990 18.526 +mAP: 74.028 95.992 60.384 67.553 80.752 77.665 64.725 42.478 35.849 51.390 12.793 61.402 51.909 56.525 53.450 65.785 82.532 68.638 63.390 
34.402 +mAcc: 75.868 98.953 68.154 79.684 88.762 93.983 65.911 77.185 10.235 55.411 0.191 56.356 61.293 60.616 62.042 26.902 50.532 16.190 45.076 41.031 + +thomas 04/09 20:54:23 201/312: Data time: 0.0023, Iter time: 0.3782 Loss 1.074 (AVG: 0.748) Score 67.276 (AVG: 76.910) mIOU 45.586 mAP 60.663 mAcc 56.117 +IOU: 65.500 96.419 49.456 70.910 79.537 71.150 59.237 25.993 14.263 49.536 0.675 47.011 47.327 33.091 25.971 27.622 55.321 18.306 52.437 21.965 +mAP: 72.610 96.549 56.446 64.428 83.436 79.870 66.106 41.921 37.453 53.898 11.938 57.152 51.714 53.508 49.122 72.617 87.041 73.080 69.896 34.473 +mAcc: 77.120 99.037 64.626 76.097 88.392 93.346 65.305 71.826 14.509 52.990 0.711 50.438 65.354 52.237 55.085 29.479 55.515 18.400 52.619 39.261 + +thomas 04/09 20:55:05 301/312: Data time: 0.0022, Iter time: 0.2283 Loss 0.660 (AVG: 0.727) Score 80.646 (AVG: 77.369) mIOU 46.332 mAP 61.261 mAcc 57.235 +IOU: 65.553 96.552 49.067 70.646 80.613 71.847 60.638 26.920 15.722 53.530 0.420 48.205 46.425 31.958 28.782 27.637 53.706 17.608 55.619 25.183 +mAP: 72.596 96.552 57.268 65.123 83.945 81.159 66.199 42.199 38.493 52.522 12.649 54.750 51.483 54.966 52.750 71.572 86.062 72.724 76.716 35.484 +mAcc: 76.752 99.111 64.657 76.153 88.832 93.593 66.448 74.346 15.962 57.824 0.437 53.301 65.637 53.221 58.704 30.461 53.838 17.695 55.825 41.896 + +thomas 04/09 20:55:10 312/312: Data time: 0.0030, Iter time: 0.3136 Loss 1.206 (AVG: 0.735) Score 57.614 (AVG: 77.119) mIOU 45.935 mAP 61.016 mAcc 56.897 +IOU: 65.313 96.570 47.536 68.535 80.642 71.487 61.320 26.696 15.351 52.745 0.404 46.108 46.513 31.285 28.195 27.637 53.706 17.083 55.619 25.960 +mAP: 72.144 96.628 57.291 63.972 83.925 80.748 66.751 42.694 38.377 52.094 12.536 53.881 51.496 53.779 51.890 71.572 86.062 72.522 76.716 35.234 +mAcc: 76.814 99.073 64.844 74.541 88.678 93.578 67.084 74.829 15.584 56.859 0.420 51.549 66.100 51.936 58.466 30.461 53.838 17.164 55.825 40.295 + +thomas 04/09 20:55:10 Finished test. 
Elapsed time: 130.3805 +thomas 04/09 20:55:11 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 20:55:11 Current best mIoU: 45.935 at iter 6000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 20:56:50 ===> Epoch[40](6040/151): Loss 0.4331 LR: 9.546e-02 Score 86.023 Data time: 0.2295, Total iter time: 2.4055 +thomas 04/09 20:58:31 ===> Epoch[41](6080/151): Loss 0.4393 LR: 9.543e-02 Score 85.689 Data time: 0.2204, Total iter time: 2.4711 +thomas 04/09 21:00:14 ===> Epoch[41](6120/151): Loss 0.4789 LR: 9.540e-02 Score 85.070 Data time: 0.2353, Total iter time: 2.5164 +thomas 04/09 21:01:53 ===> Epoch[41](6160/151): Loss 0.4520 LR: 9.537e-02 Score 85.687 Data time: 0.2189, Total iter time: 2.4185 +thomas 04/09 21:03:35 ===> Epoch[42](6200/151): Loss 0.4224 LR: 9.534e-02 Score 86.635 Data time: 0.2306, Total iter time: 2.4881 +thomas 04/09 21:05:11 ===> Epoch[42](6240/151): Loss 0.4489 LR: 9.531e-02 Score 85.675 Data time: 0.2492, Total iter time: 2.3477 +thomas 04/09 21:06:52 ===> Epoch[42](6280/151): Loss 0.4381 LR: 9.528e-02 Score 85.763 Data time: 0.2262, Total iter time: 2.4636 +thomas 04/09 21:08:36 ===> Epoch[42](6320/151): Loss 0.4465 LR: 9.525e-02 Score 85.732 Data time: 0.2192, Total iter time: 2.5542 +thomas 04/09 21:10:19 ===> Epoch[43](6360/151): Loss 0.4436 LR: 9.522e-02 Score 85.823 Data time: 0.2309, Total iter time: 2.5202 +thomas 04/09 21:12:02 ===> Epoch[43](6400/151): Loss 0.4397 LR: 9.519e-02 Score 86.075 Data time: 0.2374, Total iter time: 2.5043 +thomas 04/09 21:13:44 ===> Epoch[43](6440/151): Loss 0.4366 LR: 9.516e-02 Score 86.408 Data time: 0.2428, Total iter time: 2.5029 +thomas 04/09 21:15:21 ===> 
Epoch[43](6480/151): Loss 0.4425 LR: 9.513e-02 Score 85.880 Data time: 0.2148, Total iter time: 2.3832 +thomas 04/09 21:17:03 ===> Epoch[44](6520/151): Loss 0.4197 LR: 9.510e-02 Score 86.605 Data time: 0.2121, Total iter time: 2.4717 +thomas 04/09 21:18:44 ===> Epoch[44](6560/151): Loss 0.4523 LR: 9.507e-02 Score 85.731 Data time: 0.2424, Total iter time: 2.4772 +thomas 04/09 21:20:24 ===> Epoch[44](6600/151): Loss 0.4343 LR: 9.504e-02 Score 86.070 Data time: 0.2393, Total iter time: 2.4377 +thomas 04/09 21:22:03 ===> Epoch[44](6640/151): Loss 0.4284 LR: 9.501e-02 Score 86.450 Data time: 0.2232, Total iter time: 2.4202 +thomas 04/09 21:23:44 ===> Epoch[45](6680/151): Loss 0.4385 LR: 9.498e-02 Score 85.746 Data time: 0.2089, Total iter time: 2.4662 +thomas 04/09 21:25:24 ===> Epoch[45](6720/151): Loss 0.4553 LR: 9.495e-02 Score 85.285 Data time: 0.2152, Total iter time: 2.4517 +thomas 04/09 21:27:01 ===> Epoch[45](6760/151): Loss 0.4206 LR: 9.492e-02 Score 86.557 Data time: 0.2062, Total iter time: 2.3801 +thomas 04/09 21:28:40 ===> Epoch[46](6800/151): Loss 0.4333 LR: 9.489e-02 Score 86.182 Data time: 0.2257, Total iter time: 2.3963 +thomas 04/09 21:30:24 ===> Epoch[46](6840/151): Loss 0.4039 LR: 9.486e-02 Score 86.924 Data time: 0.2380, Total iter time: 2.5532 +thomas 04/09 21:32:03 ===> Epoch[46](6880/151): Loss 0.4304 LR: 9.482e-02 Score 86.609 Data time: 0.2277, Total iter time: 2.4239 +thomas 04/09 21:33:43 ===> Epoch[46](6920/151): Loss 0.4195 LR: 9.479e-02 Score 86.741 Data time: 0.2173, Total iter time: 2.4319 +thomas 04/09 21:35:23 ===> Epoch[47](6960/151): Loss 0.4304 LR: 9.476e-02 Score 86.240 Data time: 0.2312, Total iter time: 2.4550 +thomas 04/09 21:37:03 ===> Epoch[47](7000/151): Loss 0.4090 LR: 9.473e-02 Score 87.187 Data time: 0.2326, Total iter time: 2.4537 +thomas 04/09 21:37:05 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 21:37:05 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 21:37:47 101/312: Data time: 0.0028, Iter time: 0.2017 Loss 0.445 (AVG: 0.639) Score 86.207 (AVG: 80.955) mIOU 45.850 mAP 62.657 mAcc 56.027 +IOU: 74.028 96.770 35.148 69.217 80.671 69.024 62.001 27.703 18.314 60.502 0.305 37.664 33.601 36.800 20.697 31.519 67.720 4.455 60.622 30.230 +mAP: 74.067 95.411 41.007 71.568 87.003 78.372 77.400 47.105 35.640 67.767 12.575 46.808 45.801 60.713 40.406 80.295 94.495 72.069 82.242 42.398 +mAcc: 91.820 98.506 54.017 80.044 83.580 95.462 85.210 34.955 19.480 65.911 0.323 44.288 43.338 62.255 50.918 32.568 68.256 4.455 62.444 42.704 + +thomas 04/09 21:38:28 201/312: Data time: 0.0024, Iter time: 0.2465 Loss 1.080 (AVG: 0.677) Score 62.240 (AVG: 79.988) mIOU 45.846 mAP 61.089 mAcc 56.234 +IOU: 72.543 96.856 43.048 60.013 78.890 62.778 65.505 26.858 16.202 58.374 0.210 47.634 31.946 34.005 24.340 39.409 64.468 3.485 62.543 27.814 +mAP: 73.805 95.727 42.817 60.503 85.205 75.691 72.420 46.440 37.488 61.878 11.220 55.435 47.562 52.830 45.466 81.018 88.469 67.186 83.563 37.050 +mAcc: 91.626 98.694 60.939 72.863 81.963 93.725 85.034 35.595 16.861 62.310 0.228 55.917 41.578 52.570 62.715 41.031 64.881 3.485 63.685 38.973 + +thomas 04/09 21:39:10 301/312: Data time: 0.0025, Iter time: 0.2701 Loss 0.715 (AVG: 0.689) Score 74.349 (AVG: 79.623) mIOU 45.420 mAP 60.535 mAcc 55.485 +IOU: 72.648 96.715 42.408 61.367 77.690 60.615 62.761 25.641 19.017 56.923 0.302 44.362 35.113 34.223 24.425 38.005 62.258 3.388 63.339 27.190 +mAP: 74.420 95.843 43.745 63.814 84.758 77.457 68.096 43.637 38.950 59.126 10.823 52.863 49.065 54.170 45.005 76.566 87.563 66.990 79.554 38.250 +mAcc: 92.369 98.811 61.512 74.387 80.721 93.276 81.560 34.640 19.613 61.368 0.324 50.058 45.886 49.358 57.948 40.597 62.525 3.388 64.178 37.189 + +thomas 04/09 21:39:14 312/312: Data time: 0.0025, Iter time: 0.1663 Loss 0.648 (AVG: 0.689) Score 78.284 (AVG: 79.563) mIOU 45.484 mAP 60.394 mAcc 55.625 +IOU: 72.541 
96.724 43.324 61.229 77.627 60.413 62.419 25.415 19.504 56.916 0.293 45.409 35.532 34.542 24.532 37.282 62.063 3.042 63.609 27.267 +mAP: 74.169 95.815 44.720 63.527 84.704 77.046 67.209 43.333 38.626 59.126 10.753 51.932 49.115 54.417 46.088 75.030 88.185 65.818 80.033 38.235 +mAcc: 92.369 98.809 62.086 74.048 80.661 93.367 81.293 34.226 20.102 61.368 0.314 51.844 46.191 49.927 58.655 39.765 62.583 3.042 64.433 37.425 + +thomas 04/09 21:39:14 Finished test. Elapsed time: 129.1476 +thomas 04/09 21:39:14 Current best mIoU: 45.935 at iter 6000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 21:40:56 ===> Epoch[47](7040/151): Loss 0.4094 LR: 9.470e-02 Score 86.917 Data time: 0.2217, Total iter time: 2.4870 +thomas 04/09 21:42:34 ===> Epoch[47](7080/151): Loss 0.4319 LR: 9.467e-02 Score 86.422 Data time: 0.2341, Total iter time: 2.4100 +thomas 04/09 21:44:13 ===> Epoch[48](7120/151): Loss 0.4322 LR: 9.464e-02 Score 86.379 Data time: 0.2208, Total iter time: 2.4098 +thomas 04/09 21:45:54 ===> Epoch[48](7160/151): Loss 0.4176 LR: 9.461e-02 Score 86.470 Data time: 0.2111, Total iter time: 2.4739 +thomas 04/09 21:47:37 ===> Epoch[48](7200/151): Loss 0.4442 LR: 9.458e-02 Score 85.773 Data time: 0.2317, Total iter time: 2.5235 +thomas 04/09 21:49:23 ===> Epoch[48](7240/151): Loss 0.4096 LR: 9.455e-02 Score 87.011 Data time: 0.2373, Total iter time: 2.5857 +thomas 04/09 21:51:05 ===> Epoch[49](7280/151): Loss 0.4292 LR: 9.452e-02 Score 86.461 Data time: 0.2206, Total iter time: 2.4745 +thomas 04/09 21:52:45 ===> Epoch[49](7320/151): Loss 0.4269 LR: 9.449e-02 Score 86.355 Data time: 0.2343, Total iter time: 2.4572 +thomas 04/09 21:54:28 ===> Epoch[49](7360/151): Loss 0.4311 LR: 9.446e-02 Score 
86.372 Data time: 0.2259, Total iter time: 2.5069 +thomas 04/09 21:56:14 ===> Epoch[50](7400/151): Loss 0.4231 LR: 9.443e-02 Score 86.767 Data time: 0.2582, Total iter time: 2.6133 +thomas 04/09 21:57:58 ===> Epoch[50](7440/151): Loss 0.4246 LR: 9.440e-02 Score 86.539 Data time: 0.2372, Total iter time: 2.5417 +thomas 04/09 21:59:40 ===> Epoch[50](7480/151): Loss 0.3861 LR: 9.437e-02 Score 87.361 Data time: 0.2306, Total iter time: 2.4998 +thomas 04/09 22:01:26 ===> Epoch[50](7520/151): Loss 0.4140 LR: 9.434e-02 Score 86.798 Data time: 0.2413, Total iter time: 2.5753 +thomas 04/09 22:03:05 ===> Epoch[51](7560/151): Loss 0.4127 LR: 9.431e-02 Score 86.876 Data time: 0.2253, Total iter time: 2.4386 +thomas 04/09 22:04:45 ===> Epoch[51](7600/151): Loss 0.4304 LR: 9.428e-02 Score 86.247 Data time: 0.2337, Total iter time: 2.4294 +thomas 04/09 22:06:28 ===> Epoch[51](7640/151): Loss 0.4077 LR: 9.425e-02 Score 87.192 Data time: 0.2433, Total iter time: 2.5313 +thomas 04/09 22:08:06 ===> Epoch[51](7680/151): Loss 0.4145 LR: 9.422e-02 Score 86.689 Data time: 0.2167, Total iter time: 2.3915 +thomas 04/09 22:09:50 ===> Epoch[52](7720/151): Loss 0.4279 LR: 9.419e-02 Score 86.279 Data time: 0.2392, Total iter time: 2.5383 +thomas 04/09 22:11:33 ===> Epoch[52](7760/151): Loss 0.3982 LR: 9.416e-02 Score 87.478 Data time: 0.2198, Total iter time: 2.5085 +thomas 04/09 22:13:10 ===> Epoch[52](7800/151): Loss 0.4239 LR: 9.413e-02 Score 86.385 Data time: 0.2327, Total iter time: 2.3694 +thomas 04/09 22:14:52 ===> Epoch[52](7840/151): Loss 0.4087 LR: 9.410e-02 Score 86.847 Data time: 0.2444, Total iter time: 2.4919 +thomas 04/09 22:16:32 ===> Epoch[53](7880/151): Loss 0.4231 LR: 9.407e-02 Score 86.675 Data time: 0.2164, Total iter time: 2.4570 +thomas 04/09 22:18:13 ===> Epoch[53](7920/151): Loss 0.4132 LR: 9.404e-02 Score 86.642 Data time: 0.2285, Total iter time: 2.4514 +thomas 04/09 22:19:54 ===> Epoch[53](7960/151): Loss 0.4317 LR: 9.401e-02 Score 86.247 Data time: 0.2292, Total 
iter time: 2.4735 +thomas 04/09 22:21:35 ===> Epoch[53](8000/151): Loss 0.3997 LR: 9.398e-02 Score 87.234 Data time: 0.2204, Total iter time: 2.4648 +thomas 04/09 22:21:36 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 22:21:36 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 22:22:20 101/312: Data time: 0.0025, Iter time: 0.2229 Loss 0.255 (AVG: 0.765) Score 91.943 (AVG: 78.218) mIOU 44.351 mAP 60.385 mAcc 54.582 +IOU: 72.288 96.092 31.887 61.300 82.515 71.367 59.661 28.065 31.546 53.385 0.000 40.723 46.922 18.437 19.867 22.927 60.246 8.164 65.263 16.357 +mAP: 72.541 96.830 55.016 64.922 88.698 78.781 65.615 39.819 49.166 57.953 9.962 52.268 59.621 61.066 29.890 75.150 73.047 67.459 74.060 35.836 +mAcc: 88.987 99.164 76.674 64.856 88.462 88.774 67.639 39.078 33.686 78.755 0.000 53.508 91.744 18.929 23.943 24.194 60.327 8.169 65.330 19.424 + +thomas 04/09 22:23:01 201/312: Data time: 0.0021, Iter time: 0.4997 Loss 1.221 (AVG: 0.701) Score 57.605 (AVG: 79.616) mIOU 46.960 mAP 61.702 mAcc 57.162 +IOU: 74.401 96.461 36.004 63.951 83.795 71.897 59.627 26.905 32.389 52.329 0.006 53.772 48.766 16.144 31.634 34.846 70.909 5.109 64.101 16.147 +mAP: 75.158 97.239 51.510 64.137 87.636 75.201 64.660 43.036 45.577 57.419 11.179 54.370 55.378 54.626 42.295 78.303 83.606 74.871 81.820 36.023 +mAcc: 89.558 99.151 77.175 67.104 89.212 88.820 69.253 37.002 35.062 79.541 0.006 66.090 87.608 17.994 41.273 38.052 71.327 5.114 64.222 19.674 + +thomas 04/09 22:23:42 301/312: Data time: 0.0022, Iter time: 0.2592 Loss 0.291 (AVG: 0.699) Score 86.674 (AVG: 79.845) mIOU 47.024 mAP 61.627 mAcc 57.229 +IOU: 73.578 96.672 39.680 63.493 84.926 72.937 61.437 26.710 29.602 48.716 0.003 54.261 44.851 22.259 35.483 31.445 68.542 8.678 60.647 16.563 +mAP: 74.434 96.600 54.061 62.517 87.368 77.777 65.205 43.497 43.179 56.762 10.923 54.398 54.728 55.076 46.213 78.806 85.868 69.773 80.922 34.429 +mAcc: 88.694 
99.069 80.428 66.640 90.457 87.635 70.027 36.706 32.294 77.741 0.003 64.813 88.519 24.614 44.190 33.797 68.960 8.688 60.779 20.516 + +thomas 04/09 22:23:46 312/312: Data time: 0.0036, Iter time: 0.3073 Loss 1.442 (AVG: 0.709) Score 52.597 (AVG: 79.591) mIOU 46.857 mAP 61.373 mAcc 57.051 +IOU: 73.434 96.596 39.290 62.704 84.984 73.710 61.313 26.907 29.072 47.839 0.003 55.026 44.395 21.868 34.907 31.445 68.235 8.477 60.647 16.284 +mAP: 74.501 96.530 53.352 62.234 87.518 78.231 64.985 43.501 42.479 56.460 10.626 54.488 54.437 53.458 44.893 78.806 86.096 69.884 80.922 34.064 +mAcc: 88.458 99.063 79.586 65.739 90.485 87.995 69.964 37.113 31.765 77.395 0.003 65.824 88.395 24.179 43.120 33.797 68.641 8.486 60.779 20.227 + +thomas 04/09 22:23:46 Finished test. Elapsed time: 130.1038 +thomas 04/09 22:23:48 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 22:23:48 Current best mIoU: 46.857 at iter 8000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 22:25:30 ===> Epoch[54](8040/151): Loss 0.4017 LR: 9.395e-02 Score 87.394 Data time: 0.2237, Total iter time: 2.4980 +thomas 04/09 22:27:11 ===> Epoch[54](8080/151): Loss 0.4061 LR: 9.392e-02 Score 87.048 Data time: 0.2188, Total iter time: 2.4715 +thomas 04/09 22:28:51 ===> Epoch[54](8120/151): Loss 0.3862 LR: 9.389e-02 Score 87.570 Data time: 0.2374, Total iter time: 2.4477 +thomas 04/09 22:30:31 ===> Epoch[55](8160/151): Loss 0.4050 LR: 9.386e-02 Score 86.998 Data time: 0.2300, Total iter time: 2.4401 +thomas 04/09 22:32:14 ===> Epoch[55](8200/151): Loss 0.3758 LR: 9.383e-02 Score 88.126 Data time: 0.2230, Total iter time: 2.5070 +thomas 04/09 22:33:56 ===> Epoch[55](8240/151): Loss 0.3911 LR: 9.380e-02 Score 87.363 Data time: 0.2365, Total iter time: 2.4888 +thomas 04/09 22:35:36 ===> Epoch[55](8280/151): Loss 0.4214 LR: 9.377e-02 Score 86.703 Data time: 0.2518, Total iter time: 2.4500 +thomas 04/09 22:37:15 ===> Epoch[56](8320/151): Loss 0.3802 LR: 9.374e-02 Score 87.829 Data time: 0.2123, Total iter time: 2.4356 +thomas 04/09 22:38:57 ===> Epoch[56](8360/151): Loss 0.4128 LR: 9.371e-02 Score 86.789 Data time: 0.2183, Total iter time: 2.4883 +thomas 04/09 22:40:39 ===> Epoch[56](8400/151): Loss 0.4064 LR: 9.368e-02 Score 87.137 Data time: 0.2448, Total iter time: 2.4974 +thomas 04/09 22:42:18 ===> Epoch[56](8440/151): Loss 0.3967 LR: 9.365e-02 Score 86.956 Data time: 0.2166, Total iter time: 2.4038 +thomas 04/09 22:43:58 ===> Epoch[57](8480/151): Loss 0.4007 LR: 9.362e-02 Score 87.573 Data time: 0.2198, Total iter time: 2.4506 +thomas 04/09 22:45:41 ===> Epoch[57](8520/151): Loss 0.4049 LR: 9.359e-02 Score 87.042 Data time: 0.2346, Total iter time: 2.5160 +thomas 04/09 22:47:21 ===> Epoch[57](8560/151): Loss 0.3694 LR: 9.356e-02 Score 88.005 Data time: 0.2375, Total iter time: 2.4544 +thomas 04/09 22:49:03 ===> Epoch[57](8600/151): Loss 0.3842 LR: 9.353e-02 Score 87.363 
Data time: 0.2371, Total iter time: 2.4842 +thomas 04/09 22:50:42 ===> Epoch[58](8640/151): Loss 0.3887 LR: 9.350e-02 Score 87.654 Data time: 0.2193, Total iter time: 2.4390 +thomas 04/09 22:52:25 ===> Epoch[58](8680/151): Loss 0.3979 LR: 9.347e-02 Score 87.339 Data time: 0.2172, Total iter time: 2.5072 +thomas 04/09 22:54:06 ===> Epoch[58](8720/151): Loss 0.3836 LR: 9.344e-02 Score 87.805 Data time: 0.2335, Total iter time: 2.4666 +thomas 04/09 22:55:50 ===> Epoch[59](8760/151): Loss 0.4046 LR: 9.341e-02 Score 86.910 Data time: 0.2185, Total iter time: 2.5396 +thomas 04/09 22:57:32 ===> Epoch[59](8800/151): Loss 0.4330 LR: 9.338e-02 Score 86.234 Data time: 0.2318, Total iter time: 2.4857 +thomas 04/09 22:59:11 ===> Epoch[59](8840/151): Loss 0.4223 LR: 9.334e-02 Score 86.608 Data time: 0.2198, Total iter time: 2.4372 +thomas 04/09 23:00:49 ===> Epoch[59](8880/151): Loss 0.3911 LR: 9.331e-02 Score 87.244 Data time: 0.2227, Total iter time: 2.3856 +thomas 04/09 23:02:28 ===> Epoch[60](8920/151): Loss 0.4132 LR: 9.328e-02 Score 86.927 Data time: 0.2278, Total iter time: 2.4059 +thomas 04/09 23:04:10 ===> Epoch[60](8960/151): Loss 0.4047 LR: 9.325e-02 Score 87.435 Data time: 0.2224, Total iter time: 2.5096 +thomas 04/09 23:05:50 ===> Epoch[60](9000/151): Loss 0.3891 LR: 9.322e-02 Score 87.605 Data time: 0.2289, Total iter time: 2.4385 +thomas 04/09 23:05:52 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 23:05:52 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 23:06:36 101/312: Data time: 0.0038, Iter time: 0.2690 Loss 0.176 (AVG: 0.635) Score 94.839 (AVG: 81.480) mIOU 50.975 mAP 64.776 mAcc 62.451 +IOU: 74.395 96.436 54.947 63.257 83.071 67.032 63.150 26.228 31.615 42.721 1.318 59.903 48.076 40.873 39.185 54.019 50.490 27.094 67.592 28.096 +mAP: 73.966 97.443 53.089 65.529 88.620 78.099 66.260 48.629 47.103 57.995 13.855 48.350 50.349 70.146 63.734 87.380 88.371 75.633 82.191 
38.790 +mAcc: 90.697 98.583 81.236 75.612 87.044 92.268 77.197 31.454 37.143 46.696 1.368 71.114 69.435 67.600 75.449 64.807 50.524 27.396 67.874 35.526 + +thomas 04/09 23:07:17 201/312: Data time: 0.0359, Iter time: 0.2637 Loss 0.344 (AVG: 0.625) Score 88.194 (AVG: 81.449) mIOU 49.459 mAP 62.626 mAcc 60.742 +IOU: 74.818 96.851 50.047 55.372 82.040 63.109 65.814 24.911 35.316 49.148 0.937 54.060 50.009 42.949 29.733 47.786 46.426 26.643 65.922 27.296 +mAP: 75.020 97.493 52.907 59.560 87.912 75.314 67.325 47.411 44.313 50.881 12.081 51.853 51.497 70.916 51.152 80.178 88.063 72.971 77.276 38.402 +mAcc: 90.819 98.826 78.740 65.168 86.367 89.724 79.459 30.907 41.922 52.438 0.990 66.855 70.423 69.668 62.077 56.396 46.448 27.163 66.493 33.953 + +thomas 04/09 23:07:57 301/312: Data time: 0.0028, Iter time: 0.3530 Loss 0.607 (AVG: 0.654) Score 80.741 (AVG: 80.413) mIOU 48.465 mAP 62.551 mAcc 60.230 +IOU: 73.900 96.768 43.351 55.194 82.448 65.690 65.876 24.117 32.721 47.239 0.660 48.361 48.771 38.661 28.487 48.918 50.693 25.662 66.367 25.408 +mAP: 74.238 97.231 53.453 58.411 87.239 77.844 66.915 48.643 44.422 55.436 12.972 52.562 50.669 65.871 51.751 79.778 87.446 70.125 76.374 39.640 +mAcc: 90.365 98.797 75.838 65.193 86.588 91.298 79.035 29.762 38.415 51.132 0.693 66.002 70.752 68.060 60.887 56.879 50.714 26.194 66.941 31.065 + +thomas 04/09 23:08:02 312/312: Data time: 0.0024, Iter time: 0.1712 Loss 0.080 (AVG: 0.658) Score 98.643 (AVG: 80.250) mIOU 48.353 mAP 62.652 mAcc 60.202 +IOU: 73.639 96.792 42.076 55.481 82.702 65.532 66.048 24.953 31.942 46.759 0.645 47.207 47.842 38.532 28.613 49.764 51.323 25.461 67.017 24.742 +mAP: 74.019 97.277 52.901 59.726 87.349 77.965 66.912 49.257 44.189 54.521 13.442 52.562 50.152 65.541 51.758 80.519 87.608 70.415 76.994 39.923 +mAcc: 90.395 98.805 75.589 66.167 86.824 91.337 79.106 30.782 37.299 50.590 0.676 66.002 69.419 67.553 61.061 57.696 51.343 25.991 67.581 29.832 + +thomas 04/09 23:08:02 Finished test. 
Elapsed time: 130.1471 +thomas 04/09 23:08:03 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/09 23:08:04 Current best mIoU: 48.353 at iter 9000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 23:09:44 ===> Epoch[60](9040/151): Loss 0.3847 LR: 9.319e-02 Score 87.490 Data time: 0.2257, Total iter time: 2.4472 +thomas 04/09 23:11:27 ===> Epoch[61](9080/151): Loss 0.3830 LR: 9.316e-02 Score 87.651 Data time: 0.2296, Total iter time: 2.5242 +thomas 04/09 23:13:05 ===> Epoch[61](9120/151): Loss 0.3979 LR: 9.313e-02 Score 87.064 Data time: 0.2398, Total iter time: 2.3925 +thomas 04/09 23:14:46 ===> Epoch[61](9160/151): Loss 0.3935 LR: 9.310e-02 Score 87.539 Data time: 0.2355, Total iter time: 2.4773 +thomas 04/09 23:16:30 ===> Epoch[61](9200/151): Loss 0.3728 LR: 9.307e-02 Score 88.031 Data time: 0.2543, Total iter time: 2.5281 +thomas 04/09 23:18:12 ===> Epoch[62](9240/151): Loss 0.3696 LR: 9.304e-02 Score 87.996 Data time: 0.2317, Total iter time: 2.5116 +thomas 04/09 23:19:50 ===> Epoch[62](9280/151): Loss 0.3839 LR: 9.301e-02 Score 87.780 Data time: 0.2139, Total iter time: 2.4023 +thomas 04/09 23:21:27 ===> Epoch[62](9320/151): Loss 0.3733 LR: 9.298e-02 Score 88.181 Data time: 0.2170, Total iter time: 2.3429 +thomas 04/09 23:23:11 ===> Epoch[62](9360/151): Loss 0.3513 LR: 9.295e-02 Score 88.558 Data time: 0.2412, Total iter time: 2.5674 +thomas 04/09 23:24:52 ===> Epoch[63](9400/151): Loss 0.3639 LR: 9.292e-02 Score 88.314 Data time: 0.2493, Total iter time: 2.4629 +thomas 04/09 23:26:33 ===> Epoch[63](9440/151): Loss 0.3856 LR: 9.289e-02 Score 87.922 Data time: 0.2290, Total iter time: 2.4699 +thomas 04/09 23:28:17 ===> 
Epoch[63](9480/151): Loss 0.3917 LR: 9.286e-02 Score 87.449 Data time: 0.2349, Total iter time: 2.5334 +thomas 04/09 23:30:00 ===> Epoch[64](9520/151): Loss 0.3645 LR: 9.283e-02 Score 88.287 Data time: 0.2361, Total iter time: 2.5234 +thomas 04/09 23:31:41 ===> Epoch[64](9560/151): Loss 0.3905 LR: 9.280e-02 Score 87.543 Data time: 0.2225, Total iter time: 2.4711 +thomas 04/09 23:33:19 ===> Epoch[64](9600/151): Loss 0.3675 LR: 9.277e-02 Score 87.898 Data time: 0.2171, Total iter time: 2.3968 +thomas 04/09 23:34:56 ===> Epoch[64](9640/151): Loss 0.3703 LR: 9.274e-02 Score 88.076 Data time: 0.2041, Total iter time: 2.3564 +thomas 04/09 23:36:34 ===> Epoch[65](9680/151): Loss 0.3599 LR: 9.271e-02 Score 88.279 Data time: 0.2373, Total iter time: 2.3945 +thomas 04/09 23:38:18 ===> Epoch[65](9720/151): Loss 0.4048 LR: 9.268e-02 Score 87.434 Data time: 0.2371, Total iter time: 2.5518 +thomas 04/09 23:40:01 ===> Epoch[65](9760/151): Loss 0.3868 LR: 9.265e-02 Score 87.587 Data time: 0.2336, Total iter time: 2.5236 +thomas 04/09 23:41:43 ===> Epoch[65](9800/151): Loss 0.3935 LR: 9.262e-02 Score 87.484 Data time: 0.2404, Total iter time: 2.4920 +thomas 04/09 23:43:21 ===> Epoch[66](9840/151): Loss 0.3690 LR: 9.259e-02 Score 87.997 Data time: 0.2202, Total iter time: 2.3910 +thomas 04/09 23:45:05 ===> Epoch[66](9880/151): Loss 0.3899 LR: 9.256e-02 Score 87.764 Data time: 0.2307, Total iter time: 2.5468 +thomas 04/09 23:46:41 ===> Epoch[66](9920/151): Loss 0.3563 LR: 9.253e-02 Score 88.736 Data time: 0.2262, Total iter time: 2.3363 +thomas 04/09 23:48:22 ===> Epoch[66](9960/151): Loss 0.3632 LR: 9.250e-02 Score 88.482 Data time: 0.2189, Total iter time: 2.4682 +thomas 04/09 23:50:02 ===> Epoch[67](10000/151): Loss 0.3850 LR: 9.247e-02 Score 87.873 Data time: 0.2140, Total iter time: 2.4618 +thomas 04/09 23:50:04 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/09 23:50:04 ===> Start testing 
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/09 23:50:51 101/312: Data time: 0.0023, Iter time: 0.3511 Loss 0.782 (AVG: 0.656) Score 75.547 (AVG: 81.623) mIOU 49.146 mAP 62.368 mAcc 58.349 +IOU: 73.997 96.945 43.752 64.611 81.255 68.030 65.000 21.325 20.625 62.290 0.000 54.992 44.018 43.266 33.189 30.387 73.536 12.994 63.021 29.694 +mAP: 72.701 97.154 56.437 64.387 85.903 81.299 62.459 42.590 38.832 68.192 6.238 52.242 59.331 62.926 46.329 75.360 85.735 64.608 86.114 38.514 +mAcc: 91.321 98.859 86.333 68.474 91.298 81.678 83.029 24.297 21.764 86.250 0.000 60.226 55.861 53.540 39.147 34.640 74.430 13.047 66.873 35.915 + +thomas 04/09 23:51:30 201/312: Data time: 0.0034, Iter time: 0.2907 Loss 0.507 (AVG: 0.684) Score 83.905 (AVG: 80.726) mIOU 47.417 mAP 61.603 mAcc 56.457 +IOU: 72.953 96.943 42.003 53.312 82.552 65.109 63.430 20.405 19.827 61.341 0.000 54.585 42.817 39.382 41.608 15.341 72.085 18.011 58.681 27.947 +mAP: 73.265 97.304 56.255 58.401 86.671 80.656 67.171 41.426 43.001 63.168 5.631 53.272 55.535 59.436 53.904 68.750 82.157 68.330 80.453 37.268 +mAcc: 91.170 98.863 82.344 56.746 92.422 83.547 81.524 24.061 20.513 79.221 0.000 61.548 53.008 50.859 50.681 16.037 73.056 18.163 60.745 34.635 + +thomas 04/09 23:52:10 301/312: Data time: 0.0023, Iter time: 0.2015 Loss 0.452 (AVG: 0.709) Score 85.349 (AVG: 80.094) mIOU 46.723 mAP 60.722 mAcc 56.013 +IOU: 72.456 96.759 41.789 50.484 83.161 66.705 64.980 19.960 20.541 60.579 0.000 50.072 43.027 35.199 37.598 13.962 75.447 17.809 57.447 26.486 +mAP: 72.888 97.184 54.245 54.638 86.303 77.899 68.272 40.511 44.299 61.946 5.668 51.034 54.359 56.641 51.479 65.455 86.434 68.052 80.825 36.307 +mAcc: 90.693 98.828 81.959 53.690 92.722 86.203 81.686 24.010 21.306 79.134 0.000 58.145 55.962 48.332 46.290 14.383 77.443 17.921 59.079 32.479 + +thomas 04/09 23:52:14 312/312: Data time: 0.0039, Iter time: 0.2643 Loss 0.446 (AVG: 0.703) Score 89.709 (AVG: 80.215) mIOU 47.017 mAP 60.991 mAcc 56.252 +IOU: 72.599 
96.785 41.865 51.154 83.131 67.034 64.959 20.358 21.932 60.505 0.000 49.802 42.903 35.051 38.621 13.662 76.364 18.012 58.844 26.746 +mAP: 72.851 97.088 54.264 55.555 86.526 77.876 68.635 41.319 44.488 61.643 5.741 50.622 55.000 56.325 52.458 65.532 86.685 68.900 81.473 36.847 +mAcc: 90.757 98.836 81.494 54.412 92.745 86.024 81.894 24.425 22.734 78.875 0.000 58.078 55.327 48.134 47.507 14.071 78.347 18.124 60.444 32.818 + +thomas 04/09 23:52:14 Finished test. Elapsed time: 130.1443 +thomas 04/09 23:52:14 Current best mIoU: 48.353 at iter 9000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/09 23:53:55 ===> Epoch[67](10040/151): Loss 0.3777 LR: 9.244e-02 Score 87.647 Data time: 0.2266, Total iter time: 2.4621 +thomas 04/09 23:55:34 ===> Epoch[67](10080/151): Loss 0.3847 LR: 9.241e-02 Score 87.633 Data time: 0.1995, Total iter time: 2.4152 +thomas 04/09 23:57:17 ===> Epoch[68](10120/151): Loss 0.3659 LR: 9.238e-02 Score 88.170 Data time: 0.2281, Total iter time: 2.5185 +thomas 04/09 23:58:56 ===> Epoch[68](10160/151): Loss 0.3778 LR: 9.235e-02 Score 88.128 Data time: 0.2422, Total iter time: 2.4174 +thomas 04/10 00:00:37 ===> Epoch[68](10200/151): Loss 0.3895 LR: 9.232e-02 Score 87.578 Data time: 0.2258, Total iter time: 2.4663 +thomas 04/10 00:02:20 ===> Epoch[68](10240/151): Loss 0.3637 LR: 9.229e-02 Score 88.345 Data time: 0.2394, Total iter time: 2.5230 +thomas 04/10 00:03:59 ===> Epoch[69](10280/151): Loss 0.3907 LR: 9.226e-02 Score 87.761 Data time: 0.2165, Total iter time: 2.4300 +thomas 04/10 00:05:43 ===> Epoch[69](10320/151): Loss 0.3709 LR: 9.223e-02 Score 88.050 Data time: 0.2521, Total iter time: 2.5312 +thomas 04/10 00:07:25 ===> Epoch[69](10360/151): Loss 0.3758 LR: 9.220e-02 
Score 87.711 Data time: 0.2396, Total iter time: 2.5131 +thomas 04/10 00:09:11 ===> Epoch[69](10400/151): Loss 0.3830 LR: 9.217e-02 Score 87.725 Data time: 0.2536, Total iter time: 2.5910 +thomas 04/10 00:10:54 ===> Epoch[70](10440/151): Loss 0.3416 LR: 9.213e-02 Score 89.125 Data time: 0.2617, Total iter time: 2.5180 +thomas 04/10 00:12:41 ===> Epoch[70](10480/151): Loss 0.4030 LR: 9.210e-02 Score 87.205 Data time: 0.2390, Total iter time: 2.5968 +thomas 04/10 00:14:20 ===> Epoch[70](10520/151): Loss 0.3632 LR: 9.207e-02 Score 88.461 Data time: 0.2288, Total iter time: 2.4369 +thomas 04/10 00:16:01 ===> Epoch[70](10560/151): Loss 0.3632 LR: 9.204e-02 Score 88.590 Data time: 0.2220, Total iter time: 2.4718 +thomas 04/10 00:17:42 ===> Epoch[71](10600/151): Loss 0.3614 LR: 9.201e-02 Score 88.164 Data time: 0.2312, Total iter time: 2.4578 +thomas 04/10 00:19:20 ===> Epoch[71](10640/151): Loss 0.3604 LR: 9.198e-02 Score 88.631 Data time: 0.2478, Total iter time: 2.4059 +thomas 04/10 00:21:00 ===> Epoch[71](10680/151): Loss 0.3487 LR: 9.195e-02 Score 88.977 Data time: 0.2198, Total iter time: 2.4368 +thomas 04/10 00:22:41 ===> Epoch[71](10720/151): Loss 0.3422 LR: 9.192e-02 Score 88.896 Data time: 0.2196, Total iter time: 2.4712 +thomas 04/10 00:24:19 ===> Epoch[72](10760/151): Loss 0.3749 LR: 9.189e-02 Score 87.963 Data time: 0.2298, Total iter time: 2.4042 +thomas 04/10 00:25:59 ===> Epoch[72](10800/151): Loss 0.3553 LR: 9.186e-02 Score 88.487 Data time: 0.2285, Total iter time: 2.4269 +thomas 04/10 00:27:41 ===> Epoch[72](10840/151): Loss 0.3714 LR: 9.183e-02 Score 88.184 Data time: 0.2374, Total iter time: 2.5075 +thomas 04/10 00:29:23 ===> Epoch[73](10880/151): Loss 0.3547 LR: 9.180e-02 Score 88.461 Data time: 0.2270, Total iter time: 2.4837 +thomas 04/10 00:31:05 ===> Epoch[73](10920/151): Loss 0.3734 LR: 9.177e-02 Score 88.210 Data time: 0.2329, Total iter time: 2.4878 +thomas 04/10 00:32:46 ===> Epoch[73](10960/151): Loss 0.3515 LR: 9.174e-02 Score 88.890 Data 
time: 0.2330, Total iter time: 2.4850 +thomas 04/10 00:34:28 ===> Epoch[73](11000/151): Loss 0.3473 LR: 9.171e-02 Score 88.774 Data time: 0.2299, Total iter time: 2.4808 +thomas 04/10 00:34:29 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 00:34:30 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 00:35:13 101/312: Data time: 0.0048, Iter time: 0.1975 Loss 1.278 (AVG: 0.700) Score 56.652 (AVG: 79.614) mIOU 50.544 mAP 63.957 mAcc 61.794 +IOU: 71.697 96.528 45.063 70.287 74.063 48.170 62.055 28.451 13.195 68.558 0.000 46.281 52.703 44.254 30.465 35.599 70.965 50.136 68.123 34.291 +mAP: 72.522 97.484 56.302 69.971 85.240 75.807 65.180 48.905 44.290 58.201 8.106 52.355 57.092 64.678 46.303 81.011 88.751 78.700 86.657 41.592 +mAcc: 90.472 98.472 54.063 81.240 76.869 95.162 69.417 40.412 13.418 79.305 0.000 71.990 80.822 71.015 32.949 37.929 71.246 53.628 68.683 48.777 + +thomas 04/10 00:35:53 201/312: Data time: 0.0027, Iter time: 0.2578 Loss 0.377 (AVG: 0.689) Score 87.122 (AVG: 79.770) mIOU 50.736 mAP 64.306 mAcc 62.338 +IOU: 71.220 96.641 47.424 63.160 74.807 52.000 61.903 28.774 16.074 63.166 0.000 53.391 52.136 45.269 36.996 38.569 68.240 42.686 67.803 34.456 +mAP: 72.396 97.304 59.212 65.492 84.903 80.294 61.673 50.136 43.365 60.822 10.050 55.383 58.007 60.696 57.117 81.349 87.108 78.834 80.151 41.830 +mAcc: 90.697 98.327 55.820 78.322 77.357 95.954 67.756 41.031 16.508 78.679 0.000 82.059 85.502 68.636 40.433 40.740 68.461 44.399 68.287 47.796 + +thomas 04/10 00:36:36 301/312: Data time: 0.0027, Iter time: 0.5067 Loss 0.542 (AVG: 0.660) Score 83.092 (AVG: 80.525) mIOU 50.694 mAP 64.422 mAcc 62.091 +IOU: 72.082 96.696 48.649 65.032 77.004 59.814 62.280 29.071 14.581 62.429 0.006 51.660 48.001 39.081 37.426 42.291 67.151 39.932 64.975 35.709 +mAP: 72.478 97.379 59.121 65.540 85.302 79.731 63.914 49.118 40.088 64.244 10.968 54.983 56.923 60.141 58.564 82.405 86.941 76.425 
81.022 43.160 +mAcc: 91.156 98.310 56.930 79.752 79.481 96.391 68.227 41.114 15.077 75.754 0.006 82.611 84.302 59.267 41.313 47.168 67.346 41.578 65.292 50.735 + +thomas 04/10 00:36:40 312/312: Data time: 0.0024, Iter time: 0.1726 Loss 0.130 (AVG: 0.661) Score 96.330 (AVG: 80.458) mIOU 50.492 mAP 64.172 mAcc 61.993 +IOU: 72.141 96.755 48.690 64.838 77.146 59.437 61.648 28.803 14.554 62.389 0.006 50.282 47.254 39.458 36.624 42.291 67.599 38.870 65.462 35.586 +mAP: 72.605 97.315 58.690 64.784 85.321 79.686 63.410 48.860 39.976 64.244 10.968 53.326 57.324 59.591 57.722 82.405 87.090 75.624 81.465 43.037 +mAcc: 91.148 98.336 57.020 79.603 79.624 96.402 67.355 40.703 15.036 75.754 0.006 82.224 84.547 59.839 40.339 47.168 67.793 40.419 65.774 50.767 + +thomas 04/10 00:36:40 Finished test. Elapsed time: 130.3862 +thomas 04/10 00:36:41 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/10 00:36:42 Current best mIoU: 50.492 at iter 11000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 00:38:22 ===> Epoch[74](11040/151): Loss 0.3416 LR: 9.168e-02 Score 89.000 Data time: 0.2123, Total iter time: 2.4513 +thomas 04/10 00:40:01 ===> Epoch[74](11080/151): Loss 0.3544 LR: 9.165e-02 Score 88.659 Data time: 0.2277, Total iter time: 2.4190 +thomas 04/10 00:41:44 ===> Epoch[74](11120/151): Loss 0.3655 LR: 9.162e-02 Score 88.467 Data time: 0.2388, Total iter time: 2.5160 +thomas 04/10 00:43:22 ===> Epoch[74](11160/151): Loss 0.3466 LR: 9.159e-02 Score 88.645 Data time: 0.2349, Total iter time: 2.4018 +thomas 04/10 00:45:01 ===> Epoch[75](11200/151): Loss 0.3675 LR: 9.156e-02 Score 88.217 Data time: 0.2258, Total iter time: 2.4231 +thomas 04/10 00:46:42 ===> Epoch[75](11240/151): Loss 0.3529 LR: 9.153e-02 Score 88.884 Data time: 0.2269, Total iter time: 2.4616 +thomas 04/10 00:48:21 ===> Epoch[75](11280/151): Loss 0.3430 LR: 9.150e-02 Score 88.929 Data time: 0.2103, Total iter time: 2.4431 +thomas 04/10 00:50:01 ===> Epoch[75](11320/151): Loss 0.3528 LR: 9.147e-02 Score 88.900 Data time: 0.2250, Total iter time: 2.4456 +thomas 04/10 00:51:41 ===> Epoch[76](11360/151): Loss 0.3702 LR: 9.144e-02 Score 87.986 Data time: 0.2197, Total iter time: 2.4385 +thomas 04/10 00:53:20 ===> Epoch[76](11400/151): Loss 0.3588 LR: 9.141e-02 Score 88.327 Data time: 0.2377, Total iter time: 2.4239 +thomas 04/10 00:55:03 ===> Epoch[76](11440/151): Loss 0.3486 LR: 9.138e-02 Score 89.040 Data time: 0.2309, Total iter time: 2.5131 +thomas 04/10 00:56:45 ===> Epoch[77](11480/151): Loss 0.3510 LR: 9.135e-02 Score 88.861 Data time: 0.2366, Total iter time: 2.4942 +thomas 04/10 00:58:24 ===> Epoch[77](11520/151): Loss 0.3627 LR: 9.132e-02 Score 88.458 Data time: 0.2252, Total iter time: 2.4035 +thomas 04/10 01:00:01 ===> Epoch[77](11560/151): Loss 0.3488 LR: 9.129e-02 Score 88.793 Data time: 0.2134, Total iter time: 2.3689 +thomas 04/10 01:01:46 ===> Epoch[77](11600/151): Loss 0.3325 LR: 9.126e-02 
Score 89.291 Data time: 0.2208, Total iter time: 2.5708 +thomas 04/10 01:03:25 ===> Epoch[78](11640/151): Loss 0.3552 LR: 9.123e-02 Score 88.454 Data time: 0.2294, Total iter time: 2.4349 +thomas 04/10 01:05:03 ===> Epoch[78](11680/151): Loss 0.3513 LR: 9.120e-02 Score 88.876 Data time: 0.2398, Total iter time: 2.3853 +thomas 04/10 01:06:40 ===> Epoch[78](11720/151): Loss 0.3379 LR: 9.117e-02 Score 89.213 Data time: 0.2244, Total iter time: 2.3689 +thomas 04/10 01:08:20 ===> Epoch[78](11760/151): Loss 0.3453 LR: 9.114e-02 Score 88.925 Data time: 0.2094, Total iter time: 2.4415 +thomas 04/10 01:10:03 ===> Epoch[79](11800/151): Loss 0.3539 LR: 9.110e-02 Score 88.705 Data time: 0.2301, Total iter time: 2.5220 +thomas 04/10 01:11:42 ===> Epoch[79](11840/151): Loss 0.3566 LR: 9.107e-02 Score 88.744 Data time: 0.2345, Total iter time: 2.4305 +thomas 04/10 01:13:20 ===> Epoch[79](11880/151): Loss 0.3729 LR: 9.104e-02 Score 87.993 Data time: 0.2285, Total iter time: 2.3895 +thomas 04/10 01:15:00 ===> Epoch[79](11920/151): Loss 0.3569 LR: 9.101e-02 Score 88.682 Data time: 0.2214, Total iter time: 2.4355 +thomas 04/10 01:16:41 ===> Epoch[80](11960/151): Loss 0.3517 LR: 9.098e-02 Score 88.572 Data time: 0.2228, Total iter time: 2.4929 +thomas 04/10 01:18:22 ===> Epoch[80](12000/151): Loss 0.3348 LR: 9.095e-02 Score 89.239 Data time: 0.2374, Total iter time: 2.4573 +thomas 04/10 01:18:24 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 01:18:24 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 01:19:07 101/312: Data time: 0.0026, Iter time: 0.2601 Loss 0.223 (AVG: 0.621) Score 95.164 (AVG: 82.134) mIOU 49.610 mAP 63.863 mAcc 59.365 +IOU: 73.879 96.795 46.027 77.191 81.131 66.861 68.813 29.461 37.019 49.122 0.000 44.085 59.982 34.625 29.036 31.143 76.112 11.116 45.817 33.989 +mAP: 72.996 97.754 56.651 90.458 86.782 77.397 71.985 48.090 46.492 66.909 14.959 52.513 63.454 44.202 42.567 
74.242 93.877 66.501 64.058 45.373 +mAcc: 89.643 98.949 75.517 91.851 87.666 93.792 72.235 39.538 43.057 54.271 0.000 53.429 71.874 47.960 36.224 33.155 88.684 11.166 46.399 51.888 + +thomas 04/10 01:19:47 201/312: Data time: 0.0256, Iter time: 0.1965 Loss 0.241 (AVG: 0.661) Score 91.782 (AVG: 81.099) mIOU 50.879 mAP 63.961 mAcc 61.269 +IOU: 72.344 96.576 50.690 67.827 81.305 68.456 68.425 30.794 33.371 48.341 0.000 55.171 52.472 31.819 36.885 37.240 73.650 15.867 62.645 33.702 +mAP: 72.870 97.247 54.205 73.851 86.039 81.034 72.368 47.765 41.858 58.345 14.125 53.178 62.923 55.067 52.697 79.553 92.216 65.403 73.500 44.981 +mAcc: 88.360 98.927 77.344 81.559 89.076 95.000 72.295 40.843 38.597 54.317 0.000 66.124 62.276 53.713 47.631 40.172 89.592 15.966 64.115 49.468 + +thomas 04/10 01:20:30 301/312: Data time: 0.0023, Iter time: 0.3333 Loss 0.610 (AVG: 0.653) Score 80.190 (AVG: 81.361) mIOU 50.522 mAP 63.280 mAcc 60.929 +IOU: 73.129 96.816 48.842 65.405 81.550 67.429 64.713 29.924 36.414 57.500 0.003 52.001 50.175 35.402 32.337 35.389 70.605 15.096 64.617 33.101 +mAP: 73.575 97.403 53.269 69.588 85.840 82.676 69.233 46.994 43.374 60.510 14.479 50.948 61.640 59.111 45.473 80.917 89.158 64.061 73.873 43.486 +mAcc: 88.630 99.014 75.400 75.703 89.860 95.241 69.636 38.850 43.531 64.584 0.003 63.844 61.378 55.993 41.843 38.301 84.258 15.234 66.374 50.901 + +thomas 04/10 01:20:34 312/312: Data time: 0.0021, Iter time: 0.2437 Loss 0.661 (AVG: 0.657) Score 80.706 (AVG: 81.285) mIOU 50.323 mAP 63.185 mAcc 60.678 +IOU: 73.097 96.742 48.488 66.085 81.382 68.238 64.840 29.954 35.976 55.753 0.003 51.378 50.102 35.636 32.251 34.563 70.816 15.306 62.946 32.901 +mAP: 73.302 97.350 52.261 69.895 85.627 83.165 68.909 46.902 43.393 60.947 14.489 50.928 61.497 58.434 45.473 80.607 88.796 64.443 73.855 43.418 +mAcc: 88.649 98.959 75.301 76.354 89.814 95.592 69.850 38.895 42.926 62.368 0.003 63.266 61.238 55.858 41.843 37.336 84.304 15.444 64.598 50.970 + +thomas 04/10 01:20:34 Finished 
test. Elapsed time: 129.9142 +thomas 04/10 01:20:34 Current best mIoU: 50.492 at iter 11000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 01:22:12 ===> Epoch[80](12040/151): Loss 0.3403 LR: 9.092e-02 Score 89.125 Data time: 0.2228, Total iter time: 2.4171 +thomas 04/10 01:23:53 ===> Epoch[80](12080/151): Loss 0.3441 LR: 9.089e-02 Score 88.995 Data time: 0.2172, Total iter time: 2.4659 +thomas 04/10 01:25:36 ===> Epoch[81](12120/151): Loss 0.3332 LR: 9.086e-02 Score 89.456 Data time: 0.2412, Total iter time: 2.5035 +thomas 04/10 01:27:16 ===> Epoch[81](12160/151): Loss 0.3561 LR: 9.083e-02 Score 88.454 Data time: 0.2316, Total iter time: 2.4611 +thomas 04/10 01:29:03 ===> Epoch[81](12200/151): Loss 0.3266 LR: 9.080e-02 Score 89.608 Data time: 0.2441, Total iter time: 2.6093 +thomas 04/10 01:30:49 ===> Epoch[82](12240/151): Loss 0.3278 LR: 9.077e-02 Score 89.509 Data time: 0.2408, Total iter time: 2.5879 +thomas 04/10 01:32:28 ===> Epoch[82](12280/151): Loss 0.3362 LR: 9.074e-02 Score 89.411 Data time: 0.2335, Total iter time: 2.4300 +thomas 04/10 01:34:01 ===> Epoch[82](12320/151): Loss 0.3440 LR: 9.071e-02 Score 88.728 Data time: 0.2221, Total iter time: 2.2580 +thomas 04/10 01:35:44 ===> Epoch[82](12360/151): Loss 0.3381 LR: 9.068e-02 Score 89.220 Data time: 0.2123, Total iter time: 2.5142 +thomas 04/10 01:37:26 ===> Epoch[83](12400/151): Loss 0.3607 LR: 9.065e-02 Score 88.354 Data time: 0.2297, Total iter time: 2.5033 +thomas 04/10 01:39:04 ===> Epoch[83](12440/151): Loss 0.3202 LR: 9.062e-02 Score 89.618 Data time: 0.2298, Total iter time: 2.4002 +thomas 04/10 01:40:47 ===> Epoch[83](12480/151): Loss 0.3730 LR: 9.059e-02 Score 88.092 Data time: 0.2420, Total iter time: 2.5213 
+thomas 04/10 01:42:28 ===> Epoch[83](12520/151): Loss 0.3385 LR: 9.056e-02 Score 89.415 Data time: 0.2294, Total iter time: 2.4588 +thomas 04/10 01:44:08 ===> Epoch[84](12560/151): Loss 0.3338 LR: 9.053e-02 Score 89.498 Data time: 0.2456, Total iter time: 2.4412 +thomas 04/10 01:45:52 ===> Epoch[84](12600/151): Loss 0.3370 LR: 9.050e-02 Score 89.133 Data time: 0.2513, Total iter time: 2.5545 +thomas 04/10 01:47:32 ===> Epoch[84](12640/151): Loss 0.3436 LR: 9.047e-02 Score 88.809 Data time: 0.2242, Total iter time: 2.4309 +thomas 04/10 01:49:13 ===> Epoch[84](12680/151): Loss 0.3475 LR: 9.044e-02 Score 88.928 Data time: 0.2270, Total iter time: 2.4814 +thomas 04/10 01:50:53 ===> Epoch[85](12720/151): Loss 0.3378 LR: 9.041e-02 Score 89.149 Data time: 0.2241, Total iter time: 2.4328 +thomas 04/10 01:52:33 ===> Epoch[85](12760/151): Loss 0.3417 LR: 9.038e-02 Score 88.975 Data time: 0.2302, Total iter time: 2.4599 +thomas 04/10 01:54:17 ===> Epoch[85](12800/151): Loss 0.3237 LR: 9.035e-02 Score 89.501 Data time: 0.2430, Total iter time: 2.5232 +thomas 04/10 01:55:55 ===> Epoch[86](12840/151): Loss 0.3724 LR: 9.032e-02 Score 88.216 Data time: 0.2278, Total iter time: 2.3973 +thomas 04/10 01:57:35 ===> Epoch[86](12880/151): Loss 0.3539 LR: 9.029e-02 Score 88.706 Data time: 0.2253, Total iter time: 2.4440 +thomas 04/10 01:59:16 ===> Epoch[86](12920/151): Loss 0.3341 LR: 9.026e-02 Score 89.180 Data time: 0.2296, Total iter time: 2.4897 +thomas 04/10 02:00:56 ===> Epoch[86](12960/151): Loss 0.3149 LR: 9.023e-02 Score 89.879 Data time: 0.2114, Total iter time: 2.4350 +thomas 04/10 02:02:35 ===> Epoch[87](13000/151): Loss 0.3287 LR: 9.020e-02 Score 89.537 Data time: 0.2173, Total iter time: 2.4303 +thomas 04/10 02:02:37 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 02:02:37 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 02:03:22 101/312: Data time: 0.0022, Iter time: 0.4130 Loss 
0.279 (AVG: 0.641) Score 91.781 (AVG: 81.540) mIOU 50.131 mAP 63.037 mAcc 61.981 +IOU: 71.254 96.603 51.436 50.685 82.965 58.910 64.387 29.114 30.302 81.488 0.000 51.177 45.651 38.269 34.223 32.333 45.681 37.720 67.515 32.899 +mAP: 69.596 97.446 52.164 64.898 88.588 82.663 68.921 46.376 40.004 58.764 16.947 54.035 52.106 62.450 51.426 72.895 87.204 72.733 76.836 44.689 +mAcc: 86.821 98.601 64.988 73.169 86.735 95.236 90.080 45.929 32.772 92.743 0.000 58.085 62.424 82.277 46.177 33.571 45.681 39.771 67.928 36.642 + +thomas 04/10 02:04:02 201/312: Data time: 0.0031, Iter time: 0.2584 Loss 0.291 (AVG: 0.623) Score 90.736 (AVG: 81.998) mIOU 51.385 mAP 63.664 mAcc 63.130 +IOU: 73.801 96.763 51.344 55.442 84.134 64.954 63.014 29.056 38.529 68.392 0.000 53.075 51.384 37.029 32.484 32.162 48.350 41.132 71.458 35.194 +mAP: 72.569 96.932 52.145 61.425 89.156 81.369 69.046 46.490 42.599 58.600 14.653 57.276 54.317 65.626 54.988 73.115 84.627 72.841 80.200 45.302 +mAcc: 87.237 98.833 63.187 71.534 88.744 95.727 87.052 46.167 41.903 88.798 0.000 62.546 67.714 74.289 48.566 36.470 48.375 44.017 71.821 39.620 + +thomas 04/10 02:04:43 301/312: Data time: 0.0030, Iter time: 0.2309 Loss 0.595 (AVG: 0.640) Score 76.362 (AVG: 81.513) mIOU 51.084 mAP 63.162 mAcc 62.794 +IOU: 73.411 96.809 51.180 58.759 83.643 64.820 63.035 30.037 35.965 66.059 0.000 55.222 48.989 35.582 32.918 36.173 48.844 40.088 66.438 33.709 +mAP: 73.076 97.173 49.568 60.554 88.221 82.898 67.669 46.422 41.658 56.486 14.764 56.204 53.988 64.552 55.166 71.945 87.602 71.269 80.345 43.687 +mAcc: 86.989 98.868 61.837 71.713 88.671 95.218 86.843 48.051 39.499 84.397 0.000 67.561 66.534 71.271 50.689 39.987 48.867 42.848 66.791 39.248 + +thomas 04/10 02:04:47 312/312: Data time: 0.0029, Iter time: 0.2547 Loss 0.806 (AVG: 0.639) Score 77.257 (AVG: 81.551) mIOU 51.297 mAP 63.331 mAcc 63.014 +IOU: 73.592 96.826 52.028 58.975 83.197 64.618 63.254 30.556 35.865 65.497 0.000 55.510 49.160 36.305 35.516 35.793 48.804 40.979 
66.204 33.263 +mAP: 73.343 97.182 49.831 60.979 87.994 82.916 67.553 46.842 41.232 57.255 14.915 56.077 54.101 64.485 55.472 72.223 87.985 72.065 80.666 43.507 +mAcc: 87.078 98.891 62.982 71.800 88.108 95.225 86.945 48.541 39.317 84.422 0.000 67.132 66.789 72.384 53.149 39.405 48.825 43.717 66.551 39.027 + +thomas 04/10 02:04:47 Finished test. Elapsed time: 129.7234 +thomas 04/10 02:04:48 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/10 02:04:48 Current best mIoU: 51.297 at iter 13000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 02:06:26 ===> Epoch[87](13040/151): Loss 0.3385 LR: 9.016e-02 Score 89.143 Data time: 0.2086, Total iter time: 2.3933 +thomas 04/10 02:08:07 ===> Epoch[87](13080/151): Loss 0.3385 LR: 9.013e-02 Score 89.214 Data time: 0.2431, Total iter time: 2.4766 +thomas 04/10 02:09:50 ===> Epoch[87](13120/151): Loss 0.3225 LR: 9.010e-02 Score 89.619 Data time: 0.2355, Total iter time: 2.5074 +thomas 04/10 02:11:30 ===> Epoch[88](13160/151): Loss 0.3563 LR: 9.007e-02 Score 88.876 Data time: 0.2162, Total iter time: 2.4335 +thomas 04/10 02:13:06 ===> Epoch[88](13200/151): Loss 0.3119 LR: 9.004e-02 Score 90.121 Data time: 0.2233, Total iter time: 2.3694 +thomas 04/10 02:14:47 ===> Epoch[88](13240/151): Loss 0.3252 LR: 9.001e-02 Score 89.538 Data time: 0.2372, Total iter time: 2.4597 +thomas 04/10 02:16:26 ===> Epoch[88](13280/151): Loss 0.3311 LR: 8.998e-02 Score 89.414 Data time: 0.2224, Total iter time: 2.4166 +thomas 04/10 02:18:07 ===> Epoch[89](13320/151): Loss 0.3271 LR: 8.995e-02 Score 89.387 Data time: 0.2414, Total iter time: 2.4844 +thomas 04/10 02:19:54 ===> Epoch[89](13360/151): Loss 0.3331 LR: 
8.992e-02 Score 89.347 Data time: 0.2439, Total iter time: 2.5991 +thomas 04/10 02:21:38 ===> Epoch[89](13400/151): Loss 0.3244 LR: 8.989e-02 Score 89.419 Data time: 0.2321, Total iter time: 2.5496 +thomas 04/10 02:23:20 ===> Epoch[90](13440/151): Loss 0.3099 LR: 8.986e-02 Score 90.126 Data time: 0.2707, Total iter time: 2.5064 +thomas 04/10 02:25:07 ===> Epoch[90](13480/151): Loss 0.3340 LR: 8.983e-02 Score 88.922 Data time: 0.2632, Total iter time: 2.6154 +thomas 04/10 02:26:47 ===> Epoch[90](13520/151): Loss 0.3410 LR: 8.980e-02 Score 89.255 Data time: 0.2354, Total iter time: 2.4285 +thomas 04/10 02:28:28 ===> Epoch[90](13560/151): Loss 0.3017 LR: 8.977e-02 Score 90.180 Data time: 0.2166, Total iter time: 2.4784 +thomas 04/10 02:30:10 ===> Epoch[91](13600/151): Loss 0.3267 LR: 8.974e-02 Score 89.318 Data time: 0.2483, Total iter time: 2.4862 +thomas 04/10 02:31:48 ===> Epoch[91](13640/151): Loss 0.3298 LR: 8.971e-02 Score 89.440 Data time: 0.2312, Total iter time: 2.3909 +thomas 04/10 02:33:27 ===> Epoch[91](13680/151): Loss 0.3339 LR: 8.968e-02 Score 89.344 Data time: 0.2243, Total iter time: 2.4167 +thomas 04/10 02:35:04 ===> Epoch[91](13720/151): Loss 0.3304 LR: 8.965e-02 Score 89.517 Data time: 0.2247, Total iter time: 2.3855 +thomas 04/10 02:36:47 ===> Epoch[92](13760/151): Loss 0.3202 LR: 8.962e-02 Score 89.807 Data time: 0.2434, Total iter time: 2.5220 +thomas 04/10 02:38:29 ===> Epoch[92](13800/151): Loss 0.3305 LR: 8.959e-02 Score 89.397 Data time: 0.2168, Total iter time: 2.4920 +thomas 04/10 02:40:13 ===> Epoch[92](13840/151): Loss 0.3137 LR: 8.956e-02 Score 89.973 Data time: 0.2331, Total iter time: 2.5406 +thomas 04/10 02:41:52 ===> Epoch[92](13880/151): Loss 0.3297 LR: 8.953e-02 Score 89.434 Data time: 0.2228, Total iter time: 2.4069 +thomas 04/10 02:43:26 ===> Epoch[93](13920/151): Loss 0.3370 LR: 8.950e-02 Score 89.126 Data time: 0.2215, Total iter time: 2.3173 +thomas 04/10 02:45:14 ===> Epoch[93](13960/151): Loss 0.3223 LR: 8.947e-02 Score 
89.572 Data time: 0.2560, Total iter time: 2.6209 +thomas 04/10 02:47:01 ===> Epoch[93](14000/151): Loss 0.3158 LR: 8.944e-02 Score 89.693 Data time: 0.2252, Total iter time: 2.6415 +thomas 04/10 02:47:03 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 02:47:03 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 02:47:48 101/312: Data time: 0.0027, Iter time: 0.2443 Loss 0.371 (AVG: 0.748) Score 87.321 (AVG: 78.822) mIOU 51.707 mAP 64.726 mAcc 65.508 +IOU: 64.093 97.182 44.529 64.078 88.458 74.595 72.192 22.360 16.362 53.645 4.764 50.142 45.821 50.469 19.228 37.244 84.314 33.538 80.574 30.550 +mAP: 72.857 97.732 59.659 65.930 90.351 82.155 72.462 42.108 43.992 65.898 18.936 46.529 56.583 59.317 62.776 73.083 87.730 71.490 83.201 41.728 +mAcc: 74.898 98.860 82.367 73.299 96.778 95.268 75.952 25.515 16.737 93.741 8.240 61.869 76.885 65.447 63.318 41.594 86.102 34.829 82.762 55.689 + +thomas 04/10 02:48:31 201/312: Data time: 0.0023, Iter time: 0.2658 Loss 0.974 (AVG: 0.805) Score 68.666 (AVG: 77.657) mIOU 49.961 mAP 62.829 mAcc 63.845 +IOU: 65.800 96.462 41.824 61.179 84.710 70.923 63.256 21.727 14.136 53.262 5.078 56.000 47.704 45.319 20.707 40.632 76.808 30.835 71.867 30.989 +mAP: 73.416 97.140 53.016 58.389 87.942 80.233 67.860 45.319 41.346 63.075 16.566 46.747 53.177 58.929 60.240 73.440 85.866 66.777 81.490 45.602 +mAcc: 77.004 98.674 79.075 68.428 94.681 89.208 67.314 24.149 14.344 90.950 9.280 70.224 73.734 62.928 64.669 45.728 80.096 31.695 73.431 61.294 + +thomas 04/10 02:49:13 301/312: Data time: 0.0029, Iter time: 0.2242 Loss 0.284 (AVG: 0.783) Score 94.644 (AVG: 78.025) mIOU 50.064 mAP 62.506 mAcc 63.404 +IOU: 66.903 96.658 41.521 62.994 81.809 67.493 63.535 20.991 12.801 57.702 4.235 52.393 47.726 44.123 23.015 44.844 76.686 35.684 69.072 31.092 +mAP: 72.861 97.230 53.224 60.641 86.121 79.055 68.449 43.936 38.972 61.354 14.344 48.596 53.052 59.188 56.195 76.302 
88.149 67.965 80.647 43.837 +mAcc: 78.070 98.715 77.956 71.588 93.609 83.092 67.689 23.188 13.058 90.712 8.190 65.710 73.485 60.860 62.801 49.847 81.064 36.425 70.912 61.119 + +thomas 04/10 02:49:17 312/312: Data time: 0.0030, Iter time: 0.1454 Loss 0.824 (AVG: 0.784) Score 72.417 (AVG: 78.122) mIOU 49.923 mAP 62.538 mAcc 63.159 +IOU: 66.924 96.677 42.062 62.722 82.416 67.490 64.370 21.024 13.101 57.737 3.956 52.951 48.164 42.682 23.506 40.735 77.014 35.281 68.502 31.150 +mAP: 73.044 97.267 53.825 60.177 86.364 79.306 68.880 44.091 38.525 61.150 14.278 50.223 53.652 57.756 57.678 76.390 88.357 67.880 78.111 43.804 +mAcc: 78.166 98.717 78.592 71.237 93.868 83.497 68.478 23.292 13.446 90.772 7.612 65.872 73.757 58.518 63.676 44.662 81.398 36.006 70.340 61.268 + +thomas 04/10 02:49:17 Finished test. Elapsed time: 134.2140 +thomas 04/10 02:49:17 Current best mIoU: 51.297 at iter 13000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 02:51:01 ===> Epoch[93](14040/151): Loss 0.3145 LR: 8.941e-02 Score 90.090 Data time: 0.2519, Total iter time: 2.5427 +thomas 04/10 02:52:45 ===> Epoch[94](14080/151): Loss 0.3607 LR: 8.938e-02 Score 88.374 Data time: 0.2349, Total iter time: 2.5403 +thomas 04/10 02:54:26 ===> Epoch[94](14120/151): Loss 0.3169 LR: 8.934e-02 Score 89.883 Data time: 0.2294, Total iter time: 2.4671 +thomas 04/10 02:56:13 ===> Epoch[94](14160/151): Loss 0.3220 LR: 8.931e-02 Score 89.502 Data time: 0.2623, Total iter time: 2.6113 +thomas 04/10 02:57:57 ===> Epoch[95](14200/151): Loss 0.3444 LR: 8.928e-02 Score 88.985 Data time: 0.2142, Total iter time: 2.5530 +thomas 04/10 02:59:45 ===> Epoch[95](14240/151): Loss 0.3202 LR: 8.925e-02 Score 89.660 Data time: 0.2305, Total iter time: 2.6492 +thomas 04/10 03:01:25 ===> Epoch[95](14280/151): Loss 0.3161 LR: 8.922e-02 Score 89.780 Data time: 0.2356, Total iter time: 2.4330 +thomas 04/10 03:03:06 ===> Epoch[95](14320/151): Loss 0.3077 LR: 8.919e-02 Score 89.940 Data time: 0.2382, Total iter time: 2.4884 +thomas 04/10 03:04:53 ===> Epoch[96](14360/151): Loss 0.3387 LR: 8.916e-02 Score 89.358 Data time: 0.2419, Total iter time: 2.6096 +thomas 04/10 03:06:42 ===> Epoch[96](14400/151): Loss 0.3348 LR: 8.913e-02 Score 89.261 Data time: 0.2368, Total iter time: 2.6754 +thomas 04/10 03:08:27 ===> Epoch[96](14440/151): Loss 0.3317 LR: 8.910e-02 Score 89.656 Data time: 0.2481, Total iter time: 2.5635 +thomas 04/10 03:10:16 ===> Epoch[96](14480/151): Loss 0.3223 LR: 8.907e-02 Score 89.671 Data time: 0.2480, Total iter time: 2.6670 +thomas 04/10 03:12:02 ===> Epoch[97](14520/151): Loss 0.3305 LR: 8.904e-02 Score 89.393 Data time: 0.2372, Total iter time: 2.5799 +thomas 04/10 03:13:46 ===> Epoch[97](14560/151): Loss 0.3376 LR: 8.901e-02 Score 89.182 Data time: 0.2440, Total iter time: 2.5396 +thomas 04/10 03:15:29 ===> Epoch[97](14600/151): Loss 0.3130 LR: 8.898e-02 
Score 89.873 Data time: 0.2366, Total iter time: 2.5388 +thomas 04/10 03:17:14 ===> Epoch[97](14640/151): Loss 0.3146 LR: 8.895e-02 Score 89.776 Data time: 0.2395, Total iter time: 2.5687 +thomas 04/10 03:19:01 ===> Epoch[98](14680/151): Loss 0.3059 LR: 8.892e-02 Score 90.095 Data time: 0.2225, Total iter time: 2.6043 +thomas 04/10 03:20:45 ===> Epoch[98](14720/151): Loss 0.3395 LR: 8.889e-02 Score 89.246 Data time: 0.2520, Total iter time: 2.5498 +thomas 04/10 03:22:29 ===> Epoch[98](14760/151): Loss 0.3130 LR: 8.886e-02 Score 89.921 Data time: 0.2375, Total iter time: 2.5411 +thomas 04/10 03:24:15 ===> Epoch[99](14800/151): Loss 0.3240 LR: 8.883e-02 Score 89.780 Data time: 0.2189, Total iter time: 2.6101 +thomas 04/10 03:26:01 ===> Epoch[99](14840/151): Loss 0.2995 LR: 8.880e-02 Score 90.367 Data time: 0.2477, Total iter time: 2.5888 +thomas 04/10 03:27:44 ===> Epoch[99](14880/151): Loss 0.3151 LR: 8.877e-02 Score 89.820 Data time: 0.2512, Total iter time: 2.5222 +thomas 04/10 03:29:32 ===> Epoch[99](14920/151): Loss 0.2910 LR: 8.874e-02 Score 90.648 Data time: 0.2208, Total iter time: 2.6353 +thomas 04/10 03:31:16 ===> Epoch[100](14960/151): Loss 0.3194 LR: 8.871e-02 Score 89.771 Data time: 0.2216, Total iter time: 2.5578 +thomas 04/10 03:33:00 ===> Epoch[100](15000/151): Loss 0.3208 LR: 8.868e-02 Score 89.870 Data time: 0.2555, Total iter time: 2.5276 +thomas 04/10 03:33:01 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 03:33:01 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 03:33:47 101/312: Data time: 0.0023, Iter time: 0.3882 Loss 0.411 (AVG: 0.637) Score 87.320 (AVG: 81.603) mIOU 55.732 mAP 66.049 mAcc 69.681 +IOU: 72.121 96.267 51.824 68.681 83.530 72.882 70.783 29.797 35.476 57.157 0.744 52.073 56.081 28.306 38.467 51.159 83.503 36.083 89.346 40.367 +mAP: 74.061 96.044 53.851 66.459 87.127 84.952 66.483 47.699 44.501 61.411 19.446 50.116 59.450 55.209 70.631 
82.895 90.613 78.715 84.785 46.520 +mAcc: 83.636 99.169 76.268 76.705 87.467 96.206 79.023 45.740 42.520 67.261 0.789 85.272 73.051 52.781 74.526 82.933 85.905 36.336 91.432 56.598 + +thomas 04/10 03:34:30 201/312: Data time: 0.0024, Iter time: 0.3257 Loss 0.802 (AVG: 0.628) Score 78.894 (AVG: 81.639) mIOU 54.901 mAP 65.160 mAcc 67.982 +IOU: 73.264 96.339 51.068 69.155 82.674 68.214 66.147 31.180 34.863 63.619 0.517 52.681 50.062 33.971 38.108 53.782 81.338 33.053 79.329 38.645 +mAP: 73.025 96.337 57.047 67.589 86.280 83.129 64.995 48.698 44.585 59.980 16.757 50.164 60.710 56.273 63.823 78.539 88.306 76.967 83.692 46.311 +mAcc: 84.979 99.059 77.732 77.211 86.672 96.724 75.448 47.590 40.311 78.161 0.544 79.563 71.670 60.862 63.466 72.463 83.499 33.245 81.778 48.664 + +thomas 04/10 03:35:11 301/312: Data time: 0.0046, Iter time: 0.2452 Loss 0.504 (AVG: 0.624) Score 85.073 (AVG: 81.786) mIOU 54.279 mAP 65.526 mAcc 67.215 +IOU: 73.162 96.753 48.995 66.375 83.189 67.683 67.919 32.649 34.514 62.132 0.383 51.702 48.483 35.222 38.003 50.002 79.672 32.700 78.939 37.103 +mAP: 74.055 96.008 57.778 68.432 87.030 81.116 69.296 49.015 45.818 61.416 17.149 50.809 57.118 59.770 63.427 79.240 88.357 76.947 81.122 46.617 +mAcc: 85.735 99.141 76.728 75.963 87.100 97.128 77.879 48.894 39.700 73.844 0.400 78.194 70.407 62.621 63.978 63.711 82.247 32.929 82.060 45.642 + +thomas 04/10 03:35:16 312/312: Data time: 0.0025, Iter time: 0.2103 Loss 0.566 (AVG: 0.626) Score 78.125 (AVG: 81.680) mIOU 54.122 mAP 65.357 mAcc 67.129 +IOU: 73.274 96.719 49.010 66.402 83.545 67.227 68.076 32.547 34.376 61.227 0.378 51.669 48.510 34.544 37.492 49.962 79.672 32.260 78.939 36.616 +mAP: 74.392 96.018 57.852 67.354 86.889 81.449 68.669 48.925 45.904 61.416 17.124 51.066 56.390 60.460 62.123 79.240 88.357 76.336 81.122 46.061 +mAcc: 85.823 99.129 76.744 76.046 87.444 97.166 77.876 48.133 39.240 73.844 0.395 78.445 70.740 63.107 62.757 63.711 82.247 32.480 82.060 45.188 + +thomas 04/10 03:35:16 Finished 
test. Elapsed time: 134.3144 +thomas 04/10 03:35:17 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/10 03:35:17 Current best mIoU: 54.122 at iter 15000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 03:36:59 ===> Epoch[100](15040/151): Loss 0.3070 LR: 8.865e-02 Score 90.004 Data time: 0.2329, Total iter time: 2.4950 +thomas 04/10 03:38:45 ===> Epoch[100](15080/151): Loss 0.3043 LR: 8.862e-02 Score 90.255 Data time: 0.2397, Total iter time: 2.5955 +thomas 04/10 03:40:30 ===> Epoch[101](15120/151): Loss 0.3201 LR: 8.859e-02 Score 89.663 Data time: 0.2452, Total iter time: 2.5577 +thomas 04/10 03:42:15 ===> Epoch[101](15160/151): Loss 0.3311 LR: 8.855e-02 Score 89.648 Data time: 0.2416, Total iter time: 2.5758 +thomas 04/10 03:44:04 ===> Epoch[101](15200/151): Loss 0.3240 LR: 8.852e-02 Score 89.757 Data time: 0.2284, Total iter time: 2.6650 +thomas 04/10 03:45:45 ===> Epoch[101](15240/151): Loss 0.3100 LR: 8.849e-02 Score 89.865 Data time: 0.2367, Total iter time: 2.4829 +thomas 04/10 03:47:29 ===> Epoch[102](15280/151): Loss 0.3031 LR: 8.846e-02 Score 90.427 Data time: 0.2254, Total iter time: 2.5289 +thomas 04/10 03:49:05 ===> Epoch[102](15320/151): Loss 0.3362 LR: 8.843e-02 Score 89.522 Data time: 0.2271, Total iter time: 2.3660 +thomas 04/10 03:50:47 ===> Epoch[102](15360/151): Loss 0.3202 LR: 8.840e-02 Score 89.659 Data time: 0.2251, Total iter time: 2.4773 +thomas 04/10 03:52:28 ===> Epoch[102](15400/151): Loss 0.3046 LR: 8.837e-02 Score 90.047 Data time: 0.2346, Total iter time: 2.4702 +thomas 04/10 03:54:06 ===> Epoch[103](15440/151): Loss 0.3202 LR: 8.834e-02 Score 89.850 Data time: 0.2122, Total iter time: 2.4031 
+thomas 04/10 03:55:53 ===> Epoch[103](15480/151): Loss 0.3299 LR: 8.831e-02 Score 89.349 Data time: 0.2325, Total iter time: 2.6161 +thomas 04/10 03:57:34 ===> Epoch[103](15520/151): Loss 0.3345 LR: 8.828e-02 Score 89.045 Data time: 0.2304, Total iter time: 2.4847 +thomas 04/10 03:59:17 ===> Epoch[104](15560/151): Loss 0.3272 LR: 8.825e-02 Score 89.784 Data time: 0.2377, Total iter time: 2.5066 +thomas 04/10 04:00:59 ===> Epoch[104](15600/151): Loss 0.3038 LR: 8.822e-02 Score 90.217 Data time: 0.2468, Total iter time: 2.5034 +thomas 04/10 04:02:48 ===> Epoch[104](15640/151): Loss 0.3120 LR: 8.819e-02 Score 90.133 Data time: 0.2415, Total iter time: 2.6655 +thomas 04/10 04:04:31 ===> Epoch[104](15680/151): Loss 0.2978 LR: 8.816e-02 Score 90.295 Data time: 0.2353, Total iter time: 2.5235 +thomas 04/10 04:06:17 ===> Epoch[105](15720/151): Loss 0.3175 LR: 8.813e-02 Score 89.838 Data time: 0.2377, Total iter time: 2.5947 +thomas 04/10 04:07:59 ===> Epoch[105](15760/151): Loss 0.3304 LR: 8.810e-02 Score 89.360 Data time: 0.2427, Total iter time: 2.4950 +thomas 04/10 04:09:42 ===> Epoch[105](15800/151): Loss 0.3061 LR: 8.807e-02 Score 90.382 Data time: 0.2369, Total iter time: 2.5253 +thomas 04/10 04:11:29 ===> Epoch[105](15840/151): Loss 0.3124 LR: 8.804e-02 Score 89.768 Data time: 0.2332, Total iter time: 2.6215 +thomas 04/10 04:13:10 ===> Epoch[106](15880/151): Loss 0.3137 LR: 8.801e-02 Score 89.755 Data time: 0.2225, Total iter time: 2.4712 +thomas 04/10 04:14:53 ===> Epoch[106](15920/151): Loss 0.3466 LR: 8.798e-02 Score 89.221 Data time: 0.2316, Total iter time: 2.5207 +thomas 04/10 04:16:40 ===> Epoch[106](15960/151): Loss 0.3071 LR: 8.795e-02 Score 90.337 Data time: 0.2305, Total iter time: 2.6027 +thomas 04/10 04:18:24 ===> Epoch[106](16000/151): Loss 0.3097 LR: 8.792e-02 Score 90.343 Data time: 0.2205, Total iter time: 2.5514 +thomas 04/10 04:18:26 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 04:18:26 
===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 04:19:12 101/312: Data time: 0.0026, Iter time: 0.2455 Loss 0.258 (AVG: 0.746) Score 90.605 (AVG: 78.508) mIOU 52.560 mAP 64.078 mAcc 65.917 +IOU: 67.227 96.600 45.001 52.274 78.078 49.738 63.133 32.703 38.589 73.778 0.000 56.658 51.529 38.081 44.207 56.401 77.268 33.911 62.639 33.384 +mAP: 70.921 96.947 53.053 66.104 84.114 72.583 63.325 49.749 44.154 63.994 11.675 51.468 61.370 68.016 61.181 75.353 82.765 79.069 75.987 49.737 +mAcc: 76.861 99.065 70.877 82.939 79.745 98.562 72.501 57.958 45.531 85.819 0.000 69.094 64.062 68.439 47.801 74.807 79.430 34.219 63.028 47.600 + +thomas 04/10 04:19:55 201/312: Data time: 0.0025, Iter time: 0.2719 Loss 0.762 (AVG: 0.719) Score 75.231 (AVG: 79.470) mIOU 51.885 mAP 64.127 mAcc 65.463 +IOU: 68.288 96.637 46.983 57.318 74.773 45.840 68.143 36.947 32.224 67.915 0.000 53.645 53.224 45.198 36.548 52.721 70.751 36.156 61.114 33.271 +mAP: 70.254 97.243 54.430 67.477 81.819 75.374 65.083 52.733 42.658 59.457 10.219 52.440 63.454 71.519 59.887 74.785 85.122 76.855 75.357 46.371 +mAcc: 76.761 99.119 72.288 85.256 76.684 98.790 76.844 62.823 36.496 85.652 0.000 64.177 65.743 78.392 40.361 66.546 72.078 36.485 61.595 53.169 + +thomas 04/10 04:20:36 301/312: Data time: 0.0024, Iter time: 0.2611 Loss 0.528 (AVG: 0.764) Score 84.296 (AVG: 78.541) mIOU 50.381 mAP 63.408 mAcc 64.099 +IOU: 67.848 96.502 46.117 53.824 75.126 44.628 67.389 35.690 29.246 65.069 0.000 53.516 51.455 38.256 35.801 49.816 72.808 36.193 56.259 32.083 +mAP: 71.080 96.933 53.908 65.368 81.729 75.732 65.611 50.867 42.177 58.382 11.325 53.354 61.183 68.609 52.484 74.908 87.291 76.660 74.018 46.544 +mAcc: 76.606 99.100 70.507 82.763 76.894 98.238 76.050 60.975 32.427 83.952 0.000 66.496 64.312 73.459 39.438 61.699 74.378 36.526 56.897 51.259 + +thomas 04/10 04:20:41 312/312: Data time: 0.0023, Iter time: 0.1840 Loss 1.156 (AVG: 0.765) Score 75.968 (AVG: 78.584) mIOU 50.292 mAP 63.470 
mAcc 64.001 +IOU: 67.939 96.434 45.533 54.109 74.525 44.775 67.542 35.935 29.049 64.743 0.000 53.323 50.952 38.188 35.452 49.816 73.239 37.311 54.925 32.056 +mAP: 70.965 96.820 53.509 65.818 81.828 76.090 65.870 50.838 42.206 58.534 11.648 53.354 60.517 69.142 52.484 74.908 87.620 77.458 73.438 46.357 +mAcc: 76.744 99.122 70.471 82.609 76.258 98.238 76.227 61.098 32.304 82.008 0.000 66.496 64.124 73.584 39.438 61.699 75.008 37.669 55.542 51.370 + +thomas 04/10 04:20:41 Finished test. Elapsed time: 134.7875 +thomas 04/10 04:20:41 Current best mIoU: 54.122 at iter 15000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 04:22:25 ===> Epoch[107](16040/151): Loss 0.3332 LR: 8.789e-02 Score 89.245 Data time: 0.2321, Total iter time: 2.5545 +thomas 04/10 04:24:02 ===> Epoch[107](16080/151): Loss 0.2984 LR: 8.786e-02 Score 90.081 Data time: 0.2314, Total iter time: 2.3734 +thomas 04/10 04:25:40 ===> Epoch[107](16120/151): Loss 0.3061 LR: 8.782e-02 Score 90.202 Data time: 0.2109, Total iter time: 2.4046 +thomas 04/10 04:27:20 ===> Epoch[108](16160/151): Loss 0.3240 LR: 8.779e-02 Score 89.574 Data time: 0.2428, Total iter time: 2.4420 +thomas 04/10 04:29:07 ===> Epoch[108](16200/151): Loss 0.3367 LR: 8.776e-02 Score 89.261 Data time: 0.2338, Total iter time: 2.6206 +thomas 04/10 04:30:50 ===> Epoch[108](16240/151): Loss 0.3143 LR: 8.773e-02 Score 89.965 Data time: 0.2207, Total iter time: 2.5206 +thomas 04/10 04:32:34 ===> Epoch[108](16280/151): Loss 0.3008 LR: 8.770e-02 Score 90.424 Data time: 0.2281, Total iter time: 2.5287 +thomas 04/10 04:34:19 ===> Epoch[109](16320/151): Loss 0.2846 LR: 8.767e-02 Score 90.878 Data time: 0.2596, Total iter time: 2.5793 +thomas 04/10 04:36:08 ===> 
Epoch[109](16360/151): Loss 0.3280 LR: 8.764e-02 Score 89.534 Data time: 0.2667, Total iter time: 2.6730 +thomas 04/10 04:37:53 ===> Epoch[109](16400/151): Loss 0.2979 LR: 8.761e-02 Score 90.467 Data time: 0.2321, Total iter time: 2.5655 +thomas 04/10 04:39:41 ===> Epoch[109](16440/151): Loss 0.3125 LR: 8.758e-02 Score 89.872 Data time: 0.2399, Total iter time: 2.6300 +thomas 04/10 04:41:28 ===> Epoch[110](16480/151): Loss 0.3122 LR: 8.755e-02 Score 90.143 Data time: 0.2353, Total iter time: 2.6306 +thomas 04/10 04:43:12 ===> Epoch[110](16520/151): Loss 0.2930 LR: 8.752e-02 Score 90.675 Data time: 0.2379, Total iter time: 2.5338 +thomas 04/10 04:44:49 ===> Epoch[110](16560/151): Loss 0.3432 LR: 8.749e-02 Score 88.702 Data time: 0.2291, Total iter time: 2.3887 +thomas 04/10 04:46:37 ===> Epoch[110](16600/151): Loss 0.3057 LR: 8.746e-02 Score 89.921 Data time: 0.2471, Total iter time: 2.6337 +thomas 04/10 04:48:19 ===> Epoch[111](16640/151): Loss 0.2986 LR: 8.743e-02 Score 90.476 Data time: 0.2310, Total iter time: 2.4781 +thomas 04/10 04:50:07 ===> Epoch[111](16680/151): Loss 0.3005 LR: 8.740e-02 Score 90.321 Data time: 0.2503, Total iter time: 2.6473 +thomas 04/10 04:51:51 ===> Epoch[111](16720/151): Loss 0.3189 LR: 8.737e-02 Score 89.846 Data time: 0.2359, Total iter time: 2.5459 +thomas 04/10 04:53:33 ===> Epoch[111](16760/151): Loss 0.3231 LR: 8.734e-02 Score 89.620 Data time: 0.2374, Total iter time: 2.4932 +thomas 04/10 04:55:17 ===> Epoch[112](16800/151): Loss 0.3051 LR: 8.731e-02 Score 90.261 Data time: 0.2151, Total iter time: 2.5529 +thomas 04/10 04:56:57 ===> Epoch[112](16840/151): Loss 0.3130 LR: 8.728e-02 Score 89.900 Data time: 0.2389, Total iter time: 2.4465 +thomas 04/10 04:58:38 ===> Epoch[112](16880/151): Loss 0.2985 LR: 8.725e-02 Score 90.323 Data time: 0.2400, Total iter time: 2.4702 +thomas 04/10 05:00:20 ===> Epoch[113](16920/151): Loss 0.2919 LR: 8.722e-02 Score 90.699 Data time: 0.2147, Total iter time: 2.4819 +thomas 04/10 05:02:02 ===> 
Epoch[113](16960/151): Loss 0.2959 LR: 8.719e-02 Score 90.474 Data time: 0.2197, Total iter time: 2.4914 +thomas 04/10 05:03:39 ===> Epoch[113](17000/151): Loss 0.2936 LR: 8.715e-02 Score 90.790 Data time: 0.2245, Total iter time: 2.3845 +thomas 04/10 05:03:41 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 05:03:41 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 05:04:25 101/312: Data time: 0.0034, Iter time: 0.2515 Loss 0.419 (AVG: 0.655) Score 84.648 (AVG: 81.626) mIOU 47.978 mAP 61.063 mAcc 58.009 +IOU: 74.050 96.450 40.262 64.923 87.754 71.138 69.240 29.136 18.394 43.662 0.000 31.904 55.204 47.151 38.502 27.905 65.102 8.218 60.355 30.219 +mAP: 72.921 96.906 54.036 68.020 87.651 73.638 69.192 48.482 31.052 38.884 13.622 54.367 61.315 70.795 58.178 71.676 81.433 57.824 73.037 38.236 +mAcc: 89.175 98.787 83.354 74.992 92.479 93.717 79.830 35.418 18.984 50.247 0.000 33.554 79.267 79.307 49.371 28.883 65.606 8.275 60.798 38.130 + +thomas 04/10 05:05:05 201/312: Data time: 0.0023, Iter time: 0.2890 Loss 0.569 (AVG: 0.643) Score 84.290 (AVG: 81.688) mIOU 48.814 mAP 62.892 mAcc 59.019 +IOU: 75.068 96.895 41.424 61.892 85.198 73.081 68.338 29.655 15.881 49.803 0.000 27.566 52.669 44.098 36.753 35.546 68.697 14.940 67.404 31.371 +mAP: 74.175 97.128 55.103 69.823 87.315 77.345 70.484 45.721 33.479 50.512 14.362 49.386 58.154 70.801 57.129 76.859 84.380 65.692 78.010 41.990 +mAcc: 89.894 98.833 83.664 78.517 90.620 92.930 80.362 36.698 16.481 56.583 0.000 29.121 77.059 73.884 47.579 38.275 69.466 15.008 67.846 37.556 + +thomas 04/10 05:05:46 301/312: Data time: 0.0026, Iter time: 0.3915 Loss 0.732 (AVG: 0.673) Score 81.238 (AVG: 81.099) mIOU 48.647 mAP 62.663 mAcc 58.846 +IOU: 74.154 96.631 41.185 63.609 85.200 72.847 68.931 27.841 14.736 52.766 0.000 28.637 54.045 40.851 37.674 31.873 68.134 16.140 67.132 30.559 +mAP: 73.932 96.845 53.377 70.163 87.076 78.798 70.006 42.989 
36.169 55.559 14.442 45.950 60.758 67.396 55.846 72.497 86.788 66.045 77.383 41.237 +mAcc: 89.579 98.831 83.936 76.818 90.477 92.180 80.885 34.701 15.243 60.968 0.000 30.715 78.412 70.968 49.464 33.978 68.967 16.209 67.619 36.963 + +thomas 04/10 05:05:51 312/312: Data time: 0.0022, Iter time: 0.2489 Loss 1.661 (AVG: 0.685) Score 60.736 (AVG: 80.790) mIOU 48.321 mAP 62.709 mAcc 58.612 +IOU: 73.499 96.658 41.817 62.981 85.154 72.585 68.983 27.169 14.852 53.129 0.000 27.756 53.847 37.755 37.659 31.178 68.375 16.093 66.084 30.845 +mAP: 73.660 96.883 53.967 68.872 87.089 79.073 70.078 42.726 36.342 55.373 14.593 46.372 60.696 67.242 55.846 73.153 87.041 66.399 77.529 41.237 +mAcc: 88.959 98.822 84.513 75.910 90.334 92.443 80.677 33.876 15.437 61.489 0.000 29.666 78.700 69.541 49.464 33.149 69.197 16.162 66.563 37.329 + +thomas 04/10 05:05:51 Finished test. Elapsed time: 130.2952 +thomas 04/10 05:05:51 Current best mIoU: 54.122 at iter 15000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 05:07:35 ===> Epoch[113](17040/151): Loss 0.3062 LR: 8.712e-02 Score 90.174 Data time: 0.2477, Total iter time: 2.5229 +thomas 04/10 05:09:14 ===> Epoch[114](17080/151): Loss 0.2996 LR: 8.709e-02 Score 90.316 Data time: 0.2285, Total iter time: 2.4328 +thomas 04/10 05:10:58 ===> Epoch[114](17120/151): Loss 0.3007 LR: 8.706e-02 Score 90.399 Data time: 0.2282, Total iter time: 2.5357 +thomas 04/10 05:12:37 ===> Epoch[114](17160/151): Loss 0.2971 LR: 8.703e-02 Score 90.418 Data time: 0.2345, Total iter time: 2.4197 +thomas 04/10 05:14:14 ===> Epoch[114](17200/151): Loss 0.3077 LR: 8.700e-02 Score 90.174 Data time: 0.2338, Total iter time: 2.3808 +thomas 04/10 05:15:59 ===> Epoch[115](17240/151): Loss 0.3147 LR: 8.697e-02 Score 89.839 Data time: 0.2239, Total iter time: 2.5686 +thomas 04/10 05:17:43 ===> Epoch[115](17280/151): Loss 0.3135 LR: 8.694e-02 Score 89.897 Data time: 0.2458, Total iter time: 2.5425 +thomas 04/10 05:19:21 ===> Epoch[115](17320/151): Loss 0.3057 LR: 8.691e-02 Score 90.350 Data time: 0.2241, Total iter time: 2.3919 +thomas 04/10 05:21:00 ===> Epoch[115](17360/151): Loss 0.3008 LR: 8.688e-02 Score 90.397 Data time: 0.2281, Total iter time: 2.4216 +thomas 04/10 05:22:40 ===> Epoch[116](17400/151): Loss 0.2846 LR: 8.685e-02 Score 90.872 Data time: 0.2278, Total iter time: 2.4455 +thomas 04/10 05:24:18 ===> Epoch[116](17440/151): Loss 0.2908 LR: 8.682e-02 Score 90.728 Data time: 0.2269, Total iter time: 2.3918 +thomas 04/10 05:26:00 ===> Epoch[116](17480/151): Loss 0.2880 LR: 8.679e-02 Score 90.768 Data time: 0.2242, Total iter time: 2.4977 +thomas 04/10 05:27:39 ===> Epoch[117](17520/151): Loss 0.3096 LR: 8.676e-02 Score 90.120 Data time: 0.2327, Total iter time: 2.4262 +thomas 04/10 05:29:19 ===> Epoch[117](17560/151): Loss 0.3230 LR: 8.673e-02 Score 89.931 Data time: 0.2247, Total iter time: 2.4280 +thomas 04/10 05:30:57 ===> Epoch[117](17600/151): Loss 
0.3027 LR: 8.670e-02 Score 90.130 Data time: 0.2401, Total iter time: 2.3969 +thomas 04/10 05:32:39 ===> Epoch[117](17640/151): Loss 0.2940 LR: 8.667e-02 Score 90.551 Data time: 0.2295, Total iter time: 2.4905 +thomas 04/10 05:34:21 ===> Epoch[118](17680/151): Loss 0.2980 LR: 8.664e-02 Score 90.676 Data time: 0.2472, Total iter time: 2.4936 +thomas 04/10 05:36:01 ===> Epoch[118](17720/151): Loss 0.2952 LR: 8.661e-02 Score 90.657 Data time: 0.2148, Total iter time: 2.4580 +thomas 04/10 05:37:41 ===> Epoch[118](17760/151): Loss 0.2858 LR: 8.658e-02 Score 90.735 Data time: 0.2114, Total iter time: 2.4480 +thomas 04/10 05:39:24 ===> Epoch[118](17800/151): Loss 0.3014 LR: 8.655e-02 Score 90.344 Data time: 0.2331, Total iter time: 2.5131 +thomas 04/10 05:41:09 ===> Epoch[119](17840/151): Loss 0.2938 LR: 8.651e-02 Score 90.557 Data time: 0.2280, Total iter time: 2.5722 +thomas 04/10 05:42:44 ===> Epoch[119](17880/151): Loss 0.3170 LR: 8.648e-02 Score 89.876 Data time: 0.2382, Total iter time: 2.3000 +thomas 04/10 05:44:18 ===> Epoch[119](17920/151): Loss 0.3256 LR: 8.645e-02 Score 89.512 Data time: 0.2253, Total iter time: 2.3117 +thomas 04/10 05:46:03 ===> Epoch[119](17960/151): Loss 0.3148 LR: 8.642e-02 Score 89.956 Data time: 0.2305, Total iter time: 2.5663 +thomas 04/10 05:47:44 ===> Epoch[120](18000/151): Loss 0.3031 LR: 8.639e-02 Score 90.298 Data time: 0.2287, Total iter time: 2.4557 +thomas 04/10 05:47:45 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 05:47:45 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 05:48:29 101/312: Data time: 0.0025, Iter time: 0.2281 Loss 0.549 (AVG: 0.605) Score 81.226 (AVG: 84.045) mIOU 55.149 mAP 66.702 mAcc 64.247 +IOU: 75.138 96.692 56.708 72.838 85.695 75.300 73.842 23.700 22.238 62.447 0.000 45.520 55.264 42.482 42.483 52.683 81.402 25.235 74.278 39.037 +mAP: 74.822 97.018 66.929 73.822 86.079 83.219 70.960 43.018 49.023 62.966 6.098 
45.481 54.735 72.128 58.371 83.451 96.853 75.935 85.770 47.358 +mAcc: 91.960 98.760 75.581 83.307 89.665 90.589 84.028 31.889 22.973 90.958 0.000 60.583 64.577 59.160 48.511 56.933 82.782 25.462 74.503 52.715 + +thomas 04/10 05:49:11 201/312: Data time: 0.0023, Iter time: 0.1928 Loss 0.013 (AVG: 0.636) Score 99.841 (AVG: 83.112) mIOU 55.421 mAP 65.474 mAcc 64.035 +IOU: 73.317 96.918 54.557 68.574 85.219 79.664 72.220 28.057 27.296 69.772 0.000 45.060 56.073 36.607 42.337 50.414 77.830 30.790 76.046 37.668 +mAP: 72.882 97.053 60.480 71.084 86.757 79.889 70.038 47.437 50.393 63.089 6.923 46.760 59.697 60.098 52.561 82.203 91.520 74.481 88.850 47.284 +mAcc: 91.916 98.646 73.308 83.020 91.347 91.761 83.169 36.152 28.279 88.696 0.000 55.656 66.088 51.724 47.338 57.984 78.904 31.028 76.296 49.391 + +thomas 04/10 05:49:51 301/312: Data time: 0.0022, Iter time: 0.2516 Loss 0.491 (AVG: 0.648) Score 79.433 (AVG: 82.844) mIOU 54.684 mAP 64.587 mAcc 63.467 +IOU: 74.047 96.831 54.325 70.879 85.014 75.915 70.403 27.927 24.406 67.969 0.000 49.349 55.799 38.728 40.319 51.023 76.581 31.122 68.386 34.651 +mAP: 73.303 97.123 59.576 70.786 86.592 79.133 68.836 47.854 45.533 59.610 6.938 49.412 59.950 59.760 54.498 79.652 90.872 73.901 83.333 45.083 +mAcc: 92.237 98.641 74.348 83.637 90.616 89.756 82.881 35.903 25.328 88.305 0.000 59.934 65.439 51.329 47.397 58.320 78.161 31.430 68.619 47.055 + +thomas 04/10 05:49:55 312/312: Data time: 0.0043, Iter time: 0.1921 Loss 0.839 (AVG: 0.646) Score 79.381 (AVG: 82.847) mIOU 54.497 mAP 64.362 mAcc 63.259 +IOU: 74.066 96.859 54.199 71.181 85.213 75.989 70.243 27.849 24.082 67.658 0.000 48.762 54.254 38.140 40.280 51.023 77.244 32.205 66.156 34.531 +mAP: 73.259 97.120 59.393 71.516 86.808 79.377 68.368 47.505 44.884 59.222 6.845 49.412 58.543 60.099 54.498 79.652 91.145 73.627 81.717 44.244 +mAcc: 92.283 98.652 74.049 83.644 90.763 89.752 82.952 35.880 25.021 87.562 0.000 59.934 64.561 49.824 47.397 58.320 78.746 32.516 66.374 46.943 + +thomas 
04/10 05:49:55 Finished test. Elapsed time: 129.4030 +thomas 04/10 05:49:56 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34Cbest_val.pth +thomas 04/10 05:49:56 Current best mIoU: 54.497 at iter 18000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 05:51:39 ===> Epoch[120](18040/151): Loss 0.2939 LR: 8.636e-02 Score 90.772 Data time: 0.2169, Total iter time: 2.5010 +thomas 04/10 05:53:21 ===> Epoch[120](18080/151): Loss 0.2796 LR: 8.633e-02 Score 90.946 Data time: 0.2151, Total iter time: 2.4909 +thomas 04/10 05:54:57 ===> Epoch[120](18120/151): Loss 0.3143 LR: 8.630e-02 Score 90.198 Data time: 0.2182, Total iter time: 2.3503 +thomas 04/10 05:56:40 ===> Epoch[121](18160/151): Loss 0.2887 LR: 8.627e-02 Score 90.696 Data time: 0.2120, Total iter time: 2.5308 +thomas 04/10 05:58:21 ===> Epoch[121](18200/151): Loss 0.2707 LR: 8.624e-02 Score 91.262 Data time: 0.2272, Total iter time: 2.4610 +thomas 04/10 06:00:00 ===> Epoch[121](18240/151): Loss 0.3257 LR: 8.621e-02 Score 89.654 Data time: 0.2185, Total iter time: 2.4332 +thomas 04/10 06:01:43 ===> Epoch[122](18280/151): Loss 0.3316 LR: 8.618e-02 Score 89.357 Data time: 0.2253, Total iter time: 2.5113 +thomas 04/10 06:03:23 ===> Epoch[122](18320/151): Loss 0.3163 LR: 8.615e-02 Score 90.067 Data time: 0.2299, Total iter time: 2.4266 +thomas 04/10 06:05:05 ===> Epoch[122](18360/151): Loss 0.2927 LR: 8.612e-02 Score 90.653 Data time: 0.2400, Total iter time: 2.5024 +thomas 04/10 06:06:46 ===> Epoch[122](18400/151): Loss 0.2794 LR: 8.609e-02 Score 91.012 Data time: 0.2169, Total iter time: 2.4789 +thomas 04/10 06:08:24 ===> Epoch[123](18440/151): Loss 0.3440 LR: 8.606e-02 Score 89.046 Data time: 0.2297, Total 
iter time: 2.3872 +thomas 04/10 06:10:08 ===> Epoch[123](18480/151): Loss 0.3050 LR: 8.603e-02 Score 90.046 Data time: 0.2477, Total iter time: 2.5345 +thomas 04/10 06:11:53 ===> Epoch[123](18520/151): Loss 0.3045 LR: 8.600e-02 Score 90.375 Data time: 0.2314, Total iter time: 2.5819 +thomas 04/10 06:13:31 ===> Epoch[123](18560/151): Loss 0.2931 LR: 8.597e-02 Score 90.857 Data time: 0.2329, Total iter time: 2.4051 +thomas 04/10 06:15:12 ===> Epoch[124](18600/151): Loss 0.2983 LR: 8.594e-02 Score 90.381 Data time: 0.2083, Total iter time: 2.4492 +thomas 04/10 06:16:53 ===> Epoch[124](18640/151): Loss 0.2844 LR: 8.590e-02 Score 90.806 Data time: 0.2216, Total iter time: 2.4816 +thomas 04/10 06:18:37 ===> Epoch[124](18680/151): Loss 0.2947 LR: 8.587e-02 Score 90.456 Data time: 0.2301, Total iter time: 2.5436 +thomas 04/10 06:20:18 ===> Epoch[124](18720/151): Loss 0.2749 LR: 8.584e-02 Score 91.150 Data time: 0.2439, Total iter time: 2.4582 +thomas 04/10 06:21:56 ===> Epoch[125](18760/151): Loss 0.3174 LR: 8.581e-02 Score 90.210 Data time: 0.2239, Total iter time: 2.4205 +thomas 04/10 06:23:44 ===> Epoch[125](18800/151): Loss 0.3201 LR: 8.578e-02 Score 89.628 Data time: 0.2284, Total iter time: 2.6251 +thomas 04/10 06:25:25 ===> Epoch[125](18840/151): Loss 0.2775 LR: 8.575e-02 Score 91.059 Data time: 0.2309, Total iter time: 2.4744 +thomas 04/10 06:27:04 ===> Epoch[126](18880/151): Loss 0.2795 LR: 8.572e-02 Score 90.911 Data time: 0.2181, Total iter time: 2.4125 +thomas 04/10 06:28:47 ===> Epoch[126](18920/151): Loss 0.2850 LR: 8.569e-02 Score 90.831 Data time: 0.2442, Total iter time: 2.5249 +thomas 04/10 06:30:24 ===> Epoch[126](18960/151): Loss 0.2710 LR: 8.566e-02 Score 91.228 Data time: 0.2092, Total iter time: 2.3801 +thomas 04/10 06:32:05 ===> Epoch[126](19000/151): Loss 0.3009 LR: 8.563e-02 Score 90.305 Data time: 0.2256, Total iter time: 2.4612 +thomas 04/10 06:32:06 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth 
+thomas 04/10 06:32:06 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 06:32:50 101/312: Data time: 0.0025, Iter time: 0.2842 Loss 0.330 (AVG: 0.879) Score 91.208 (AVG: 75.488) mIOU 46.353 mAP 63.937 mAcc 58.270 +IOU: 64.466 95.456 38.712 70.490 83.604 67.708 63.267 23.149 31.353 62.671 0.000 41.487 52.015 11.934 32.089 38.454 52.164 31.167 36.450 30.427 +mAP: 71.284 96.342 43.969 68.120 86.622 76.351 66.950 43.863 50.665 64.468 11.323 42.658 61.328 54.984 63.595 77.444 88.297 77.980 81.518 50.969 +mAcc: 74.406 98.820 51.461 83.044 87.914 72.134 71.489 32.477 33.557 84.802 0.000 49.368 70.673 77.806 56.174 39.925 52.231 31.825 36.474 60.814 + +thomas 04/10 06:33:32 201/312: Data time: 0.0024, Iter time: 0.4562 Loss 0.763 (AVG: 0.797) Score 74.970 (AVG: 76.812) mIOU 46.416 mAP 63.319 mAcc 58.549 +IOU: 66.053 96.483 42.249 70.467 83.432 69.264 62.948 24.644 28.771 61.508 0.012 44.942 50.831 15.750 31.416 30.277 46.972 29.820 43.326 29.155 +mAP: 72.968 96.767 47.999 70.108 86.136 80.240 65.639 42.230 44.231 62.032 12.897 50.182 63.096 56.130 63.386 70.668 86.108 73.487 76.387 45.693 +mAcc: 76.187 99.038 56.178 84.411 87.902 72.703 71.791 34.099 32.010 82.586 0.012 55.319 73.018 78.237 57.281 32.882 47.014 30.264 43.362 56.689 + +thomas 04/10 06:34:13 301/312: Data time: 0.0046, Iter time: 0.2159 Loss 0.475 (AVG: 0.773) Score 86.526 (AVG: 77.351) mIOU 47.394 mAP 63.276 mAcc 59.914 +IOU: 66.068 96.446 46.254 68.581 83.325 70.336 61.761 25.638 32.893 67.561 0.010 51.483 50.455 15.858 36.471 30.880 46.055 26.279 41.271 30.260 +mAP: 72.029 96.982 49.163 70.092 87.202 78.465 66.794 44.182 43.826 62.919 12.059 51.441 62.108 58.251 64.604 68.869 85.335 69.860 77.509 43.834 +mAcc: 75.710 98.963 60.916 84.186 87.651 73.537 69.904 34.779 36.705 85.118 0.010 61.108 75.777 82.261 67.516 34.364 46.087 26.607 41.301 55.775 + +thomas 04/10 06:34:17 312/312: Data time: 0.0032, Iter time: 0.1985 Loss 0.437 (AVG: 0.777) Score 87.701 (AVG: 77.261) 
mIOU 47.158 mAP 63.293 mAcc 59.654 +IOU: 65.850 96.489 45.344 67.180 83.342 69.927 62.490 25.404 32.476 67.392 0.009 51.047 51.439 15.616 35.881 30.771 46.029 25.267 40.281 30.924 +mAP: 72.011 97.025 48.948 70.521 87.397 77.414 67.512 43.415 43.977 62.919 11.716 52.118 62.815 58.251 64.336 68.869 85.429 70.231 77.323 43.634 +mAcc: 75.666 98.977 60.521 84.348 87.811 73.060 70.559 34.744 36.217 85.118 0.009 60.729 76.264 82.261 66.017 34.364 46.060 25.565 40.309 54.482 + +thomas 04/10 06:34:17 Finished test. Elapsed time: 130.5156 +thomas 04/10 06:34:17 Current best mIoU: 54.497 at iter 18000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 06:36:00 ===> Epoch[127](19040/151): Loss 0.2959 LR: 8.560e-02 Score 90.435 Data time: 0.2421, Total iter time: 2.5189 +thomas 04/10 06:37:47 ===> Epoch[127](19080/151): Loss 0.3162 LR: 8.557e-02 Score 89.953 Data time: 0.2362, Total iter time: 2.6142 +thomas 04/10 06:39:30 ===> Epoch[127](19120/151): Loss 0.2955 LR: 8.554e-02 Score 90.670 Data time: 0.2184, Total iter time: 2.5398 +thomas 04/10 06:41:11 ===> Epoch[127](19160/151): Loss 0.3188 LR: 8.551e-02 Score 90.010 Data time: 0.2440, Total iter time: 2.4515 +thomas 04/10 06:42:48 ===> Epoch[128](19200/151): Loss 0.3065 LR: 8.548e-02 Score 90.244 Data time: 0.2193, Total iter time: 2.3754 +thomas 04/10 06:44:29 ===> Epoch[128](19240/151): Loss 0.2866 LR: 8.545e-02 Score 90.573 Data time: 0.2358, Total iter time: 2.4764 +thomas 04/10 06:46:11 ===> Epoch[128](19280/151): Loss 0.3096 LR: 8.542e-02 Score 90.008 Data time: 0.2385, Total iter time: 2.5071 +thomas 04/10 06:47:57 ===> Epoch[128](19320/151): Loss 0.2809 LR: 8.539e-02 Score 91.057 Data time: 0.2420, Total iter time: 2.5945 +thomas 04/10 
06:49:45 ===> Epoch[129](19360/151): Loss 0.2884 LR: 8.536e-02 Score 90.962 Data time: 0.2588, Total iter time: 2.6234 +thomas 04/10 06:51:33 ===> Epoch[129](19400/151): Loss 0.2684 LR: 8.532e-02 Score 91.407 Data time: 0.2594, Total iter time: 2.6370 +thomas 04/10 06:53:15 ===> Epoch[129](19440/151): Loss 0.2828 LR: 8.529e-02 Score 91.070 Data time: 0.2474, Total iter time: 2.5045 +thomas 04/10 06:54:59 ===> Epoch[130](19480/151): Loss 0.3075 LR: 8.526e-02 Score 90.049 Data time: 0.2299, Total iter time: 2.5353 +thomas 04/10 06:56:37 ===> Epoch[130](19520/151): Loss 0.2758 LR: 8.523e-02 Score 91.100 Data time: 0.2220, Total iter time: 2.3962 +thomas 04/10 06:58:14 ===> Epoch[130](19560/151): Loss 0.2892 LR: 8.520e-02 Score 90.703 Data time: 0.2246, Total iter time: 2.3814 +thomas 04/10 06:59:53 ===> Epoch[130](19600/151): Loss 0.2686 LR: 8.517e-02 Score 91.260 Data time: 0.2248, Total iter time: 2.4037 +thomas 04/10 07:01:36 ===> Epoch[131](19640/151): Loss 0.2731 LR: 8.514e-02 Score 91.177 Data time: 0.2461, Total iter time: 2.5379 +thomas 04/10 07:03:18 ===> Epoch[131](19680/151): Loss 0.2727 LR: 8.511e-02 Score 91.302 Data time: 0.2256, Total iter time: 2.4728 +thomas 04/10 07:04:58 ===> Epoch[131](19720/151): Loss 0.2648 LR: 8.508e-02 Score 91.217 Data time: 0.2495, Total iter time: 2.4373 +thomas 04/10 07:06:37 ===> Epoch[131](19760/151): Loss 0.2757 LR: 8.505e-02 Score 91.045 Data time: 0.2274, Total iter time: 2.4270 +thomas 04/10 07:08:17 ===> Epoch[132](19800/151): Loss 0.2740 LR: 8.502e-02 Score 91.251 Data time: 0.2156, Total iter time: 2.4412 +thomas 04/10 07:09:58 ===> Epoch[132](19840/151): Loss 0.2911 LR: 8.499e-02 Score 90.490 Data time: 0.2269, Total iter time: 2.4702 +thomas 04/10 07:11:39 ===> Epoch[132](19880/151): Loss 0.2666 LR: 8.496e-02 Score 91.218 Data time: 0.2292, Total iter time: 2.4684 +thomas 04/10 07:13:18 ===> Epoch[132](19920/151): Loss 0.2888 LR: 8.493e-02 Score 90.717 Data time: 0.2290, Total iter time: 2.4259 +thomas 04/10 
07:15:00 ===> Epoch[133](19960/151): Loss 0.2993 LR: 8.490e-02 Score 90.519 Data time: 0.2285, Total iter time: 2.4735 +thomas 04/10 07:16:38 ===> Epoch[133](20000/151): Loss 0.2879 LR: 8.487e-02 Score 90.667 Data time: 0.2228, Total iter time: 2.3941 +thomas 04/10 07:16:39 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 07:16:39 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 07:17:24 101/312: Data time: 0.0025, Iter time: 0.1899 Loss 0.128 (AVG: 0.721) Score 96.336 (AVG: 80.548) mIOU 45.976 mAP 60.374 mAcc 54.870 +IOU: 72.421 96.930 34.622 67.376 81.121 73.062 61.746 30.276 31.872 66.128 0.000 51.867 41.784 22.076 27.709 16.993 48.978 16.539 48.720 29.295 +mAP: 72.917 97.634 59.510 63.832 85.576 81.271 64.988 47.175 43.181 59.593 12.181 53.302 54.616 55.216 38.205 47.205 85.065 61.680 76.668 47.669 +mAcc: 84.594 98.548 79.081 70.270 92.704 88.970 74.178 41.031 33.295 83.081 0.000 58.942 53.827 26.906 31.341 16.998 49.030 16.594 48.862 49.159 + +thomas 04/10 07:18:05 201/312: Data time: 0.0342, Iter time: 0.2743 Loss 0.833 (AVG: 0.710) Score 71.943 (AVG: 80.674) mIOU 47.079 mAP 60.709 mAcc 56.356 +IOU: 71.841 96.933 35.870 63.602 84.020 71.711 67.665 34.453 31.319 66.038 0.000 54.529 47.110 31.287 34.062 11.667 45.995 16.675 50.053 26.744 +mAP: 73.301 97.239 57.060 64.186 87.568 79.693 64.616 48.622 46.799 58.436 10.424 54.918 56.769 57.438 41.764 51.119 85.325 65.280 73.821 39.811 +mAcc: 83.532 98.762 80.779 68.550 94.721 90.070 79.054 45.586 33.008 83.815 0.000 61.164 59.876 36.901 39.544 11.675 46.097 16.823 50.154 47.009 + +thomas 04/10 07:18:45 301/312: Data time: 0.0045, Iter time: 0.2445 Loss 1.093 (AVG: 0.716) Score 70.385 (AVG: 80.337) mIOU 49.150 mAP 62.260 mAcc 58.646 +IOU: 71.207 96.743 39.284 65.558 83.695 73.096 67.999 34.534 30.836 62.576 0.000 56.075 48.887 38.356 42.233 15.470 49.163 18.553 59.366 29.358 +mAP: 73.051 97.198 56.932 64.148 87.370 79.577 
66.391 49.382 46.133 59.221 10.593 53.481 57.372 59.650 51.794 62.467 81.386 67.948 79.064 42.041 +mAcc: 83.300 98.631 79.729 70.947 95.397 89.500 79.988 44.517 32.236 84.874 0.000 65.861 61.364 44.018 50.098 15.871 49.244 18.699 59.520 49.119 + +thomas 04/10 07:18:49 312/312: Data time: 0.0023, Iter time: 0.1853 Loss 2.368 (AVG: 0.721) Score 64.332 (AVG: 80.279) mIOU 48.887 mAP 62.305 mAcc 58.438 +IOU: 71.241 96.691 39.156 65.948 83.644 72.747 67.421 34.232 29.957 62.006 0.000 55.589 48.443 37.497 41.277 15.470 49.189 18.420 59.366 29.444 +mAP: 73.022 97.104 56.780 64.734 87.419 79.665 66.589 49.036 45.843 59.620 10.912 53.840 57.230 59.861 51.288 62.467 80.979 68.021 79.064 42.618 +mAcc: 83.356 98.626 79.790 71.261 95.414 89.527 79.212 44.857 31.323 84.164 0.000 65.179 61.735 42.961 49.143 15.871 49.275 18.557 59.520 48.995 + +thomas 04/10 07:18:49 Finished test. Elapsed time: 129.7934 +thomas 04/10 07:18:49 Current best mIoU: 54.497 at iter 18000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. 
+ warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 07:20:28 ===> Epoch[133](20040/151): Loss 0.2929 LR: 8.484e-02 Score 90.602 Data time: 0.2260, Total iter time: 2.4325 +thomas 04/10 07:22:15 ===> Epoch[133](20080/151): Loss 0.2679 LR: 8.481e-02 Score 91.381 Data time: 0.2324, Total iter time: 2.6114 +thomas 04/10 07:23:57 ===> Epoch[134](20120/151): Loss 0.2976 LR: 8.478e-02 Score 90.274 Data time: 0.2417, Total iter time: 2.4999 +thomas 04/10 07:25:43 ===> Epoch[134](20160/151): Loss 0.2692 LR: 8.474e-02 Score 91.454 Data time: 0.2433, Total iter time: 2.6008 +thomas 04/10 07:27:17 ===> Epoch[134](20200/151): Loss 0.2855 LR: 8.471e-02 Score 90.739 Data time: 0.2096, Total iter time: 2.2965 +thomas 04/10 07:28:55 ===> Epoch[135](20240/151): Loss 0.2863 LR: 8.468e-02 Score 90.627 Data time: 0.2269, Total iter time: 2.3885 +thomas 04/10 07:30:38 ===> Epoch[135](20280/151): Loss 0.2884 LR: 8.465e-02 Score 90.885 Data time: 0.2338, Total iter time: 2.5038 +thomas 04/10 07:32:19 ===> Epoch[135](20320/151): Loss 0.2716 LR: 8.462e-02 Score 91.351 Data time: 0.2268, Total iter time: 2.4768 +thomas 04/10 07:33:58 ===> Epoch[135](20360/151): Loss 0.2759 LR: 8.459e-02 Score 91.028 Data time: 0.2389, Total iter time: 2.4230 +thomas 04/10 07:35:42 ===> Epoch[136](20400/151): Loss 0.2684 LR: 8.456e-02 Score 91.364 Data time: 0.2191, Total iter time: 2.5444 +thomas 04/10 07:37:23 ===> Epoch[136](20440/151): Loss 0.2617 LR: 8.453e-02 Score 91.617 Data time: 0.2397, Total iter time: 2.4670 +thomas 04/10 07:39:04 ===> Epoch[136](20480/151): Loss 0.2804 LR: 8.450e-02 Score 90.972 Data time: 0.2206, Total iter time: 2.4841 +thomas 04/10 07:40:49 ===> Epoch[136](20520/151): Loss 0.2879 LR: 8.447e-02 Score 90.633 Data time: 0.2147, Total iter time: 2.5543 +thomas 04/10 07:42:26 ===> Epoch[137](20560/151): Loss 0.2984 LR: 8.444e-02 Score 90.504 Data time: 0.2366, Total iter time: 2.3831 +thomas 04/10 07:44:08 ===> Epoch[137](20600/151): Loss 
0.2679 LR: 8.441e-02 Score 91.540 Data time: 0.2098, Total iter time: 2.4725 +thomas 04/10 07:45:49 ===> Epoch[137](20640/151): Loss 0.2668 LR: 8.438e-02 Score 91.463 Data time: 0.2137, Total iter time: 2.4710 +thomas 04/10 07:47:30 ===> Epoch[137](20680/151): Loss 0.2664 LR: 8.435e-02 Score 91.646 Data time: 0.2403, Total iter time: 2.4703 +thomas 04/10 07:49:10 ===> Epoch[138](20720/151): Loss 0.2803 LR: 8.432e-02 Score 90.934 Data time: 0.2275, Total iter time: 2.4381 +thomas 04/10 07:50:51 ===> Epoch[138](20760/151): Loss 0.2740 LR: 8.429e-02 Score 91.039 Data time: 0.2301, Total iter time: 2.4776 +thomas 04/10 07:52:29 ===> Epoch[138](20800/151): Loss 0.2660 LR: 8.426e-02 Score 91.594 Data time: 0.2199, Total iter time: 2.3977 +thomas 04/10 07:54:14 ===> Epoch[139](20840/151): Loss 0.3053 LR: 8.422e-02 Score 90.412 Data time: 0.2353, Total iter time: 2.5691 +thomas 04/10 07:55:53 ===> Epoch[139](20880/151): Loss 0.2855 LR: 8.419e-02 Score 90.847 Data time: 0.2279, Total iter time: 2.4302 +thomas 04/10 07:57:37 ===> Epoch[139](20920/151): Loss 0.2853 LR: 8.416e-02 Score 90.952 Data time: 0.2425, Total iter time: 2.5409 +thomas 04/10 07:59:15 ===> Epoch[139](20960/151): Loss 0.2754 LR: 8.413e-02 Score 91.240 Data time: 0.2265, Total iter time: 2.3924 +thomas 04/10 08:00:53 ===> Epoch[140](21000/151): Loss 0.2751 LR: 8.410e-02 Score 91.097 Data time: 0.2351, Total iter time: 2.3950 +thomas 04/10 08:00:55 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 08:00:55 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 08:01:38 101/312: Data time: 0.0038, Iter time: 0.2477 Loss 0.275 (AVG: 0.677) Score 88.485 (AVG: 81.717) mIOU 50.574 mAP 62.733 mAcc 59.909 +IOU: 71.759 96.287 49.827 54.590 88.746 76.735 68.308 34.487 31.397 63.137 0.000 50.522 54.354 39.796 44.129 27.738 43.516 19.505 55.413 41.235 +mAP: 71.918 95.295 53.502 59.199 88.495 76.655 73.479 46.855 47.492 62.568 9.181 
45.205 65.016 61.855 62.379 66.055 87.710 69.610 69.567 42.620 +mAcc: 88.100 99.295 65.522 69.399 94.363 85.531 76.054 50.033 37.647 80.905 0.000 65.986 66.045 54.574 57.550 28.072 43.551 19.795 55.542 60.212 + +thomas 04/10 08:02:19 201/312: Data time: 0.0233, Iter time: 0.2570 Loss 0.802 (AVG: 0.651) Score 76.660 (AVG: 81.994) mIOU 51.550 mAP 63.058 mAcc 60.866 +IOU: 73.073 96.013 50.498 65.110 85.766 76.811 70.047 34.339 38.868 59.118 0.000 54.471 54.023 35.906 41.172 29.417 50.175 18.903 60.639 36.654 +mAP: 72.799 95.626 51.767 62.904 87.490 80.085 69.779 50.225 44.190 61.033 8.538 51.915 61.612 56.124 59.760 70.851 85.722 68.848 77.753 44.145 +mAcc: 88.610 99.239 64.395 76.794 91.735 88.518 78.710 47.824 46.682 79.645 0.000 69.506 63.861 47.286 54.410 31.439 50.378 19.183 60.717 58.381 + +thomas 04/10 08:03:02 301/312: Data time: 0.0023, Iter time: 0.2702 Loss 0.406 (AVG: 0.635) Score 88.594 (AVG: 82.588) mIOU 51.939 mAP 62.727 mAcc 60.885 +IOU: 73.572 95.847 50.993 67.417 86.567 79.689 69.370 33.511 39.989 65.446 0.000 56.660 51.533 38.027 34.476 29.365 51.203 19.629 58.950 36.529 +mAP: 72.233 95.479 53.911 66.250 87.914 82.079 68.611 49.895 43.664 58.072 9.068 53.198 60.330 55.942 53.012 72.079 84.433 67.097 77.755 43.523 +mAcc: 89.266 99.200 64.968 77.768 92.166 91.171 78.260 46.944 47.395 83.184 0.000 70.684 62.538 49.731 47.850 31.368 51.483 19.913 59.020 54.792 + +thomas 04/10 08:03:06 312/312: Data time: 0.0023, Iter time: 0.1137 Loss 0.496 (AVG: 0.632) Score 83.629 (AVG: 82.635) mIOU 51.930 mAP 62.871 mAcc 60.982 +IOU: 73.915 95.865 51.090 67.456 86.374 79.656 68.370 33.260 39.818 65.136 0.000 55.866 51.521 37.932 35.106 28.975 51.977 20.690 58.950 36.641 +mAP: 72.260 95.530 54.395 66.068 87.842 82.079 68.686 50.010 43.250 58.072 8.903 53.890 60.110 55.942 54.002 71.444 85.029 68.845 77.755 43.301 +mAcc: 89.399 99.201 64.898 77.877 92.206 91.171 76.902 46.736 47.193 83.184 0.000 71.253 62.638 49.731 48.985 30.927 52.242 21.016 59.020 55.071 + +thomas 
04/10 08:03:06 Finished test. Elapsed time: 131.2619 +thomas 04/10 08:03:06 Current best mIoU: 54.497 at iter 18000 +/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. + warnings.warn("To get the last learning rate computed by the scheduler, " +thomas 04/10 08:04:45 ===> Epoch[140](21040/151): Loss 0.2600 LR: 8.407e-02 Score 91.736 Data time: 0.2355, Total iter time: 2.4081 +thomas 04/10 08:06:26 ===> Epoch[140](21080/151): Loss 0.2673 LR: 8.404e-02 Score 91.213 Data time: 0.2249, Total iter time: 2.4761 +thomas 04/10 08:08:07 ===> Epoch[140](21120/151): Loss 0.2675 LR: 8.401e-02 Score 91.345 Data time: 0.2372, Total iter time: 2.4675 +thomas 04/10 08:09:46 ===> Epoch[141](21160/151): Loss 0.2574 LR: 8.398e-02 Score 91.642 Data time: 0.2324, Total iter time: 2.4286 +thomas 04/10 08:11:28 ===> Epoch[141](21200/151): Loss 0.2955 LR: 8.395e-02 Score 90.598 Data time: 0.2328, Total iter time: 2.4995 +thomas 04/10 08:13:07 ===> Epoch[141](21240/151): Loss 0.3074 LR: 8.392e-02 Score 90.217 Data time: 0.2271, Total iter time: 2.4083 +thomas 04/10 08:14:50 ===> Epoch[141](21280/151): Loss 0.2925 LR: 8.389e-02 Score 90.806 Data time: 0.2260, Total iter time: 2.5151 +thomas 04/10 08:16:28 ===> Epoch[142](21320/151): Loss 0.3042 LR: 8.386e-02 Score 90.244 Data time: 0.2220, Total iter time: 2.4088 +thomas 04/10 08:18:07 ===> Epoch[142](21360/151): Loss 0.2975 LR: 8.383e-02 Score 90.413 Data time: 0.2158, Total iter time: 2.4112 +thomas 04/10 08:19:47 ===> Epoch[142](21400/151): Loss 0.2768 LR: 8.380e-02 Score 90.974 Data time: 0.2269, Total iter time: 2.4379 +thomas 04/10 08:21:26 ===> Epoch[142](21440/151): Loss 0.2978 LR: 8.377e-02 Score 90.520 Data time: 0.2393, Total iter time: 2.4277 +thomas 04/10 08:23:07 ===> Epoch[143](21480/151): Loss 0.2897 LR: 8.374e-02 Score 90.927 Data time: 
0.2290, Total iter time: 2.4754 +thomas 04/10 08:24:44 ===> Epoch[143](21520/151): Loss 0.2742 LR: 8.370e-02 Score 91.073 Data time: 0.2171, Total iter time: 2.3612 +thomas 04/10 08:26:28 ===> Epoch[143](21560/151): Loss 0.2805 LR: 8.367e-02 Score 91.079 Data time: 0.2213, Total iter time: 2.5557 +thomas 04/10 08:28:05 ===> Epoch[144](21600/151): Loss 0.2665 LR: 8.364e-02 Score 91.407 Data time: 0.2193, Total iter time: 2.3570 +thomas 04/10 08:29:49 ===> Epoch[144](21640/151): Loss 0.2750 LR: 8.361e-02 Score 91.264 Data time: 0.2324, Total iter time: 2.5447 +thomas 04/10 08:31:29 ===> Epoch[144](21680/151): Loss 0.2584 LR: 8.358e-02 Score 91.521 Data time: 0.2298, Total iter time: 2.4597 +thomas 04/10 08:33:08 ===> Epoch[144](21720/151): Loss 0.2642 LR: 8.355e-02 Score 91.440 Data time: 0.2340, Total iter time: 2.4118 +thomas 04/10 08:34:47 ===> Epoch[145](21760/151): Loss 0.2753 LR: 8.352e-02 Score 91.091 Data time: 0.2355, Total iter time: 2.4210 +thomas 04/10 08:36:30 ===> Epoch[145](21800/151): Loss 0.2495 LR: 8.349e-02 Score 91.759 Data time: 0.2336, Total iter time: 2.5172 +thomas 04/10 08:38:13 ===> Epoch[145](21840/151): Loss 0.2748 LR: 8.346e-02 Score 91.001 Data time: 0.2382, Total iter time: 2.5326 +thomas 04/10 08:39:52 ===> Epoch[145](21880/151): Loss 0.2668 LR: 8.343e-02 Score 91.341 Data time: 0.2271, Total iter time: 2.4028 +thomas 04/10 08:41:37 ===> Epoch[146](21920/151): Loss 0.2931 LR: 8.340e-02 Score 90.524 Data time: 0.2580, Total iter time: 2.5787 +thomas 04/10 08:43:15 ===> Epoch[146](21960/151): Loss 0.2675 LR: 8.337e-02 Score 91.199 Data time: 0.2207, Total iter time: 2.3910 +thomas 04/10 08:44:57 ===> Epoch[146](22000/151): Loss 0.2895 LR: 8.334e-02 Score 90.806 Data time: 0.2305, Total iter time: 2.4938 +thomas 04/10 08:44:59 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth +thomas 04/10 08:44:59 ===> Start testing +/home/tcn02/SpatioTemporalSegmentation/lib/datasets +thomas 04/10 08:45:42 
101/312: Data time: 0.0046, Iter time: 0.2597 Loss 1.960 (AVG: 0.732) Score 67.887 (AVG: 79.974) mIOU 50.224 mAP 65.354 mAcc 62.161
+IOU: 72.038 96.651 38.050 59.130 84.636 77.224 66.941 30.308 20.367 46.866 0.053 64.128 31.951 35.338 46.771 51.589 83.178 19.800 56.982 22.476
+mAP: 72.269 96.228 63.081 60.973 87.694 87.093 71.360 46.540 38.502 73.026 16.896 67.698 60.649 63.403 53.865 75.736 87.082 64.958 73.667 46.363
+mAcc: 83.554 98.846 80.142 63.313 95.310 93.532 70.117 39.378 21.333 96.465 0.053 69.279 88.499 40.818 51.234 54.970 91.632 19.868 58.420 26.467
+
+thomas 04/10 08:46:25 201/312: Data time: 0.0025, Iter time: 0.2629 Loss 0.547 (AVG: 0.799) Score 85.289 (AVG: 78.906) mIOU 48.023 mAP 62.425 mAcc 59.780
+IOU: 71.585 96.376 40.087 54.439 82.254 75.099 62.014 29.738 23.717 42.078 0.033 49.154 34.115 30.332 26.201 49.304 81.215 21.023 69.646 22.054
+mAP: 72.029 96.897 56.059 56.548 86.970 80.466 66.009 48.187 40.042 66.582 15.633 54.771 61.135 58.375 43.843 72.786 89.030 59.598 81.269 42.272
+mAcc: 84.082 98.875 74.718 58.514 95.323 89.867 67.492 38.015 24.890 95.830 0.034 53.706 85.483 36.422 27.677 58.862 87.428 21.518 70.927 25.946
+
+thomas 04/10 08:47:05 301/312: Data time: 0.0026, Iter time: 0.2574 Loss 1.787 (AVG: 0.816) Score 60.316 (AVG: 78.495) mIOU 47.611 mAP 61.529 mAcc 59.184
+IOU: 71.132 96.608 38.554 49.493 81.185 73.534 60.897 31.573 19.928 44.033 0.024 46.272 32.069 32.334 32.658 49.707 82.194 17.179 69.053 23.799
+mAP: 71.877 96.864 54.534 52.014 85.727 78.974 64.048 48.146 37.134 64.954 15.565 47.588 58.256 59.841 46.752 77.485 89.206 59.284 79.713 42.621
+mAcc: 83.618 98.981 72.741 53.897 95.248 88.609 65.352 41.405 20.992 92.884 0.024 50.549 84.233 37.512 35.177 58.142 87.889 17.516 70.158 28.755
+
+thomas 04/10 08:47:10 312/312: Data time: 0.0023, Iter time: 0.2717 Loss 1.848 (AVG: 0.816) Score 65.351 (AVG: 78.498) mIOU 47.785 mAP 61.521 mAcc 59.349
+IOU: 70.994 96.620 39.537 49.196 81.537 73.695 61.434 31.097 20.047 44.006 0.022 47.434 32.292 31.235 32.890 49.707 82.194 18.113 69.053 24.599
+mAP: 71.618 96.933 55.081 51.196 85.952 79.285 63.944 47.821 37.404 63.985 15.403 47.922 58.272 59.932 46.408 77.485 89.206 59.739 79.713 43.115
+mAcc: 83.465 98.970 73.437 53.560 95.297 88.644 65.842 40.979 21.339 92.845 0.022 51.907 84.412 36.392 35.464 58.142 87.889 18.472 70.158 29.740
+
+thomas 04/10 08:47:10 Finished test. Elapsed time: 130.9223
+thomas 04/10 08:47:10 Current best mIoU: 54.497 at iter 18000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/10 08:48:50 ===> Epoch[146](22040/151): Loss 0.3086 LR: 8.331e-02 Score 90.222 Data time: 0.2149, Total iter time: 2.4566
+thomas 04/10 08:50:31 ===> Epoch[147](22080/151): Loss 0.2606 LR: 8.328e-02 Score 91.562 Data time: 0.2152, Total iter time: 2.4577
+thomas 04/10 08:52:14 ===> Epoch[147](22120/151): Loss 0.2688 LR: 8.325e-02 Score 91.290 Data time: 0.2209, Total iter time: 2.5301
+thomas 04/10 08:53:54 ===> Epoch[147](22160/151): Loss 0.2708 LR: 8.322e-02 Score 91.356 Data time: 0.2364, Total iter time: 2.4509
+thomas 04/10 08:55:34 ===> Epoch[148](22200/151): Loss 0.2764 LR: 8.318e-02 Score 91.144 Data time: 0.2163, Total iter time: 2.4232
+thomas 04/10 08:57:13 ===> Epoch[148](22240/151): Loss 0.2665 LR: 8.315e-02 Score 91.547 Data time: 0.2429, Total iter time: 2.4253
+thomas 04/10 08:58:58 ===> Epoch[148](22280/151): Loss 0.2680 LR: 8.312e-02 Score 91.226 Data time: 0.2353, Total iter time: 2.5833
+thomas 04/10 09:00:46 ===> Epoch[148](22320/151): Loss 0.2755 LR: 8.309e-02 Score 90.973 Data time: 0.2545, Total iter time: 2.6403
+thomas 04/10 09:02:33 ===> Epoch[149](22360/151): Loss 0.2779 LR: 8.306e-02 Score 91.001 Data time: 0.2548, Total iter time: 2.6095
+thomas 04/10 09:04:16 ===> Epoch[149](22400/151): Loss 0.2778 LR: 8.303e-02 Score 90.996 Data time: 0.2561, Total iter time: 2.5208
+thomas 04/10 09:05:57 ===> Epoch[149](22440/151): Loss 0.2860 LR: 8.300e-02 Score 90.726 Data time: 0.2458, Total iter time: 2.4728
+thomas 04/10 09:07:40 ===> Epoch[149](22480/151): Loss 0.2738 LR: 8.297e-02 Score 91.330 Data time: 0.2252, Total iter time: 2.5153
+thomas 04/10 09:09:24 ===> Epoch[150](22520/151): Loss 0.2724 LR: 8.294e-02 Score 91.103 Data time: 0.2381, Total iter time: 2.5392
+thomas 04/10 09:11:00 ===> Epoch[150](22560/151): Loss 0.2487 LR: 8.291e-02 Score 91.897 Data time: 0.2238, Total iter time: 2.3497
+thomas 04/10 09:12:39 ===> Epoch[150](22600/151): Loss 0.2937 LR: 8.288e-02 Score 90.543 Data time: 0.2220, Total iter time: 2.4248
+thomas 04/10 09:14:22 ===> Epoch[150](22640/151): Loss 0.2806 LR: 8.285e-02 Score 90.944 Data time: 0.2229, Total iter time: 2.5111
+thomas 04/10 09:16:09 ===> Epoch[151](22680/151): Loss 0.2719 LR: 8.282e-02 Score 91.319 Data time: 0.2450, Total iter time: 2.5997
+thomas 04/10 09:17:55 ===> Epoch[151](22720/151): Loss 0.2784 LR: 8.279e-02 Score 91.062 Data time: 0.2347, Total iter time: 2.5992
+thomas 04/10 09:19:35 ===> Epoch[151](22760/151): Loss 0.2721 LR: 8.276e-02 Score 91.333 Data time: 0.2298, Total iter time: 2.4562
+thomas 04/10 09:21:14 ===> Epoch[151](22800/151): Loss 0.2849 LR: 8.273e-02 Score 91.048 Data time: 0.2274, Total iter time: 2.4252
+thomas 04/10 09:22:57 ===> Epoch[152](22840/151): Loss 0.2587 LR: 8.269e-02 Score 91.599 Data time: 0.2540, Total iter time: 2.4917
+thomas 04/10 09:24:42 ===> Epoch[152](22880/151): Loss 0.2830 LR: 8.266e-02 Score 90.843 Data time: 0.2256, Total iter time: 2.5681
+thomas 04/10 09:26:22 ===> Epoch[152](22920/151): Loss 0.2797 LR: 8.263e-02 Score 90.920 Data time: 0.2242, Total iter time: 2.4595
+thomas 04/10 09:28:05 ===> Epoch[153](22960/151): Loss 0.3081 LR: 8.260e-02 Score 90.299 Data time: 0.2184, Total iter time: 2.5032
+thomas 04/10 09:29:47 ===> Epoch[153](23000/151): Loss 0.3086 LR: 8.257e-02 Score 90.327 Data time: 0.2469, Total iter time: 2.5012
+thomas 04/10 09:29:49 Checkpoint saved to ./outputs/ScanNet/2020-04-09_16-27-14/checkpoint_NoneRes16UNet34C.pth
+thomas 04/10 09:29:49 ===> Start testing
+/home/tcn02/SpatioTemporalSegmentation/lib/datasets
+thomas 04/10 09:30:32 101/312: Data time: 0.0030, Iter time: 0.2152 Loss 0.539 (AVG: 0.731) Score 79.512 (AVG: 80.913) mIOU 45.093 mAP 62.061 mAcc 55.173
+IOU: 73.909 96.885 43.812 59.864 81.919 72.886 64.162 32.878 25.474 51.605 0.000 27.359 44.303 40.452 8.328 19.656 58.560 12.792 62.036 24.980
+mAP: 73.958 97.441 49.303 63.117 86.763 86.628 66.227 46.251 44.607 55.179 6.340 46.965 54.310 67.774 44.734 84.944 88.818 71.925 67.076 38.852
+mAcc: 88.488 99.121 69.616 69.055 95.970 92.582 70.560 46.718 27.400 94.999 0.000 28.567 81.346 45.110 8.515 20.309 59.197 12.792 62.851 30.260
+
+thomas 04/10 09:31:14 201/312: Data time: 0.0027, Iter time: 0.2591 Loss 0.894 (AVG: 0.740) Score 74.426 (AVG: 80.420) mIOU 45.866 mAP 61.462 mAcc 55.949
+IOU: 72.969 96.826 44.516 64.363 82.722 74.306 66.263 31.205 29.176 51.411 0.000 27.569 39.038 33.541 18.321 25.609 57.864 10.281 67.473 23.870
+mAP: 74.249 97.149 52.398 64.726 85.473 84.056 65.600 47.837 41.877 59.774 9.990 43.916 52.256 57.591 49.882 80.062 80.216 70.068 72.702 39.410
+mAcc: 88.641 99.066 68.749 73.134 95.310 94.079 72.885 41.604 31.265 92.478 0.000 28.745 82.923 38.716 18.823 27.525 58.301 10.284 68.016 28.439
+
+thomas 04/10 09:31:55 301/312: Data time: 0.0038, Iter time: 0.2718 Loss 0.778 (AVG: 0.715) Score 77.200 (AVG: 80.892) mIOU 47.454 mAP 62.078 mAcc 57.299
+IOU: 73.125 96.703 45.656 64.778 82.851 76.153 66.374 32.418 31.115 53.066 0.022 38.066 42.035 30.381 29.897 24.555 57.624 10.283 69.497 24.485
+mAP: 74.569 97.182 52.644 62.450 86.212 84.394 64.472 48.560 43.284 58.066 11.880 48.549 56.090 53.578 55.876 76.900 79.250 68.873 78.128 40.603
+mAcc: 88.453 98.966 69.873 73.686 94.900 93.463 73.459 43.763 33.208 91.346 0.022 40.138 83.557 36.150 31.211 26.207 57.968 10.286 69.964 29.359
+
+thomas 04/10 09:31:59 312/312: Data time: 0.0023, Iter time: 0.1705 Loss 0.851 (AVG: 0.714) Score 75.909 (AVG: 80.877) mIOU 47.313 mAP 62.065 mAcc 57.172
+IOU: 73.229 96.757 45.791 64.844 82.564 76.412 65.425 32.694 30.545 53.103 0.022 37.295 41.923 29.829 29.467 24.555 57.719 10.111 69.497 24.472
+mAP: 74.590 97.239 53.118 62.253 86.334 84.153 63.438 49.055 43.035 59.472 11.788 48.102 56.132 52.926 55.950 76.900 79.563 68.580 78.128 40.551
+mAcc: 88.496 98.986 70.307 73.805 94.918 93.481 72.567 43.953 32.571 91.557 0.022 39.358 83.461 35.469 30.861 26.207 58.053 10.114 69.964 29.296
+
+thomas 04/10 09:31:59 Finished test. Elapsed time: 130.4676
+thomas 04/10 09:31:59 Current best mIoU: 54.497 at iter 18000
+/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
+  warnings.warn("To get the last learning rate computed by the scheduler, "
+thomas 04/10 09:33:39 ===> Epoch[153](23040/151): Loss 0.2660 LR: 8.254e-02 Score 91.119 Data time: 0.2328, Total iter time: 2.4332
+thomas 04/10 09:35:18 ===> Epoch[153](23080/151): Loss 0.2867 LR: 8.251e-02 Score 90.944 Data time: 0.2296, Total iter time: 2.4348
+thomas 04/10 09:37:00 ===> Epoch[154](23120/151): Loss 0.2995 LR: 8.248e-02 Score 90.414 Data time: 0.2245, Total iter time: 2.4953
+thomas 04/10 09:38:41 ===> Epoch[154](23160/151): Loss 0.2792 LR: 8.245e-02 Score 91.172 Data time: 0.2173, Total iter time: 2.4572
+thomas 04/10 09:40:22 ===> Epoch[154](23200/151): Loss 0.2680 LR: 8.242e-02 Score 91.191 Data time: 0.2412, Total iter time: 2.4815
+thomas 04/10 09:42:04 ===> Epoch[154](23240/151): Loss 0.2638 LR: 8.239e-02 Score 91.665 Data time: 0.2504, Total iter time: 2.4756
+thomas 04/10 09:43:43 ===> Epoch[155](23280/151): Loss 0.2868 LR: 8.236e-02 Score 90.741 Data time: 0.2269, Total iter time: 2.4313
+thomas 04/10 09:45:28 ===> Epoch[155](23320/151): Loss 0.2684 LR: 8.233e-02 Score 91.203 Data time: 0.2404, Total iter time: 2.5469
+thomas 04/10 09:47:09 ===> Epoch[155](23360/151): Loss 0.2629 LR: 8.230e-02 Score 91.625 Data time: 0.2125, Total iter time: 2.4891
+thomas 04/10 09:48:48 ===> Epoch[155](23400/151): Loss 0.2788 LR: 8.227e-02 Score 91.115 Data time: 0.2262, Total iter time: 2.4241
+thomas 04/10 09:50:24 ===> Epoch[156](23440/151): Loss 0.2677 LR: 8.223e-02 Score 91.486 Data time: 0.2078, Total iter time: 2.3353
+thomas 04/10 09:52:03 ===> Epoch[156](23480/151): Loss 0.2406 LR: 8.220e-02 Score 92.208 Data time: 0.2124, Total iter time: 2.4178
+thomas 04/10 09:53:44 ===> Epoch[156](23520/151): Loss 0.2556 LR: 8.217e-02 Score 91.745 Data time: 0.2460, Total iter time: 2.4861
+Traceback (most recent call last):
+  File "/home/tcn02/.pyenv/versions/3.6.10/lib/python3.6/runpy.py", line 193, in _run_module_as_main
+    "__main__", mod_spec)
+  File "/home/tcn02/.pyenv/versions/3.6.10/lib/python3.6/runpy.py", line 85, in _run_code
+    exec(code, run_globals)
+  File "/home/tcn02/SpatioTemporalSegmentation/main.py", line 159, in
+    main()
+  File "/home/tcn02/SpatioTemporalSegmentation/main.py", line 152, in main
+    train(model, train_data_loader, val_data_loader, config)
+  File "/home/tcn02/SpatioTemporalSegmentation/lib/train.py", line 100, in train
+    soutput = model(*inputs)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
+    result = self.forward(*input, **kwargs)
+  File "/home/tcn02/SpatioTemporalSegmentation/models/res16unet.py", line 252, in forward
+    out = self.block8(out)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
+    result = self.forward(*input, **kwargs)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
+    input = module(input)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
+    result = self.forward(*input, **kwargs)
+  File "/home/tcn02/SpatioTemporalSegmentation/models/modules/resnet_block.py", line 42, in forward
+    out = self.conv1(x)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
+    result = self.forward(*input, **kwargs)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/MinkowskiEngine/MinkowskiConvolution.py", line 278, in forward
+    out_coords_key, input.coords_man)
+  File "/home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/MinkowskiEngine/MinkowskiConvolution.py", line 93, in forward
+    ctx.coords_man.CPPCoordsManager)
+RuntimeError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 15.90 GiB total capacity; 1.92 GiB already allocated; 18.38 MiB free; 2.04 GiB reserved in total by PyTorch) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:289)
+frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f2a28c3f193 in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libc10.so)
+frame #1: + 0x1bccc (0x7f2a28e80ccc in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
+frame #2: + 0x1cd5e (0x7f2a28e81d5e in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
+frame #3: THCStorage_resize + 0xa3 (0x7f2a2d73d6f3 in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #4: + 0x5b2b38a (0x7f2a2ebbd38a in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #5: at::native::resize_cuda_(at::Tensor&, c10::ArrayRef, c10::optional) + 0x19c (0x7f2a2ebbc34c in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #6: + 0x45bc37a (0x7f2a2d64e37a in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #7: + 0x1f4fc43 (0x7f2a2afe1c43 in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #8: + 0x437149d (0x7f2a2d40349d in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #9: + 0x1f4fc43 (0x7f2a2afe1c43 in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch.so)
+frame #10: at::Tensor::resize_(c10::ArrayRef, c10::optional) const + 0x1a6 (0x7f2a15705226 in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/MinkowskiEngineBackend.cpython-36m-x86_64-linux-gnu.so)
+frame #11: + 0x5a519 (0x7f2a15713519 in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/MinkowskiEngineBackend.cpython-36m-x86_64-linux-gnu.so)
+frame #12: + 0x431eb (0x7f2a156fc1eb in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/MinkowskiEngineBackend.cpython-36m-x86_64-linux-gnu.so)
+frame #13: + 0x3c2dc (0x7f2a156f52dc in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/MinkowskiEngineBackend.cpython-36m-x86_64-linux-gnu.so)
+
+frame #21: THPFunction_apply(_object*, _object*) + 0xa8f (0x7f2a73c9982f in /home/tcn02/.cache/pypoetry/virtualenvs/spatiotemporalsegmentation-m5BOK7Dg-py3.6/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
+
+```
\ No newline at end of file
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/s3dis_fold5/Pointnet2_original.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/s3dis_fold5/Pointnet2_original.md
new file mode 100644
index 00000000..405f9dc2
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/s3dis_fold5/Pointnet2_original.md
@@ -0,0 +1,694 @@
+```
+(superpoint-graph-job-py3.6) ➜ deeppointcloud-benchmarks git:(pn2) ✗ poetry run python train.py
experiment.model_name=pointnet2_kc experiment.dataset=s3dis experiment.experiment_name=15-59-58 +CLASS WEIGHT : {'ceiling': 0.0249, 'floor': 0.026, 'wall': 0.0301, 'column': 0.0805, 'beam': 0.1004, 'window': 0.1216, 'door': 0.0584, 'table': 0.0679, 'chair': 0.0542, 'bookcase': 0.179, 'sofa': 0.069, 'board': 0.1509, 'clutter': 0.0371} +SegmentationModel( + (SA_modules): ModuleList( + (0): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(9, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(9, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + ) + (1): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + ) + (mlps): 
ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(99, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(99, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(64, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + ) + (2): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 196, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(196, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): 
Conv2d( + (conv): Conv2d(196, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 196, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(196, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(196, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + ) + (3): PointnetSAModuleMSG( + (groupers): ModuleList( + (0): QueryAndGroup() + (1): QueryAndGroup() + ) + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( 
+ (conv): Conv2d(256, 384, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer2): Conv2d( + (conv): Conv2d(384, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + ) + ) + (FP_modules): ModuleList( + (0): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(262, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + (1): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(608, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + (2): PointnetFPModule( + (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(768, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + (3): PointnetFPModule( 
+ (mlp): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(1536, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (activation): ReLU(inplace) + ) + ) + ) + ) + (FC_layer): Seq( + (0): Conv1d( + (conv): Conv1d(128, 128, kernel_size=(1,), stride=(1,), bias=False) + (normlayer): BatchNorm1d( + (bn): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (1): Dropout(p=0.5) + (2): Conv1d( + (conv): Conv1d(128, 13, kernel_size=(1,), stride=(1,)) + ) + ) +) +Adam ( +Parameter Group 0 + amsgrad: False + betas: (0.9, 0.999) + eps: 1e-08 + lr: 0.01 + weight_decay: 0 +) +Model size = 3026829 +Access tensorboard with the following command +EPOCH 1 / 350 + 0%| | 0/523 [00:00 0.9110346436500549, macc: 61.33574242365333 -> 61.377196530312936 + +EPOCH 3 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:41<00:00, 1.86it/s, data_loading=0.331, iteration=0.476, train_acc=82.36, train_loss_seg=0.677, train_macc=71.94, train_miou=49.87] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:11<00:00, 3.02it/s, test_acc=80.76, test_loss_seg=0.563, test_macc=67.97, test_miou=38.89] +loss_seg: 0.9110346436500549 -> 0.563483715057373, acc: 75.69741537404613 -> 80.76140647710756, macc: 61.377196530312936 -> 67.97780615141481, miou: 34.126744523142705 -> 38.895046430138535 + +EPOCH 4 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:46<00:00, 1.82it/s, data_loading=0.332, iteration=0.466, train_acc=83.81, train_loss_seg=0.615, train_macc=74.71, train_miou=52.82] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:09<00:00, 3.09it/s, test_acc=80.82, test_loss_seg=0.529, test_macc=69.60, test_miou=36.70] +loss_seg: 0.563483715057373 -> 0.5292543172836304, acc: 80.76140647710756 -> 80.82145513490195, macc: 67.97780615141481 -> 69.60467642834878 + +EPOCH 5 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:48<00:00, 1.82it/s, data_loading=0.336, iteration=0.464, train_acc=84.80, train_loss_seg=0.572, train_macc=76.60, train_miou=54.62] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.13it/s, test_acc=80.43, test_loss_seg=0.839, test_macc=70.67, test_miou=39.47] +macc: 69.60467642834878 -> 70.67034371324998, miou: 38.895046430138535 -> 39.47518720735008 + +EPOCH 6 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:43<00:00, 1.84it/s, data_loading=0.326, iteration=0.459, train_acc=85.68, train_loss_seg=0.537, train_macc=77.89, train_miou=56.21] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.13it/s, test_acc=82.38, test_loss_seg=0.535, test_macc=68.75, test_miou=38.79] +acc: 80.82145513490195 -> 82.38928861396256 + +EPOCH 7 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:39<00:00, 1.87it/s, data_loading=0.325, iteration=0.457, train_acc=86.40, train_loss_seg=0.501, train_macc=79.63, train_miou=57.89] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.16it/s, test_acc=76.86, test_loss_seg=0.630, test_macc=66.25, test_miou=35.61] + +EPOCH 8 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:38<00:00, 1.88it/s, data_loading=0.338, iteration=0.466, train_acc=87.15, train_loss_seg=0.473, train_macc=80.78, train_miou=59.18] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=79.99, test_loss_seg=0.512, test_macc=69.68, test_miou=38.44] +loss_seg: 0.5292543172836304 -> 0.51248699426651 + +EPOCH 9 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:38<00:00, 1.88it/s, data_loading=0.331, iteration=0.465, train_acc=87.69, train_loss_seg=0.452, train_macc=81.48, train_miou=60.15] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.15it/s, test_acc=82.55, test_loss_seg=0.545, test_macc=70.58, test_miou=40.43] +acc: 82.38928861396256 -> 82.55865052688948, miou: 39.47518720735008 -> 40.43524252361075 + +EPOCH 10 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.88it/s, data_loading=0.324, iteration=0.476, train_acc=88.20, train_loss_seg=0.424, train_macc=82.79, train_miou=61.47] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=81.87, test_loss_seg=0.385, test_macc=69.04, test_miou=40.39] +loss_seg: 0.51248699426651 -> 0.3858228325843811 + +EPOCH 11 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.327, iteration=0.460, train_acc=88.64, train_loss_seg=0.408, train_macc=83.38, train_miou=61.98] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.21it/s, test_acc=82.11, test_loss_seg=0.393, test_macc=70.55, test_miou=39.37] + +EPOCH 12 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.329, iteration=0.458, train_acc=88.96, train_loss_seg=0.391, train_macc=84.36, train_miou=63.03] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.23it/s, test_acc=83.40, test_loss_seg=0.233, test_macc=70.19, test_miou=41.02] +loss_seg: 0.3858228325843811 -> 0.2336183786392212, acc: 82.55865052688948 -> 83.40356871139171, miou: 40.43524252361075 -> 41.021470293663725 + +EPOCH 13 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.331, iteration=0.465, train_acc=89.41, train_loss_seg=0.375, train_macc=84.59, train_miou=63.92] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=82.00, test_loss_seg=0.339, test_macc=70.33, test_miou=41.84] +miou: 41.021470293663725 -> 41.84289022643496 + +EPOCH 14 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.330, iteration=0.452, train_acc=89.90, train_loss_seg=0.354, train_macc=85.73, train_miou=64.91] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.16it/s, test_acc=83.10, test_loss_seg=0.311, test_macc=69.94, test_miou=41.03] + +EPOCH 15 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.89it/s, data_loading=0.344, iteration=0.452, train_acc=90.04, train_loss_seg=0.347, train_macc=85.96, train_miou=65.52] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.16it/s, test_acc=81.96, test_loss_seg=0.295, test_macc=69.19, test_miou=40.05] + +EPOCH 16 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.333, iteration=0.461, train_acc=90.43, train_loss_seg=0.328, train_macc=86.81, train_miou=66.43] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=83.43, test_loss_seg=0.434, test_macc=71.08, test_miou=41.53] +acc: 83.40356871139171 -> 83.43692868254911, macc: 70.67034371324998 -> 71.08029098060784 + +EPOCH 17 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.325, iteration=0.461, train_acc=90.69, train_loss_seg=0.32 , train_macc=87.06, train_miou=66.71] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=83.33, test_loss_seg=0.269, test_macc=71.73, test_miou=41.16] +macc: 71.08029098060784 -> 71.73158063103956 + +EPOCH 18 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.89it/s, data_loading=0.323, iteration=0.462, train_acc=91.20, train_loss_seg=0.294, train_macc=87.99, train_miou=68.31] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.13it/s, test_acc=82.44, test_loss_seg=0.307, test_macc=70.42, test_miou=40.09] + +EPOCH 19 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.323, iteration=0.468, train_acc=91.32, train_loss_seg=0.291, train_macc=88.27, train_miou=68.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.06, test_loss_seg=0.212, test_macc=71.29, test_miou=43.01] +loss_seg: 0.2336183786392212 -> 0.21208004653453827, acc: 83.43692868254911 -> 84.06896546829583, miou: 41.84289022643496 -> 43.01301018966881 + +EPOCH 20 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.88it/s, data_loading=0.333, iteration=0.460, train_acc=91.62, train_loss_seg=0.281, train_macc=88.50, train_miou=69.17] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.14it/s, test_acc=83.58, test_loss_seg=0.254, test_macc=70.92, test_miou=41.98] + +EPOCH 21 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.88it/s, data_loading=0.339, iteration=0.464, train_acc=91.81, train_loss_seg=0.272, train_macc=88.65, train_miou=69.02] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.13it/s, test_acc=83.31, test_loss_seg=0.318, test_macc=70.13, test_miou=42.24] + +EPOCH 22 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.89it/s, data_loading=0.334, iteration=0.462, train_acc=92.04, train_loss_seg=0.261, train_macc=89.32, train_miou=70.17] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.69, test_loss_seg=0.224, test_macc=72.45, test_miou=43.03] +acc: 84.06896546829583 -> 84.69125082326487, macc: 71.73158063103956 -> 72.45769781266885, miou: 43.01301018966881 -> 43.03650049598167 + +EPOCH 23 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:38<00:00, 1.88it/s, data_loading=0.336, iteration=0.459, train_acc=92.09, train_loss_seg=0.261, train_macc=89.61, train_miou=70.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.14it/s, test_acc=83.80, test_loss_seg=0.197, test_macc=71.36, test_miou=42.44] +loss_seg: 0.21208004653453827 -> 0.19700782001018524 + +EPOCH 24 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.89it/s, data_loading=0.314, iteration=0.456, train_acc=92.48, train_loss_seg=0.246, train_macc=89.72, train_miou=71.14] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.13it/s, test_acc=84.98, test_loss_seg=0.151, test_macc=71.98, test_miou=44.43] +loss_seg: 0.19700782001018524 -> 0.15112227201461792, acc: 84.69125082326487 -> 84.98886463253999, miou: 43.03650049598167 -> 44.439731484668506 + +EPOCH 25 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.324, iteration=0.452, train_acc=92.77, train_loss_seg=0.234, train_macc=90.51, train_miou=71.85] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.12, test_loss_seg=0.221, test_macc=70.77, test_miou=42.35] + +EPOCH 26 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.315, iteration=0.449, train_acc=92.93, train_loss_seg=0.227, train_macc=90.85, train_miou=72.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.12it/s, test_acc=83.00, test_loss_seg=0.288, test_macc=69.50, test_miou=42.85] + +EPOCH 27 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.322, iteration=0.454, train_acc=93.00, train_loss_seg=0.225, train_macc=90.99, train_miou=72.58] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=83.31, test_loss_seg=0.235, test_macc=70.93, test_miou=42.57] + +EPOCH 28 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.319, iteration=0.460, train_acc=93.36, train_loss_seg=0.213, train_macc=91.15, train_miou=73.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, 
test_acc=83.88, test_loss_seg=0.183, test_macc=70.78, test_miou=42.46] + +EPOCH 29 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.334, iteration=0.449, train_acc=93.19, train_loss_seg=0.217, train_macc=91.11, train_miou=73.31] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=82.47, test_loss_seg=0.242, test_macc=71.12, test_miou=40.42] + +EPOCH 30 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.328, iteration=0.461, train_acc=93.53, train_loss_seg=0.204, train_macc=91.65, train_miou=73.97] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=82.15, test_loss_seg=0.203, test_macc=71.51, test_miou=39.71] + +EPOCH 31 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.324, iteration=0.453, train_acc=93.54, train_loss_seg=0.204, train_macc=91.61, train_miou=73.81] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:09<00:00, 3.08it/s, test_acc=83.94, test_loss_seg=0.235, test_macc=70.03, test_miou=43.26] + +EPOCH 32 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.325, iteration=0.456, train_acc=93.69, train_loss_seg=0.200, train_macc=91.95, train_miou=74.49] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.11, test_loss_seg=0.166, test_macc=71.31, test_miou=41.01] + +EPOCH 33 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.327, iteration=0.469, train_acc=93.82, train_loss_seg=0.193, train_macc=92.10, 
train_miou=74.67] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.16it/s, test_acc=84.75, test_loss_seg=0.185, test_macc=73.11, test_miou=44.23] +macc: 72.45769781266885 -> 73.11160574504926 + +EPOCH 34 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.329, iteration=0.459, train_acc=93.74, train_loss_seg=0.199, train_macc=91.91, train_miou=74.15] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=83.20, test_loss_seg=0.150, test_macc=71.73, test_miou=42.17] +loss_seg: 0.15112227201461792 -> 0.15026850998401642 + +EPOCH 35 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.329, iteration=0.451, train_acc=94.19, train_loss_seg=0.179, train_macc=92.64, train_miou=75.81] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.19it/s, test_acc=84.23, test_loss_seg=0.190, test_macc=71.86, test_miou=43.93] + +EPOCH 36 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.332, iteration=0.456, train_acc=94.30, train_loss_seg=0.176, train_macc=92.89, train_miou=76.17] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.14it/s, test_acc=82.42, test_loss_seg=0.261, test_macc=68.87, test_miou=38.35] + +EPOCH 37 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.325, iteration=0.460, train_acc=94.05, train_loss_seg=0.186, train_macc=92.43, train_miou=75.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.16it/s, test_acc=84.07, 
test_loss_seg=0.165, test_macc=70.83, test_miou=42.26] + +EPOCH 38 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.331, iteration=0.464, train_acc=94.14, train_loss_seg=0.182, train_macc=92.75, train_miou=75.89] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=83.08, test_loss_seg=0.171, test_macc=70.10, test_miou=41.86] + +EPOCH 39 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.319, iteration=0.451, train_acc=94.48, train_loss_seg=0.169, train_macc=93.20, train_miou=76.81] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.22it/s, test_acc=84.71, test_loss_seg=0.132, test_macc=71.43, test_miou=44.33] +loss_seg: 0.15026850998401642 -> 0.13270394504070282 + +EPOCH 40 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.327, iteration=0.462, train_acc=94.69, train_loss_seg=0.163, train_macc=93.58, train_miou=77.13] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.79, test_loss_seg=0.129, test_macc=71.15, test_miou=42.89] +loss_seg: 0.13270394504070282 -> 0.12961801886558533 + +EPOCH 41 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.328, iteration=0.457, train_acc=94.54, train_loss_seg=0.167, train_macc=93.25, train_miou=76.86] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=84.38, test_loss_seg=0.196, test_macc=71.61, test_miou=41.87] + +EPOCH 42 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, 
data_loading=0.340, iteration=0.455, train_acc=94.73, train_loss_seg=0.161, train_macc=93.53, train_miou=77.72] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.25it/s, test_acc=84.59, test_loss_seg=0.167, test_macc=71.84, test_miou=44.35] + +EPOCH 43 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.325, iteration=0.463, train_acc=94.70, train_loss_seg=0.163, train_macc=93.26, train_miou=77.40] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=83.36, test_loss_seg=0.166, test_macc=70.89, test_miou=41.52] + +EPOCH 44 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.325, iteration=0.452, train_acc=94.84, train_loss_seg=0.156, train_macc=93.64, train_miou=77.39] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.25it/s, test_acc=83.43, test_loss_seg=0.158, test_macc=71.14, test_miou=41.82] + +EPOCH 45 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.323, iteration=0.461, train_acc=94.50, train_loss_seg=0.176, train_macc=93.02, train_miou=76.19] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.25it/s, test_acc=84.02, test_loss_seg=0.161, test_macc=71.02, test_miou=42.06] + +EPOCH 46 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [05:06<00:00, 1.70it/s, data_loading=0.338, iteration=0.461, train_acc=94.97, train_loss_seg=0.153, train_macc=93.70, train_miou=77.66] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.55, test_loss_seg=0.115, 
test_macc=71.61, test_miou=42.24] +loss_seg: 0.12961801886558533 -> 0.11495383083820343 + +EPOCH 47 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.333, iteration=0.463, train_acc=95.36, train_loss_seg=0.139, train_macc=94.37, train_miou=79.42] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.14it/s, test_acc=84.62, test_loss_seg=0.154, test_macc=72.11, test_miou=43.54] + +EPOCH 48 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.328, iteration=0.457, train_acc=95.34, train_loss_seg=0.138, train_macc=94.43, train_miou=79.75] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.24it/s, test_acc=84.35, test_loss_seg=0.110, test_macc=70.68, test_miou=42.43] +loss_seg: 0.11495383083820343 -> 0.11047439277172089 + +EPOCH 49 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [05:05<00:00, 1.71it/s, data_loading=0.332, iteration=0.469, train_acc=94.97, train_loss_seg=0.153, train_macc=93.95, train_miou=78.14] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.13it/s, test_acc=84.87, test_loss_seg=0.195, test_macc=71.97, test_miou=44.80] +miou: 44.439731484668506 -> 44.80838627244144 + +EPOCH 50 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:38<00:00, 1.88it/s, data_loading=0.333, iteration=0.462, train_acc=95.29, train_loss_seg=0.139, train_macc=94.41, train_miou=79.09] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.60, test_loss_seg=0.158, test_macc=71.63, test_miou=43.43] + +EPOCH 51 / 350 +100%|█████████████████████████████████████████████████████| 523/523 
[04:37<00:00, 1.88it/s, data_loading=0.322, iteration=0.461, train_acc=95.36, train_loss_seg=0.138, train_macc=94.42, train_miou=79.55] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:09<00:00, 3.10it/s, test_acc=84.55, test_loss_seg=0.098, test_macc=70.97, test_miou=43.26] +loss_seg: 0.11047439277172089 -> 0.09837505966424942 + +EPOCH 52 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:38<00:00, 1.88it/s, data_loading=0.335, iteration=0.470, train_acc=95.31, train_loss_seg=0.140, train_macc=94.25, train_miou=78.97] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.49, test_loss_seg=0.204, test_macc=71.37, test_miou=42.48] + +EPOCH 53 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.334, iteration=0.450, train_acc=95.43, train_loss_seg=0.136, train_macc=94.42, train_miou=79.70] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.22it/s, test_acc=84.37, test_loss_seg=0.130, test_macc=71.28, test_miou=41.90] + +EPOCH 54 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:31<00:00, 1.92it/s, data_loading=0.323, iteration=0.453, train_acc=95.47, train_loss_seg=0.134, train_macc=94.46, train_miou=79.92] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=84.68, test_loss_seg=0.106, test_macc=71.52, test_miou=43.73] + +EPOCH 55 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.324, iteration=0.453, train_acc=95.67, train_loss_seg=0.126, train_macc=94.63, train_miou=80.24] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.21it/s, test_acc=84.83, test_loss_seg=0.124, test_macc=71.88, test_miou=44.05] + +EPOCH 56 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.324, iteration=0.450, train_acc=95.53, train_loss_seg=0.133, train_macc=94.66, train_miou=80.05] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:09<00:00, 3.12it/s, test_acc=84.29, test_loss_seg=0.177, test_macc=70.81, test_miou=43.80] + +EPOCH 57 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.322, iteration=0.458, train_acc=95.39, train_loss_seg=0.138, train_macc=94.43, train_miou=79.38] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.19, test_loss_seg=0.208, test_macc=70.90, test_miou=42.96] + +EPOCH 58 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.324, iteration=0.464, train_acc=95.61, train_loss_seg=0.130, train_macc=94.69, train_miou=80.01] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.15it/s, test_acc=84.55, test_loss_seg=0.112, test_macc=70.99, test_miou=41.65] + +EPOCH 59 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.325, iteration=0.453, train_acc=95.80, train_loss_seg=0.122, train_macc=95.03, train_miou=81.05] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.24it/s, test_acc=84.23, test_loss_seg=0.138, test_macc=72.23, test_miou=42.83] + +EPOCH 60 / 350 +100%|█████████████████████████████████████████████████████| 
523/523 [04:37<00:00, 1.89it/s, data_loading=0.326, iteration=0.456, train_acc=95.75, train_loss_seg=0.125, train_macc=94.90, train_miou=80.58] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.14it/s, test_acc=84.16, test_loss_seg=0.100, test_macc=70.87, test_miou=42.73] + +EPOCH 61 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.323, iteration=0.459, train_acc=95.80, train_loss_seg=0.123, train_macc=94.99, train_miou=80.98] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.52, test_loss_seg=0.168, test_macc=71.90, test_miou=43.64] + +EPOCH 62 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.89it/s, data_loading=0.331, iteration=0.464, train_acc=95.94, train_loss_seg=0.118, train_macc=95.10, train_miou=80.99] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.14it/s, test_acc=83.74, test_loss_seg=0.127, test_macc=70.53, test_miou=41.60] + +EPOCH 63 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.344, iteration=0.451, train_acc=95.67, train_loss_seg=0.128, train_macc=94.89, train_miou=80.30] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.16it/s, test_acc=84.29, test_loss_seg=0.140, test_macc=71.06, test_miou=41.74] + +EPOCH 64 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.328, iteration=0.456, train_acc=96.12, train_loss_seg=0.111, train_macc=95.37, train_miou=82.25] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, 
test_acc=83.05, test_loss_seg=0.139, test_macc=71.48, test_miou=42.18] + +EPOCH 65 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.325, iteration=0.461, train_acc=95.73, train_loss_seg=0.128, train_macc=94.82, train_miou=80.18] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.16it/s, test_acc=85.08, test_loss_seg=0.077, test_macc=70.91, test_miou=43.16] +loss_seg: 0.09837505966424942 -> 0.07739892601966858, acc: 84.98886463253999 -> 85.0884193597838 + +EPOCH 66 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:37<00:00, 1.88it/s, data_loading=0.324, iteration=0.479, train_acc=95.70, train_loss_seg=0.128, train_macc=94.81, train_miou=80.21] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=84.17, test_loss_seg=0.123, test_macc=71.74, test_miou=41.97] + +EPOCH 67 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.320, iteration=0.457, train_acc=96.04, train_loss_seg=0.114, train_macc=95.37, train_miou=82.08] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.22it/s, test_acc=84.49, test_loss_seg=0.119, test_macc=71.48, test_miou=43.67] + +EPOCH 68 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.332, iteration=0.454, train_acc=96.13, train_loss_seg=0.110, train_macc=95.51, train_miou=82.13] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.21it/s, test_acc=84.97, test_loss_seg=0.127, test_macc=72.48, test_miou=44.31] + +EPOCH 69 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, 
data_loading=0.331, iteration=0.454, train_acc=96.16, train_loss_seg=0.110, train_macc=95.54, train_miou=82.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.19it/s, test_acc=84.71, test_loss_seg=0.083, test_macc=70.67, test_miou=43.37] + +EPOCH 70 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.330, iteration=0.456, train_acc=96.23, train_loss_seg=0.108, train_macc=95.54, train_miou=82.62] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.16it/s, test_acc=84.86, test_loss_seg=0.094, test_macc=71.52, test_miou=43.10] + +EPOCH 71 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.330, iteration=0.451, train_acc=96.21, train_loss_seg=0.108, train_macc=95.59, train_miou=82.09] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.16it/s, test_acc=84.41, test_loss_seg=0.112, test_macc=71.62, test_miou=42.50] + +EPOCH 72 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.328, iteration=0.452, train_acc=96.06, train_loss_seg=0.115, train_macc=95.35, train_miou=81.45] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.80, test_loss_seg=0.120, test_macc=71.54, test_miou=43.00] + +EPOCH 73 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.336, iteration=0.455, train_acc=95.86, train_loss_seg=0.123, train_macc=95.20, train_miou=81.20] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=83.71, test_loss_seg=0.129, 
test_macc=70.73, test_miou=40.08] + +EPOCH 74 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.326, iteration=0.452, train_acc=96.15, train_loss_seg=0.112, train_macc=95.38, train_miou=81.31] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.92, test_loss_seg=0.088, test_macc=71.76, test_miou=41.76] + +EPOCH 75 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.332, iteration=0.469, train_acc=96.31, train_loss_seg=0.105, train_macc=95.69, train_miou=82.84] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.19it/s, test_acc=84.66, test_loss_seg=0.152, test_macc=71.09, test_miou=42.08] + +EPOCH 76 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.331, iteration=0.457, train_acc=96.28, train_loss_seg=0.108, train_macc=95.67, train_miou=82.70] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.69, test_loss_seg=0.104, test_macc=71.21, test_miou=42.81] + +EPOCH 77 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.317, iteration=0.458, train_acc=96.45, train_loss_seg=0.101, train_macc=95.90, train_miou=83.25] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.22it/s, test_acc=84.88, test_loss_seg=0.109, test_macc=71.48, test_miou=45.58] +miou: 44.80838627244144 -> 45.583527852040845 + +EPOCH 78 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.326, iteration=0.459, train_acc=96.18, train_loss_seg=0.111, train_macc=95.49, 
train_miou=82.25] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.18it/s, test_acc=84.90, test_loss_seg=0.138, test_macc=71.76, test_miou=42.66] + +EPOCH 79 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:35<00:00, 1.90it/s, data_loading=0.330, iteration=0.453, train_acc=96.50, train_loss_seg=0.099, train_macc=95.97, train_miou=82.85] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=85.13, test_loss_seg=0.077, test_macc=71.45, test_miou=44.09] +loss_seg: 0.07739892601966858 -> 0.07701006531715393, acc: 85.0884193597838 -> 85.13295373251269 + +EPOCH 80 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.329, iteration=0.451, train_acc=96.38, train_loss_seg=0.103, train_macc=95.77, train_miou=82.97] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.22it/s, test_acc=84.03, test_loss_seg=0.068, test_macc=70.81, test_miou=42.61] +loss_seg: 0.07701006531715393 -> 0.06862835586071014 + +EPOCH 81 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:30<00:00, 1.93it/s, data_loading=0.322, iteration=0.454, train_acc=96.39, train_loss_seg=0.102, train_macc=95.87, train_miou=82.95] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:05<00:00, 3.28it/s, test_acc=84.78, test_loss_seg=0.248, test_macc=70.62, test_miou=44.48] + +EPOCH 82 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.313, iteration=0.459, train_acc=96.46, train_loss_seg=0.101, train_macc=96.00, train_miou=83.53] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 
[01:07<00:00, 3.19it/s, test_acc=84.42, test_loss_seg=0.083, test_macc=71.10, test_miou=42.54] + +EPOCH 83 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.316, iteration=0.456, train_acc=95.87, train_loss_seg=0.127, train_macc=94.91, train_miou=80.11] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.21it/s, test_acc=84.12, test_loss_seg=0.087, test_macc=70.99, test_miou=41.23] + +EPOCH 84 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.92it/s, data_loading=0.320, iteration=0.465, train_acc=96.59, train_loss_seg=0.097, train_macc=96.04, train_miou=83.78] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.86, test_loss_seg=0.069, test_macc=71.64, test_miou=44.13] + +EPOCH 85 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:31<00:00, 1.93it/s, data_loading=0.320, iteration=0.446, train_acc=96.61, train_loss_seg=0.095, train_macc=96.06, train_miou=83.66] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.24it/s, test_acc=84.61, test_loss_seg=0.079, test_macc=71.61, test_miou=42.14] + +EPOCH 86 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.329, iteration=0.453, train_acc=96.54, train_loss_seg=0.098, train_macc=95.90, train_miou=83.48] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.17it/s, test_acc=84.79, test_loss_seg=0.088, test_macc=71.40, test_miou=43.40] + +EPOCH 87 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.324, iteration=0.443, train_acc=96.60, train_loss_seg=0.097, 
train_macc=96.17, train_miou=83.74] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.19it/s, test_acc=84.87, test_loss_seg=0.078, test_macc=72.29, test_miou=41.83] + +EPOCH 88 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.326, iteration=0.455, train_acc=96.47, train_loss_seg=0.101, train_macc=95.83, train_miou=82.83] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.21it/s, test_acc=83.82, test_loss_seg=0.130, test_macc=71.20, test_miou=40.69] + +EPOCH 89 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.324, iteration=0.461, train_acc=96.44, train_loss_seg=0.104, train_macc=95.80, train_miou=82.77] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=84.00, test_loss_seg=0.123, test_macc=69.64, test_miou=40.34] + +EPOCH 90 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.90it/s, data_loading=0.319, iteration=0.471, train_acc=96.56, train_loss_seg=0.098, train_macc=96.12, train_miou=83.30] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.16it/s, test_acc=84.24, test_loss_seg=0.102, test_macc=70.94, test_miou=42.32] + +EPOCH 91 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:34<00:00, 1.91it/s, data_loading=0.320, iteration=0.451, train_acc=96.79, train_loss_seg=0.09 , train_macc=96.35, train_miou=84.52] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.25it/s, test_acc=84.29, test_loss_seg=0.080, test_macc=71.28, test_miou=42.60] + +EPOCH 92 / 350 
+100%|█████████████████████████████████████████████████████| 523/523 [04:33<00:00, 1.91it/s, data_loading=0.318, iteration=0.450, train_acc=96.77, train_loss_seg=0.09 , train_macc=96.37, train_miou=84.68] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:13<00:00, 2.94it/s, test_acc=84.82, test_loss_seg=0.061, test_macc=71.26, test_miou=43.51] +loss_seg: 0.06862835586071014 -> 0.0614144541323185 + +EPOCH 93 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:40<00:00, 1.87it/s, data_loading=0.330, iteration=0.457, train_acc=96.64, train_loss_seg=0.095, train_macc=96.14, train_miou=84.0 ] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.16it/s, test_acc=84.29, test_loss_seg=0.087, test_macc=70.98, test_miou=42.75] + +EPOCH 94 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:36<00:00, 1.89it/s, data_loading=0.335, iteration=0.456, train_acc=96.55, train_loss_seg=0.099, train_macc=95.96, train_miou=83.15] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:07<00:00, 3.20it/s, test_acc=83.66, test_loss_seg=0.154, test_macc=70.37, test_miou=39.60] + +EPOCH 95 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.320, iteration=0.453, train_acc=96.62, train_loss_seg=0.096, train_macc=96.02, train_miou=83.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.21it/s, test_acc=84.72, test_loss_seg=0.076, test_macc=71.05, test_miou=42.13] + +EPOCH 96 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.332, iteration=0.455, train_acc=96.75, train_loss_seg=0.091, train_macc=96.31, train_miou=84.42] 
+100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.25it/s, test_acc=85.26, test_loss_seg=0.078, test_macc=71.66, test_miou=44.13] +acc: 85.13295373251269 -> 85.26667395303411 + +EPOCH 97 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.326, iteration=0.455, train_acc=96.69, train_loss_seg=0.092, train_macc=96.27, train_miou=84.20] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:06<00:00, 3.21it/s, test_acc=84.62, test_loss_seg=0.061, test_macc=71.09, test_miou=41.48] + +EPOCH 98 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:31<00:00, 1.93it/s, data_loading=0.324, iteration=0.445, train_acc=96.75, train_loss_seg=0.092, train_macc=96.14, train_miou=84.26] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:05<00:00, 3.26it/s, test_acc=84.99, test_loss_seg=0.099, test_macc=70.84, test_miou=43.99] + +EPOCH 99 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.326, iteration=0.454, train_acc=96.59, train_loss_seg=0.099, train_macc=96.08, train_miou=83.40] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:08<00:00, 3.15it/s, test_acc=84.42, test_loss_seg=0.075, test_macc=71.49, test_miou=43.09] + +EPOCH 100 / 350 +100%|█████████████████████████████████████████████████████| 523/523 [04:32<00:00, 1.92it/s, data_loading=0.323, iteration=0.450, train_acc=96.90, train_loss_seg=0.085, train_macc=96.43, train_miou=85.00] +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 215/215 [01:12<00:00, 2.97it/s, test_acc=84.83, test_loss_seg=0.051, test_macc=71.33, test_miou=42.57] +loss_seg: 0.0614144541323185 -> 
0.051259834319353104 + +BEST: +* loss_seg: 0.051259834319353104 +* acc: 85.26667395303411 +* miou: 45.583527852040845 +* macc: 73.11160574504926 +``` diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/s3dis_fold5/RSConv_2LD.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/s3dis_fold5/RSConv_2LD.md new file mode 100644 index 00000000..63088b53 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/s3dis_fold5/RSConv_2LD.md @@ -0,0 +1,151 @@ + +``` +# Relation-Shape Convolutional Neural Network for Point Cloud Analysis (https://arxiv.org/abs/1904.07601) +RSConv: + type: RSConv + down_conv: + module_name: RSConv + ratios: [0.2, 0.25] + radius: [0.1, 0.2] + local_nn: [[10, 8, 3], [10, 32, 64, 64]] + down_conv_nn: [[6, 16, 32, 64], [64, 64, 128]] + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [131, 128] #[3 + 128] + up_conv: + module_name: FPModule + ratios: [1, 0.25, 0.2] + radius: [0.2, 0.2, 0.1] + up_conv_nn: [[256, 64], [128, 64], [64, 64]] #[128 + 128, ...], [64+64, ...] 
+ up_k: [1, 3, 3] + skip: True + mlp_cls: + nn: [64, 64, 64, 64] + dropout: 0.5 +``` + +``` +CLASS WEIGHT : {'ceiling': 0.0249, 'floor': 0.026, 'wall': 0.0301, 'column': 0.0805, 'beam': 0.1004, 'window': 0.1216, 'door': 0.0584, 'table': 0.0679, 'chair': 0.0542, 'bookcase': 0.179, 'sofa': 0.069, 'board': 0.1509, 'clutter': 0.0371} +SegmentationModel( + (model): UnetSkipConnectionBlock( + (down): RSConv( + (_conv): Convolution( + (local_nn): Sequential( + (0): Sequential( + (0): Linear(in_features=10, out_features=8, bias=True) + (1): ReLU() + (2): BatchNorm1d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (1): Sequential( + (0): Linear(in_features=8, out_features=6, bias=True) + (1): ReLU() + (2): BatchNorm1d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + (activation): ReLU() + (global_nn): Sequential( + (0): Sequential( + (0): Linear(in_features=6, out_features=16, bias=True) + (1): ReLU() + (2): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (1): Sequential( + (0): Linear(in_features=16, out_features=32, bias=True) + (1): ReLU() + (2): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (2): Sequential( + (0): Linear(in_features=32, out_features=64, bias=True) + (1): ReLU() + (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + ) + ) + (submodule): UnetSkipConnectionBlock( + (down): RSConv( + (_conv): Convolution( + (local_nn): Sequential( + (0): Sequential( + (0): Linear(in_features=10, out_features=32, bias=True) + (1): ReLU() + (2): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (1): Sequential( + (0): Linear(in_features=32, out_features=64, bias=True) + (1): ReLU() + (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (2): Sequential( + (0): Linear(in_features=64, out_features=64, bias=True) + (1): ReLU() + (2): 
BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + (activation): ReLU() + (global_nn): Sequential( + (0): Sequential( + (0): Linear(in_features=64, out_features=64, bias=True) + (1): ReLU() + (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (1): Sequential( + (0): Linear(in_features=64, out_features=128, bias=True) + (1): ReLU() + (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + ) + ) + (submodule): UnetSkipConnectionBlock( + (inner): GlobalBaseModule( + (nn): Sequential( + (0): Sequential( + (0): Linear(in_features=131, out_features=128, bias=True) + (1): ReLU() + (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + ) + (up): FPModule( + (nn): Sequential( + (0): Sequential( + (0): Linear(in_features=256, out_features=64, bias=True) + (1): ReLU() + (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + ) + ) + (up): FPModule( + (nn): Sequential( + (0): Sequential( + (0): Linear(in_features=128, out_features=64, bias=True) + (1): ReLU() + (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + ) + ) + (up): FPModule( + (nn): Sequential( + (0): Sequential( + (0): Linear(in_features=70, out_features=64, bias=True) + (1): ReLU() + (2): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + ) + ) + ) +) +Model size = 78919 +``` + +EPOCH 105 / 350 +``` +100%|███████████████████████████████████████████████████| 1395/1395 [08:40<00:00, 2.68it/s, data_loading=0.002, iteration=0.161, train_acc=92.19, train_loss_seg=0.278, train_macc=88.40, train_miou=60.00] +``` +``` +100%|██████████████████████████████████████████████████████████████████████████████████████████████| 571/571 [01:53<00:00, 5.01it/s, test_acc=84.50, test_loss_seg=0.492, test_macc=74.96, test_miou=50.16] +``` diff --git 
a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/pointnet2_charlesmsg.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/pointnet2_charlesmsg.md new file mode 100644 index 00000000..cd023809 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/pointnet2_charlesmsg.md @@ -0,0 +1,897 @@ +``` +SegmentationModel( + (model): UnetSkipConnectionBlock( + (down): PointNetMSGDown( + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(6, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(6, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (2): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(6, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(64, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (submodule): UnetSkipConnectionBlock( + (down): PointNetMSGDown( + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(323, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(323, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + 
) + (layer1): Conv2d( + (conv): Conv2d(128, 196, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(196, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(196, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (submodule): UnetSkipConnectionBlock( + (inner): GlobalDenseBaseModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1)) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1)) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1)) + (activation): ReLU(inplace=True) + ) + ) + ) + (up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(1536, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(576, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(131, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) +) + test_acc = 92.85835378686252 + test_macc = 86.64299746256908 + test_miou = 79.99710743728609 + test_acc_per_class = {'Airplane': 91.19217948130498, 'Bag': 95.99958147321429, 'Cap': 92.1475497159091, 'Car': 91.5811659414557, 'Chair': 94.9159795587713, 'Earphone': 90.45061383928571, 'Guitar': 96.23685632861635, 'Knife': 92.17041015625, 'Lamp': 91.73438865821679, 'Laptop': 97.2526826054217, 'Motorbike': 86.58183976715686, 'Mug': 99.4384765625, 'Pistol': 95.16379616477273, 'Rocket': 81.73014322916666, 'Skateboard': 94.17685231854838, 'Table': 94.96114478920991} + test_macc_per_class = {'Airplane': 89.06243767787745, 'Bag': 84.09410361964764, 'Cap': 86.54321207586608, 'Car': 87.7766907282271, 'Chair': 91.56436731086353, 'Earphone': 70.48676946450136, 'Guitar': 94.36338527343086, 'Knife': 92.20011208213583, 'Lamp': 90.13929039908149, 'Laptop': 97.40733601531446, 'Motorbike': 78.55112059647392, 'Mug': 96.37424104485675, 'Pistol': 85.95718166476736, 'Rocket': 66.45608530138686, 'Skateboard': 85.07896230817131, 'Table': 90.23266383850316} + test_miou_per_class = {'Airplane': 81.30800228045005, 'Bag': 79.90154219367757, 'Cap': 80.95288325004574, 'Car': 77.86561384816493, 'Chair': 85.53427826230133, 'Earphone': 63.080359997168365, 'Guitar': 90.16618477875281, 'Knife': 85.47124348409487, 'Lamp': 
82.68459221203005, 'Laptop': 94.63087773652728, 'Motorbike': 70.0856598757985, 'Mug': 94.52961288041513, 'Pistol': 80.60174154334135, 'Rocket': 57.74179685696749, 'Skateboard': 72.14278482957299, 'Table': 83.2565449672691} +================================================== +EPOCH 27 / 100 +100%|█████████████████████████████| 876/876 [15:46<00:00, 1.08s/it, data_loading=0.573, iteration=0.231, train_acc=94.52, train_loss_seg=0.131, train_macc=89.64, train_miou=84.02] +Learning rate = 0.000500 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.22, test_loss_seg=0.153, test_macc=87.85, test_miou=80.96] +================================================== + test_loss_seg = 0.15340451896190643 + test_acc = 93.22898699137852 + test_macc = 87.85805703177758 + test_miou = 80.96247394197191 + test_acc_per_class = {'Airplane': 90.62757743768329, 'Bag': 96.11118861607143, 'Cap': 94.02521306818183, 'Car': 91.44580696202532, 'Chair': 94.99241222034802, 'Earphone': 92.29561941964286, 'Guitar': 96.17021668632076, 'Knife': 93.56689453125, 'Lamp': 91.45968777316433, 'Laptop': 98.17924039909639, 'Motorbike': 86.97916666666666, 'Mug': 99.46931537828947, 'Pistol': 94.35924183238636, 'Rocket': 81.34765625, 'Skateboard': 95.50308719758065, 'Table': 95.13146742334906} + test_macc_per_class = {'Airplane': 88.2120060003439, 'Bag': 84.03704470054272, 'Cap': 90.42198460141482, 'Car': 86.68123156303913, 'Chair': 92.22595957067684, 'Earphone': 71.1377068772335, 'Guitar': 93.38804561422519, 'Knife': 93.55317151502673, 'Lamp': 90.28987301451377, 'Laptop': 98.14970383845026, 'Motorbike': 83.86486246068793, 'Mug': 96.19329426862734, 'Pistol': 90.22163671659838, 'Rocket': 71.62019314384735, 'Skateboard': 84.49905116803619, 'Table': 91.23314745517713} + test_miou_per_class = {'Airplane': 80.65380838732187, 'Bag': 80.24180951813926, 'Cap': 85.52031373418691, 'Car': 77.22299128601095, 'Chair': 85.97673551319005, 'Earphone': 
65.24062052051468, 'Guitar': 89.73535384246146, 'Knife': 87.90561125158978, 'Lamp': 82.19385505627528, 'Laptop': 96.39722300937387, 'Motorbike': 71.07819359908385, 'Mug': 94.78214715603237, 'Pistol': 79.61865910766471, 'Rocket': 59.514398968796364, 'Skateboard': 75.03366818846827, 'Table': 84.28419393244099} +================================================== +EPOCH 28 / 100 +100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.570, iteration=0.230, train_acc=94.65, train_loss_seg=0.131, train_macc=89.04, train_miou=83.46] +Learning rate = 0.000500 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=92.80, test_loss_seg=0.100, test_macc=86.38, test_miou=79.91] +================================================== + test_loss_seg = 0.10021613538265228 + test_acc = 92.80427131779552 + test_macc = 86.38394935831555 + test_miou = 79.91970122660882 + test_acc_per_class = {'Airplane': 90.89720605755132, 'Bag': 95.59500558035714, 'Cap': 91.89009232954545, 'Car': 91.27490852452532, 'Chair': 94.72586891867898, 'Earphone': 89.03111049107143, 'Guitar': 96.2758574095912, 'Knife': 90.9991455078125, 'Lamp': 92.11408708479021, 'Laptop': 98.11452842620481, 'Motorbike': 86.36354932598039, 'Mug': 99.45261101973685, 'Pistol': 95.12717507102273, 'Rocket': 82.47884114583334, 'Skateboard': 95.32825100806451, 'Table': 95.20010318396226} + test_macc_per_class = {'Airplane': 88.94360221404386, 'Bag': 80.90085808199245, 'Cap': 85.90292619013543, 'Car': 84.60633288726973, 'Chair': 92.15843884959686, 'Earphone': 70.9059370489332, 'Guitar': 93.68911824692113, 'Knife': 91.02153014139623, 'Lamp': 91.07645826302205, 'Laptop': 98.09734732110445, 'Motorbike': 79.12492820838155, 'Mug': 96.56731496049736, 'Pistol': 85.61990236075296, 'Rocket': 69.89330283222094, 'Skateboard': 83.6305206742677, 'Table': 90.00467145251254} + test_miou_per_class = {'Airplane': 80.78157055117343, 'Bag': 77.35969159376795, 
'Cap': 80.28253065818791, 'Car': 76.5164429398088, 'Chair': 85.46628344217666, 'Earphone': 62.04618864045758, 'Guitar': 89.93403329233881, 'Knife': 83.48154491992675, 'Lamp': 83.75901032104366, 'Laptop': 96.27251424563674, 'Motorbike': 69.51969575265848, 'Mug': 94.67321682103926, 'Pistol': 80.22168912889191, 'Rocket': 60.24364578545022, 'Skateboard': 74.26175440042991, 'Table': 83.89540713275309} +================================================== +EPOCH 29 / 100 +100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.574, iteration=0.230, train_acc=94.96, train_loss_seg=0.128, train_macc=90.79, train_miou=85.09] +Learning rate = 0.000250 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.23, test_loss_seg=0.196, test_macc=88.07, test_miou=81.17] +================================================== + test_loss_seg = 0.1961469203233719 + test_acc = 93.23073759506124 + test_macc = 88.07382094853415 + test_miou = 81.17031880461357 + test_acc_per_class = {'Airplane': 91.41054572947213, 'Bag': 95.96819196428571, 'Cap': 92.74236505681817, 'Car': 91.76628016218355, 'Chair': 95.0377030806108, 'Earphone': 90.80287388392857, 'Guitar': 96.51201356132076, 'Knife': 92.623291015625, 'Lamp': 91.4357858937937, 'Laptop': 98.00687123493977, 'Motorbike': 86.31472120098039, 'Mug': 99.38707853618422, 'Pistol': 95.47230113636364, 'Rocket': 83.26416015625, 'Skateboard': 95.69839969758065, 'Table': 95.24921921064269} + test_macc_per_class = {'Airplane': 89.42421771795684, 'Bag': 85.40382379748301, 'Cap': 87.76213757274512, 'Car': 86.46499252816187, 'Chair': 92.7078797963302, 'Earphone': 70.64083375743061, 'Guitar': 95.04001600129193, 'Knife': 92.64376171046031, 'Lamp': 91.56684300364158, 'Laptop': 98.01925754951804, 'Motorbike': 83.67225964252239, 'Mug': 97.22880146728664, 'Pistol': 86.7474776569102, 'Rocket': 76.58076191466664, 'Skateboard': 85.29472350133159, 'Table': 89.98334755880923} + 
test_miou_per_class = {'Airplane': 81.76235449819579, 'Bag': 80.27460752031888, 'Cap': 82.39836596406276, 'Car': 77.55584305802572, 'Chair': 86.08657819418973, 'Earphone': 63.0715212758375, 'Guitar': 90.80025263236439, 'Knife': 86.25808350744008, 'Lamp': 83.19166709924777, 'Laptop': 96.06615774894739, 'Motorbike': 72.02578195221675, 'Mug': 94.1796161622361, 'Pistol': 81.19552884029052, 'Rocket': 63.86565827529235, 'Skateboard': 75.9725078860307, 'Table': 84.0205762591209} +================================================== +EPOCH 30 / 100 +100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.574, iteration=0.236, train_acc=94.85, train_loss_seg=0.120, train_macc=90.81, train_miou=85.67] +Learning rate = 0.000250 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.01it/s, test_acc=93.29, test_loss_seg=0.115, test_macc=86.85, test_miou=80.68] +================================================== + test_loss_seg = 0.11502961814403534 + test_acc = 93.29741751972873 + test_macc = 86.85001433441582 + test_miou = 80.67996523191651 + test_acc_per_class = {'Airplane': 91.1255956744868, 'Bag': 95.99958147321429, 'Cap': 92.23188920454545, 'Car': 91.95757515822784, 'Chair': 95.0192538174716, 'Earphone': 91.83175223214286, 'Guitar': 96.33973319575472, 'Knife': 93.8232421875, 'Lamp': 92.05928348994755, 'Laptop': 98.17100432981928, 'Motorbike': 86.53301164215686, 'Mug': 99.4384765625, 'Pistol': 95.1693448153409, 'Rocket': 83.12174479166666, 'Skateboard': 94.76121471774194, 'Table': 95.17597702314269} + test_macc_per_class = {'Airplane': 89.03032846867416, 'Bag': 82.52827540676121, 'Cap': 86.64299248428749, 'Car': 87.08500018893011, 'Chair': 93.07248800622054, 'Earphone': 71.17976525384326, 'Guitar': 94.9332697834113, 'Knife': 93.82134556542857, 'Lamp': 90.64099430528012, 'Laptop': 98.11776983399314, 'Motorbike': 79.96403825324565, 'Mug': 96.15382146208256, 'Pistol': 84.93867895644892, 'Rocket': 
70.20119249480884, 'Skateboard': 80.90072247759542, 'Table': 90.38954640964167} + test_miou_per_class = {'Airplane': 81.49019266459204, 'Bag': 79.3004440240375, 'Cap': 81.13064873848018, 'Car': 78.21987851451986, 'Chair': 85.90387250682713, 'Earphone': 64.73536122899971, 'Guitar': 90.42847266275447, 'Knife': 88.36393133007594, 'Lamp': 83.0220137958252, 'Laptop': 96.37950915676103, 'Motorbike': 69.48324923118211, 'Mug': 94.50629335816407, 'Pistol': 79.96895197215981, 'Rocket': 61.28597424238653, 'Skateboard': 72.74061367666374, 'Table': 83.92003660723488} +================================================== +EPOCH 31 / 100 +100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.572, iteration=0.230, train_acc=95.34, train_loss_seg=0.116, train_macc=91.69, train_miou=86.29] +Learning rate = 0.000250 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.17, test_loss_seg=0.085, test_macc=87.23, test_miou=80.75] +================================================== + test_loss_seg = 0.08585391193628311 + test_acc = 93.17479283404612 + test_macc = 87.2334939488383 + test_miou = 80.75697033616278 + test_acc_per_class = {'Airplane': 91.37517755681817, 'Bag': 96.2158203125, 'Cap': 92.46271306818183, 'Car': 91.95726611946202, 'Chair': 95.0624639337713, 'Earphone': 88.82882254464286, 'Guitar': 96.4309404481132, 'Knife': 93.54248046875, 'Lamp': 91.9486519340035, 'Laptop': 98.10746893825302, 'Motorbike': 87.12756587009804, 'Mug': 99.46674547697368, 'Pistol': 95.35466974431817, 'Rocket': 82.88167317708334, 'Skateboard': 94.7265625, 'Table': 95.30766325176887} + test_macc_per_class = {'Airplane': 90.22941763382764, 'Bag': 83.02168606168019, 'Cap': 87.19646833776883, 'Car': 87.15665523863217, 'Chair': 92.14378129209098, 'Earphone': 69.89354449363078, 'Guitar': 94.72227416056761, 'Knife': 93.52940852113849, 'Lamp': 90.50572842703522, 'Laptop': 98.07639702782734, 'Motorbike': 
79.33172343682315, 'Mug': 96.36595383327888, 'Pistol': 87.96831779514088, 'Rocket': 69.20816381628526, 'Skateboard': 85.51262390129905, 'Table': 90.87375920438622} + test_miou_per_class = {'Airplane': 81.94446050056139, 'Bag': 80.20480432836587, 'Cap': 81.72161524217609, 'Car': 78.28383558090579, 'Chair': 86.12458469788338, 'Earphone': 60.602004915178476, 'Guitar': 90.56895862300145, 'Knife': 87.86284281453165, 'Lamp': 83.05045884869833, 'Laptop': 96.2578093922929, 'Motorbike': 70.92770699341807, 'Mug': 94.77691994930714, 'Pistol': 81.44899227239195, 'Rocket': 60.50318661344794, 'Skateboard': 73.46915263398756, 'Table': 84.36419197245635} +================================================== +EPOCH 32 / 100 +100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.575, iteration=0.227, train_acc=95.31, train_loss_seg=0.117, train_macc=90.99, train_miou=85.60] +Learning rate = 0.000250 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=93.06, test_loss_seg=0.203, test_macc=87.12, test_miou=80.45] +================================================== + test_loss_seg = 0.20361241698265076 + test_acc = 93.06406385793046 + test_macc = 87.12048879686581 + test_miou = 80.4596145496511 + test_acc_per_class = {'Airplane': 91.27881002565982, 'Bag': 96.34137834821429, 'Cap': 90.8203125, 'Car': 91.796875, 'Chair': 94.99400745738636, 'Earphone': 89.84723772321429, 'Guitar': 96.33819772012579, 'Knife': 93.7567138671875, 'Lamp': 91.52985686188812, 'Laptop': 97.85097420933735, 'Motorbike': 86.71587775735294, 'Mug': 99.44875616776315, 'Pistol': 95.2425870028409, 'Rocket': 83.24381510416666, 'Skateboard': 94.50447328629032, 'Table': 95.31514869545991} + test_macc_per_class = {'Airplane': 89.97161136600984, 'Bag': 84.74701273527164, 'Cap': 84.42859388722587, 'Car': 85.38200318467446, 'Chair': 92.20138202431706, 'Earphone': 70.17629599470958, 'Guitar': 94.38570807320879, 'Knife': 
93.74572932992015, 'Lamp': 89.80544114828879, 'Laptop': 97.87469517112717, 'Motorbike': 81.34764461850129, 'Mug': 97.34255017239317, 'Pistol': 85.41972277494715, 'Rocket': 73.5497358098793, 'Skateboard': 82.57158204530427, 'Table': 90.97811241407453} + test_miou_per_class = {'Airplane': 81.64225364827809, 'Bag': 81.26505834264994, 'Cap': 77.98838099428598, 'Car': 77.55484305260184, 'Chair': 85.83102536674897, 'Earphone': 61.874529239911936, 'Guitar': 90.38948533449083, 'Knife': 88.2428112838634, 'Lamp': 82.24483152850999, 'Laptop': 95.76598672132354, 'Motorbike': 70.53932018062464, 'Mug': 94.71810789810641, 'Pistol': 79.851836655024, 'Rocket': 62.87426005966705, 'Skateboard': 72.08270550885838, 'Table': 84.4883969794728} +================================================== +EPOCH 33 / 100 +100%|█████████████████████████████| 876/876 [15:46<00:00, 1.08s/it, data_loading=0.565, iteration=0.224, train_acc=95.59, train_loss_seg=0.115, train_macc=91.64, train_miou=86.71] +Learning rate = 0.000250 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=93.17, test_loss_seg=0.135, test_macc=87.44, test_miou=80.85] +================================================== + test_loss_seg = 0.135122612118721 + test_acc = 93.17240495206183 + test_macc = 87.44845845717404 + test_miou = 80.85693253475722 + test_acc_per_class = {'Airplane': 91.4917350164956, 'Bag': 96.38323102678571, 'Cap': 91.41956676136364, 'Car': 91.8552833267405, 'Chair': 95.04852294921875, 'Earphone': 88.91950334821429, 'Guitar': 96.3956245086478, 'Knife': 93.9215087890625, 'Lamp': 92.11391635708041, 'Laptop': 98.10629235692771, 'Motorbike': 86.91980698529412, 'Mug': 99.44747121710526, 'Pistol': 95.5777254971591, 'Rocket': 82.4951171875, 'Skateboard': 95.23689516129032, 'Table': 95.42627874410378} + test_macc_per_class = {'Airplane': 89.48298147911312, 'Bag': 84.1738476036854, 'Cap': 85.02223493058077, 'Car': 87.03453467609717, 'Chair': 
92.76949527200652, 'Earphone': 69.77468796055324, 'Guitar': 94.45888975011894, 'Knife': 93.91355275548563, 'Lamp': 89.8771150526542, 'Laptop': 98.09137434421447, 'Motorbike': 83.5750420112817, 'Mug': 96.042556789376, 'Pistol': 87.74932422919667, 'Rocket': 71.40123873509795, 'Skateboard': 84.52749915952757, 'Table': 91.28096056579543} + test_miou_per_class = {'Airplane': 81.9324423154001, 'Bag': 81.20226529294713, 'Cap': 79.16953074104345, 'Car': 77.89996014803708, 'Chair': 85.98297460872246, 'Earphone': 60.54599968461872, 'Guitar': 90.50485528649543, 'Knife': 88.53658898782679, 'Lamp': 82.8139468456402, 'Laptop': 96.25670685941566, 'Motorbike': 72.20938870646147, 'Mug': 94.57355801790176, 'Pistol': 81.77631088592517, 'Rocket': 60.99132720422915, 'Skateboard': 74.38179379157471, 'Table': 84.93327117987607} +================================================== +EPOCH 34 / 100 +100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.569, iteration=0.228, train_acc=95.01, train_loss_seg=0.117, train_macc=90.67, train_miou=84.88] +Learning rate = 0.000250 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.13, test_loss_seg=0.156, test_macc=87.16, test_miou=80.51] +================================================== + test_loss_seg = 0.1566714346408844 + test_acc = 93.13860638801172 + test_macc = 87.16784201264483 + test_miou = 80.51680126555296 + test_acc_per_class = {'Airplane': 91.45751237170087, 'Bag': 95.89494977678571, 'Cap': 90.98455255681817, 'Car': 91.94212321993672, 'Chair': 94.95072798295455, 'Earphone': 89.52287946428571, 'Guitar': 96.19816234276729, 'Knife': 93.7481689453125, 'Lamp': 91.99594350961539, 'Laptop': 98.06923004518072, 'Motorbike': 86.72928155637256, 'Mug': 99.45132606907895, 'Pistol': 95.2914151278409, 'Rocket': 83.67106119791666, 'Skateboard': 95.22271925403226, 'Table': 95.08764878758844} + test_macc_per_class = {'Airplane': 90.14471800815083, 'Bag': 
81.3812998468758, 'Cap': 84.22385075743897, 'Car': 86.83906886715509, 'Chair': 92.17215592272008, 'Earphone': 70.25123566789519, 'Guitar': 93.44238556728321, 'Knife': 93.73012429327252, 'Lamp': 90.59191881486194, 'Laptop': 98.05803094523561, 'Motorbike': 84.07813927344327, 'Mug': 96.05619166319835, 'Pistol': 86.19158470746667, 'Rocket': 72.86109033223265, 'Skateboard': 84.65582525217332, 'Table': 90.00785228291356}
+ test_miou_per_class = {'Airplane': 81.95039234496188, 'Bag': 78.49644362880258, 'Cap': 78.15017554826669, 'Car': 78.08285354399462, 'Chair': 85.71278808389387, 'Earphone': 61.493329293520105, 'Guitar': 89.83670826003824, 'Knife': 88.22385983429145, 'Lamp': 83.07187414930586, 'Laptop': 96.18514563773677, 'Motorbike': 70.96490724000326, 'Mug': 94.60897500534438, 'Pistol': 80.54524133723014, 'Rocket': 63.00256329759934, 'Skateboard': 74.35831010058985, 'Table': 83.58525294326803}
+==================================================
+EPOCH 35 / 100
+100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.569, iteration=0.227, train_acc=95.22, train_loss_seg=0.113, train_macc=90.60, train_miou=85.71]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.24, test_loss_seg=0.084, test_macc=87.64, test_miou=80.87]
+==================================================
+ test_loss_seg = 0.08403374254703522
+ test_acc = 93.24986362383835
+ test_macc = 87.64508671171639
+ test_miou = 80.87228036053683
+ test_acc_per_class = {'Airplane': 91.59082317631965, 'Bag': 95.85658482142857, 'Cap': 91.78355823863636, 'Car': 92.08984375, 'Chair': 95.03083662553267, 'Earphone': 90.96330915178571, 'Guitar': 96.43892492138365, 'Knife': 93.7152099609375, 'Lamp': 91.99833369755245, 'Laptop': 98.15806193524097, 'Motorbike': 86.63449754901961, 'Mug': 99.45775082236842, 'Pistol': 95.43235085227273, 'Rocket': 82.470703125, 'Skateboard': 95.22586945564517, 'Table': 95.15115989829009}
+ test_macc_per_class = {'Airplane': 89.04640906700385, 'Bag': 83.79066595617574, 'Cap': 86.22623850636663, 'Car': 87.7611983565129, 'Chair': 91.96910312545982, 'Earphone': 71.0458907123327, 'Guitar': 94.6407168942902, 'Knife': 93.67976612004227, 'Lamp': 89.96881579186314, 'Laptop': 98.10887437499886, 'Motorbike': 82.87111967193545, 'Mug': 95.75795460430123, 'Pistol': 85.64950676192126, 'Rocket': 74.67695197728425, 'Skateboard': 86.05785872019786, 'Table': 91.07031674677604}
+ test_miou_per_class = {'Airplane': 82.0641558794372, 'Bag': 79.33434867063099, 'Cap': 80.23520924643405, 'Car': 78.78097647308475, 'Chair': 85.85478447578984, 'Earphone': 63.57578984955058, 'Guitar': 90.54023568451215, 'Knife': 88.1509488464281, 'Lamp': 82.64427089377511, 'Laptop': 96.35466668490601, 'Motorbike': 70.65757813369449, 'Mug': 94.63455818076285, 'Pistol': 80.48778520404437, 'Rocket': 61.78889584558051, 'Skateboard': 74.90865715891717, 'Table': 83.94362454104105}
+==================================================
+EPOCH 36 / 100
+100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.568, iteration=0.228, train_acc=95.39, train_loss_seg=0.118, train_macc=91.39, train_miou=86.29]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.01it/s, test_acc=93.18, test_loss_seg=0.166, test_macc=87.03, test_miou=80.70]
+==================================================
+ test_loss_seg = 0.16629894077777863
+ test_acc = 93.1840362976093
+ test_macc = 87.03941365956172
+ test_miou = 80.7076158768367
+ test_acc_per_class = {'Airplane': 91.48328674853371, 'Bag': 96.41810825892857, 'Cap': 91.89897017045455, 'Car': 92.06512064873418, 'Chair': 95.0775146484375, 'Earphone': 89.50544084821429, 'Guitar': 96.51385613207547, 'Knife': 93.85009765625, 'Lamp': 91.84177638767483, 'Laptop': 98.17276920180723, 'Motorbike': 86.75896139705883, 'Mug': 99.47831003289474, 'Pistol': 94.85529119318183, 'Rocket': 82.30387369791666, 'Skateboard': 95.51568800403226, 'Table': 95.20551573555424}
+ test_macc_per_class = {'Airplane': 89.17969589436944, 'Bag': 84.46172600921913, 'Cap': 85.95650258994871, 'Car': 87.49709785916299, 'Chair': 92.40937630807616, 'Earphone': 70.07922308306519, 'Guitar': 95.07229655623304, 'Knife': 93.84826070089584, 'Lamp': 90.22855620366164, 'Laptop': 98.1531353313276, 'Motorbike': 78.9069243520677, 'Mug': 96.02402444255915, 'Pistol': 84.5990567717369, 'Rocket': 72.54369432236118, 'Skateboard': 83.6524462674419, 'Table': 90.01860186086071}
+ test_miou_per_class = {'Airplane': 81.8470503478444, 'Bag': 81.42550763852611, 'Cap': 80.31809166699216, 'Car': 78.68272618503379, 'Chair': 86.18688976740931, 'Earphone': 61.42939877080785, 'Guitar': 90.86138931326587, 'Knife': 88.41160480368472, 'Lamp': 82.42479495629277, 'Laptop': 96.3853558978698, 'Motorbike': 69.21458138256197, 'Mug': 94.84495762140371, 'Pistol': 79.57461608721532, 'Rocket': 60.93875306467487, 'Skateboard': 74.96616774092381, 'Table': 83.80996878488072}
+==================================================
+EPOCH 37 / 100
+100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.571, iteration=0.230, train_acc=94.57, train_loss_seg=0.114, train_macc=91.38, train_miou=85.48]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=93.37, test_loss_seg=0.101, test_macc=87.24, test_miou=81.00]
+==================================================
+ test_loss_seg = 0.10134632140398026
+ test_acc = 93.37676534846744
+ test_macc = 87.24977929860533
+ test_miou = 81.00780575985672
+ test_acc_per_class = {'Airplane': 91.59612124266863, 'Bag': 96.20884486607143, 'Cap': 92.71129261363636, 'Car': 92.16463113132912, 'Chair': 95.00774036754261, 'Earphone': 91.09584263392857, 'Guitar': 96.50617875393081, 'Knife': 93.3416748046875, 'Lamp': 91.4238349541084, 'Laptop': 98.06158226656626, 'Motorbike': 87.39181219362744, 'Mug': 99.42819695723685, 'Pistol': 95.09499289772727, 'Rocket': 83.1787109375, 'Skateboard': 95.64012096774194, 'Table': 95.1766679871757}
+ test_macc_per_class = {'Airplane': 90.0597957060545, 'Bag': 83.3458742115248, 'Cap': 88.08413443296197, 'Car': 87.63194239611917, 'Chair': 92.42275250927976, 'Earphone': 70.30104043505679, 'Guitar': 94.54966297827542, 'Knife': 93.30280588427044, 'Lamp': 90.50723449496006, 'Laptop': 98.05754982393566, 'Motorbike': 80.43397309420497, 'Mug': 95.86997314421336, 'Pistol': 83.39273313878863, 'Rocket': 72.25731168832506, 'Skateboard': 85.73219001983568, 'Table': 90.04749481987906}
+ test_miou_per_class = {'Airplane': 82.24594243284238, 'Bag': 80.30540005312446, 'Cap': 82.45478709589001, 'Car': 78.86113837408469, 'Chair': 85.97162105477182, 'Earphone': 63.43292136351971, 'Guitar': 90.78079183125853, 'Knife': 87.4869836031494, 'Lamp': 81.79734426433151, 'Laptop': 96.17085371303699, 'Motorbike': 71.45695339730031, 'Mug': 94.38558978536085, 'Pistol': 79.0746673224116, 'Rocket': 62.05819285285838, 'Skateboard': 75.8714510037603, 'Table': 83.77025401000672}
+==================================================
+EPOCH 38 / 100
+100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.571, iteration=0.232, train_acc=95.06, train_loss_seg=0.118, train_macc=91.29, train_miou=86.00]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=93.00, test_loss_seg=0.161, test_macc=87.38, test_miou=80.62]
+==================================================
+ test_loss_seg = 0.16178269684314728
+ test_acc = 93.00293456839157
+ test_macc = 87.38331929450625
+ test_miou = 80.6263402251304
+ test_acc_per_class = {'Airplane': 91.33995257514663, 'Bag': 95.76939174107143, 'Cap': 93.701171875, 'Car': 92.04750543908227, 'Chair': 94.9948397549716, 'Earphone': 84.70284598214286, 'Guitar': 96.293361831761, 'Knife': 93.665771484375, 'Lamp': 91.6208547312063, 'Laptop': 98.20453689759037, 'Motorbike': 87.11224724264706, 'Mug': 99.4371916118421, 'Pistol': 95.24591619318183, 'Rocket': 83.056640625, 'Skateboard': 95.55506552419355, 'Table': 95.29965958505306}
+ test_macc_per_class = {'Airplane': 89.68211677057958, 'Bag': 82.53411455506607, 'Cap': 89.49867437864077, 'Car': 87.0224903619836, 'Chair': 92.74302068612421, 'Earphone': 68.31078982976419, 'Guitar': 94.14587514940838, 'Knife': 93.66488092506658, 'Lamp': 90.31278754375185, 'Laptop': 98.15513383518902, 'Motorbike': 82.98680102771505, 'Mug': 96.17634557571053, 'Pistol': 84.77474579068667, 'Rocket': 69.5738838702578, 'Skateboard': 87.70862139224404, 'Table': 90.84282701991184}
+ test_miou_per_class = {'Airplane': 81.71433383063473, 'Bag': 78.57341515759214, 'Cap': 84.65787960229167, 'Car': 78.46661489708228, 'Chair': 86.13013926348177, 'Earphone': 55.42526298722535, 'Guitar': 90.01810406221719, 'Knife': 88.08519094073145, 'Lamp': 82.4485407473712, 'Laptop': 96.44495659836733, 'Motorbike': 72.1415046708371, 'Mug': 94.4974304828809, 'Pistol': 79.97964355553825, 'Rocket': 60.847613104665555, 'Skateboard': 76.01911150214819, 'Table': 84.57170219902143}
+==================================================
+EPOCH 39 / 100
+100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.571, iteration=0.227, train_acc=94.88, train_loss_seg=0.117, train_macc=89.82, train_miou=84.36]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=93.28, test_loss_seg=0.135, test_macc=87.35, test_miou=80.98]
+==================================================
+ test_loss_seg = 0.1353352814912796
+ test_acc = 93.28559608189924
+ test_macc = 87.3499715071235
+ test_miou = 80.98760638253789
+ test_acc_per_class = {'Airplane': 91.5252417063783, 'Bag': 96.46344866071429, 'Cap': 92.1475497159091, 'Car': 91.9826072982595, 'Chair': 95.00995982776989, 'Earphone': 92.79087611607143, 'Guitar': 96.35355247641509, 'Knife': 92.1063232421875, 'Lamp': 92.02052829982517, 'Laptop': 98.11688158885542, 'Motorbike': 85.66272212009804, 'Mug': 99.4397615131579, 'Pistol': 95.19597833806817, 'Rocket': 82.9345703125, 'Skateboard': 95.49678679435483, 'Table': 95.32274929982312}
+ test_macc_per_class = {'Airplane': 89.31818138000617, 'Bag': 84.26332218998596, 'Cap': 86.69105041024714, 'Car': 87.86013469048616, 'Chair': 91.56314828305703, 'Earphone': 71.07660724717591, 'Guitar': 94.15074819362438, 'Knife': 92.12645614791424, 'Lamp': 89.41053595614414, 'Laptop': 98.12254724495146, 'Motorbike': 80.31896494544054, 'Mug': 95.95728188836969, 'Pistol': 87.84739425549608, 'Rocket': 72.27492005296084, 'Skateboard': 86.56324142689732, 'Table': 90.05500980121904}
+ test_miou_per_class = {'Airplane': 82.02136922130296, 'Bag': 81.5111735414595, 'Cap': 81.0083226097314, 'Car': 78.39118412073181, 'Chair': 85.82107032484896, 'Earphone': 65.84596469719327, 'Guitar': 90.42008311044715, 'Knife': 85.36561553366818, 'Lamp': 81.89250846989042, 'Laptop': 96.27869771653228, 'Motorbike': 70.0872929649071, 'Mug': 94.49662363091429, 'Pistol': 80.92216764950004, 'Rocket': 61.83925293536993, 'Skateboard': 75.65383176318528, 'Table': 84.2465438309234}
+==================================================
+EPOCH 40 / 100
+100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.572, iteration=0.231, train_acc=95.39, train_loss_seg=0.114, train_macc=90.83, train_miou=85.88]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.33, test_loss_seg=0.096, test_macc=87.12, test_miou=80.91]
+==================================================
+ test_loss_seg = 0.09682118892669678
+ test_acc = 93.3399496902154
+ test_macc = 87.12618688077049
+ test_miou = 80.91506223808629
+ test_acc_per_class = {'Airplane': 91.41956676136364, 'Bag': 96.44949776785714, 'Cap': 91.943359375, 'Car': 91.64977254746836, 'Chair': 95.03284801136364, 'Earphone': 92.00962611607143, 'Guitar': 96.39685288915094, 'Knife': 93.4136962890625, 'Lamp': 91.40420126748252, 'Laptop': 98.16217996987952, 'Motorbike': 87.22522212009804, 'Mug': 99.35109991776315, 'Pistol': 95.53666548295455, 'Rocket': 82.71484375, 'Skateboard': 95.51411290322581, 'Table': 95.21564987470519}
+ test_macc_per_class = {'Airplane': 89.2397912630305, 'Bag': 83.8827256640641, 'Cap': 85.56967196532752, 'Car': 82.19177111180215, 'Chair': 91.77833917058938, 'Earphone': 71.2689983527564, 'Guitar': 94.30974927023152, 'Knife': 93.39955294177342, 'Lamp': 88.9389246408982, 'Laptop': 98.10478976644113, 'Motorbike': 79.98996220659137, 'Mug': 94.5647839379295, 'Pistol': 85.82554329717772, 'Rocket': 77.46643347373417, 'Skateboard': 87.35256454971484, 'Table': 90.13538848026582}
+ test_miou_per_class = {'Airplane': 81.78825678653105, 'Bag': 81.32486201696794, 'Cap': 80.23656064679783, 'Car': 76.05157055126519, 'Chair': 85.95415329016264, 'Earphone': 64.95603355494943, 'Guitar': 90.49116029179586, 'Knife': 87.63518669124092, 'Lamp': 81.55026575138609, 'Laptop': 96.36205793499381, 'Motorbike': 71.07663446633367, 'Mug': 93.55768058954126, 'Pistol': 80.67325713429466, 'Rocket': 62.984758944343135, 'Skateboard': 76.03308199031679, 'Table': 83.96547516846039}
+==================================================
+EPOCH 41 / 100
+100%|█████████████████████████████| 876/876 [15:47<00:00, 1.08s/it, data_loading=0.570, iteration=0.228, train_acc=95.36, train_loss_seg=0.113, train_macc=89.70, train_miou=84.07]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.32, test_loss_seg=0.097, test_macc=87.75, test_miou=81.07]
+==================================================
+ test_loss_seg = 0.09709758311510086
+ test_acc = 93.32577508455933
+ test_macc = 87.75068264437806
+ test_miou = 81.07719762310961
+ test_acc_per_class = {'Airplane': 91.48729609604106, 'Bag': 95.61941964285714, 'Cap': 92.41832386363636, 'Car': 91.87073526503164, 'Chair': 95.01481489701705, 'Earphone': 91.11328125, 'Guitar': 96.43677525550315, 'Knife': 92.958984375, 'Lamp': 91.3627144340035, 'Laptop': 98.13335372740963, 'Motorbike': 87.36883425245098, 'Mug': 99.40892269736842, 'Pistol': 95.4134854403409, 'Rocket': 83.33333333333334, 'Skateboard': 95.97246723790323, 'Table': 95.29965958505306}
+ test_macc_per_class = {'Airplane': 89.23289364683619, 'Bag': 84.4338588969025, 'Cap': 87.002505505893, 'Car': 86.40860508881731, 'Chair': 92.87547784512319, 'Earphone': 71.30466225306495, 'Guitar': 94.44825945048653, 'Knife': 92.9171995383555, 'Lamp': 89.06774598596772, 'Laptop': 98.10984537489334, 'Motorbike': 82.6191137767716, 'Mug': 95.69738949905071, 'Pistol': 85.58028780969055, 'Rocket': 76.5160783446904, 'Skateboard': 87.44749300229898, 'Table': 90.34950629120657}
+ test_miou_per_class = {'Airplane': 81.77379497505214, 'Bag': 78.84671148176555, 'Cap': 81.57631593114627, 'Car': 78.01513477862358, 'Chair': 86.0679043835205, 'Earphone': 63.91438086020067, 'Guitar': 90.46108986537763, 'Knife': 86.81164614842572, 'Lamp': 81.29010810495507, 'Laptop': 96.30857523793959, 'Motorbike': 72.19450773127504, 'Mug': 94.19771031442315, 'Pistol': 80.43945216847987, 'Rocket': 63.570413253538185, 'Skateboard': 77.47816375898654, 'Table': 84.28925297604435}
+==================================================
+EPOCH 42 / 100
+100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.566, iteration=0.229, train_acc=95.50, train_loss_seg=0.113, train_macc=92.49, train_miou=87.45]
+Learning rate = 0.000250
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.00it/s, test_acc=93.38, test_loss_seg=0.068, test_macc=87.61, test_miou=81.09]
+==================================================
+ test_loss_seg = 0.06812456995248795
+ test_acc = 93.38126958173848
+ test_macc = 87.61576947546077
+ test_miou = 81.09972830747977
+ test_acc_per_class = {'Airplane': 91.30630269428153, 'Bag': 96.22279575892857, 'Cap': 91.59712357954545, 'Car': 91.97086382515823, 'Chair': 95.03284801136364, 'Earphone': 91.259765625, 'Guitar': 96.22856476022012, 'Knife': 93.4423828125, 'Lamp': 92.17469542176573, 'Laptop': 98.17512236445783, 'Motorbike': 87.38319546568627, 'Mug': 99.42048725328947, 'Pistol': 95.27809836647727, 'Rocket': 83.63037109375, 'Skateboard': 95.5534904233871, 'Table': 95.42420585200472}
+ test_macc_per_class = {'Airplane': 89.56525251471265, 'Bag': 84.24841347507545, 'Cap': 85.77168655551617, 'Car': 88.16955379496929, 'Chair': 92.94691480843983, 'Earphone': 71.45449986754642, 'Guitar': 93.73531381687282, 'Knife': 93.41926308107895, 'Lamp': 89.86408064217846, 'Laptop': 98.11560452319328, 'Motorbike': 80.70022621292581, 'Mug': 96.14433019404916, 'Pistol': 84.87810731741232, 'Rocket': 74.34272151948909, 'Skateboard': 87.8238404710642, 'Table': 90.6725028128485}
+ test_miou_per_class = {'Airplane': 81.82798689644471, 'Bag': 80.6876529167239, 'Cap': 79.75716020303656, 'Car': 78.5130480130221, 'Chair': 86.01744813082675, 'Earphone': 64.15821172383389, 'Guitar': 90.1415203459045, 'Knife': 87.67978146700302, 'Lamp': 82.54916998885274, 'Laptop': 96.38704629956634, 'Motorbike': 71.03906111265627, 'Mug': 94.34810555811409, 'Pistol': 79.7168309597929, 'Rocket': 63.3969921969162, 'Skateboard': 76.5020556979478, 'Table': 84.87358140903449}
+==================================================
+EPOCH 43 / 100
+100%|█████████████████████████████| 876/876 [15:44<00:00, 1.08s/it, data_loading=0.572, iteration=0.229, train_acc=95.98, train_loss_seg=0.107, train_macc=90.71, train_miou=85.93]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.14, test_loss_seg=0.215, test_macc=87.26, test_miou=80.80]
+==================================================
+ test_loss_seg = 0.21586279571056366
+ test_acc = 93.14783191486194
+ test_macc = 87.26379019871194
+ test_miou = 80.8070511707846
+ test_acc_per_class = {'Airplane': 91.50018328445748, 'Bag': 96.59946986607143, 'Cap': 91.62375710227273, 'Car': 91.98971518987342, 'Chair': 94.95010375976562, 'Earphone': 88.14871651785714, 'Guitar': 96.40790831367924, 'Knife': 93.4130859375, 'Lamp': 91.28878933566433, 'Laptop': 98.16570971385542, 'Motorbike': 87.20320159313727, 'Mug': 99.39221833881578, 'Pistol': 95.4134854403409, 'Rocket': 83.80940755208334, 'Skateboard': 95.07623487903226, 'Table': 95.38332381338444}
+ test_macc_per_class = {'Airplane': 89.1357133416585, 'Bag': 84.51813176213902, 'Cap': 85.31466271486369, 'Car': 87.018471028356, 'Chair': 91.9578803542641, 'Earphone': 70.29757907822312, 'Guitar': 94.42736969104251, 'Knife': 93.39540712555545, 'Lamp': 89.30940170745959, 'Laptop': 98.13966019773909, 'Motorbike': 82.06195745310765, 'Mug': 95.90899576150818, 'Pistol': 86.77821260659823, 'Rocket': 73.33974071482608, 'Skateboard': 84.01973395883319, 'Table': 90.59772568321664}
+ test_miou_per_class = {'Airplane': 81.87544650534, 'Bag': 82.07935485286893, 'Cap': 79.61546702150153, 'Car': 78.27749042373556, 'Chair': 85.79818532013434, 'Earphone': 59.904316780487534, 'Guitar': 90.5473252749178, 'Knife': 87.6320019974454, 'Lamp': 81.43954437431489, 'Laptop': 96.3711948444136, 'Motorbike': 71.62303349588308, 'Mug': 94.07647220802077, 'Pistol': 81.17544453191627, 'Rocket': 63.306316914176705, 'Skateboard': 74.56789622886456, 'Table': 84.62332795853258}
+==================================================
+EPOCH 44 / 100
+100%|█████████████████████████████| 876/876 [15:48<00:00, 1.08s/it, data_loading=0.571, iteration=0.224, train_acc=95.50, train_loss_seg=0.107, train_macc=91.51, train_miou=86.89]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:30<00:00, 2.00it/s, test_acc=93.45, test_loss_seg=0.071, test_macc=87.29, test_miou=81.13]
+==================================================
+ test_loss_seg = 0.07122541964054108
+ test_acc = 93.45551160924853
+ test_macc = 87.29497962189336
+ test_miou = 81.13075927440083
+ test_acc_per_class = {'Airplane': 91.43803839809385, 'Bag': 96.35532924107143, 'Cap': 91.845703125, 'Car': 92.20727848101265, 'Chair': 95.07286765358664, 'Earphone': 92.5048828125, 'Guitar': 96.52644703223271, 'Knife': 93.34228515625, 'Lamp': 92.0041384396853, 'Laptop': 98.14158979668674, 'Motorbike': 87.02895220588235, 'Mug': 99.38450863486842, 'Pistol': 95.64985795454545, 'Rocket': 83.349609375, 'Skateboard': 95.09828629032258, 'Table': 95.3384111512382}
+ test_macc_per_class = {'Airplane': 88.84612049066982, 'Bag': 85.56007610094308, 'Cap': 85.6192043695282, 'Car': 87.51227147505841, 'Chair': 92.66212084371934, 'Earphone': 71.34738144240855, 'Guitar': 94.80739962151885, 'Knife': 93.34133170891712, 'Lamp': 89.01987897521263, 'Laptop': 98.12591991893007, 'Motorbike': 81.51684584553348, 'Mug': 96.32256517941187, 'Pistol': 86.7560937570455, 'Rocket': 71.04640385969375, 'Skateboard': 83.70172373526961, 'Table': 90.53433662643347}
+ test_miou_per_class = {'Airplane': 81.79130526454955, 'Bag': 81.59513901981907, 'Cap': 80.09730015038691, 'Car': 78.75500792080774, 'Chair': 86.06041823876313, 'Earphone': 65.52895474695842, 'Guitar': 90.81936886155347, 'Knife': 87.51468764610927, 'Lamp': 81.82879756900351, 'Laptop': 96.32511663375469, 'Motorbike': 71.48599271866013, 'Mug': 94.05710307207306, 'Pistol': 81.40607518022294, 'Rocket': 62.04210710580309, 'Skateboard': 74.37728702455884, 'Table': 84.40748723738962}
+==================================================
+EPOCH 45 / 100
+100%|█████████████████████████████| 876/876 [15:41<00:00, 1.07s/it, data_loading=0.565, iteration=0.221, train_acc=95.64, train_loss_seg=0.110, train_macc=92.22, train_miou=87.43]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.37, test_loss_seg=0.209, test_macc=87.39, test_miou=81.07]
+==================================================
+ test_loss_seg = 0.20905061066150665
+ test_acc = 93.37981745699196
+ test_macc = 87.38998537352931
+ test_miou = 81.07736100139486
+ test_acc_per_class = {'Airplane': 91.55975073313783, 'Bag': 96.46693638392857, 'Cap': 91.22869318181817, 'Car': 92.18626384493672, 'Chair': 95.14624855735086, 'Earphone': 91.455078125, 'Guitar': 96.51477741745283, 'Knife': 93.7109375, 'Lamp': 91.68231670673077, 'Laptop': 98.18806475903614, 'Motorbike': 87.61584712009804, 'Mug': 99.43590666118422, 'Pistol': 95.53999467329545, 'Rocket': 82.99153645833334, 'Skateboard': 95.00693044354838, 'Table': 95.34779674602005}
+ test_macc_per_class = {'Airplane': 89.8724395013389, 'Bag': 84.444219056934, 'Cap': 85.0556889799006, 'Car': 86.83176788035303, 'Chair': 92.51417810825826, 'Earphone': 70.97941450500976, 'Guitar': 94.95362241423999, 'Knife': 93.69337661696815, 'Lamp': 89.13246727336997, 'Laptop': 98.12793451501744, 'Motorbike': 82.4450058466836, 'Mug': 95.42160063429267, 'Pistol': 86.30249847284904, 'Rocket': 73.92765976707364, 'Skateboard': 84.23502047716792, 'Table': 90.30287192701208}
+ test_miou_per_class = {'Airplane': 82.12163626041567, 'Bag': 81.58795434626064, 'Cap': 78.88350456437747, 'Car': 78.60826054253049, 'Chair': 86.2963273042356, 'Earphone': 64.13171983856815, 'Guitar': 90.8128715552976, 'Knife': 88.15818703548601, 'Lamp': 81.3154173707219, 'Laptop': 96.41215468404218, 'Motorbike': 72.08748146812057, 'Mug': 94.40449031912016, 'Pistol': 81.43016177513931, 'Rocket': 62.35069927085194, 'Skateboard': 74.2742806872763, 'Table': 84.36262899987369}
+==================================================
+EPOCH 46 / 100
+100%|█████████████████████████████| 876/876 [15:24<00:00, 1.06s/it, data_loading=0.560, iteration=0.226, train_acc=95.47, train_loss_seg=0.107, train_macc=91.31, train_miou=86.16]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.35, test_loss_seg=0.148, test_macc=87.56, test_miou=81.11]
+==================================================
+ test_loss_seg = 0.14810775220394135
+ test_acc = 93.35569559738524
+ test_macc = 87.56439900327129
+ test_miou = 81.11513364106325
+ test_acc_per_class = {'Airplane': 91.33937981121701, 'Bag': 96.40066964285714, 'Cap': 90.99343039772727, 'Car': 92.03576196598101, 'Chair': 95.06780450994317, 'Earphone': 92.28515625, 'Guitar': 96.55040045204403, 'Knife': 93.0279541015625, 'Lamp': 91.81326486013987, 'Laptop': 98.0968797063253, 'Motorbike': 87.12852328431373, 'Mug': 99.4140625, 'Pistol': 95.4378995028409, 'Rocket': 83.59781901041666, 'Skateboard': 95.166015625, 'Table': 95.33610793779481}
+ test_macc_per_class = {'Airplane': 89.97317632746608, 'Bag': 84.03444646316807, 'Cap': 84.49390471831025, 'Car': 85.63759817022752, 'Chair': 92.64630710449849, 'Earphone': 71.27549290478225, 'Guitar': 95.11368247361435, 'Knife': 93.02249782194711, 'Lamp': 89.5527070902113, 'Laptop': 98.01310114356366, 'Motorbike': 84.26902379874343, 'Mug': 96.03653117941487, 'Pistol': 86.51580072165108, 'Rocket': 75.5396930409615, 'Skateboard': 84.37230920679059, 'Table': 90.53411188699036}
+ test_miou_per_class = {'Airplane': 81.78549483782712, 'Bag': 81.21147757299278, 'Cap': 78.28122190954106, 'Car': 78.05794431532433, 'Chair': 86.20153478297254, 'Earphone': 65.29606976274373, 'Guitar': 90.85607566475004, 'Knife': 86.96232271941938, 'Lamp': 81.63388865900927, 'Laptop': 96.23322205596902, 'Motorbike': 73.13578689152611, 'Mug': 94.28035004686883, 'Pistol': 80.80890633930018, 'Rocket': 64.04267206827753, 'Skateboard': 74.61627953746688, 'Table': 84.43889109302319}
+==================================================
+EPOCH 47 / 100
+100%|█████████████████████████████| 876/876 [15:26<00:00, 1.06s/it, data_loading=0.564, iteration=0.223, train_acc=95.91, train_loss_seg=0.105, train_macc=92.39, train_miou=87.64]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.39, test_loss_seg=0.121, test_macc=87.65, test_miou=81.29]
+==================================================
+ test_loss_seg = 0.12167800217866898
+ test_acc = 93.39537527749485
+ test_macc = 87.65639224525272
+ test_miou = 81.29817841668648
+ test_acc_per_class = {'Airplane': 91.48758247800586, 'Bag': 96.49135044642857, 'Cap': 91.99662642045455, 'Car': 92.20326097705697, 'Chair': 95.06454467773438, 'Earphone': 91.58761160714286, 'Guitar': 96.49205237814465, 'Knife': 92.6202392578125, 'Lamp': 91.97084653627621, 'Laptop': 98.07099491716868, 'Motorbike': 87.06629136029412, 'Mug': 99.42691200657895, 'Pistol': 95.54887251420455, 'Rocket': 83.29671223958334, 'Skateboard': 95.59759324596774, 'Table': 95.40451337706368}
+ test_macc_per_class = {'Airplane': 89.45416522046395, 'Bag': 85.12890378935542, 'Cap': 86.15512738988768, 'Car': 86.27939102438143, 'Chair': 92.83669345885028, 'Earphone': 71.0501447539857, 'Guitar': 94.5590013554026, 'Knife': 92.61397922715094, 'Lamp': 89.33551122024805, 'Laptop': 98.05460642472656, 'Motorbike': 82.47103684494476, 'Mug': 96.47254879145764, 'Pistol': 87.78608977785458, 'Rocket': 75.10286222058134, 'Skateboard': 84.68741363701618, 'Table': 90.51480078773656}
+ test_miou_per_class = {'Airplane': 82.01931989187621, 'Bag': 81.91256762537272, 'Cap': 80.55544632067806, 'Car': 78.53845087247494, 'Chair': 86.22394552877041, 'Earphone': 64.32157828064375, 'Guitar': 90.67221616446103, 'Knife': 86.25202456835301, 'Lamp': 81.94206767325947, 'Laptop': 96.18818435502496, 'Motorbike': 72.42398061838334, 'Mug': 94.43933680634318, 'Pistol': 81.66082283285111, 'Rocket': 63.09944043758991, 'Skateboard': 75.80134340810613, 'Table': 84.7201292827957}
+==================================================
+EPOCH 48 / 100
+100%|█████████████████████████████| 876/876 [15:28<00:00, 1.06s/it, data_loading=0.569, iteration=0.229, train_acc=95.88, train_loss_seg=0.109, train_macc=92.51, train_miou=87.64]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:28<00:00, 2.04it/s, test_acc=93.22, test_loss_seg=0.158, test_macc=87.94, test_miou=80.96]
+==================================================
+ test_loss_seg = 0.15843454003334045
+ test_acc = 93.22569155887729
+ test_macc = 87.94394212582242
+ test_miou = 80.96089920703717
+ test_acc_per_class = {'Airplane': 91.4672493585044, 'Bag': 96.27511160714286, 'Cap': 91.3662997159091, 'Car': 92.1414532238924, 'Chair': 95.08445046164773, 'Earphone': 91.41671316964286, 'Guitar': 96.37443494496856, 'Knife': 93.031005859375, 'Lamp': 91.69529201267483, 'Laptop': 98.01804875753012, 'Motorbike': 87.39372702205883, 'Mug': 99.44618626644737, 'Pistol': 95.3280362215909, 'Rocket': 82.11263020833334, 'Skateboard': 95.22744455645162, 'Table': 95.23298155586674}
+ test_macc_per_class = {'Airplane': 89.49552112839663, 'Bag': 84.82935757984144, 'Cap': 84.84333492556854, 'Car': 88.29431652453803, 'Chair': 92.54062402324074, 'Earphone': 71.61747968051588, 'Guitar': 94.29274816099148, 'Knife': 93.00862915455482, 'Lamp': 89.05009483758131, 'Laptop': 97.99410071490693, 'Motorbike': 85.43968969175896, 'Mug': 96.9235571727561, 'Pistol': 85.29156553868646, 'Rocket': 77.39148674460186, 'Skateboard': 86.00719640173905, 'Table': 90.08337173348004}
+ test_miou_per_class = {'Airplane': 81.98618178833947, 'Bag': 81.07112085487826, 'Cap': 79.01033846986377, 'Car': 78.76704623661456, 'Chair': 86.21050181575089, 'Earphone': 64.23632911283535, 'Guitar': 90.46537556877549, 'Knife': 86.95797713740754, 'Lamp': 81.42090450287893, 'Laptop': 96.08507790754814, 'Motorbike': 73.35077301166585, 'Mug': 94.65362679907044, 'Pistol': 79.98599510681265, 'Rocket': 62.04592486399335, 'Skateboard': 75.11498117285788, 'Table': 84.01223296330232}
+==================================================
+EPOCH 49 / 100
+100%|█████████████████████████████| 876/876 [15:24<00:00, 1.06s/it, data_loading=0.560, iteration=0.224, train_acc=95.79, train_loss_seg=0.105, train_macc=91.84, train_miou=87.28]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.44, test_loss_seg=0.066, test_macc=87.74, test_miou=81.24]
+==================================================
+ test_loss_seg = 0.06631797552108765
+ test_acc = 93.44060345504438
+ test_macc = 87.74822844304096
+ test_miou = 81.2493094922331
+ test_acc_per_class = {'Airplane': 91.44118859970675, 'Bag': 96.14606584821429, 'Cap': 91.03781960227273, 'Car': 92.16895767405063, 'Chair': 95.0223749334162, 'Earphone': 92.15611049107143, 'Guitar': 96.57404677672956, 'Knife': 93.5748291015625, 'Lamp': 91.58824573863636, 'Laptop': 97.99039909638554, 'Motorbike': 87.01267616421569, 'Mug': 99.39221833881578, 'Pistol': 95.57550603693183, 'Rocket': 84.24886067708334, 'Skateboard': 95.79133064516128, 'Table': 95.32902555645637}
+ test_macc_per_class = {'Airplane': 89.27457489531396, 'Bag': 83.47490207622381, 'Cap': 84.12291391522987, 'Car': 87.99498492512188, 'Chair': 92.94119896778858, 'Earphone': 71.44912799026463, 'Guitar': 94.79544305543568, 'Knife': 93.56707171605484, 'Lamp': 88.48909361612513, 'Laptop': 98.00822073678128, 'Motorbike': 84.57616217703077, 'Mug': 96.58185554050307, 'Pistol': 86.41053357002706, 'Rocket': 74.4167806770052, 'Skateboard': 87.16826340192826, 'Table': 90.7005278278213}
+ test_miou_per_class = {'Airplane': 81.83332091685963, 'Bag': 80.14595717376332, 'Cap': 78.18750126974496, 'Car': 78.7752310650431, 'Chair': 86.02733393513773, 'Earphone': 65.33976332644504, 'Guitar': 90.90925917240838, 'Knife': 87.92236873763463, 'Lamp': 80.84628521585107, 'Laptop': 96.03470136561087, 'Motorbike': 72.83280936897685, 'Mug': 94.15220319743915, 'Pistol': 80.99760292892256, 'Rocket': 64.48105968454922, 'Skateboard': 77.07173123896273, 'Table': 84.43182327838062}
+==================================================
+EPOCH 50 / 100
+100%|█████████████████████████████| 876/876 [15:24<00:00, 1.05s/it, data_loading=0.567, iteration=0.224, train_acc=95.91, train_loss_seg=0.103, train_macc=91.89, train_miou=86.97]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.27, test_loss_seg=0.142, test_macc=87.83, test_miou=80.99]
+==================================================
+ test_loss_seg = 0.14202098548412323
+ test_acc = 93.27067085336823
+ test_macc = 87.8356143238222
+ test_miou = 80.99377248218505
+ test_acc_per_class = {'Airplane': 91.43832478005865, 'Bag': 96.09375, 'Cap': 90.8114346590909, 'Car': 92.11363973496836, 'Chair': 95.0284090909091, 'Earphone': 91.75153459821429, 'Guitar': 96.54118759827044, 'Knife': 92.89306640625, 'Lamp': 91.9032383631993, 'Laptop': 97.84273814006023, 'Motorbike': 87.1026731004902, 'Mug': 99.4397615131579, 'Pistol': 95.60657848011364, 'Rocket': 83.17464192708334, 'Skateboard': 95.37865423387096, 'Table': 95.21110102815447}
+ test_macc_per_class = {'Airplane': 89.87100714397559, 'Bag': 83.86328019848277, 'Cap': 83.82062373348359, 'Car': 88.08155260135965, 'Chair': 92.57802572625698, 'Earphone': 71.65184081628027, 'Guitar': 94.8216935347686, 'Knife': 92.89935047221103, 'Lamp': 88.57623574311047, 'Laptop': 97.85255968680238, 'Motorbike': 82.3668532606009, 'Mug': 96.13129734845458, 'Pistol': 87.62635552536656, 'Rocket': 77.56884215103614, 'Skateboard': 86.3780614540886, 'Table': 91.28224978487728}
+ test_miou_per_class = {'Airplane': 81.9494124663798, 'Bag': 80.12027792374614, 'Cap': 77.70703974986984, 'Car': 78.67497208152965, 'Chair': 86.03908367369752, 'Earphone': 64.68606481686045, 'Guitar': 90.85978321040471, 'Knife': 86.72922696336092, 'Lamp': 81.1764651404967, 'Laptop': 95.74903077256387, 'Motorbike': 72.39481733888812, 'Mug': 94.51516788189755, 'Pistol': 81.61505982384524, 'Rocket': 64.01157415210635, 'Skateboard': 75.45989562938695, 'Table': 84.21248808992705}
+==================================================
+EPOCH 51 / 100
+100%|█████████████████████████████| 876/876 [15:24<00:00, 1.06s/it, data_loading=0.559, iteration=0.226, train_acc=95.89, train_loss_seg=0.107, train_macc=92.70, train_miou=88.00]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.25, test_loss_seg=0.070, test_macc=87.09, test_miou=80.79]
+==================================================
+ test_loss_seg = 0.07092564553022385
+ test_acc = 93.25430206546478
+ test_macc = 87.09881206503472
+ test_miou = 80.79540499953775
+ test_acc_per_class = {'Airplane': 91.51006346224341, 'Bag': 96.240234375, 'Cap': 90.83362926136364, 'Car': 92.11085838607595, 'Chair': 95.01301158558239, 'Earphone': 91.54575892857143, 'Guitar': 96.55377849842768, 'Knife': 92.342529296875, 'Lamp': 91.7045113090035, 'Laptop': 98.13276543674698, 'Motorbike': 86.78959865196079, 'Mug': 99.43076685855263, 'Pistol': 95.67205255681817, 'Rocket': 83.642578125, 'Skateboard': 95.29517389112904, 'Table': 95.25152242408609}
+ test_macc_per_class = {'Airplane': 89.09042117852562, 'Bag': 84.48182857572155, 'Cap': 84.12088285919548, 'Car': 86.53671553287067, 'Chair': 92.22069122939186, 'Earphone': 71.7347874298771, 'Guitar': 94.78381712698295, 'Knife': 92.35543456705526, 'Lamp': 88.55395207395534, 'Laptop': 98.08778694918266, 'Motorbike': 82.14802570959759, 'Mug': 95.81332388628508, 'Pistol': 86.78213551332551, 'Rocket': 71.80465422111607, 'Skateboard': 85.18942417129813, 'Table': 89.8771120161749}
+ test_miou_per_class = {'Airplane': 81.93912423911743, 'Bag': 80.8303752678565, 'Cap': 77.87446919598364, 'Car': 78.24600804082435, 'Chair': 85.86946023399253, 'Earphone': 64.67612270797927, 'Guitar': 90.89601936001893, 'Knife': 85.77406466809845, 'Lamp': 81.03116646016376, 'Laptop': 96.30586226929863, 'Motorbike': 71.44159534115815, 'Mug': 94.40198410855774, 'Pistol': 81.68131824732355, 'Rocket': 62.51116306129815, 'Skateboard': 75.24470913181963, 'Table': 84.0030376591133}
+==================================================
+EPOCH 52 / 100
+100%|█████████████████████████████| 876/876 [15:24<00:00, 1.06s/it, data_loading=0.560, iteration=0.229, train_acc=95.71, train_loss_seg=0.103, train_macc=92.35, train_miou=87.57]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.35, test_loss_seg=0.071, test_macc=87.56, test_miou=81.04]
+==================================================
+ test_loss_seg = 0.07174931466579437
+ test_acc = 93.3595435434435
+ test_macc = 87.56577536826875
+ test_miou = 81.04646629347569
+ test_acc_per_class = {'Airplane': 91.51980044904691, 'Bag': 96.04143415178571, 'Cap': 91.41956676136364, 'Car': 92.19460789161393, 'Chair': 95.06787386807528, 'Earphone': 91.53180803571429, 'Guitar': 96.50341489779875, 'Knife': 92.7862548828125, 'Lamp': 91.6958041958042, 'Laptop': 98.17865210843374, 'Motorbike': 87.4444699754902, 'Mug': 99.43076685855263, 'Pistol': 95.6265536221591, 'Rocket': 84.04134114583334, 'Skateboard': 94.90297379032258, 'Table': 95.36737406028891}
+ test_macc_per_class = {'Airplane': 89.95087545491901, 'Bag': 84.20692037180213, 'Cap': 85.20175290804349, 'Car': 86.88851862780642, 'Chair': 92.6044415205628, 'Earphone': 71.5230664659375, 'Guitar': 94.74581115238342, 'Knife': 92.77594118168253, 'Lamp': 89.11827066835927, 'Laptop': 98.11895806499338, 'Motorbike': 83.47524322213789, 'Mug': 96.5441888186416, 'Pistol': 86.16019987535263, 'Rocket': 75.66328287413154, 'Skateboard': 83.31229555086193, 'Table': 90.76263913468439}
+ test_miou_per_class = {'Airplane': 82.06123078431214, 'Bag': 80.07793531910863, 'Cap': 79.24402761633843, 'Car': 78.60705542080362, 'Chair': 86.0838817054356, 'Earphone': 64.30912128258902, 'Guitar': 90.77909771654843, 'Knife': 86.53870270682711, 'Lamp': 81.62904582302875, 'Laptop': 96.39389265171945, 'Motorbike': 72.81202667312535, 'Mug': 94.48040977161791, 'Pistol': 81.15242788035229, 'Rocket': 64.50610819141362, 'Skateboard': 73.45778693520938, 'Table': 84.61071021718138}
+==================================================
+EPOCH 53 / 100
+100%|█████████████████████████████| 876/876 [15:26<00:00, 1.06s/it, data_loading=0.561, iteration=0.223, train_acc=95.27, train_loss_seg=0.109, train_macc=92.06, train_miou=86.82]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:28<00:00, 2.04it/s, test_acc=93.40, test_loss_seg=0.083, test_macc=87.83, test_miou=81.23]
+==================================================
+ test_loss_seg = 0.0837961956858635
+ test_acc = 93.40027782393105
+ test_macc = 87.83364204876779
+ test_miou = 81.23142192067982
+ test_acc_per_class = {'Airplane': 91.52123235887096, 'Bag': 96.20186941964286, 'Cap': 91.22869318181817, 'Car': 92.26846815664557, 'Chair': 95.05760886452414, 'Earphone': 92.05496651785714, 'Guitar': 96.45612224842768, 'Knife': 93.48388671875, 'Lamp': 91.54778327141608, 'Laptop': 98.15394390060241, 'Motorbike': 87.28553921568627, 'Mug': 99.42434210526315, 'Pistol': 95.60324928977273, 'Rocket': 83.49609375, 'Skateboard': 95.24319556451613, 'Table': 95.37745061910378}
+ test_macc_per_class = {'Airplane': 89.59398019426834, 'Bag': 85.28062852319545, 'Cap': 84.52241498802603, 'Car': 87.01600490486159, 'Chair': 92.75033588783408, 'Earphone': 72.05975989154702, 'Guitar': 94.67357516198763, 'Knife': 93.45909748455101, 'Lamp': 89.68129612815532, 'Laptop': 98.10720109028294, 'Motorbike': 83.24783022020394, 'Mug': 96.21597022123312, 'Pistol': 87.03760405309134, 'Rocket': 77.57758885878224, 'Skateboard': 83.63407659638695, 'Table': 90.48090857587772}
+ test_miou_per_class = {'Airplane': 81.95773880502645, 'Bag': 80.98661296175902, 'Cap': 78.65768057736122, 'Car': 78.74991881268437, 'Chair': 86.16496045844615, 'Earphone': 65.34265666229584, 'Guitar': 90.66638312522616, 'Knife': 87.7516571937474, 'Lamp': 81.59466468187154, 'Laptop': 96.34684942416915, 'Motorbike': 72.36834956573702, 'Mug': 94.38945298589975, 'Pistol': 81.27890890730552, 'Rocket': 64.35961792134917, 'Skateboard': 74.58497650343836, 'Table': 84.50232214455981}
+==================================================
+EPOCH 54 / 100
+100%|█████████████████████████████| 876/876 [15:35<00:00, 1.07s/it, data_loading=0.562, iteration=0.234, train_acc=96.08, train_loss_seg=0.103, train_macc=92.72, train_miou=88.07]
+Learning rate = 0.000125
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:28<00:00, 2.03it/s, test_acc=93.30, test_loss_seg=0.114, test_macc=87.34, test_miou=80.93]
+==================================================
+ test_loss_seg = 0.11489348858594894
+ test_acc = 93.3093899450089
+ test_macc = 87.34406925148981
+ test_miou = 80.93545383221934
+ test_acc_per_class = {'Airplane': 91.4794205920088, 'Bag': 96.39020647321429, 'Cap': 91.5926846590909, 'Car': 92.14361649525317, 'Chair': 95.06620927290483, 'Earphone': 90.50641741071429, 'Guitar': 96.3400402908805, 'Knife': 93.3734130859375, 'Lamp': 91.4760776333042, 'Laptop': 98.12217620481928, 'Motorbike': 87.66850490196079, 'Mug': 99.44618626644737, 'Pistol': 95.30584161931817, 'Rocket': 84.06168619791666, 'Skateboard': 94.70136088709677, 'Table': 95.27639712927476}
+ test_macc_per_class = {'Airplane': 89.82669080429005, 'Bag': 84.35668954182307, 'Cap': 86.02209523257403, 'Car': 86.74178585859894, 'Chair': 92.65185208016976, 'Earphone': 71.23748562240054, 'Guitar': 94.22580628876601, 'Knife': 93.35868172845029, 'Lamp': 89.35227611881011, 'Laptop': 98.10621274313621, 'Motorbike': 81.73692812822848, 'Mug': 95.84466132022979, 'Pistol': 
85.5923616072256, 'Rocket': 73.32538789970499, 'Skateboard': 84.06532864525035, 'Table': 91.060864404179} + test_miou_per_class = {'Airplane': 82.13824264309179, 'Bag': 81.29212455616572, 'Cap': 79.84852188337643, 'Car': 78.45049164898714, 'Chair': 86.15107328361684, 'Earphone': 63.11930988759696, 'Guitar': 90.38139388764705, 'Knife': 87.56393656723786, 'Lamp': 81.31459288003569, 'Laptop': 96.28743429447728, 'Motorbike': 71.83322968210068, 'Mug': 94.54125219767494, 'Pistol': 80.6675891146952, 'Rocket': 63.58058583389802, 'Skateboard': 73.44038651796396, 'Table': 84.35709643694379} +================================================== +EPOCH 55 / 100 +100%|█████████████████████████████| 876/876 [15:36<00:00, 1.07s/it, data_loading=0.570, iteration=0.229, train_acc=95.70, train_loss_seg=0.103, train_macc=92.05, train_miou=87.56] +Learning rate = 0.000125 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:29<00:00, 2.02it/s, test_acc=93.32, test_loss_seg=0.104, test_macc=87.35, test_miou=81.05] +================================================== + test_loss_seg = 0.10475680977106094 + test_acc = 93.32453934951818 + test_macc = 87.35033862095706 + test_miou = 81.05043780840859 + test_acc_per_class = {'Airplane': 91.53340359237536, 'Bag': 96.40764508928571, 'Cap': 91.2198153409091, 'Car': 92.13650860363924, 'Chair': 95.08888938210227, 'Earphone': 90.52036830357143, 'Guitar': 96.43431849449685, 'Knife': 93.3819580078125, 'Lamp': 91.40317690122379, 'Laptop': 98.10746893825302, 'Motorbike': 87.18022365196079, 'Mug': 99.44618626644737, 'Pistol': 95.63432173295455, 'Rocket': 84.18375651041666, 'Skateboard': 95.12821320564517, 'Table': 95.38637557119694} + test_macc_per_class = {'Airplane': 89.77961590205196, 'Bag': 85.30591182550296, 'Cap': 84.53747781488967, 'Car': 86.27241604871301, 'Chair': 92.28909665466655, 'Earphone': 71.38333772705559, 'Guitar': 94.46948009089819, 'Knife': 93.36343414243726, 'Lamp': 89.27837054973081, 'Laptop': 
98.09589305242056, 'Motorbike': 83.03609529848973, 'Mug': 96.5175210992247, 'Pistol': 87.81514066068334, 'Rocket': 71.52190888961462, 'Skateboard': 83.68220071199907, 'Table': 90.2575174669351} + test_miou_per_class = {'Airplane': 82.13499975782516, 'Bag': 81.68557620878822, 'Cap': 78.65034055955331, 'Car': 78.36535609640467, 'Chair': 86.21664953217775, 'Earphone': 63.078211923938355, 'Guitar': 90.60501804306492, 'Knife': 87.57663668850978, 'Lamp': 81.27472321161096, 'Laptop': 96.25922876125546, 'Motorbike': 72.3276664009819, 'Mug': 94.61191743573477, 'Pistol': 81.97248585662635, 'Rocket': 63.094875569255535, 'Skateboard': 74.44093609875728, 'Table': 84.51238279005285} +================================================== +EPOCH 56 / 100 +100%|█████████████████████████████| 876/876 [15:37<00:00, 1.07s/it, data_loading=0.565, iteration=0.230, train_acc=95.52, train_loss_seg=0.099, train_macc=91.84, train_miou=87.22] +Learning rate = 0.000125 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:28<00:00, 2.03it/s, test_acc=93.47, test_loss_seg=0.170, test_macc=87.91, test_miou=81.38] +================================================== + test_loss_seg = 0.17049561440944672 + test_acc = 93.47973066652233 + test_macc = 87.91379849874247 + test_miou = 81.38508807159097 + test_acc_per_class = {'Airplane': 91.60084654508798, 'Bag': 96.3623046875, 'Cap': 91.29527698863636, 'Car': 92.21005982990506, 'Chair': 95.03090598366477, 'Earphone': 92.07938058035714, 'Guitar': 96.52767541273585, 'Knife': 93.2769775390625, 'Lamp': 91.42810314685315, 'Laptop': 98.05511106927712, 'Motorbike': 87.25394454656863, 'Mug': 99.42691200657895, 'Pistol': 95.59326171875, 'Rocket': 84.4970703125, 'Skateboard': 95.64169606854838, 'Table': 95.39616422833137} + test_macc_per_class = {'Airplane': 89.37553842962157, 'Bag': 83.96831273114279, 'Cap': 84.88463843277356, 'Car': 88.07898466328095, 'Chair': 93.07200402125095, 'Earphone': 71.89347256901611, 'Guitar': 
94.82483092774173, 'Knife': 93.27874008700499, 'Lamp': 88.31928677013046, 'Laptop': 98.04289951162029, 'Motorbike': 82.01864437252725, 'Mug': 95.99690653389229, 'Pistol': 88.76118787235113, 'Rocket': 75.93427848318211, 'Skateboard': 87.28773530425022, 'Table': 90.88331527009296} + test_miou_per_class = {'Airplane': 82.09484509600347, 'Bag': 81.05581407858054, 'Cap': 78.91618511981233, 'Car': 78.78685971018203, 'Chair': 85.85568940376274, 'Earphone': 65.25051183581648, 'Guitar': 90.7937725889489, 'Knife': 87.4004659761614, 'Lamp': 80.72760191350166, 'Laptop': 96.15772174475109, 'Motorbike': 71.9473666061127, 'Mug': 94.388149222982, 'Pistol': 82.22486083007755, 'Rocket': 65.20315983137539, 'Skateboard': 76.68323635452961, 'Table': 84.6751688328578} +================================================== +EPOCH 57 / 100 +100%|█████████████████████████████| 876/876 [15:31<00:00, 1.06s/it, data_loading=0.560, iteration=0.222, train_acc=95.63, train_loss_seg=0.105, train_macc=91.17, train_miou=86.47] +Learning rate = 0.000125 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.36, test_loss_seg=0.126, test_macc=87.51, test_miou=81.06] +================================================== + test_loss_seg = 0.12660755217075348 + test_acc = 93.36472935399682 + test_macc = 87.51534495748851 + test_miou = 81.06212253666911 + test_acc_per_class = {'Airplane': 91.52209150476538, 'Bag': 96.240234375, 'Cap': 91.19762073863636, 'Car': 92.12167474287975, 'Chair': 95.11455189098011, 'Earphone': 92.27120535714286, 'Guitar': 96.56759777908806, 'Knife': 93.2623291015625, 'Lamp': 91.50117460664336, 'Laptop': 97.94863045933735, 'Motorbike': 86.93895526960785, 'Mug': 99.42305715460526, 'Pistol': 95.64652876420455, 'Rocket': 84.02506510416666, 'Skateboard': 94.79744203629032, 'Table': 95.25751077903891} + test_macc_per_class = {'Airplane': 89.27392459766794, 'Bag': 84.21340088208389, 'Cap': 84.95001065851507, 'Car': 
87.89771264888911, 'Chair': 92.60179940278309, 'Earphone': 71.76946242962913, 'Guitar': 94.8241922450697, 'Knife': 93.25493381633112, 'Lamp': 89.09306678589238, 'Laptop': 97.96488375869544, 'Motorbike': 83.98278928849717, 'Mug': 96.92295514452833, 'Pistol': 86.28074347810644, 'Rocket': 74.54361759493787, 'Skateboard': 81.9313693098231, 'Table': 90.74065727836643} + test_miou_per_class = {'Airplane': 81.95501813085801, 'Bag': 80.73302841167748, 'Cap': 78.7909439270815, 'Car': 78.70027624055551, 'Chair': 86.18020943281498, 'Earphone': 65.3925938824057, 'Guitar': 90.8990413393601, 'Knife': 87.37220480932446, 'Lamp': 81.43344238840471, 'Laptop': 95.95386862468517, 'Motorbike': 72.41370856092216, 'Mug': 94.45370755955771, 'Pistol': 81.29231172975273, 'Rocket': 64.00090156235028, 'Skateboard': 73.19549175663592, 'Table': 84.22721223031937} +================================================== +EPOCH 58 / 100 +100%|█████████████████████████████| 876/876 [15:24<00:00, 1.05s/it, data_loading=0.566, iteration=0.216, train_acc=95.91, train_loss_seg=0.100, train_macc=92.45, train_miou=87.66] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.43, test_loss_seg=0.105, test_macc=87.56, test_miou=81.20] +================================================== + test_loss_seg = 0.10538526624441147 + test_acc = 93.43827427058025 + test_macc = 87.56831853753893 + test_miou = 81.20878540867594 + test_acc_per_class = {'Airplane': 91.69334791972142, 'Bag': 96.38323102678571, 'Cap': 90.98455255681817, 'Car': 92.1915175039557, 'Chair': 95.08882002397017, 'Earphone': 91.48995535714286, 'Guitar': 96.54917207154088, 'Knife': 92.9608154296875, 'Lamp': 91.85560533216784, 'Laptop': 98.00687123493977, 'Motorbike': 87.6043581495098, 'Mug': 99.45389597039474, 'Pistol': 95.60657848011364, 'Rocket': 84.92024739583334, 'Skateboard': 94.85414566532258, 'Table': 95.36927421137972} + test_macc_per_class = 
{'Airplane': 89.55651822731025, 'Bag': 85.57563667045993, 'Cap': 84.12881182819399, 'Car': 85.86819907484585, 'Chair': 92.44241383371849, 'Earphone': 71.4821174455458, 'Guitar': 94.7070546413691, 'Knife': 92.96164477485652, 'Lamp': 89.60131033664787, 'Laptop': 98.00713566894194, 'Motorbike': 83.8275390382765, 'Mug': 95.88353209854681, 'Pistol': 86.29666117752336, 'Rocket': 76.47002022714527, 'Skateboard': 83.3856715398786, 'Table': 90.8988300173624} + test_miou_per_class = {'Airplane': 82.271263416433, 'Bag': 81.6948891694764, 'Cap': 78.10831636397803, 'Car': 78.2944288795648, 'Chair': 86.11169410476566, 'Earphone': 64.21606678153931, 'Guitar': 90.81787302996021, 'Knife': 86.84675608898216, 'Lamp': 81.94355272134692, 'Laptop': 96.06526764943304, 'Motorbike': 73.07641002835375, 'Mug': 94.61356365970353, 'Pistol': 80.92277161823625, 'Rocket': 65.96719402179977, 'Skateboard': 73.79425920133428, 'Table': 84.59625980390769} +================================================== +EPOCH 59 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.568, iteration=0.224, train_acc=96.02, train_loss_seg=0.102, train_macc=92.71, train_miou=87.71] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.43, test_loss_seg=0.098, test_macc=87.63, test_miou=81.24] +================================================== + test_loss_seg = 0.0983835980296135 + test_acc = 93.43495664515599 + test_macc = 87.63755242757166 + test_miou = 81.24369973916482 + test_acc_per_class = {'Airplane': 91.63693067265396, 'Bag': 96.27511160714286, 'Cap': 91.37517755681817, 'Car': 92.21747676028481, 'Chair': 95.05226828835227, 'Earphone': 91.62248883928571, 'Guitar': 96.54026631289308, 'Knife': 93.0682373046875, 'Lamp': 91.52336920891608, 'Laptop': 98.1574736445783, 'Motorbike': 87.35830269607843, 'Mug': 99.44490131578947, 'Pistol': 95.6265536221591, 'Rocket': 84.84293619791666, 
'Skateboard': 94.87462197580645, 'Table': 95.34319031913326} + test_macc_per_class = {'Airplane': 89.63617660549588, 'Bag': 84.51619193726415, 'Cap': 85.09754906489898, 'Car': 86.65447641831281, 'Chair': 92.49907905846729, 'Earphone': 71.52097363529963, 'Guitar': 94.76195615087445, 'Knife': 93.06925947671434, 'Lamp': 89.09453857224375, 'Laptop': 98.1227775283306, 'Motorbike': 84.07336239295688, 'Mug': 95.78597821915149, 'Pistol': 87.33516534742455, 'Rocket': 75.99218854884265, 'Skateboard': 83.41363380569601, 'Table': 90.62753207917285} + test_miou_per_class = {'Airplane': 82.18959982286673, 'Bag': 80.95936566228063, 'Cap': 79.13081949010068, 'Car': 78.5425568457506, 'Chair': 86.02347102599953, 'Earphone': 64.49414261033142, 'Guitar': 90.86867713511792, 'Knife': 87.03450293725304, 'Lamp': 81.3737004435295, 'Laptop': 96.35458287136656, 'Motorbike': 72.76224989707075, 'Mug': 94.52363310779205, 'Pistol': 81.61833921172335, 'Rocket': 65.72038372706677, 'Skateboard': 73.8004264051804, 'Table': 84.50274463320707} +================================================== +EPOCH 60 / 100 +100%|█████████████████████████████| 876/876 [15:22<00:00, 1.05s/it, data_loading=0.573, iteration=0.221, train_acc=95.77, train_loss_seg=0.107, train_macc=92.65, train_miou=87.79] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.42, test_loss_seg=0.118, test_macc=87.63, test_miou=81.15] +================================================== + test_loss_seg = 0.11819978058338165 + test_acc = 93.42168794060935 + test_macc = 87.63405398460577 + test_miou = 81.15195236866127 + test_acc_per_class = {'Airplane': 91.57879513379766, 'Bag': 96.25767299107143, 'Cap': 91.0555752840909, 'Car': 92.05028678797468, 'Chair': 95.0742548162287, 'Earphone': 91.59458705357143, 'Guitar': 96.57097582547169, 'Knife': 93.507080078125, 'Lamp': 91.37381173513987, 'Laptop': 98.08805534638554, 'Motorbike': 87.62063419117648, 
'Mug': 99.4384765625, 'Pistol': 95.63210227272727, 'Rocket': 84.66796875, 'Skateboard': 94.84154485887096, 'Table': 95.39518536261792} + test_macc_per_class = {'Airplane': 89.59473284507841, 'Bag': 85.02840931894494, 'Cap': 84.6524619559452, 'Car': 86.74035608615017, 'Chair': 92.93292395591946, 'Earphone': 71.68646076561652, 'Guitar': 94.82546310519032, 'Knife': 93.49887551196242, 'Lamp': 88.39111323216711, 'Laptop': 98.07305439081121, 'Motorbike': 84.06468293363332, 'Mug': 96.37424104485675, 'Pistol': 86.24432824993633, 'Rocket': 75.48679432337335, 'Skateboard': 83.51767276041434, 'Table': 91.03329327369242} + test_miou_per_class = {'Airplane': 82.10562883315983, 'Bag': 81.08316762437062, 'Cap': 78.44489148041083, 'Car': 78.23538551548546, 'Chair': 86.1153068923095, 'Earphone': 64.5669882527477, 'Guitar': 90.9464955443775, 'Knife': 87.80261222475451, 'Lamp': 80.7147909279039, 'Laptop': 96.22134729261649, 'Motorbike': 73.16709167090076, 'Mug': 94.52961288041513, 'Pistol': 81.02128130293185, 'Rocket': 65.23007005207339, 'Skateboard': 73.55980278913853, 'Table': 84.68676461498454} +================================================== +EPOCH 61 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.565, iteration=0.224, train_acc=95.42, train_loss_seg=0.100, train_macc=91.10, train_miou=86.47] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.37, test_loss_seg=0.105, test_macc=87.77, test_miou=81.13] +================================================== + test_loss_seg = 0.10501675307750702 + test_acc = 93.379627934502 + test_macc = 87.77650411686243 + test_miou = 81.13172840056505 + test_acc_per_class = {'Airplane': 91.64938828812316, 'Bag': 96.09723772321429, 'Cap': 91.23757102272727, 'Car': 92.23478293117088, 'Chair': 95.0897216796875, 'Earphone': 91.59109933035714, 'Guitar': 96.61642590408806, 'Knife': 93.1512451171875, 'Lamp': 
91.67702414772727, 'Laptop': 98.11276355421687, 'Motorbike': 87.32479319852942, 'Mug': 99.42819695723685, 'Pistol': 95.74085582386364, 'Rocket': 83.86637369791666, 'Skateboard': 94.87934727822581, 'Table': 95.37722029775944} + test_macc_per_class = {'Airplane': 89.90631322089222, 'Bag': 85.25210168680036, 'Cap': 84.87166805660145, 'Car': 86.88640112019243, 'Chair': 92.61142608227246, 'Earphone': 71.56238082785235, 'Guitar': 95.10503041715995, 'Knife': 93.14731242326229, 'Lamp': 88.39455485897251, 'Laptop': 98.08339714612109, 'Motorbike': 84.21605117234243, 'Mug': 95.49874016269892, 'Pistol': 86.88915074063681, 'Rocket': 77.45324427013122, 'Skateboard': 83.77749519089002, 'Table': 90.76879849297241} + test_miou_per_class = {'Airplane': 82.26802163854434, 'Bag': 80.63471047644863, 'Cap': 78.82041342131531, 'Car': 78.79095522099612, 'Chair': 86.10496135591507, 'Earphone': 64.51532737429226, 'Guitar': 91.05907382737084, 'Knife': 87.17858998912662, 'Lamp': 80.78593233475696, 'Laptop': 96.26820442697688, 'Motorbike': 72.74828810566163, 'Mug': 94.34479283175395, 'Pistol': 81.65003387900707, 'Rocket': 64.67883599310835, 'Skateboard': 73.68076256897494, 'Table': 84.57875096479216} +================================================== +EPOCH 62 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.563, iteration=0.221, train_acc=95.79, train_loss_seg=0.102, train_macc=92.32, train_miou=85.52] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:25<00:00, 2.10it/s, test_acc=93.44, test_loss_seg=0.134, test_macc=87.54, test_miou=81.18] +================================================== + test_loss_seg = 0.1340121179819107 + test_acc = 93.44642176096599 + test_macc = 87.54884688774098 + test_miou = 81.18235319786902 + test_acc_per_class = {'Airplane': 91.65712060117302, 'Bag': 96.01004464285714, 'Cap': 91.064453125, 'Car': 92.23447389240506, 'Chair': 95.0563604181463, 'Earphone': 
92.35142299107143, 'Guitar': 96.58295253537736, 'Knife': 93.17626953125, 'Lamp': 91.47846782124127, 'Laptop': 97.98451618975903, 'Motorbike': 87.71541819852942, 'Mug': 99.44104646381578, 'Pistol': 95.75750177556817, 'Rocket': 84.22037760416666, 'Skateboard': 95.07465977822581, 'Table': 95.3376626068691} + test_macc_per_class = {'Airplane': 89.83592890541281, 'Bag': 85.4719026006979, 'Cap': 84.34172246031943, 'Car': 87.47014354776493, 'Chair': 92.43253599477138, 'Earphone': 71.82062133333694, 'Guitar': 94.83235442442357, 'Knife': 93.16476971080615, 'Lamp': 88.7088311753136, 'Laptop': 98.00815369048802, 'Motorbike': 81.53175316053867, 'Mug': 95.73754025331216, 'Pistol': 87.14945819557155, 'Rocket': 75.72129826169916, 'Skateboard': 84.00507552023981, 'Table': 90.54946096915968} + test_miou_per_class = {'Airplane': 82.2797976077493, 'Bag': 80.43155053618388, 'Cap': 78.32463221531141, 'Car': 78.82432383639781, 'Chair': 86.12801367991301, 'Earphone': 65.65353926214583, 'Guitar': 90.96297608294063, 'Knife': 87.21937987614098, 'Lamp': 80.92503287546958, 'Laptop': 96.02375125236031, 'Motorbike': 72.3919774331324, 'Mug': 94.4843651283495, 'Pistol': 81.72940427727077, 'Rocket': 64.76625940293378, 'Skateboard': 74.31272008931245, 'Table': 84.45992761029245} +================================================== +EPOCH 63 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.566, iteration=0.222, train_acc=96.00, train_loss_seg=0.101, train_macc=91.77, train_miou=87.23] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.46, test_loss_seg=0.115, test_macc=87.45, test_miou=81.16] +================================================== + test_loss_seg = 0.11503757536411285 + test_acc = 93.46861047449224 + test_macc = 87.4567182531403 + test_miou = 81.16060007345021 + test_acc_per_class = {'Airplane': 91.64294469391496, 'Bag': 96.10770089285714, 'Cap': 
91.06889204545455, 'Car': 92.28608336629746, 'Chair': 95.08923617276278, 'Earphone': 92.19098772321429, 'Guitar': 96.5685190644654, 'Knife': 93.2232666015625, 'Lamp': 91.7898751638986, 'Laptop': 98.10805722891565, 'Motorbike': 87.70009957107843, 'Mug': 99.44104646381578, 'Pistol': 95.6387606534091, 'Rocket': 84.130859375, 'Skateboard': 95.17389112903226, 'Table': 95.33754744619694} + test_macc_per_class = {'Airplane': 89.93056771022484, 'Bag': 83.02103945338857, 'Cap': 84.17579283147931, 'Car': 86.5423821894326, 'Chair': 91.87288697562556, 'Earphone': 71.55944921596267, 'Guitar': 94.68287918425263, 'Knife': 93.2165898540307, 'Lamp': 88.67539282700511, 'Laptop': 98.08552544759017, 'Motorbike': 80.46714589083398, 'Mug': 95.99276292810333, 'Pistol': 87.54257458962478, 'Rocket': 77.0500970778345, 'Skateboard': 85.46668752406428, 'Table': 91.02571835079158} + test_miou_per_class = {'Airplane': 82.30239644006076, 'Bag': 79.84593566994742, 'Cap': 78.25870752227404, 'Car': 78.70669638182628, 'Chair': 86.17536351338178, 'Earphone': 65.23158186672248, 'Guitar': 90.86994412461384, 'Knife': 87.30392062616828, 'Lamp': 81.1574788148222, 'Laptop': 96.25957671803755, 'Motorbike': 72.02327660468656, 'Mug': 94.51171667079427, 'Pistol': 81.62493900467874, 'Rocket': 65.06270544205783, 'Skateboard': 74.70554415984387, 'Table': 84.52981761528724} +================================================== +EPOCH 64 / 100 +100%|█████████████████████████████| 876/876 [15:27<00:00, 1.06s/it, data_loading=0.569, iteration=0.223, train_acc=95.58, train_loss_seg=0.101, train_macc=92.61, train_miou=87.54] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.07it/s, test_acc=93.45, test_loss_seg=0.073, test_macc=87.57, test_miou=81.21] +================================================== + test_loss_seg = 0.07326789945363998 + test_acc = 93.45250785917825 + test_macc = 87.57269098467837 + test_miou = 81.21925946021004 + 
test_acc_per_class = {'Airplane': 91.66800311583577, 'Bag': 96.19838169642857, 'Cap': 91.50390625, 'Car': 92.32069570806962, 'Chair': 95.12433138760653, 'Earphone': 91.748046875, 'Guitar': 96.58848024764151, 'Knife': 93.292236328125, 'Lamp': 91.7724609375, 'Laptop': 98.16570971385542, 'Motorbike': 87.75658700980392, 'Mug': 99.43462171052632, 'Pistol': 95.6454190340909, 'Rocket': 83.447265625, 'Skateboard': 95.21011844758065, 'Table': 95.36386165978774} + test_macc_per_class = {'Airplane': 89.87698995993259, 'Bag': 83.93654498381748, 'Cap': 85.17481474413826, 'Car': 86.8505136207002, 'Chair': 92.26953481305203, 'Earphone': 71.67375964133635, 'Guitar': 94.75114461474044, 'Knife': 93.28537656867101, 'Lamp': 89.00828414573472, 'Laptop': 98.12521495671923, 'Motorbike': 82.32554663467019, 'Mug': 95.93136803615835, 'Pistol': 87.01087193602683, 'Rocket': 75.39902521885699, 'Skateboard': 85.25649254797997, 'Table': 90.28757333231917} + test_miou_per_class = {'Airplane': 82.31581679114603, 'Bag': 80.49192918514092, 'Cap': 79.3666237256631, 'Car': 78.82308193673435, 'Chair': 86.22554871698003, 'Earphone': 64.77435543813083, 'Guitar': 90.9274618219039, 'Knife': 87.42493695917985, 'Lamp': 81.40377315461458, 'Laptop': 96.37015618887501, 'Motorbike': 72.50465616228122, 'Mug': 94.4486530016361, 'Pistol': 81.50886178109134, 'Rocket': 63.52135318959626, 'Skateboard': 74.9771702779134, 'Table': 84.42377303247346} +================================================== +EPOCH 65 / 100 +100%|█████████████████████████████| 876/876 [15:25<00:00, 1.06s/it, data_loading=0.56 , iteration=0.226, train_acc=96.10, train_loss_seg=0.103, train_macc=92.50, train_miou=88.14] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.46, test_loss_seg=0.13 , test_macc=87.39, test_miou=81.12] +================================================== + test_loss_seg = 0.13004061579704285 + test_acc = 93.46167601825279 + 
test_macc = 87.39695717691238 + test_miou = 81.12711334153677 + test_acc_per_class = {'Airplane': 91.6592684659091, 'Bag': 96.21233258928571, 'Cap': 91.18874289772727, 'Car': 92.19893443433544, 'Chair': 95.11455189098011, 'Earphone': 91.13071986607143, 'Guitar': 96.59431505503144, 'Knife': 93.23974609375, 'Lamp': 91.57424606643356, 'Laptop': 98.15453219126506, 'Motorbike': 87.70392922794117, 'Mug': 99.43462171052632, 'Pistol': 95.6997958096591, 'Rocket': 85.14404296875, 'Skateboard': 94.96597782258065, 'Table': 95.37105920179835} + test_macc_per_class = {'Airplane': 89.69137316129412, 'Bag': 85.22681313817813, 'Cap': 84.65883693558935, 'Car': 85.35549894251314, 'Chair': 92.53698970095435, 'Earphone': 71.31130448482583, 'Guitar': 94.98669904466138, 'Knife': 93.23304214850634, 'Lamp': 88.68892601601794, 'Laptop': 98.11390513393057, 'Motorbike': 80.6047442161693, 'Mug': 96.81304636725511, 'Pistol': 86.03019181296467, 'Rocket': 75.51149130062494, 'Skateboard': 84.92542146065645, 'Table': 90.66303096645659} + test_miou_per_class = {'Airplane': 82.3314875311629, 'Bag': 81.00233065016891, 'Cap': 78.6540266786758, 'Car': 78.20877007676744, 'Chair': 86.15886784644107, 'Earphone': 63.76158796287801, 'Guitar': 91.00607168906512, 'Knife': 87.33282800196783, 'Lamp': 80.90612324513266, 'Laptop': 96.34844038867622, 'Motorbike': 72.08805164903085, 'Mug': 94.54202175653883, 'Pistol': 81.07979694494247, 'Rocket': 65.81585279076316, 'Skateboard': 74.25858677222018, 'Table': 84.53896948015692} +================================================== +EPOCH 66 / 100 +100%|█████████████████████████████| 876/876 [15:22<00:00, 1.05s/it, data_loading=0.557, iteration=0.225, train_acc=95.92, train_loss_seg=0.100, train_macc=92.59, train_miou=87.85] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.36, test_loss_seg=0.089, test_macc=87.84, test_miou=81.22] 
+================================================== + test_loss_seg = 0.08990087360143661 + test_acc = 93.36608057427193 + test_macc = 87.84893736443917 + test_miou = 81.2241158416873 + test_acc_per_class = {'Airplane': 91.59139594024927, 'Bag': 96.25418526785714, 'Cap': 91.47727272727273, 'Car': 92.28392009493672, 'Chair': 95.08209228515625, 'Earphone': 91.23186383928571, 'Guitar': 96.64037932389937, 'Knife': 93.3868408203125, 'Lamp': 91.7270473666958, 'Laptop': 98.1410015060241, 'Motorbike': 86.63641237745098, 'Mug': 99.45646587171053, 'Pistol': 95.60990767045455, 'Rocket': 84.09423828125, 'Skateboard': 94.88092237903226, 'Table': 95.36334343676297} + test_macc_per_class = {'Airplane': 89.41406376259545, 'Bag': 84.16153056825615, 'Cap': 85.13552417651144, 'Car': 87.06684347215288, 'Chair': 92.44519541872683, 'Earphone': 71.50410371898724, 'Guitar': 95.19641519686185, 'Knife': 93.37920438791602, 'Lamp': 88.90131978650832, 'Laptop': 98.10992243350745, 'Motorbike': 85.15075310848589, 'Mug': 96.16331273011596, 'Pistol': 87.17054101571192, 'Rocket': 76.41633702812098, 'Skateboard': 84.22621752797274, 'Table': 91.14171349859545} + test_miou_per_class = {'Airplane': 82.14917947423476, 'Bag': 80.76074724643227, 'Cap': 79.30805672228604, 'Car': 78.80704867826026, 'Chair': 85.94287982103704, 'Earphone': 64.04965295934463, 'Guitar': 91.1355749232694, 'Knife': 87.59098500732043, 'Lamp': 81.33836479144337, 'Laptop': 96.32286858443517, 'Motorbike': 72.96321043751118, 'Mug': 94.66547533366662, 'Pistol': 81.4176828822334, 'Rocket': 64.67825859884832, 'Skateboard': 73.86970734489768, 'Table': 84.58616066177619} +================================================== +EPOCH 67 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.566, iteration=0.219, train_acc=95.99, train_loss_seg=0.095, train_macc=92.53, train_miou=87.98] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:25<00:00, 
2.10it/s, test_acc=93.40, test_loss_seg=0.108, test_macc=87.67, test_miou=81.18] +================================================== + test_loss_seg = 0.10797581821680069 + test_acc = 93.40168527274128 + test_macc = 87.67911187203845 + test_miou = 81.18718255895207 + test_acc_per_class = {'Airplane': 91.61101310483872, 'Bag': 96.06236049107143, 'Cap': 91.09108664772727, 'Car': 92.19862539556962, 'Chair': 95.12863159179688, 'Earphone': 91.2353515625, 'Guitar': 96.65450569968553, 'Knife': 93.226318359375, 'Lamp': 91.5474418159965, 'Laptop': 98.1086455195783, 'Motorbike': 87.69626991421569, 'Mug': 99.45646587171053, 'Pistol': 95.67205255681817, 'Rocket': 84.59065755208334, 'Skateboard': 94.84154485887096, 'Table': 95.30599342202241} + test_macc_per_class = {'Airplane': 89.75803578772621, 'Bag': 85.38177747136967, 'Cap': 84.53413130284092, 'Car': 86.22877260085569, 'Chair': 92.33332708605289, 'Earphone': 71.644386630911, 'Guitar': 95.41092834401549, 'Knife': 93.224472271971, 'Lamp': 89.47561930215026, 'Laptop': 98.09808840018292, 'Motorbike': 82.34235695291763, 'Mug': 96.12850963809898, 'Pistol': 87.46487531226911, 'Rocket': 77.0068125155525, 'Skateboard': 83.38061232852316, 'Table': 90.45308400717792} + test_miou_per_class = {'Airplane': 82.23201368102967, 'Bag': 80.56760755760139, 'Cap': 78.44908469451164, 'Car': 78.3922529668571, 'Chair': 86.21465365218826, 'Earphone': 64.21578167058742, 'Guitar': 91.20576953938074, 'Knife': 87.3107977363651, 'Lamp': 81.50183371738599, 'Laptop': 96.26158306539156, 'Motorbike': 72.6608015695417, 'Mug': 94.66186845642852, 'Pistol': 81.68086593568525, 'Rocket': 65.67932300592668, 'Skateboard': 73.65383650145536, 'Table': 84.30684719289697} +================================================== +EPOCH 68 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.566, iteration=0.220, train_acc=96.05, train_loss_seg=0.098, train_macc=92.86, train_miou=88.27] +Learning rate = 0.000063 
+100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.56, test_loss_seg=0.085, test_macc=87.90, test_miou=81.51] +================================================== + test_loss_seg = 0.08513511717319489 + test_acc = 93.56662997123917 + test_macc = 87.90273451874144 + test_miou = 81.51345742793414 + test_acc_per_class = {'Airplane': 91.60299440982405, 'Bag': 96.24372209821429, 'Cap': 91.52610085227273, 'Car': 92.27681220332279, 'Chair': 95.10664506392045, 'Earphone': 92.1875, 'Guitar': 96.54395145440252, 'Knife': 93.7420654296875, 'Lamp': 91.76341236888112, 'Laptop': 98.09158509036145, 'Motorbike': 87.51531862745098, 'Mug': 99.45004111842105, 'Pistol': 95.66206498579545, 'Rocket': 85.05045572916666, 'Skateboard': 94.96597782258065, 'Table': 95.33743228552476} + test_macc_per_class = {'Airplane': 89.93850013396317, 'Bag': 83.85744236175658, 'Cap': 85.30611577341467, 'Car': 87.47046990427334, 'Chair': 92.69227366082968, 'Earphone': 72.0316925312194, 'Guitar': 94.77202546915441, 'Knife': 93.73731384301954, 'Lamp': 89.54683729455955, 'Laptop': 98.07529676022516, 'Motorbike': 83.00195824830615, 'Mug': 96.32233742094517, 'Pistol': 86.85356899741016, 'Rocket': 79.01648425042903, 'Skateboard': 83.24907096075871, 'Table': 90.57236468959863} + test_miou_per_class = {'Airplane': 82.25793243982771, 'Bag': 80.61384443634891, 'Cap': 79.45599201291714, 'Car': 78.90469104909741, 'Chair': 86.26449611818889, 'Earphone': 65.60151266760845, 'Guitar': 90.85574554918155, 'Knife': 88.21923339323565, 'Lamp': 81.71222740639254, 'Laptop': 96.22809520070837, 'Motorbike': 72.68653508163193, 'Mug': 94.62543132816106, 'Pistol': 81.42037180380478, 'Rocket': 66.79346816449673, 'Skateboard': 74.15257780997926, 'Table': 84.423164385366} +================================================== +EPOCH 69 / 100 +100%|█████████████████████████████| 876/876 [15:25<00:00, 1.06s/it, data_loading=0.560, iteration=0.224, train_acc=95.69, 
train_loss_seg=0.100, train_macc=91.57, train_miou=86.37] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.50, test_loss_seg=0.127, test_macc=87.63, test_miou=81.29] +================================================== + test_loss_seg = 0.1275930106639862 + test_acc = 93.50893581885047 + test_macc = 87.63670192793421 + test_miou = 81.29894702329905 + test_acc_per_class = {'Airplane': 91.62948474156892, 'Bag': 96.36579241071429, 'Cap': 91.52166193181817, 'Car': 92.08273585838607, 'Chair': 95.08757157759233, 'Earphone': 91.83175223214286, 'Guitar': 96.61274076257862, 'Knife': 92.918701171875, 'Lamp': 91.75829053758741, 'Laptop': 98.10040945030121, 'Motorbike': 87.59765625, 'Mug': 99.4371916118421, 'Pistol': 95.6787109375, 'Rocket': 85.14404296875, 'Skateboard': 94.99117943548387, 'Table': 95.38505122346697} + test_macc_per_class = {'Airplane': 89.68385089048596, 'Bag': 84.62641438678004, 'Cap': 85.27668760325122, 'Car': 85.84379030252008, 'Chair': 92.39032975774467, 'Earphone': 71.69912494756846, 'Guitar': 95.04967980037313, 'Knife': 92.92456347685787, 'Lamp': 89.36019310724981, 'Laptop': 98.0878727650913, 'Motorbike': 83.65010866444376, 'Mug': 96.57078061856961, 'Pistol': 86.6593532166407, 'Rocket': 74.78295833028416, 'Skateboard': 84.31133910848789, 'Table': 91.2701838705988} + test_miou_per_class = {'Airplane': 82.15655909181248, 'Bag': 81.30515974771943, 'Cap': 79.43685181743221, 'Car': 78.15886266497239, 'Chair': 86.17623197729307, 'Earphone': 64.8516156603958, 'Guitar': 91.0550437162396, 'Knife': 86.77390575496854, 'Lamp': 81.66707899171553, 'Laptop': 96.24547320174226, 'Motorbike': 72.8509041344011, 'Mug': 94.53904767221168, 'Pistol': 81.40886046947475, 'Rocket': 65.40323131462131, 'Skateboard': 74.03350262749974, 'Table': 84.72082353028495} +================================================== +EPOCH 70 / 100 +100%|█████████████████████████████| 876/876 
[15:26<00:00, 1.06s/it, data_loading=0.565, iteration=0.223, train_acc=96.01, train_loss_seg=0.107, train_macc=92.64, train_miou=87.71] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.07it/s, test_acc=93.48, test_loss_seg=0.231, test_macc=87.44, test_miou=81.20] +================================================== + test_loss_seg = 0.23108461499214172 + test_acc = 93.48800320656267 + test_macc = 87.44024134858017 + test_miou = 81.20890234549348 + test_acc_per_class = {'Airplane': 91.54629078079178, 'Bag': 96.35184151785714, 'Cap': 91.13991477272727, 'Car': 92.20357001582279, 'Chair': 95.09069269353692, 'Earphone': 92.13169642857143, 'Guitar': 96.635158706761, 'Knife': 92.8643798828125, 'Lamp': 91.65431736232517, 'Laptop': 98.046875, 'Motorbike': 87.94519761029412, 'Mug': 99.41920230263158, 'Pistol': 95.50115411931817, 'Rocket': 84.7900390625, 'Skateboard': 95.10773689516128, 'Table': 95.37998415389151} + test_macc_per_class = {'Airplane': 89.77256713628127, 'Bag': 84.43968230626318, 'Cap': 84.70472289974413, 'Car': 87.23514283145123, 'Chair': 92.65202755138048, 'Earphone': 71.93400435627639, 'Guitar': 95.0937981433646, 'Knife': 92.86698848980373, 'Lamp': 89.04151246609308, 'Laptop': 98.0235924660966, 'Motorbike': 81.8982911415504, 'Mug': 96.30606667574504, 'Pistol': 84.96638614075627, 'Rocket': 75.47565927955416, 'Skateboard': 83.99745790118969, 'Table': 90.63596179173281} + test_miou_per_class = {'Airplane': 82.1306673731024, 'Bag': 81.19099470292659, 'Cap': 78.59770001645029, 'Car': 78.67778890695249, 'Chair': 86.17073848560521, 'Earphone': 65.36253252863762, 'Guitar': 91.09171243977528, 'Knife': 86.67886670886601, 'Lamp': 81.65562593957958, 'Laptop': 96.14094561805537, 'Motorbike': 72.40039556658084, 'Mug': 94.35451142880456, 'Pistol': 80.4714599123745, 'Rocket': 65.3418710786756, 'Skateboard': 74.46162963449514, 'Table': 84.61499718701407} 
+================================================== +EPOCH 71 / 100 +100%|█████████████████████████████| 876/876 [15:23<00:00, 1.05s/it, data_loading=0.559, iteration=0.226, train_acc=95.74, train_loss_seg=0.100, train_macc=92.20, train_miou=87.63] +Learning rate = 0.000063 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:25<00:00, 2.11it/s, test_acc=93.48, test_loss_seg=0.071, test_macc=87.69, test_miou=81.21] +================================================== + test_loss_seg = 0.07190120220184326 + test_acc = 93.48237598277818 + test_macc = 87.6918111286846 + test_miou = 81.21532771180559 + test_acc_per_class = {'Airplane': 91.63363728005865, 'Bag': 96.04840959821429, 'Cap': 91.05113636363636, 'Car': 92.2471444818038, 'Chair': 95.0838262384588, 'Earphone': 92.05496651785714, 'Guitar': 96.59216538915094, 'Knife': 93.16162109375, 'Lamp': 91.68402398382867, 'Laptop': 98.03569747740963, 'Motorbike': 87.85520067401961, 'Mug': 99.44361636513158, 'Pistol': 95.80300071022727, 'Rocket': 84.70052083333334, 'Skateboard': 95.068359375, 'Table': 95.25468934257076} + test_macc_per_class = {'Airplane': 89.67634073952418, 'Bag': 84.88187974827552, 'Cap': 84.22175830674743, 'Car': 87.06928536322087, 'Chair': 92.2945482054622, 'Earphone': 71.5448023156399, 'Guitar': 94.78020324380351, 'Knife': 93.1603766848917, 'Lamp': 89.10156687564454, 'Laptop': 98.0348091380452, 'Motorbike': 83.21208492191495, 'Mug': 96.31894768236181, 'Pistol': 86.48766606751934, 'Rocket': 77.15477829477994, 'Skateboard': 84.09509752043357, 'Table': 91.03483295068912} + test_miou_per_class = {'Airplane': 82.18265940855744, 'Bag': 80.34563160866607, 'Cap': 78.25159442378111, 'Car': 78.74173084348439, 'Chair': 86.07467122111294, 'Earphone': 65.005545913611, 'Guitar': 90.93335315259147, 'Knife': 87.19750965410552, 'Lamp': 81.51404797695959, 'Laptop': 96.12096410250035, 'Motorbike': 72.61797289952825, 'Mug': 94.56875036600725, 'Pistol': 81.62716235283041, 'Rocket': 
65.6512408422959, 'Skateboard': 74.29182239134575, 'Table': 84.32058623151185} +================================================== +EPOCH 72 / 100 +100%|█████████████████████████████| 876/876 [15:22<00:00, 1.05s/it, data_loading=0.562, iteration=0.225, train_acc=95.70, train_loss_seg=0.102, train_macc=92.30, train_miou=87.38] +Learning rate = 0.000031 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.59, test_loss_seg=0.102, test_macc=87.98, test_miou=81.53] +================================================== + test_loss_seg = 0.10293351113796234 + test_acc = 93.59233664837875 + test_macc = 87.98132432753593 + test_miou = 81.53718226991556 + test_acc_per_class = {'Airplane': 91.57836556085044, 'Bag': 96.35532924107143, 'Cap': 91.57936789772727, 'Car': 92.28762856012658, 'Chair': 95.05774758078836, 'Earphone': 92.10728236607143, 'Guitar': 96.56667649371069, 'Knife': 93.3184814453125, 'Lamp': 91.74685178103147, 'Laptop': 98.02393166415662, 'Motorbike': 87.5765931372549, 'Mug': 99.46032072368422, 'Pistol': 95.68314985795455, 'Rocket': 85.75846354166666, 'Skateboard': 94.99905493951613, 'Table': 95.3781415831368} + test_macc_per_class = {'Airplane': 89.875259748206, 'Bag': 84.57584122427161, 'Cap': 85.3371774440458, 'Car': 88.06281192040906, 'Chair': 92.45733545255715, 'Earphone': 71.75915788528961, 'Guitar': 94.73888643707213, 'Knife': 93.30594556738087, 'Lamp': 89.30292391816069, 'Laptop': 98.02315925891138, 'Motorbike': 83.88799534935335, 'Mug': 96.82660532158856, 'Pistol': 87.02679619850207, 'Rocket': 77.67303803931503, 'Skateboard': 83.96245745060168, 'Table': 90.88579802490968} + test_miou_per_class = {'Airplane': 82.19749144563943, 'Bag': 81.25155732579334, 'Cap': 79.55367469260983, 'Car': 79.0107798851113, 'Chair': 86.04527283138529, 'Earphone': 65.1839834306363, 'Guitar': 90.95099447483396, 'Knife': 87.46851260197161, 'Lamp': 81.81247264590584, 'Laptop': 96.0982005962207, 'Motorbike': 
72.59836985063926, 'Mug': 94.76689174563613, 'Pistol': 81.54721306864268, 'Rocket': 67.2705601109945, 'Skateboard': 74.21759820327767, 'Table': 84.62134340935098} +================================================== +EPOCH 73 / 100 +100%|█████████████████████████████| 876/876 [15:22<00:00, 1.05s/it, data_loading=0.562, iteration=0.222, train_acc=95.50, train_loss_seg=0.099, train_macc=91.42, train_miou=86.61] +Learning rate = 0.000031 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.09it/s, test_acc=93.54, test_loss_seg=0.08 , test_macc=87.52, test_miou=81.32] +================================================== + test_loss_seg = 0.07999621331691742 + test_acc = 93.54411465726321 + test_macc = 87.52986380652371 + test_miou = 81.3225755283742 + test_acc_per_class = {'Airplane': 91.528678289956, 'Bag': 96.337890625, 'Cap': 91.2686434659091, 'Car': 92.27990259098101, 'Chair': 95.11711814186789, 'Earphone': 92.47698102678571, 'Guitar': 96.60567757468553, 'Knife': 92.7459716796875, 'Lamp': 91.63365930944056, 'Laptop': 98.08570218373494, 'Motorbike': 87.91647518382352, 'Mug': 99.45518092105263, 'Pistol': 95.71311257102273, 'Rocket': 85.24169921875, 'Skateboard': 94.95652721774194, 'Table': 95.34261451577241} + test_macc_per_class = {'Airplane': 89.7768741285928, 'Bag': 84.6108538172632, 'Cap': 84.78198857898343, 'Car': 87.16242997885414, 'Chair': 92.3492421247602, 'Earphone': 71.6255809636629, 'Guitar': 94.93073938993003, 'Knife': 92.75169843243609, 'Lamp': 88.8439285506709, 'Laptop': 98.0565418147104, 'Motorbike': 81.8583992418874, 'Mug': 96.04662447567604, 'Pistol': 87.48495520503225, 'Rocket': 75.89956535395524, 'Skateboard': 83.51214802823294, 'Table': 90.7862508197313} + test_miou_per_class = {'Airplane': 82.11867552347215, 'Bag': 81.20485842494159, 'Cap': 78.83110648487038, 'Car': 78.85881524791891, 'Chair': 86.16333334691964, 'Earphone': 65.55267466584405, 'Guitar': 91.05546605779992, 'Knife': 86.47310265894019, 
'Lamp': 81.36981349775714, 'Laptop': 96.21574359322422, 'Motorbike': 72.31085346420436, 'Mug': 94.64200619474721, 'Pistol': 81.95717450443463, 'Rocket': 66.03231804795678, 'Skateboard': 73.92733869438212, 'Table': 84.44792804657384} +================================================== +EPOCH 74 / 100 +100%|█████████████████████████████| 876/876 [15:25<00:00, 1.06s/it, data_loading=0.562, iteration=0.225, train_acc=96.13, train_loss_seg=0.098, train_macc=92.05, train_miou=87.33] +Learning rate = 0.000031 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.49, test_loss_seg=0.124, test_macc=87.71, test_miou=81.33] +================================================== + test_loss_seg = 0.12418320029973984 + test_acc = 93.4998996617015 + test_macc = 87.7148267652354 + test_miou = 81.3372423493814 + test_acc_per_class = {'Airplane': 91.56705347324046, 'Bag': 96.27162388392857, 'Cap': 91.02894176136364, 'Car': 92.3203866693038, 'Chair': 95.07044011896308, 'Earphone': 92.42815290178571, 'Guitar': 96.6041420990566, 'Knife': 93.05419921875, 'Lamp': 91.54112489073427, 'Laptop': 98.08629047439759, 'Motorbike': 87.44638480392157, 'Mug': 99.46032072368422, 'Pistol': 95.6509676846591, 'Rocket': 84.912109375, 'Skateboard': 95.17546622983872, 'Table': 95.38079027859669} + test_macc_per_class = {'Airplane': 89.75145115421893, 'Bag': 84.11160532561803, 'Cap': 84.54453216164146, 'Car': 87.00434275879907, 'Chair': 92.29609178780215, 'Earphone': 71.79806779703726, 'Guitar': 94.83530761968946, 'Knife': 93.05162744275573, 'Lamp': 88.9726917494252, 'Laptop': 98.06324585835803, 'Motorbike': 84.41587819612592, 'Mug': 96.02613420519808, 'Pistol': 87.00988411200618, 'Rocket': 75.93601625258825, 'Skateboard': 84.65739583797203, 'Table': 90.96295598453061} + test_miou_per_class = {'Airplane': 82.1721292184926, 'Bag': 80.80099276212502, 'Cap': 78.35763834828032, 'Car': 78.8367927742536, 'Chair': 86.09932708370195, 'Earphone': 
65.63367197895444, 'Guitar': 90.95155157984553, 'Knife': 87.00911972481795, 'Lamp': 81.29717402060642, 'Laptop': 96.2173369093627, 'Motorbike': 72.55807069123605, 'Mug': 94.68533922216704, 'Pistol': 81.50071221691437, 'Rocket': 65.63953233905943, 'Skateboard': 74.9758442789066, 'Table': 84.66064444137818} +================================================== +EPOCH 75 / 100 +100%|█████████████████████████████| 876/876 [15:25<00:00, 1.06s/it, data_loading=0.564, iteration=0.223, train_acc=95.65, train_loss_seg=0.102, train_macc=92.02, train_miou=87.07] +Learning rate = 0.000031 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.08it/s, test_acc=93.51, test_loss_seg=0.113, test_macc=88.01, test_miou=81.46] +================================================== + test_loss_seg = 0.11391552537679672 + test_acc = 93.51108240489727 + test_macc = 88.01383357611358 + test_miou = 81.4699478560723 + test_acc_per_class = {'Airplane': 91.63134622434018, 'Bag': 96.44252232142857, 'Cap': 90.9579190340909, 'Car': 92.28577432753164, 'Chair': 95.07994218306108, 'Earphone': 91.92592075892857, 'Guitar': 96.6554269850629, 'Knife': 93.0035400390625, 'Lamp': 91.7548759833916, 'Laptop': 98.0456984186747, 'Motorbike': 87.46266084558823, 'Mug': 99.4397615131579, 'Pistol': 95.6210049715909, 'Rocket': 85.48583984375, 'Skateboard': 95.04788306451613, 'Table': 95.33720196418042} + test_macc_per_class = {'Airplane': 89.76143468197888, 'Bag': 85.9218685232605, 'Cap': 84.42215751292458, 'Car': 87.4655036845764, 'Chair': 92.3401508768114, 'Earphone': 71.68673533728412, 'Guitar': 95.17558157473684, 'Knife': 93.00122164866855, 'Lamp': 89.23103687224965, 'Laptop': 98.03695353174024, 'Motorbike': 84.91906072206343, 'Mug': 95.7368623055955, 'Pistol': 86.76908815963937, 'Rocket': 77.9483867139955, 'Skateboard': 84.74630397676934, 'Table': 91.05899109552324} + test_miou_per_class = {'Airplane': 82.23068902511528, 'Bag': 82.0137133836888, 'Cap': 
78.195741529906, 'Car': 78.91120099187206, 'Chair': 86.14485610612323, 'Earphone': 64.91989876947795, 'Guitar': 91.15720869814177, 'Knife': 86.92063895804321, 'Lamp': 81.6230276194154, 'Laptop': 96.13974891657742, 'Motorbike': 73.0651544741091, 'Mug': 94.47294189144925, 'Pistol': 81.40401179170024, 'Rocket': 67.10583725648095, 'Skateboard': 74.67741701336537, 'Table': 84.53707927169037} +================================================== +EPOCH 76 / 100 +100%|█████████████████████████████| 876/876 [15:26<00:00, 1.06s/it, data_loading=0.563, iteration=0.222, train_acc=95.67, train_loss_seg=0.099, train_macc=92.00, train_miou=87.34] +Learning rate = 0.000031 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:27<00:00, 2.07it/s, test_acc=93.53, test_loss_seg=0.151, test_macc=88.03, test_miou=81.47] +================================================== + test_loss_seg = 0.15097211301326752 + test_acc = 93.5336258539574 + test_macc = 88.03795073463186 + test_miou = 81.47365537629832 + test_acc_per_class = {'Airplane': 91.64681085043989, 'Bag': 96.337890625, 'Cap': 91.09996448863636, 'Car': 92.26970431170885, 'Chair': 95.00253850763495, 'Earphone': 92.578125, 'Guitar': 96.61028400157232, 'Knife': 93.253173828125, 'Lamp': 91.79260680725524, 'Laptop': 97.9851044804217, 'Motorbike': 87.27500765931373, 'Mug': 99.44618626644737, 'Pistol': 95.68093039772727, 'Rocket': 85.33935546875, 'Skateboard': 94.90454889112904, 'Table': 95.31578207915685} + test_macc_per_class = {'Airplane': 89.84819589962248, 'Bag': 84.67050441584935, 'Cap': 84.75666579908972, 'Car': 87.19042692732482, 'Chair': 92.58161493844226, 'Earphone': 71.8986889122919, 'Guitar': 94.79896852686961, 'Knife': 93.24535944596042, 'Lamp': 89.47633101064687, 'Laptop': 97.9782900610644, 'Motorbike': 85.34309118158019, 'Mug': 96.58712728325864, 'Pistol': 87.19717674421496, 'Rocket': 78.43467603869419, 'Skateboard': 83.67621036390473, 'Table': 90.92388420529511} + test_miou_per_class = 
{'Airplane': 82.26044568555221, 'Bag': 81.22609316840246, 'Cap': 78.55780746196695, 'Car': 78.81272376047963, 'Chair': 85.97079538281722, 'Earphone': 65.90521597269495, 'Guitar': 90.97994989769165, 'Knife': 87.35596426007697, 'Lamp': 81.77103239497399, 'Laptop': 96.02263663166926, 'Motorbike': 72.90563694680506, 'Mug': 94.6191167570525, 'Pistol': 81.57198274136852, 'Rocket': 67.0883255626187, 'Skateboard': 74.06155416980982, 'Table': 84.46920522679369} +================================================== +EPOCH 77 / 100 +100%|█████████████████████████████| 876/876 [15:26<00:00, 1.06s/it, data_loading=0.563, iteration=0.218, train_acc=95.94, train_loss_seg=0.101, train_macc=92.54, train_miou=87.90] +Learning rate = 0.000031 +100%|██████████████████████████████████████████████████████████████████████| 180/180 [01:26<00:00, 2.07it/s, test_acc=93.56, test_loss_seg=0.110, test_macc=87.67, test_miou=81.41] +================================================== + test_loss_seg = 0.11060494184494019 + test_acc = 93.56243476524268 + test_macc = 87.67020941029652 + test_miou = 81.41458651996867 + test_acc_per_class = {'Airplane': 91.60814928519062, 'Bag': 96.35532924107143, 'Cap': 91.1709872159091, 'Car': 92.2703223892405, 'Chair': 95.0990156693892, 'Earphone': 92.45256696428571, 'Guitar': 96.60444919418238, 'Knife': 93.1280517578125, 'Lamp': 91.5336128715035, 'Laptop': 98.10335090361446, 'Motorbike': 87.80732996323529, 'Mug': 99.45775082236842, 'Pistol': 95.70201526988636, 'Rocket': 85.49397786458334, 'Skateboard': 94.86517137096774, 'Table': 95.34687546064269} + test_macc_per_class = {'Airplane': 89.65373251980641, 'Bag': 85.30656105695195, 'Cap': 84.73120211342552, 'Car': 87.02253734268159, 'Chair': 92.40939540447329, 'Earphone': 71.58845636525862, 'Guitar': 94.74508580554156, 'Knife': 93.12303977965223, 'Lamp': 89.20919583795185, 'Laptop': 98.07209733565327, 'Motorbike': 82.99097965568535, 'Mug': 96.18719273917729, 'Pistol': 86.58798129826496, 'Rocket': 76.76136352411852, 
'Skateboard': 83.39737662659411, 'Table': 90.93715315950762} + test_miou_per_class = {'Airplane': 82.20207196539907, 'Bag': 81.50803393860748, 'Cap': 78.65722834122613, 'Car': 78.74743266112239, 'Chair': 86.10100041348142, 'Earphone': 65.5017545887069, 'Guitar': 90.97426555868329, 'Knife': 87.13761380322912, 'Lamp': 81.62452711878434, 'Laptop': 96.24980920736026, 'Motorbike': 72.75294981776521, 'Mug': 94.679280956964, 'Pistol': 81.43662990679782, 'Rocket': 66.76754352678827, 'Skateboard': 73.76460084657239, 'Table': 84.52864166801048} +================================================== +``` +BEST +* best_loss_seg = 0.06631797552108765 +* test_Cmiou = 97.9368103974044 +* test_Imiou = 96.3840916196669 +* miou_per_class: +```json +{ +"Airplane": 0.9688190282061018, +"Cap": 0.9915819532701674, +"Car": 0.9696876538536436, +"Chair": 0.9739581531329329, +"Earphone": 0.9742590536947074, +"Guitar": 0.984593913468968, +"Knife": 0.9901378835358035, +"Lamp": 0.9745299819648745, +"Laptop": 0.9977452919677099, +"Motorbike": 0.979398697349813, +"Mug": 0.9969803181260177, +"Pistol": 0.985717691380712, +"Rocket": 0.9706233311081442, +"Skateboard": 0.9947034192761516, +"Table": 0.9377851892749126 +} +``` \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/pointnet2_original.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/pointnet2_original.md new file mode 100644 index 00000000..ed30b071 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/pointnet2_original.md @@ -0,0 +1,1000 @@ +``` +SegmentationModel( +(model): UnetSkipConnectionBlock( +(down): PointNetMSGDown( + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(6, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + 
) + (layer1): Conv2d( + (conv): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(6, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) +) +(submodule): UnetSkipConnectionBlock( + (down): PointNetMSGDown( + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(99, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + 
(normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(99, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(64, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (submodule): UnetSkipConnectionBlock( + (down): PointNetMSGDown( + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 196, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(196, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(196, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 196, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(196, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(196, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (submodule): UnetSkipConnectionBlock( + (down): PointNetMSGDown( + (mlps): ModuleList( + (0): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + (1): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 384, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): 
ReLU(inplace=True) + ) + (layer2): Conv2d( + (conv): Conv2d(384, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (submodule): Identity() + (up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(1536, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(768, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) + ) + (up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(608, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) + ) 
+) +(up): DenseFPModule( + (nn): SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace=True) + ) + ) +) +) +) +Model size = 3031074 +EPOCH 48 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.385, iteration=0.233, train_acc=94.05, train_loss_seg=0.159, train_macc=87.97, train_miou=82.02] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=92.23, test_loss_seg=0.136, test_macc=84.79, test_miou=77.96] +================================================== +test_loss_seg = 0.13694451749324799 +test_acc = 92.23585026085786 +test_macc = 84.79950890358096 +test_miou = 77.96565079826186 +test_acc_per_class = {'Airplane': 89.70411727910388, 'Bag': 95.31766000943544, 'Cap': 93.51807688638748, 'Car': 89.47064569179926, 'Chair': 94.66742430346181, 'Earphone': 93.37222202340479, 'Guitar': 95.99519287210445, 'Knife': 90.12021225762645, 'Lamp': 91.30890184954701, 'Laptop': 98.0789277736411, 'Motorbike': 85.25061036909378, 'Mug': 98.85631357511849, 'Pistol': 95.22651620670932, 'Rocket': 75.61040575335169, 'Skateboard': 95.86223310213519, 'Table': 93.41414422080591} +test_macc_per_class = {'Airplane': 89.05569336868405, 'Bag': 78.4997784515664, 'Cap': 86.95107682938534, 'Car': 79.3342427650161, 'Chair': 91.9488475222346, 'Earphone': 67.18101166217906, 'Guitar': 93.46312576570946, 'Knife': 90.09280312919688, 'Lamp': 90.29490647511884, 'Laptop': 98.03925508170154, 'Motorbike': 70.7361264981618, 'Mug': 92.27836321402745, 'Pistol': 
86.75048262632475, 'Rocket': 77.03721618374136, 'Skateboard': 82.33039226358963, 'Table': 82.79882062065803} +test_miou_per_class = {'Airplane': 79.14738339090582, 'Bag': 75.34212145313919, 'Cap': 82.67170700364103, 'Car': 71.42114879447522, 'Chair': 84.46583816508183, 'Earphone': 63.69526992385298, 'Guitar': 89.2557350657739, 'Knife': 82.0021289905475, 'Lamp': 80.62561192804749, 'Laptop': 96.20737916023164, 'Motorbike': 63.43150865406491, 'Mug': 89.65174193463838, 'Pistol': 80.42020491343428, 'Rocket': 58.429299294218154, 'Skateboard': 74.54147233332658, 'Table': 76.14186176681062} +================================================== + +EPOCH 49 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.380, iteration=0.237, train_acc=93.17, train_loss_seg=0.162, train_macc=85.53, train_miou=79.58] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=92.79, test_loss_seg=0.207, test_macc=85.96, test_miou=78.94] +================================================== +test_loss_seg = 0.20756636559963226 +test_acc = 92.79110422714442 +test_macc = 85.96491974995148 +test_miou = 78.9492204679207 +test_acc_per_class = {'Airplane': 90.339866031032, 'Bag': 95.32267838206332, 'Cap': 92.99537430378551, 'Car': 90.4205426176132, 'Chair': 94.61407877027197, 'Earphone': 91.17564039235057, 'Guitar': 95.763391516255, 'Knife': 92.74354936858153, 'Lamp': 91.02847059537213, 'Laptop': 98.03259319373915, 'Motorbike': 85.50666360294117, 'Mug': 98.9142147835932, 'Pistol': 95.27379512869236, 'Rocket': 81.92855803487717, 'Skateboard': 95.86466809421842, 'Table': 94.73358281892406} +test_macc_per_class = {'Airplane': 87.41651159831837, 'Bag': 79.32357869470358, 'Cap': 85.51607037140221, 'Car': 84.44068402219483, 'Chair': 91.87457355974078, 'Earphone': 69.91041744737808, 'Guitar': 95.13969854805319, 'Knife': 92.74158653238744, 'Lamp': 87.62451926898093, 'Laptop': 98.03761328530142, 'Motorbike': 
76.97296361279032, 'Mug': 96.42437150586476, 'Pistol': 82.0142810201944, 'Rocket': 77.27171693972656, 'Skateboard': 82.31274976706993, 'Table': 88.41737982511701} +test_miou_per_class = {'Airplane': 79.82197648503191, 'Bag': 75.69880304902726, 'Cap': 81.09046356655823, 'Car': 75.12879583698628, 'Chair': 84.56523190373643, 'Earphone': 63.27691441973508, 'Guitar': 89.48263627408683, 'Knife': 86.46807415252464, 'Lamp': 78.97610329788016, 'Laptop': 96.12246533721157, 'Motorbike': 64.69246020653127, 'Mug': 90.97638201960982, 'Pistol': 78.15753232454442, 'Rocket': 62.40414517397442, 'Skateboard': 74.0043363465546, 'Table': 82.32120709273829} +================================================== +acc: 92.70168674820557 -> 92.79110422714442, macc: 85.24097836905904 -> 85.96491974995148, miou: 78.74573405858823 -> 78.9492204679207 + +EPOCH 50 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.234, train_acc=94.32, train_loss_seg=0.176, train_macc=88.99, train_miou=83.35] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.56it/s, test_acc=92.29, test_loss_seg=0.165, test_macc=85.37, test_miou=78.09] +================================================== +test_loss_seg = 0.1652200073003769 +test_acc = 92.29238665560658 +test_macc = 85.3718340854832 +test_miou = 78.0937910858786 +test_acc_per_class = {'Airplane': 90.51737599111682, 'Bag': 95.24891308605224, 'Cap': 89.32960381511373, 'Car': 90.07707641611876, 'Chair': 94.68981585806063, 'Earphone': 92.71252229254817, 'Guitar': 95.98799572332237, 'Knife': 90.75219248931865, 'Lamp': 90.51337752748881, 'Laptop': 97.994654008894, 'Motorbike': 85.22805606617648, 'Mug': 98.55107723082746, 'Pistol': 95.40589626143672, 'Rocket': 79.17491597219025, 'Skateboard': 95.83175835881929, 'Table': 94.66295539222081} +test_macc_per_class = {'Airplane': 88.47989527704037, 'Bag': 82.5191785209833, 'Cap': 78.8683051950302, 'Car': 84.53980670906698, 
'Chair': 91.34572229074648, 'Earphone': 70.70143401175267, 'Guitar': 94.14286430593563, 'Knife': 90.77226281876966, 'Lamp': 86.04294412402595, 'Laptop': 97.88591107710607, 'Motorbike': 69.61391377780947, 'Mug': 95.23815313010682, 'Pistol': 86.31449071821832, 'Rocket': 75.89949610823254, 'Skateboard': 85.08841651654639, 'Table': 88.4965507863601} +test_miou_per_class = {'Airplane': 80.52114438488962, 'Bag': 76.82976451426644, 'Cap': 72.57016767622437, 'Car': 74.94365778078654, 'Chair': 84.84112938807358, 'Earphone': 65.66462942091108, 'Guitar': 89.60528053715778, 'Knife': 82.99331545729551, 'Lamp': 77.42326536441087, 'Laptop': 96.03054086257659, 'Motorbike': 60.94387700493219, 'Mug': 88.2638090762702, 'Pistol': 80.60826860134073, 'Rocket': 60.28761040524289, 'Skateboard': 75.8650629102917, 'Table': 82.1091339893873} +================================================== + +EPOCH 51 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.388, iteration=0.235, train_acc=94.10, train_loss_seg=0.152, train_macc=87.76, train_miou=82.41] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=92.30, test_loss_seg=0.131, test_macc=85.57, test_miou=77.71] +================================================== +test_loss_seg = 0.13169628381729126 +test_acc = 92.30253826154946 +test_macc = 85.57594685748823 +test_miou = 77.71107208526453 +test_acc_per_class = {'Airplane': 90.34803106134949, 'Bag': 95.6486243775002, 'Cap': 90.68905563689604, 'Car': 90.5320358982051, 'Chair': 94.67876175416285, 'Earphone': 91.15521556738464, 'Guitar': 96.04791417692671, 'Knife': 92.985041445676, 'Lamp': 90.9541032858372, 'Laptop': 98.03972926476453, 'Motorbike': 83.8311887254902, 'Mug': 98.40284417630662, 'Pistol': 95.19187385031873, 'Rocket': 79.40609876427196, 'Skateboard': 93.98382536655927, 'Table': 94.94626883314193} +test_macc_per_class = {'Airplane': 86.52735465073414, 'Bag': 80.6574531627642, 
'Cap': 82.07353082843679, 'Car': 84.68146490443571, 'Chair': 92.48674835550932, 'Earphone': 66.76643058996504, 'Guitar': 93.93821357385926, 'Knife': 92.96787931912553, 'Lamp': 90.39581309806343, 'Laptop': 97.9785764151673, 'Motorbike': 74.33940760220618, 'Mug': 96.58597601748644, 'Pistol': 82.07282293319432, 'Rocket': 74.6669920586862, 'Skateboard': 83.89049062906301, 'Table': 89.18599558111477} +test_miou_per_class = {'Airplane': 79.25700993583148, 'Bag': 77.17118920698954, 'Cap': 76.32024759483734, 'Car': 75.48295601943431, 'Chair': 84.6007691294542, 'Earphone': 62.65546257671383, 'Guitar': 89.67407505427568, 'Knife': 86.87005473668994, 'Lamp': 78.6563227564693, 'Laptop': 96.13045941388532, 'Motorbike': 61.629574617972814, 'Mug': 87.60389400723294, 'Pistol': 78.28179712458501, 'Rocket': 56.91936457112187, 'Skateboard': 68.92455272885152, 'Table': 83.19942388988751} +================================================== + +EPOCH 52 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.381, iteration=0.239, train_acc=93.44, train_loss_seg=0.156, train_macc=85.99, train_miou=80.00] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=92.52, test_loss_seg=0.244, test_macc=85.42, test_miou=78.32] +================================================== +test_loss_seg = 0.24458159506320953 +test_acc = 92.52308038323106 +test_macc = 85.42024985954077 +test_miou = 78.32203287681014 +test_acc_per_class = {'Airplane': 90.06504528174663, 'Bag': 95.07698304424089, 'Cap': 93.92058420812415, 'Car': 90.58027338933475, 'Chair': 94.55317232083789, 'Earphone': 89.2060786820054, 'Guitar': 96.03199724649127, 'Knife': 92.26445241563408, 'Lamp': 90.9055980191433, 'Laptop': 98.05112373278344, 'Motorbike': 85.45496323529412, 'Mug': 98.76652341837567, 'Pistol': 95.05207440180175, 'Rocket': 80.72174307762143, 'Skateboard': 95.39312657166806, 'Table': 94.32554708659413} +test_macc_per_class = 
{'Airplane': 89.75794360716219, 'Bag': 77.24952635881918, 'Cap': 89.07808913201016, 'Car': 85.05985074521887, 'Chair': 92.113677876953, 'Earphone': 66.24017238185394, 'Guitar': 94.41852587976362, 'Knife': 92.28560147688822, 'Lamp': 87.83610488714987, 'Laptop': 98.05410805504697, 'Motorbike': 76.6928832610052, 'Mug': 91.84197272749425, 'Pistol': 85.04768114235043, 'Rocket': 64.84899085769237, 'Skateboard': 88.82023000550019, 'Table': 87.37863935774385} +test_miou_per_class = {'Airplane': 80.29926075338587, 'Bag': 74.48914866565175, 'Cap': 84.60454310617868, 'Car': 75.56709662864164, 'Chair': 84.24394487393741, 'Earphone': 57.918504190153186, 'Guitar': 89.69597108964273, 'Knife': 85.63974729158106, 'Lamp': 79.14087036222391, 'Laptop': 96.1577958772321, 'Motorbike': 65.61807209040809, 'Mug': 89.03868098582677, 'Pistol': 79.38524752311166, 'Rocket': 55.20726428048827, 'Skateboard': 75.65513021792866, 'Table': 80.49124809257036} +================================================== + +EPOCH 53 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.233, train_acc=93.94, train_loss_seg=0.160, train_macc=89.32, train_miou=82.76] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=92.28, test_loss_seg=0.218, test_macc=83.29, test_miou=76.89] +================================================== +test_loss_seg = 0.21879932284355164 +test_acc = 92.28455012835606 +test_macc = 83.29175643670159 +test_miou = 76.89567246730873 +test_acc_per_class = {'Airplane': 89.097157274923, 'Bag': 94.27200365616788, 'Cap': 92.29469260984744, 'Car': 90.52477588476391, 'Chair': 94.60622499662877, 'Earphone': 93.69324806557609, 'Guitar': 95.7923732888292, 'Knife': 91.6550508232928, 'Lamp': 91.06529463747593, 'Laptop': 98.01892601160907, 'Motorbike': 83.61482928323855, 'Mug': 99.07308518914071, 'Pistol': 94.94905434575938, 'Rocket': 77.89213483146067, 'Skateboard': 95.6040416369986, 
'Table': 94.39990951798505} +test_macc_per_class = {'Airplane': 87.83500261446726, 'Bag': 70.65529787837572, 'Cap': 85.27638738389125, 'Car': 86.0815578785282, 'Chair': 90.50005506298409, 'Earphone': 70.49780635777353, 'Guitar': 92.66983457873842, 'Knife': 91.55995591867094, 'Lamp': 86.73271608105291, 'Laptop': 98.05226658266253, 'Motorbike': 58.42967084546967, 'Mug': 94.14202190521141, 'Pistol': 85.42626532944831, 'Rocket': 59.559729286908016, 'Skateboard': 89.18749236145884, 'Table': 86.06204292158411} +test_miou_per_class = {'Airplane': 78.03203788877818, 'Bag': 67.62963073915711, 'Cap': 80.36665279836225, 'Car': 75.74939564226304, 'Chair': 84.44164114196052, 'Earphone': 66.7864610318397, 'Guitar': 88.4622434312297, 'Knife': 84.52636186942215, 'Lamp': 79.06409749471304, 'Laptop': 96.09673924131256, 'Motorbike': 52.876247067644385, 'Mug': 91.54220442012154, 'Pistol': 79.24297871444762, 'Rocket': 48.77056847265169, 'Skateboard': 76.50636910702525, 'Table': 80.23713041601096} +================================================== + +EPOCH 54 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.381, iteration=0.235, train_acc=94.35, train_loss_seg=0.158, train_macc=87.46, train_miou=81.17] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.56it/s, test_acc=92.70, test_loss_seg=0.147, test_macc=85.39, test_miou=78.86] +================================================== +test_loss_seg = 0.14782103896141052 +test_acc = 92.70841655397545 +test_macc = 85.39474220295739 +test_miou = 78.86685983775284 +test_acc_per_class = {'Airplane': 90.20578445451397, 'Bag': 95.30989543870282, 'Cap': 93.80359612724757, 'Car': 90.80999549052233, 'Chair': 94.65351987613766, 'Earphone': 89.15820853147845, 'Guitar': 95.63629052214799, 'Knife': 92.92119100214678, 'Lamp': 90.92568859866029, 'Laptop': 97.950462548493, 'Motorbike': 84.92647058823529, 'Mug': 98.8716236231923, 'Pistol': 95.64125601861451, 
'Rocket': 81.9056192551881, 'Skateboard': 95.85904381557197, 'Table': 94.75601897275425} +test_macc_per_class = {'Airplane': 89.33722816702412, 'Bag': 77.67234930266444, 'Cap': 89.57584707715708, 'Car': 83.42935619892506, 'Chair': 89.82941460895276, 'Earphone': 70.21549345371993, 'Guitar': 94.74457482800234, 'Knife': 92.89936770436802, 'Lamp': 86.92909254626537, 'Laptop': 97.90382633628482, 'Motorbike': 72.51099350170979, 'Mug': 94.68265745840587, 'Pistol': 84.32975084783494, 'Rocket': 70.07828003288536, 'Skateboard': 83.09161317367736, 'Table': 89.0860300094408} +test_miou_per_class = {'Airplane': 80.20511447231856, 'Bag': 74.75810021769003, 'Cap': 84.35950434935143, 'Car': 75.51920843972802, 'Chair': 84.37251016899407, 'Earphone': 60.90501094912998, 'Guitar': 89.35835899654832, 'Knife': 86.76471529962721, 'Lamp': 79.07201411558295, 'Laptop': 95.95943060756204, 'Motorbike': 63.48353303903719, 'Mug': 90.3009535478671, 'Pistol': 80.0781863629592, 'Rocket': 59.86021754011143, 'Skateboard': 74.42925145858761, 'Table': 82.44364783895008} +================================================== + +EPOCH 55 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.232, train_acc=94.21, train_loss_seg=0.148, train_macc=87.64, train_miou=81.65] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.55it/s, test_acc=92.76, test_loss_seg=0.114, test_macc=84.99, test_miou=78.48] +================================================== +test_loss_seg = 0.11473225802183151 +test_acc = 92.76388727152874 +test_macc = 84.99094419022491 +test_miou = 78.4887924664926 +test_acc_per_class = {'Airplane': 90.51290505787631, 'Bag': 95.53972727965262, 'Cap': 91.51053339309728, 'Car': 90.73405872096818, 'Chair': 94.7046705711132, 'Earphone': 92.77453580901856, 'Guitar': 95.89038546825911, 'Knife': 92.7643329727554, 'Lamp': 91.43131621316302, 'Laptop': 97.99136676864497, 'Motorbike': 85.00689338235294, 
'Mug': 98.76915683238106, 'Pistol': 95.29491014545704, 'Rocket': 80.29891908722921, 'Skateboard': 96.00307712890495, 'Table': 94.9954075135861} +test_macc_per_class = {'Airplane': 89.64379762769057, 'Bag': 77.79714897785786, 'Cap': 84.53038691626008, 'Car': 83.11800883287496, 'Chair': 91.30950829168111, 'Earphone': 68.852458121408, 'Guitar': 93.63406099530364, 'Knife': 92.72243206337903, 'Lamp': 87.50706863178952, 'Laptop': 97.98005178063072, 'Motorbike': 82.00853087584275, 'Mug': 92.51050207682174, 'Pistol': 82.72196433429157, 'Rocket': 60.76742367130712, 'Skateboard': 85.39692688458388, 'Table': 89.35483696187596} +test_miou_per_class = {'Airplane': 80.76288573962451, 'Bag': 75.31753431516151, 'Cap': 79.09371386396185, 'Car': 75.4025418767961, 'Chair': 84.86696999862325, 'Earphone': 63.61448896480982, 'Guitar': 89.46014559710316, 'Knife': 86.48419877115467, 'Lamp': 79.6225557688306, 'Laptop': 96.03908977717349, 'Motorbike': 65.1393804100078, 'Mug': 89.10245666763059, 'Pistol': 78.3826165594264, 'Rocket': 52.61719490356889, 'Skateboard': 76.66048120381247, 'Table': 83.25442504619667} +================================================== + +EPOCH 56 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.230, train_acc=94.15, train_loss_seg=0.146, train_macc=88.01, train_miou=82.42] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=93.13, test_loss_seg=0.170, test_macc=85.85, test_miou=79.73] +================================================== +test_loss_seg = 0.1704130470752716 +test_acc = 93.131845465478 +test_macc = 85.85748081514947 +test_miou = 79.73526909626351 +test_acc_per_class = {'Airplane': 90.6807388016883, 'Bag': 95.68952351863165, 'Cap': 94.00926013417745, 'Car': 90.79766035368047, 'Chair': 94.7150087348264, 'Earphone': 93.3110129648098, 'Guitar': 96.10476933658629, 'Knife': 92.38905175899717, 'Lamp': 91.1809138888344, 'Laptop': 
98.04983229516053, 'Motorbike': 85.7421875, 'Mug': 99.17272837690118, 'Pistol': 96.05001287784776, 'Rocket': 81.92915482954545, 'Skateboard': 95.64049272570588, 'Table': 94.6471793502551} +test_macc_per_class = {'Airplane': 89.03526298975807, 'Bag': 79.83077057743174, 'Cap': 87.89109201654651, 'Car': 84.52361390796307, 'Chair': 91.71165053666688, 'Earphone': 70.55116065984252, 'Guitar': 94.36227811947853, 'Knife': 92.3933661503129, 'Lamp': 89.87764402131914, 'Laptop': 98.05139066739888, 'Motorbike': 72.38545392779888, 'Mug': 93.78183875722128, 'Pistol': 87.11018354241989, 'Rocket': 73.95618377849392, 'Skateboard': 80.65354807599972, 'Table': 87.60425531373998} +test_miou_per_class = {'Airplane': 80.82132461618077, 'Bag': 77.0492067114566, 'Cap': 83.87532565294975, 'Car': 75.77676197184725, 'Chair': 84.64668413485094, 'Earphone': 66.10543307928207, 'Guitar': 89.8792547312869, 'Knife': 85.85142350123478, 'Lamp': 80.97976999177779, 'Laptop': 96.1533486114766, 'Motorbike': 62.802923152925615, 'Mug': 92.39192532531273, 'Pistol': 82.4934666466875, 'Rocket': 61.442860126816136, 'Skateboard': 73.54590713398014, 'Table': 81.9486901521505} +================================================== +acc: 92.79110422714442 -> 93.131845465478, miou: 78.9492204679207 -> 79.73526909626351 + +EPOCH 57 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.382, iteration=0.235, train_acc=94.63, train_loss_seg=0.154, train_macc=89.64, train_miou=83.89] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=91.76, test_loss_seg=0.205, test_macc=81.62, test_miou=75.66] +================================================== +test_loss_seg = 0.20528805255889893 +test_acc = 91.7685235274328 +test_macc = 81.62028500187428 +test_miou = 75.66130203912586 +test_acc_per_class = {'Airplane': 90.0271159216932, 'Bag': 95.2878473837824, 'Cap': 94.53357671246036, 'Car': 89.27724086375054, 'Chair': 
94.47114824950738, 'Earphone': 93.43918726911055, 'Guitar': 95.5221244969593, 'Knife': 93.12268937000732, 'Lamp': 89.05066470335426, 'Laptop': 98.0543671585829, 'Motorbike': 73.32565615853366, 'Mug': 98.9611030399386, 'Pistol': 95.02194545522752, 'Rocket': 78.1401126935534, 'Skateboard': 95.50878001415334, 'Table': 94.55281694831034} +test_macc_per_class = {'Airplane': 88.01839109939762, 'Bag': 77.34741242280516, 'Cap': 88.68421414094203, 'Car': 79.0118369298465, 'Chair': 90.20547234025382, 'Earphone': 65.95617037556207, 'Guitar': 92.91539215020812, 'Knife': 93.06447801068707, 'Lamp': 84.61425059644807, 'Laptop': 97.9849314653962, 'Motorbike': 44.52983530843859, 'Mug': 92.33679669694442, 'Pistol': 85.83820861108329, 'Rocket': 57.5824106030692, 'Skateboard': 78.9636076979208, 'Table': 88.87115158098553} +test_miou_per_class = {'Airplane': 79.63633038198137, 'Bag': 74.62045075702594, 'Cap': 84.98380516810285, 'Car': 71.65525420063248, 'Chair': 84.25380630052418, 'Earphone': 61.903814239180576, 'Guitar': 87.97374608648227, 'Knife': 87.1048540677627, 'Lamp': 75.07946411416098, 'Laptop': 96.15521001363123, 'Motorbike': 35.4036030678853, 'Mug': 90.46514407536317, 'Pistol': 80.03612081415449, 'Rocket': 48.787119370921936, 'Skateboard': 70.63609186444943, 'Table': 81.88601810375475} +================================================== + +EPOCH 58 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.378, iteration=0.234, train_acc=93.89, train_loss_seg=0.163, train_macc=88.02, train_miou=82.11] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=92.32, test_loss_seg=0.136, test_macc=84.48, test_miou=77.88] +================================================== +test_loss_seg = 0.1367768943309784 +test_acc = 92.3241045142741 +test_macc = 84.48316738820772 +test_miou = 77.88193934999433 +test_acc_per_class = {'Airplane': 90.38977242696912, 'Bag': 95.42406461937057, 'Cap': 
90.35528316667435, 'Car': 90.32707463273995, 'Chair': 94.8348626103164, 'Earphone': 88.57102888717912, 'Guitar': 96.06606244453309, 'Knife': 91.01418747289512, 'Lamp': 90.49472577247066, 'Laptop': 98.09924326122338, 'Motorbike': 84.62739015118873, 'Mug': 99.0565786672344, 'Pistol': 95.09044828869048, 'Rocket': 82.1688565340909, 'Skateboard': 96.01962829346692, 'Table': 94.64646499934206} +test_macc_per_class = {'Airplane': 88.3673876273744, 'Bag': 79.02780689061272, 'Cap': 80.73647287891738, 'Car': 85.01993625877111, 'Chair': 91.78737551004626, 'Earphone': 72.08383583992742, 'Guitar': 93.8093332882249, 'Knife': 90.95062284583116, 'Lamp': 80.46136257068255, 'Laptop': 98.04320364553764, 'Motorbike': 70.66370850047313, 'Mug': 92.69680402100815, 'Pistol': 82.50992625861811, 'Rocket': 69.89601073744339, 'Skateboard': 86.85293297614055, 'Table': 88.82395836171432} +test_miou_per_class = {'Airplane': 80.21990834751858, 'Bag': 75.05239143206568, 'Cap': 74.92773494895337, 'Car': 74.81764463036242, 'Chair': 85.08846766867705, 'Earphone': 61.253054361909165, 'Guitar': 89.62853133441186, 'Knife': 83.43690668700854, 'Lamp': 73.84157124537538, 'Laptop': 96.24501395825168, 'Motorbike': 62.50030773459066, 'Mug': 91.23744403089343, 'Pistol': 78.79193002937771, 'Rocket': 60.117247829692154, 'Skateboard': 76.82235856838162, 'Table': 82.13051679244002} +================================================== + +EPOCH 59 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.381, iteration=0.233, train_acc=94.04, train_loss_seg=0.159, train_macc=83.97, train_miou=77.88] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.56it/s, test_acc=92.41, test_loss_seg=0.163, test_macc=84.75, test_miou=78.12] +================================================== +test_loss_seg = 0.1634737253189087 +test_acc = 92.41088380735549 +test_macc = 84.75909672876756 +test_miou = 78.12219814319732 +test_acc_per_class = 
{'Airplane': 90.70844109289237, 'Bag': 94.67969598262758, 'Cap': 82.51949117767748, 'Car': 90.47268576703456, 'Chair': 94.71678452430471, 'Earphone': 92.88040784535863, 'Guitar': 96.01214418912879, 'Knife': 93.08735525473911, 'Lamp': 91.28805495416032, 'Laptop': 98.02742993440667, 'Motorbike': 85.0174249387255, 'Mug': 98.96698100534644, 'Pistol': 95.73407523329276, 'Rocket': 83.77759457726481, 'Skateboard': 96.0455009782364, 'Table': 94.64007346249166} +test_macc_per_class = {'Airplane': 89.69587262776557, 'Bag': 76.13990929238798, 'Cap': 65.85835789891506, 'Car': 82.77772893285514, 'Chair': 91.32075425290816, 'Earphone': 71.40027876077686, 'Guitar': 94.21111214595132, 'Knife': 93.07473970527742, 'Lamp': 88.7836331022031, 'Laptop': 98.0394441686467, 'Motorbike': 67.69972636262328, 'Mug': 91.60944830405879, 'Pistol': 87.33743454144974, 'Rocket': 80.70352048331458, 'Skateboard': 90.14179705565778, 'Table': 87.35179002548924} +test_miou_per_class = {'Airplane': 81.0527011723585, 'Bag': 72.76698387523129, 'Cap': 56.34660197812741, 'Car': 74.87404303072496, 'Chair': 84.9078577141909, 'Earphone': 66.20190100308459, 'Guitar': 89.83244657947755, 'Knife': 87.06001101032281, 'Lamp': 81.24693337126368, 'Laptop': 96.11288612982787, 'Motorbike': 61.183761459501675, 'Mug': 90.41321482948372, 'Pistol': 82.1058138124949, 'Rocket': 66.32405764003096, 'Skateboard': 78.09624415826283, 'Table': 81.4297125267735} +================================================== + +EPOCH 60 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.235, train_acc=95.25, train_loss_seg=0.142, train_macc=89.69, train_miou=84.49] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.55it/s, test_acc=93.00, test_loss_seg=0.119, test_macc=86.31, test_miou=79.71] +================================================== +test_loss_seg = 0.1197643056511879 +test_acc = 93.00543872671068 +test_macc = 86.31261257112098 
+test_miou = 79.7159002106574 +test_acc_per_class = {'Airplane': 90.49692737793758, 'Bag': 95.67549064354176, 'Cap': 93.18878261677035, 'Car': 90.92409807366447, 'Chair': 94.74400178755373, 'Earphone': 92.97708311331714, 'Guitar': 96.16711738241561, 'Knife': 90.88628121967413, 'Lamp': 90.97871437663531, 'Laptop': 98.12346503254727, 'Motorbike': 86.19216809201392, 'Mug': 99.08096397863207, 'Pistol': 95.4342269581354, 'Rocket': 82.71791352093342, 'Skateboard': 95.64771025127489, 'Table': 94.85207520232336} +test_macc_per_class = {'Airplane': 88.07244940073059, 'Bag': 79.08539971093087, 'Cap': 86.5726527113506, 'Car': 84.15077764917022, 'Chair': 90.10236786494403, 'Earphone': 70.58711172585525, 'Guitar': 94.65857290787115, 'Knife': 90.77420892412817, 'Lamp': 89.7306756323798, 'Laptop': 98.0672191907832, 'Motorbike': 76.32556072472309, 'Mug': 93.73608203147734, 'Pistol': 83.89307187104068, 'Rocket': 78.28079305482115, 'Skateboard': 88.05767263482815, 'Table': 88.90718510290144} +test_miou_per_class = {'Airplane': 80.07059897309891, 'Bag': 76.2288523510962, 'Cap': 82.1583135258815, 'Car': 76.02430410269227, 'Chair': 84.77664137228423, 'Earphone': 66.3719750625665, 'Guitar': 90.14210215498714, 'Knife': 83.2156064370032, 'Lamp': 80.11172683274064, 'Laptop': 96.29100046370984, 'Motorbike': 66.23227577482412, 'Mug': 91.64622945990473, 'Pistol': 79.59677033352216, 'Rocket': 63.939738496531284, 'Skateboard': 75.95270912800522, 'Table': 82.69555890167055} +================================================== +macc: 85.96491974995148 -> 86.31261257112098 + +EPOCH 61 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.385, iteration=0.234, train_acc=94.25, train_loss_seg=0.141, train_macc=89.02, train_miou=83.23] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=92.87, test_loss_seg=0.117, test_macc=86.37, test_miou=79.36] +================================================== 
+test_loss_seg = 0.117872454226017 +test_acc = 92.87962886820966 +test_macc = 86.37245826583312 +test_miou = 79.3639047491631 +test_acc_per_class = {'Airplane': 90.64753352221007, 'Bag': 96.1657292244316, 'Cap': 93.02503710086648, 'Car': 90.28284415511533, 'Chair': 94.74920815877259, 'Earphone': 93.10124106539597, 'Guitar': 95.9963258298477, 'Knife': 93.01948153671665, 'Lamp': 89.29875517167608, 'Laptop': 97.97121906288503, 'Motorbike': 84.95410377539655, 'Mug': 99.14241677754212, 'Pistol': 95.78153867207992, 'Rocket': 81.78671282119558, 'Skateboard': 95.37026041224871, 'Table': 94.78165460497449} +test_macc_per_class = {'Airplane': 90.02966722009702, 'Bag': 82.22388519649502, 'Cap': 85.1270992562751, 'Car': 86.40296727092496, 'Chair': 91.43559465167858, 'Earphone': 71.48308094620037, 'Guitar': 94.35807513105803, 'Knife': 92.94825526669895, 'Lamp': 87.12594658501436, 'Laptop': 98.0140601260765, 'Motorbike': 68.08880732189975, 'Mug': 94.43724353875132, 'Pistol': 89.74532884940677, 'Rocket': 76.45385008466378, 'Skateboard': 85.12947215438737, 'Table': 88.95599865370176} +test_miou_per_class = {'Airplane': 80.42430945412644, 'Bag': 79.27822306261896, 'Cap': 80.84125127827, 'Car': 75.30213894422036, 'Chair': 84.83927250382426, 'Earphone': 66.87621469276654, 'Guitar': 89.5756431750184, 'Knife': 86.91965548764463, 'Lamp': 76.90391243140708, 'Laptop': 96.00449779472928, 'Motorbike': 59.143976596709535, 'Mug': 92.1192397585591, 'Pistol': 82.83754438299337, 'Rocket': 62.96092420073474, 'Skateboard': 73.19709973248315, 'Table': 82.59857249050353} +================================================== +macc: 86.31261257112098 -> 86.37245826583312 + +EPOCH 62 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.238, train_acc=95.16, train_loss_seg=0.152, train_macc=89.55, train_miou=84.30] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=92.30, 
test_loss_seg=0.162, test_macc=85.64, test_miou=77.99] +================================================== +test_loss_seg = 0.16277746856212616 +test_acc = 92.30130056492402 +test_macc = 85.64476131997272 +test_miou = 77.9993940335307 +test_acc_per_class = {'Airplane': 90.61531986409399, 'Bag': 95.40193309733105, 'Cap': 93.74284624189241, 'Car': 90.36542469833535, 'Chair': 94.65472609460114, 'Earphone': 88.70618628683144, 'Guitar': 95.81750234542774, 'Knife': 91.44847180154697, 'Lamp': 89.90037250144341, 'Laptop': 98.04484272827074, 'Motorbike': 84.14617800245098, 'Mug': 98.43220620474324, 'Pistol': 95.85943947830332, 'Rocket': 78.87972289321362, 'Skateboard': 96.17055567234252, 'Table': 94.63508112795643} +test_macc_per_class = {'Airplane': 90.10221569098812, 'Bag': 80.41686738696579, 'Cap': 87.1395359564963, 'Car': 85.47932000102773, 'Chair': 91.9122744272451, 'Earphone': 66.06827143742566, 'Guitar': 94.66175962105675, 'Knife': 91.48323691517601, 'Lamp': 88.98651574631357, 'Laptop': 98.07414305256074, 'Motorbike': 65.39644212855687, 'Mug': 91.78077141439076, 'Pistol': 86.96048937205255, 'Rocket': 74.94616351925288, 'Skateboard': 87.272595348101, 'Table': 89.63557910195377} +test_miou_per_class = {'Airplane': 80.65748607313186, 'Bag': 76.91903392736643, 'Cap': 83.09091178967446, 'Car': 75.13521540140496, 'Chair': 84.53914600641119, 'Earphone': 56.77214033236412, 'Guitar': 88.75785828033962, 'Knife': 84.23641400053741, 'Lamp': 78.56052658545153, 'Laptop': 96.14523333364883, 'Motorbike': 57.30329853913167, 'Mug': 86.64303974718379, 'Pistol': 82.36803775136416, 'Rocket': 57.426860633070994, 'Skateboard': 76.98171077373424, 'Table': 82.45339136167587} +================================================== + +EPOCH 63 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.378, iteration=0.236, train_acc=94.25, train_loss_seg=0.144, train_macc=89.69, train_miou=82.14] 
+100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.56it/s, test_acc=92.51, test_loss_seg=0.173, test_macc=85.99, test_miou=78.63] +================================================== +test_loss_seg = 0.17372559010982513 +test_acc = 92.51545705874182 +test_macc = 85.99037318017011 +test_miou = 78.63202433378295 +test_acc_per_class = {'Airplane': 90.29297084186058, 'Bag': 95.50779734657459, 'Cap': 91.1312592047128, 'Car': 90.83917284432306, 'Chair': 94.39041396113605, 'Earphone': 92.10470161462409, 'Guitar': 95.68993878347922, 'Knife': 92.681007178719, 'Lamp': 89.96344401326516, 'Laptop': 98.10643727461152, 'Motorbike': 84.4027650122549, 'Mug': 98.9568522185162, 'Pistol': 94.59634963705095, 'Rocket': 80.71506472568637, 'Skateboard': 96.1320085166785, 'Table': 94.73712976637634} +test_macc_per_class = {'Airplane': 88.80203654948882, 'Bag': 83.22971032949896, 'Cap': 82.76164980011175, 'Car': 84.95118439833603, 'Chair': 92.93238182097652, 'Earphone': 71.49732417728913, 'Guitar': 93.22771690188901, 'Knife': 92.64098407567747, 'Lamp': 86.72124446138774, 'Laptop': 98.0807606656613, 'Motorbike': 77.53940542763083, 'Mug': 95.47560989404302, 'Pistol': 80.90224248656938, 'Rocket': 69.46681761036344, 'Skateboard': 88.51471021695416, 'Table': 89.10219206684422} +test_miou_per_class = {'Airplane': 80.00748325059912, 'Bag': 78.30002696922848, 'Cap': 77.26384824189304, 'Car': 76.07853352480315, 'Chair': 83.20806173314702, 'Earphone': 64.32530644496957, 'Guitar': 88.85517410999725, 'Knife': 86.32510533325356, 'Lamp': 76.65088271840746, 'Laptop': 96.26256565374307, 'Motorbike': 63.504370455340286, 'Mug': 90.94374614747775, 'Pistol': 77.19127360849816, 'Rocket': 58.97784226592343, 'Skateboard': 77.6376337796293, 'Table': 82.58053510361633} +================================================== + +EPOCH 64 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.236, train_acc=95.44, 
train_loss_seg=0.139, train_macc=91.27, train_miou=85.95] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=92.62, test_loss_seg=0.128, test_macc=87.04, test_miou=79.39] +================================================== +test_loss_seg = 0.1284656822681427 +test_acc = 92.62450610265954 +test_macc = 87.04467619836664 +test_miou = 79.39879005088274 +test_acc_per_class = {'Airplane': 90.57067610791766, 'Bag': 95.20990106573971, 'Cap': 94.34173669467786, 'Car': 90.61873263824009, 'Chair': 94.65992209175238, 'Earphone': 89.12639079007405, 'Guitar': 96.12340210256157, 'Knife': 92.09059364391386, 'Lamp': 91.25982490893986, 'Laptop': 98.08305178156374, 'Motorbike': 86.24549911897648, 'Mug': 99.17527981577679, 'Pistol': 95.34309198374584, 'Rocket': 78.4885530117804, 'Skateboard': 95.93776863822544, 'Table': 94.71767324866696} +test_macc_per_class = {'Airplane': 90.2659436666069, 'Bag': 82.48853680386745, 'Cap': 90.03288776004379, 'Car': 80.86426540176421, 'Chair': 91.63363530660936, 'Earphone': 69.1942459917266, 'Guitar': 94.0334339165522, 'Knife': 92.03739758258918, 'Lamp': 87.26279680352299, 'Laptop': 98.09902714673188, 'Motorbike': 82.829667201505, 'Mug': 93.92011413266259, 'Pistol': 85.76961353695981, 'Rocket': 80.69457562803427, 'Skateboard': 85.19425272857796, 'Table': 88.3944255661122} +test_miou_per_class = {'Airplane': 80.97045202417297, 'Bag': 76.71064185778131, 'Cap': 85.33355572017486, 'Car': 74.15851731187432, 'Chair': 84.84536686396491, 'Earphone': 60.72189937356125, 'Guitar': 89.8727528321944, 'Knife': 85.30696305946599, 'Lamp': 79.0063541128735, 'Laptop': 96.21733553987501, 'Motorbike': 67.86411386720678, 'Mug': 92.40013449498099, 'Pistol': 80.25102804815326, 'Rocket': 59.1144340941628, 'Skateboard': 75.49703665183377, 'Table': 82.11005496184787} +================================================== +macc: 86.37245826583312 -> 87.04467619836664 + +EPOCH 65 / 100 
+100%|█████████████████████████████| 438/438 [05:16<00:00, 1.38it/s, data_loading=0.381, iteration=0.236, train_acc=94.70, train_loss_seg=0.140, train_macc=90.18, train_miou=84.57] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.52it/s, test_acc=93.02, test_loss_seg=0.199, test_macc=87.32, test_miou=80.11] +================================================== +test_loss_seg = 0.19943374395370483 +test_acc = 93.0218022902602 +test_macc = 87.32110078339136 +test_miou = 80.11268496674514 +test_acc_per_class = {'Airplane': 89.94987228050576, 'Bag': 94.98063133602571, 'Cap': 97.32160905599353, 'Car': 90.6327983234483, 'Chair': 94.4580972681486, 'Earphone': 89.94761084350058, 'Guitar': 96.18334511313236, 'Knife': 93.44226402347098, 'Lamp': 89.18570660697587, 'Laptop': 97.98406530961972, 'Motorbike': 85.43390012254902, 'Mug': 99.11247721315183, 'Pistol': 95.96283316425344, 'Rocket': 83.18651685393257, 'Skateboard': 96.16550177798798, 'Table': 94.40160735146681} +test_macc_per_class = {'Airplane': 88.66018079119769, 'Bag': 85.84981328439896, 'Cap': 94.16714413037943, 'Car': 86.65523427756689, 'Chair': 91.16160806137394, 'Earphone': 69.87003987576449, 'Guitar': 94.80076702417223, 'Knife': 93.44108776099223, 'Lamp': 89.28577501783025, 'Laptop': 97.89655253153242, 'Motorbike': 77.060544152636, 'Mug': 93.80158296382861, 'Pistol': 86.49047941914517, 'Rocket': 71.76967382981984, 'Skateboard': 89.13432069470001, 'Table': 87.09280871892369} +test_miou_per_class = {'Airplane': 79.7431071038529, 'Bag': 77.57803842979482, 'Cap': 91.97972628911354, 'Car': 75.98652591806712, 'Chair': 84.22421840542545, 'Earphone': 62.01503750088827, 'Guitar': 90.1036343759137, 'Knife': 87.69090232844347, 'Lamp': 74.92740297147697, 'Laptop': 96.02037039498533, 'Motorbike': 66.49139559417361, 'Mug': 91.79938856044028, 'Pistol': 82.37966310362701, 'Rocket': 61.81239908487637, 'Skateboard': 78.18297916674304, 'Table': 80.86817024010018} 
+================================================== +macc: 87.04467619836664 -> 87.32110078339136, miou: 79.73526909626351 -> 80.11268496674514 + +EPOCH 66 / 100 +100%|█████████████████████████████| 438/438 [05:16<00:00, 1.38it/s, data_loading=0.381, iteration=0.234, train_acc=94.10, train_loss_seg=0.150, train_macc=88.17, train_miou=82.38] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=92.87, test_loss_seg=0.145, test_macc=85.55, test_miou=79.13] +================================================== +test_loss_seg = 0.1450541466474533 +test_acc = 92.8764114927877 +test_macc = 85.55259893955437 +test_miou = 79.13305419290589 +test_acc_per_class = {'Airplane': 90.90814223474473, 'Bag': 95.9128777972877, 'Cap': 93.42738668318955, 'Car': 90.60044089623993, 'Chair': 94.94618492247004, 'Earphone': 93.58509176735829, 'Guitar': 95.04879063959673, 'Knife': 93.06660213834957, 'Lamp': 90.22191450204508, 'Laptop': 98.07786323257616, 'Motorbike': 83.38280068169198, 'Mug': 99.16382454800437, 'Pistol': 95.22044190727452, 'Rocket': 81.50421743205249, 'Skateboard': 96.23164932111104, 'Table': 94.72435518061087} +test_macc_per_class = {'Airplane': 88.55651313705624, 'Bag': 80.02302079139136, 'Cap': 87.02999105686911, 'Car': 87.85473805574723, 'Chair': 91.77183190506445, 'Earphone': 65.18755400597982, 'Guitar': 95.00329897565335, 'Knife': 93.06530673234785, 'Lamp': 89.00983370746202, 'Laptop': 98.01812595305381, 'Motorbike': 74.00351068512336, 'Mug': 94.70482072648332, 'Pistol': 83.68167306153657, 'Rocket': 65.06797902249839, 'Skateboard': 88.21709935710692, 'Table': 87.64628585949646} +test_miou_per_class = {'Airplane': 81.02623499173906, 'Bag': 77.02572768300885, 'Cap': 82.65239000228418, 'Car': 76.24831260908518, 'Chair': 85.4522957797802, 'Earphone': 61.02423002300308, 'Guitar': 88.5556413950363, 'Knife': 87.03094072544178, 'Lamp': 77.54182119976561, 'Laptop': 96.20270754164927, 'Motorbike': 64.4743728981254, 
'Mug': 92.32124630328072, 'Pistol': 79.7037892707439, 'Rocket': 56.48617002995645, 'Skateboard': 78.20127323640938, 'Table': 82.18171339718458} +================================================== + +EPOCH 67 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.385, iteration=0.232, train_acc=94.36, train_loss_seg=0.140, train_macc=88.31, train_miou=82.59] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.56it/s, test_acc=93.08, test_loss_seg=0.139, test_macc=86.09, test_miou=79.82] +================================================== +test_loss_seg = 0.1394781619310379 +test_acc = 93.08790359364168 +test_macc = 86.09888817537468 +test_miou = 79.8199854602785 +test_acc_per_class = {'Airplane': 91.10324069487584, 'Bag': 95.81599123767799, 'Cap': 92.46316758747697, 'Car': 90.96866654853281, 'Chair': 94.60656842241058, 'Earphone': 92.82069206571128, 'Guitar': 95.78976851267625, 'Knife': 92.65335235378032, 'Lamp': 91.49570860372619, 'Laptop': 97.85616397545135, 'Motorbike': 85.61590038314176, 'Mug': 98.9683602867047, 'Pistol': 95.8761585757548, 'Rocket': 82.16122718375894, 'Skateboard': 96.2480047046963, 'Table': 94.96348636189124} +test_macc_per_class = {'Airplane': 90.33491102586643, 'Bag': 80.20238874456393, 'Cap': 85.52694462651087, 'Car': 84.35882340881656, 'Chair': 90.95275363299878, 'Earphone': 70.4312984400606, 'Guitar': 93.20539559228312, 'Knife': 92.63139203369501, 'Lamp': 88.41433739424735, 'Laptop': 97.92882619906167, 'Motorbike': 71.85421832473551, 'Mug': 92.99652508610802, 'Pistol': 85.73697756154357, 'Rocket': 75.28802942625286, 'Skateboard': 87.82200671333598, 'Table': 89.89738259591476} +test_miou_per_class = {'Airplane': 81.59300643333793, 'Bag': 77.08283428866093, 'Cap': 80.65094504155952, 'Car': 76.14054063849063, 'Chair': 84.80466342363664, 'Earphone': 65.3733535695619, 'Guitar': 89.13832691607482, 'Knife': 86.28708159087711, 'Lamp': 80.50104717521688, 'Laptop': 
95.78625107860461, 'Motorbike': 63.92270735293452, 'Mug': 90.82947642960428, 'Pistol': 81.31891271800392, 'Rocket': 62.564237833885684, 'Skateboard': 77.83987820864185, 'Table': 83.28650466536493} +================================================== + +EPOCH 68 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.237, train_acc=94.92, train_loss_seg=0.135, train_macc=90.94, train_miou=85.38] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=93.00, test_loss_seg=0.165, test_macc=86.12, test_miou=79.63] +================================================== +test_loss_seg = 0.16555707156658173 +test_acc = 93.00289570083956 +test_macc = 86.12143956905781 +test_miou = 79.63028800482006 +test_acc_per_class = {'Airplane': 90.70879402926538, 'Bag': 94.65287710419187, 'Cap': 92.16085047376936, 'Car': 91.08599891042104, 'Chair': 94.46678234759457, 'Earphone': 92.57530759439966, 'Guitar': 95.79738491629575, 'Knife': 92.4491917003719, 'Lamp': 90.84666435805806, 'Laptop': 98.04927273161528, 'Motorbike': 86.69296918391971, 'Mug': 99.20550633847311, 'Pistol': 95.18822724161534, 'Rocket': 83.69749482011677, 'Skateboard': 96.2294220665499, 'Table': 94.23958739677529} +test_macc_per_class = {'Airplane': 88.21255882761105, 'Bag': 73.77942596486614, 'Cap': 84.53652568157298, 'Car': 85.46858701440281, 'Chair': 92.58217051740725, 'Earphone': 70.92895008621974, 'Guitar': 94.95486737638504, 'Knife': 92.40960441250904, 'Lamp': 88.4689579523705, 'Laptop': 98.07043290122597, 'Motorbike': 80.84401164530716, 'Mug': 95.04207954814717, 'Pistol': 82.52683578703292, 'Rocket': 77.27254867458258, 'Skateboard': 85.80130341896508, 'Table': 87.04417329631947} +test_miou_per_class = {'Airplane': 80.61633541490748, 'Bag': 70.82655330080776, 'Cap': 79.57762497359036, 'Car': 76.54832529152057, 'Chair': 84.31856004030502, 'Earphone': 65.77282672239716, 'Guitar': 89.5995331723568, 'Knife': 
85.9318316947101, 'Lamp': 80.0387147051126, 'Laptop': 96.15397973675348, 'Motorbike': 70.09020110247714, 'Mug': 92.81176236592437, 'Pistol': 78.80925633529633, 'Rocket': 65.38456373839868, 'Skateboard': 77.33782287609327, 'Table': 80.26671660647001} +================================================== + +EPOCH 69 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.236, train_acc=94.72, train_loss_seg=0.131, train_macc=88.11, train_miou=83.17] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.52it/s, test_acc=93.01, test_loss_seg=0.148, test_macc=87.42, test_miou=80.04] +================================================== +test_loss_seg = 0.14869752526283264 +test_acc = 93.01873481038872 +test_macc = 87.42139484363707 +test_miou = 80.04624852304367 +test_acc_per_class = {'Airplane': 90.1900835312162, 'Bag': 96.21922626025791, 'Cap': 92.81334200501719, 'Car': 90.93631208325543, 'Chair': 94.69096325159178, 'Earphone': 91.62638612453796, 'Guitar': 96.09173076982178, 'Knife': 92.45909782372482, 'Lamp': 90.981179801522, 'Laptop': 98.13508304383001, 'Motorbike': 85.95325411492095, 'Mug': 98.97474868677173, 'Pistol': 95.59640140933399, 'Rocket': 82.6232732799857, 'Skateboard': 96.1202732249699, 'Table': 94.8884015554622} +test_macc_per_class = {'Airplane': 90.83674764647832, 'Bag': 84.76861829998052, 'Cap': 86.40107254862207, 'Car': 86.87621315070272, 'Chair': 91.65141217818615, 'Earphone': 70.26747936265207, 'Guitar': 94.97347438656408, 'Knife': 92.41281056773153, 'Lamp': 91.90532305886005, 'Laptop': 98.13799488515141, 'Motorbike': 81.9995112986718, 'Mug': 93.02990977157184, 'Pistol': 85.27430159356729, 'Rocket': 73.09392442376023, 'Skateboard': 87.06022899099212, 'Table': 90.05329533470108} +test_miou_per_class = {'Airplane': 80.28633116462544, 'Bag': 80.38111187722394, 'Cap': 81.44403559391658, 'Car': 76.43710434040584, 'Chair': 84.80925598049178, 'Earphone': 
64.11674946872597, 'Guitar': 89.8664018848273, 'Knife': 85.9465129244788, 'Lamp': 79.23185155183884, 'Laptop': 96.31829848559404, 'Motorbike': 67.80844480851087, 'Mug': 90.66690520677582, 'Pistol': 81.2352609494221, 'Rocket': 62.126446132252724, 'Skateboard': 76.8294321520969, 'Table': 83.23583384751176} +================================================== +macc: 87.32110078339136 -> 87.42139484363707 + +EPOCH 70 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.231, train_acc=94.81, train_loss_seg=0.141, train_macc=88.74, train_miou=83.45] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.52it/s, test_acc=92.52, test_loss_seg=0.161, test_macc=84.55, test_miou=78.35] +================================================== +test_loss_seg = 0.16122622787952423 +test_acc = 92.52321900383492 +test_macc = 84.55202965157446 +test_miou = 78.35170710116245 +test_acc_per_class = {'Airplane': 90.15798428729805, 'Bag': 95.51282051282051, 'Cap': 91.04367816091954, 'Car': 90.73683751036722, 'Chair': 94.84879365235541, 'Earphone': 92.74920234213386, 'Guitar': 96.10011208887843, 'Knife': 92.36280169715518, 'Lamp': 91.25637051695259, 'Laptop': 98.15791206094809, 'Motorbike': 85.60786679305623, 'Mug': 99.0007863553784, 'Pistol': 95.3419925226423, 'Rocket': 76.86069389126158, 'Skateboard': 95.83534524384355, 'Table': 94.79830642534779} +test_macc_per_class = {'Airplane': 89.95389671100173, 'Bag': 79.37441081429431, 'Cap': 82.2705333747462, 'Car': 84.60097483596407, 'Chair': 91.8157133530355, 'Earphone': 71.37648063603204, 'Guitar': 94.50577977396364, 'Knife': 92.30106682339465, 'Lamp': 89.02091132284359, 'Laptop': 98.10126990687311, 'Motorbike': 76.47057179079594, 'Mug': 95.41146895888275, 'Pistol': 82.95481299200097, 'Rocket': 55.22699242492218, 'Skateboard': 81.31001952170699, 'Table': 88.1375711847333} +test_miou_per_class = {'Airplane': 80.48542024725084, 'Bag': 
76.38420060143493, 'Cap': 76.82364804725732, 'Car': 76.0124908897895, 'Chair': 85.45744853300663, 'Earphone': 65.95317352705197, 'Guitar': 89.87649480845803, 'Knife': 85.7649308511309, 'Lamp': 81.0604828929326, 'Laptop': 96.3591564289571, 'Motorbike': 66.39900782173002, 'Mug': 91.34936520390262, 'Pistol': 78.58903722341694, 'Rocket': 46.400852692186476, 'Skateboard': 74.39974866516309, 'Table': 82.31185518493011} +================================================== + +EPOCH 71 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.236, train_acc=94.78, train_loss_seg=0.143, train_macc=89.55, train_miou=84.51] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=93.09, test_loss_seg=0.275, test_macc=86.75, test_miou=79.83] +================================================== +test_loss_seg = 0.27547940611839294 +test_acc = 93.09112976492203 +test_macc = 86.74998640091617 +test_miou = 79.83659342215158 +test_acc_per_class = {'Airplane': 90.77664556815283, 'Bag': 95.85308513508305, 'Cap': 91.14201395099266, 'Car': 91.02741450646738, 'Chair': 94.81404607834469, 'Earphone': 92.90582735610035, 'Guitar': 96.17139873255408, 'Knife': 92.80158580328488, 'Lamp': 90.9910280828882, 'Laptop': 98.11782836458339, 'Motorbike': 86.18711512493176, 'Mug': 99.22146626126556, 'Pistol': 95.9232751216719, 'Rocket': 82.40106714095154, 'Skateboard': 96.23742284603027, 'Table': 94.88685616545006} +test_macc_per_class = {'Airplane': 89.01113044478457, 'Bag': 81.64907438282177, 'Cap': 83.81392720107041, 'Car': 83.6180431548558, 'Chair': 91.78718358323717, 'Earphone': 70.23042300442324, 'Guitar': 94.94428517142268, 'Knife': 92.7527007512216, 'Lamp': 88.80122961163555, 'Laptop': 98.06846010869073, 'Motorbike': 79.95837217976587, 'Mug': 95.08901811279495, 'Pistol': 87.08327298459447, 'Rocket': 73.47890342475843, 'Skateboard': 87.12884521056958, 'Table': 90.58491308801186} 
+test_miou_per_class = {'Airplane': 80.96475711501161, 'Bag': 78.37316537370144, 'Cap': 78.16566227013988, 'Car': 75.93324504235689, 'Chair': 85.2040603740793, 'Earphone': 65.09130775413225, 'Guitar': 90.0297622531476, 'Knife': 86.54493693059754, 'Lamp': 77.75197010116783, 'Laptop': 96.28230103337532, 'Motorbike': 65.94673399200502, 'Mug': 92.99768755039608, 'Pistol': 82.20875178298037, 'Rocket': 60.85897081696415, 'Skateboard': 77.76398209326736, 'Table': 83.26820027110278} +================================================== + +EPOCH 72 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.231, train_acc=94.74, train_loss_seg=0.142, train_macc=89.81, train_miou=84.35] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.56it/s, test_acc=92.91, test_loss_seg=0.187, test_macc=85.75, test_miou=79.00] +================================================== +test_loss_seg = 0.18705867230892181 +test_acc = 92.91235892237097 +test_macc = 85.755530923388 +test_miou = 79.0008046563338 +test_acc_per_class = {'Airplane': 90.5787676094079, 'Bag': 94.94454573078092, 'Cap': 93.79870281368112, 'Car': 90.95865035556218, 'Chair': 94.73850902389003, 'Earphone': 92.90252757994693, 'Guitar': 96.13411851170201, 'Knife': 93.11408789885611, 'Lamp': 90.76937723068743, 'Laptop': 97.88146940330039, 'Motorbike': 85.53634344362744, 'Mug': 99.07727954204717, 'Pistol': 94.87671451902102, 'Rocket': 80.54128440366972, 'Skateboard': 96.16310588076838, 'Table': 94.58225881098646} +test_macc_per_class = {'Airplane': 89.64650326386983, 'Bag': 79.51567836346705, 'Cap': 88.08938514958396, 'Car': 87.3025488646948, 'Chair': 90.24402116273859, 'Earphone': 72.95808499589594, 'Guitar': 94.52677468581861, 'Knife': 93.06161337843379, 'Lamp': 89.59720641901431, 'Laptop': 97.9516054766005, 'Motorbike': 65.73101418154765, 'Mug': 95.26656239145484, 'Pistol': 83.8868545147554, 'Rocket': 70.84071080841694, 
'Skateboard': 84.40211743121799, 'Table': 89.06781368669758} +test_miou_per_class = {'Airplane': 80.61875884904286, 'Bag': 73.66348409007878, 'Cap': 83.80453673690332, 'Car': 76.59883111153198, 'Chair': 84.73436063880622, 'Earphone': 66.94196781175475, 'Guitar': 89.95142418115798, 'Knife': 87.08978144677937, 'Lamp': 80.09384634470702, 'Laptop': 95.8346906190208, 'Motorbike': 59.278922539107405, 'Mug': 91.93064529907993, 'Pistol': 78.23410903329714, 'Rocket': 56.579481777808795, 'Skateboard': 76.39095066663869, 'Table': 82.26708335562591} +================================================== + +EPOCH 73 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.386, iteration=0.236, train_acc=94.87, train_loss_seg=0.128, train_macc=89.59, train_miou=84.35] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.55it/s, test_acc=92.57, test_loss_seg=0.141, test_macc=86.73, test_miou=78.93] +================================================== +test_loss_seg = 0.14109750092029572 +test_acc = 92.57595104328712 +test_macc = 86.73925479993594 +test_miou = 78.93218531266963 +test_acc_per_class = {'Airplane': 90.42576637657793, 'Bag': 95.43941538814707, 'Cap': 91.87063341507427, 'Car': 90.86268441015113, 'Chair': 94.8529236163694, 'Earphone': 90.31594624304103, 'Guitar': 96.18535341215413, 'Knife': 92.66473478644323, 'Lamp': 90.35584974077469, 'Laptop': 98.12130906991568, 'Motorbike': 86.47031224566007, 'Mug': 98.9951452360934, 'Pistol': 96.0906038332391, 'Rocket': 80.34540934114722, 'Skateboard': 95.87378640776699, 'Table': 92.3453431700387} +test_macc_per_class = {'Airplane': 90.0703525945487, 'Bag': 80.28730628677579, 'Cap': 84.1824902409883, 'Car': 84.35086386103214, 'Chair': 92.39515804223245, 'Earphone': 70.09270100469594, 'Guitar': 94.40004849333154, 'Knife': 92.59468259403359, 'Lamp': 90.42657748424396, 'Laptop': 98.11180948450556, 'Motorbike': 83.85761757842538, 'Mug': 93.82629290513384, 
'Pistol': 87.26142219512049, 'Rocket': 79.4450656578635, 'Skateboard': 87.56142854848896, 'Table': 78.9642598275551} +test_miou_per_class = {'Airplane': 80.63665326814457, 'Bag': 76.86782804113778, 'Cap': 78.87830979441918, 'Car': 75.94108903917278, 'Chair': 85.3225461312451, 'Earphone': 62.105565326954206, 'Guitar': 90.06247438674282, 'Knife': 86.2831032887118, 'Lamp': 76.88493851393126, 'Laptop': 96.28976810743399, 'Motorbike': 69.64646254592805, 'Mug': 91.09072575657954, 'Pistol': 82.59784744076187, 'Rocket': 62.477762374073684, 'Skateboard': 76.52500251461188, 'Table': 71.30488847286557} +================================================== + +EPOCH 74 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.385, iteration=0.237, train_acc=94.96, train_loss_seg=0.143, train_macc=90.04, train_miou=84.55] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=92.99, test_loss_seg=0.139, test_macc=85.44, test_miou=78.89] +================================================== +test_loss_seg = 0.1390828639268875 +test_acc = 92.99941952844539 +test_macc = 85.44686557740827 +test_miou = 78.89476783604931 +test_acc_per_class = {'Airplane': 90.51344043677626, 'Bag': 94.74537298805082, 'Cap': 91.96063452118815, 'Car': 90.37527854893125, 'Chair': 94.78816127824592, 'Earphone': 93.14967278868482, 'Guitar': 95.63022781837242, 'Knife': 92.96464646464646, 'Lamp': 90.59664876874652, 'Laptop': 98.10982010783987, 'Motorbike': 86.20273068957583, 'Mug': 98.51944154138864, 'Pistol': 95.73136612271466, 'Rocket': 83.79870852816744, 'Skateboard': 96.12895410811933, 'Table': 94.77560774367761} +test_macc_per_class = {'Airplane': 89.73269248697777, 'Bag': 75.84920255300942, 'Cap': 85.25556849162999, 'Car': 88.38695164263844, 'Chair': 89.8521037304255, 'Earphone': 68.45065408965107, 'Guitar': 92.19981691599303, 'Knife': 92.97091083755035, 'Lamp': 91.14124166788599, 'Laptop': 98.14091133281964, 
'Motorbike': 73.4649609334596, 'Mug': 87.53790470443032, 'Pistol': 85.54920826531443, 'Rocket': 70.72882423094123, 'Skateboard': 89.87342341990286, 'Table': 88.01547393590278} +test_miou_per_class = {'Airplane': 80.23687210810182, 'Bag': 72.87358261931756, 'Cap': 80.0849636962337, 'Car': 75.65536930253901, 'Chair': 84.63056851731703, 'Earphone': 63.35416168336112, 'Guitar': 88.32705476172514, 'Knife': 86.84337485701583, 'Lamp': 79.05707145159344, 'Laptop': 96.27311698091073, 'Motorbike': 66.40445785667772, 'Mug': 86.20272356640095, 'Pistol': 80.98481261370823, 'Rocket': 61.7254883512663, 'Skateboard': 78.08764154035322, 'Table': 81.57502547026736} +================================================== + +EPOCH 75 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.385, iteration=0.238, train_acc=93.89, train_loss_seg=0.165, train_macc=86.26, train_miou=81.12] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.53it/s, test_acc=93.06, test_loss_seg=0.132, test_macc=85.52, test_miou=79.26] +================================================== +test_loss_seg = 0.13204242289066315 +test_acc = 93.06435266075621 +test_macc = 85.52724144953164 +test_miou = 79.26780384054021 +test_acc_per_class = {'Airplane': 90.70025849538858, 'Bag': 95.43528362203554, 'Cap': 96.07306828172317, 'Car': 90.4052391273118, 'Chair': 94.72313978603633, 'Earphone': 93.66968277998802, 'Guitar': 95.86069700919914, 'Knife': 92.81649526628729, 'Lamp': 91.06468898593265, 'Laptop': 98.11738318649002, 'Motorbike': 85.46549479166666, 'Mug': 99.04304282687039, 'Pistol': 95.43078257693361, 'Rocket': 79.07236783254316, 'Skateboard': 96.35996992275618, 'Table': 94.79204808093718} +test_macc_per_class = {'Airplane': 89.5921580228554, 'Bag': 77.5955139777634, 'Cap': 92.18597332283245, 'Car': 83.71889770938348, 'Chair': 92.12176142403304, 'Earphone': 69.58736678885846, 'Guitar': 92.75322019597321, 'Knife': 92.7706037215169, 
'Lamp': 89.56057623478361, 'Laptop': 98.1372004860254, 'Motorbike': 71.78719699197083, 'Mug': 92.14330549505573, 'Pistol': 81.95298368644673, 'Rocket': 66.72077568826978, 'Skateboard': 88.75451806164244, 'Table': 89.05381138509505} +test_miou_per_class = {'Airplane': 80.8514573536228, 'Bag': 74.91948398130158, 'Cap': 89.12903814724753, 'Car': 75.1602345150327, 'Chair': 85.05450246016748, 'Earphone': 65.47044528257814, 'Guitar': 88.9079324920521, 'Knife': 86.56393217694105, 'Lamp': 78.23253080389213, 'Laptop': 96.28750161240139, 'Motorbike': 63.4280796709342, 'Mug': 91.13279598845988, 'Pistol': 78.22153024356201, 'Rocket': 53.98345471321618, 'Skateboard': 78.15212012572212, 'Table': 82.78982188151245} +================================================== + +EPOCH 76 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.383, iteration=0.233, train_acc=94.62, train_loss_seg=0.137, train_macc=90.46, train_miou=84.97] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.55it/s, test_acc=92.97, test_loss_seg=0.124, test_macc=86.38, test_miou=79.73] +================================================== +test_loss_seg = 0.12437088787555695 +test_acc = 92.9709503513897 +test_macc = 86.38500057599977 +test_miou = 79.73104777069315 +test_acc_per_class = {'Airplane': 90.88245871778706, 'Bag': 95.94835262689226, 'Cap': 92.23014395349894, 'Car': 91.16284139391486, 'Chair': 94.80883098000525, 'Earphone': 94.09891148367225, 'Guitar': 96.11418433599862, 'Knife': 91.93012173077652, 'Lamp': 90.53041127596545, 'Laptop': 98.25336637506345, 'Motorbike': 86.13759957107843, 'Mug': 99.10303159351628, 'Pistol': 95.86555044142673, 'Rocket': 80.25185778489742, 'Skateboard': 95.54068207108726, 'Table': 94.67686128665459} +test_macc_per_class = {'Airplane': 90.15289968359052, 'Bag': 82.72470215855692, 'Cap': 85.6722193219344, 'Car': 85.40172823653836, 'Chair': 92.62107793092545, 'Earphone': 72.52307246386326, 
'Guitar': 94.24242509590633, 'Knife': 91.93325770327202, 'Lamp': 89.56787410095205, 'Laptop': 98.21968726018319, 'Motorbike': 79.84971616518789, 'Mug': 95.78027150900728, 'Pistol': 85.6498952491527, 'Rocket': 66.99756854584264, 'Skateboard': 82.19937062841453, 'Table': 88.62424316266906} +test_miou_per_class = {'Airplane': 81.09886677609654, 'Bag': 79.01372444350685, 'Cap': 80.6075321980393, 'Car': 76.7027238573233, 'Chair': 85.20312738040452, 'Earphone': 68.54324133758146, 'Guitar': 90.02137662703458, 'Knife': 85.06245176599082, 'Lamp': 79.63502002305044, 'Laptop': 96.54706763547057, 'Motorbike': 68.08617767464388, 'Mug': 92.07977574753727, 'Pistol': 81.46919617625858, 'Rocket': 55.587431921493234, 'Skateboard': 73.83195972813837, 'Table': 82.20709103852087} +================================================== + +EPOCH 77 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.384, iteration=0.234, train_acc=94.66, train_loss_seg=0.136, train_macc=88.74, train_miou=83.37] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.55it/s, test_acc=93.30, test_loss_seg=0.152, test_macc=87.29, test_miou=80.39] +================================================== +test_loss_seg = 0.15225756168365479 +test_acc = 93.3047479983827 +test_macc = 87.29179021076165 +test_miou = 80.39827298478326 +test_acc_per_class = {'Airplane': 90.6390837708498, 'Bag': 95.61865088065483, 'Cap': 95.16880372664842, 'Car': 90.82472753482708, 'Chair': 94.95613235463233, 'Earphone': 93.13870151770658, 'Guitar': 96.01143815017502, 'Knife': 92.90609857248876, 'Lamp': 90.7975394794288, 'Laptop': 98.13628352307913, 'Motorbike': 86.7502298146162, 'Mug': 99.14073635801307, 'Pistol': 95.55847869602516, 'Rocket': 82.27133265151852, 'Skateboard': 96.05310962116458, 'Table': 94.90462132229491} +test_macc_per_class = {'Airplane': 89.47592812367571, 'Bag': 79.83823175427489, 'Cap': 90.23032792994272, 'Car': 88.42178268108194, 
'Chair': 91.96255656816076, 'Earphone': 70.33495755442455, 'Guitar': 94.4824987729434, 'Knife': 92.90513932741608, 'Lamp': 90.29565117349262, 'Laptop': 98.09372839931699, 'Motorbike': 78.93974529306882, 'Mug': 96.20373008862167, 'Pistol': 85.46719045427923, 'Rocket': 68.95905668611947, 'Skateboard': 90.71515804153131, 'Table': 90.34296052383624} +test_miou_per_class = {'Airplane': 80.79171003472835, 'Bag': 76.72792281930516, 'Cap': 86.96941452133488, 'Car': 76.74198776844663, 'Chair': 85.44553516009394, 'Earphone': 65.5916855362211, 'Guitar': 89.80096345963466, 'Knife': 86.75130212360452, 'Lamp': 78.76009088812455, 'Laptop': 96.317319214838, 'Motorbike': 68.91771908854004, 'Mug': 92.4812414420112, 'Pistol': 80.49494041060453, 'Rocket': 59.23216227392081, 'Skateboard': 78.32041317915431, 'Table': 83.02795983596974} +================================================== +acc: 93.131845465478 -> 93.3047479983827, miou: 80.11268496674514 -> 80.39827298478326 + +EPOCH 78 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.381, iteration=0.233, train_acc=94.55, train_loss_seg=0.141, train_macc=89.88, train_miou=84.36] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:35<00:00, 2.54it/s, test_acc=93.38, test_loss_seg=0.118, test_macc=87.47, test_miou=80.75] +================================================== +test_loss_seg = 0.11885284632444382 +test_acc = 93.38952466037554 +test_macc = 87.47376673758077 +test_miou = 80.75561479818244 +test_acc_per_class = {'Airplane': 90.74407529988059, 'Bag': 95.76852791878171, 'Cap': 98.10781306856579, 'Car': 91.06668695671077, 'Chair': 94.85324224424664, 'Earphone': 91.5950197995077, 'Guitar': 96.12735334200774, 'Knife': 92.65084068923248, 'Lamp': 91.342512133846, 'Laptop': 98.10579866143105, 'Motorbike': 86.48309669018737, 'Mug': 99.21712604270712, 'Pistol': 95.80559797624183, 'Rocket': 81.76984770658736, 'Skateboard': 95.87945879458795, 'Table': 
94.71539724148636} +test_macc_per_class = {'Airplane': 88.94277529979978, 'Bag': 84.6732831757643, 'Cap': 97.67430021819195, 'Car': 86.46799281003538, 'Chair': 91.10199909616783, 'Earphone': 70.10878542771323, 'Guitar': 93.9804109547127, 'Knife': 92.64233160859212, 'Lamp': 90.75097034427111, 'Laptop': 98.01977839818736, 'Motorbike': 78.99167033940043, 'Mug': 94.52938361179018, 'Pistol': 87.6252334349872, 'Rocket': 65.82067695062698, 'Skateboard': 89.86066675932824, 'Table': 88.39000937172365} +test_miou_per_class = {'Airplane': 80.742959212211, 'Bag': 78.88241320080229, 'Cap': 94.69538045513967, 'Car': 76.76547982160095, 'Chair': 85.24347058326546, 'Earphone': 63.40326926758867, 'Guitar': 89.8158695982111, 'Knife': 86.30268360989554, 'Lamp': 79.90065147995985, 'Laptop': 96.25021225410497, 'Motorbike': 68.08453671214623, 'Mug': 92.86548763755454, 'Pistol': 82.40543742059234, 'Rocket': 57.05849225562227, 'Skateboard': 77.48755019673716, 'Table': 82.18594306548685} +================================================== +acc: 93.3047479983827 -> 93.38952466037554, macc: 87.42139484363707 -> 87.47376673758077, miou: 80.39827298478326 -> 80.75561479818244 + +EPOCH 79 / 100 +100%|█████████████████████████████| 438/438 [05:17<00:00, 1.38it/s, data_loading=0.378, iteration=0.234, train_acc=95.05, train_loss_seg=0.131, train_macc=87.81, train_miou=83.1 ] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.63it/s, test_acc=92.46, test_loss_seg=0.118, test_macc=84.93, test_miou=78.20] +================================================== +test_loss_seg = 0.11817304790019989 +test_acc = 92.4612757975796 +test_macc = 84.93344257771454 +test_miou = 78.20497023863263 +test_acc_per_class = {'Airplane': 90.67131559738375, 'Bag': 95.73568958893584, 'Cap': 92.12902625209458, 'Car': 91.15255501678479, 'Chair': 94.80507613214138, 'Earphone': 93.39426232400578, 'Guitar': 95.73619991244891, 'Knife': 86.5669700910273, 'Lamp': 90.98953359190753, 
'Laptop': 98.00219394994456, 'Motorbike': 86.26671900581157, 'Mug': 99.2172131147541, 'Pistol': 95.48045697731467, 'Rocket': 78.91993116712219, 'Skateboard': 95.68496161780122, 'Table': 94.62830842179545} +test_macc_per_class = {'Airplane': 89.10105203953998, 'Bag': 81.6996529211836, 'Cap': 84.38287897406848, 'Car': 84.02906195470187, 'Chair': 91.53007269202818, 'Earphone': 70.13268877416508, 'Guitar': 92.00745573771512, 'Knife': 86.48101658163864, 'Lamp': 90.77949431140088, 'Laptop': 97.94297717470342, 'Motorbike': 77.49474467216973, 'Mug': 94.99849767316118, 'Pistol': 83.05336898843208, 'Rocket': 57.945445872613334, 'Skateboard': 88.4450346864966, 'Table': 88.9116381894143} +test_miou_per_class = {'Airplane': 80.84040451114713, 'Bag': 78.10568147296479, 'Cap': 79.36632797312096, 'Car': 76.19409582185848, 'Chair': 84.99735344696862, 'Earphone': 65.61006931994163, 'Guitar': 88.17254064586827, 'Knife': 75.99852146634825, 'Lamp': 79.15136117179595, 'Laptop': 96.05859621179471, 'Motorbike': 68.00849341360873, 'Mug': 92.97840517108638, 'Pistol': 78.70294364905867, 'Rocket': 48.61748066115888, 'Skateboard': 76.28391475171506, 'Table': 82.1933341296855} +================================================== + +EPOCH 80 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.380, iteration=0.236, train_acc=95.15, train_loss_seg=0.136, train_macc=89.58, train_miou=84.36] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=92.78, test_loss_seg=0.120, test_macc=86.20, test_miou=79.26] +================================================== +test_loss_seg = 0.12038599699735641 +test_acc = 92.78651144586685 +test_macc = 86.2046934668664 +test_miou = 79.26165595431695 +test_acc_per_class = {'Airplane': 91.17509594338034, 'Bag': 95.31951438497872, 'Cap': 93.56176650798888, 'Car': 91.03134400473823, 'Chair': 94.87209680494342, 'Earphone': 92.29801069817852, 'Guitar': 96.26243365380765, 
'Knife': 89.26108003712157, 'Lamp': 91.2138218566443, 'Laptop': 98.10222370522202, 'Motorbike': 86.1739813112745, 'Mug': 99.2248062015504, 'Pistol': 95.43770423896534, 'Rocket': 80.46420763756453, 'Skateboard': 95.34494866735778, 'Table': 94.84114748015323} +test_macc_per_class = {'Airplane': 89.97417146410025, 'Bag': 79.9076225018702, 'Cap': 87.0548956273095, 'Car': 85.25914612702117, 'Chair': 91.13731301432779, 'Earphone': 71.04223639697476, 'Guitar': 94.78757857015728, 'Knife': 89.22859384496431, 'Lamp': 90.33195793827113, 'Laptop': 98.0436369985853, 'Motorbike': 78.98634339118004, 'Mug': 95.55364436747504, 'Pistol': 84.24965312654513, 'Rocket': 67.54200547834822, 'Skateboard': 87.43227360281138, 'Table': 88.74402301992104} +test_miou_per_class = {'Airplane': 81.66815266239877, 'Bag': 75.49556674886895, 'Cap': 82.80588604918972, 'Car': 76.25200676439664, 'Chair': 85.092525475393, 'Earphone': 65.15219874951073, 'Guitar': 90.41831290168865, 'Knife': 80.46761174951301, 'Lamp': 79.61999800079043, 'Laptop': 96.25105257273677, 'Motorbike': 68.49934621131153, 'Mug': 93.0212123732129, 'Pistol': 79.50043513283894, 'Rocket': 56.93180660743762, 'Skateboard': 74.39952046605902, 'Table': 82.61086280372457} +================================================== + +EPOCH 81 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.381, iteration=0.234, train_acc=94.96, train_loss_seg=0.137, train_macc=90.45, train_miou=85.33] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.23, test_loss_seg=0.093, test_macc=86.51, test_miou=80.08] +================================================== +test_loss_seg = 0.09392178803682327 +test_acc = 93.2302702808586 +test_macc = 86.5188119900699 +test_miou = 80.08840357158344 +test_acc_per_class = {'Airplane': 90.99603270710077, 'Bag': 94.91620740074232, 'Cap': 94.76846424384526, 'Car': 90.94047622790772, 'Chair': 94.7530939801501, 'Earphone': 
93.15484501535883, 'Guitar': 96.19828446777832, 'Knife': 92.04170243159761, 'Lamp': 90.25468207184989, 'Laptop': 97.94804497599094, 'Motorbike': 86.15674785539215, 'Mug': 99.16990226268577, 'Pistol': 95.8496947136664, 'Rocket': 83.64029837001566, 'Skateboard': 95.77778511087645, 'Table': 95.11806265877962} +test_macc_per_class = {'Airplane': 89.65528501725069, 'Bag': 75.95910427799485, 'Cap': 89.03979621621048, 'Car': 83.46170302854294, 'Chair': 91.81877429387895, 'Earphone': 74.49881297522597, 'Guitar': 95.04013657385264, 'Knife': 92.04292989093942, 'Lamp': 88.75682476778296, 'Laptop': 98.00515647964443, 'Motorbike': 75.32036297825448, 'Mug': 94.24326789323328, 'Pistol': 89.86212953382858, 'Rocket': 76.2752407127929, 'Skateboard': 80.71066749783786, 'Table': 89.61079970384787} +test_miou_per_class = {'Airplane': 80.88795026114784, 'Bag': 73.19191379029716, 'Cap': 85.5353030829303, 'Car': 75.82666911823102, 'Chair': 84.87322747927176, 'Earphone': 69.45270872127128, 'Guitar': 90.28606934190564, 'Knife': 85.25242273179249, 'Lamp': 76.36575542848266, 'Laptop': 95.96005716314376, 'Motorbike': 66.74958665722338, 'Mug': 92.40555080227712, 'Pistol': 82.77433271109726, 'Rocket': 64.84264325198376, 'Skateboard': 73.47122930923452, 'Table': 83.53903729504503} +================================================== +loss_seg: 0.10609838366508484 -> 0.09392178803682327 + +EPOCH 82 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.382, iteration=0.232, train_acc=95.20, train_loss_seg=0.128, train_macc=89.20, train_miou=83.99] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.63it/s, test_acc=92.85, test_loss_seg=0.149, test_macc=86.74, test_miou=79.54] +================================================== +test_loss_seg = 0.14912843704223633 +test_acc = 92.85367904557583 +test_macc = 86.74303179288145 +test_miou = 79.54684933932674 +test_acc_per_class = {'Airplane': 91.03220562836533, 'Bag': 
95.22657635865183, 'Cap': 94.92790673893036, 'Car': 90.94464295666928, 'Chair': 94.63829637945551, 'Earphone': 93.13718589427747, 'Guitar': 96.20427305244088, 'Knife': 92.9368520263902, 'Lamp': 89.97513079327179, 'Laptop': 98.20399985689242, 'Motorbike': 85.74221759901371, 'Mug': 99.01125729011258, 'Pistol': 95.41382154805005, 'Rocket': 78.31019754404699, 'Skateboard': 96.21944286526436, 'Table': 93.73485819738072} +test_macc_per_class = {'Airplane': 88.01915928710109, 'Bag': 76.68098685249281, 'Cap': 89.92271622421636, 'Car': 85.92866385695895, 'Chair': 91.45173599095634, 'Earphone': 71.43822690759343, 'Guitar': 94.46186188013188, 'Knife': 92.90867824236616, 'Lamp': 91.18106176435131, 'Laptop': 98.19440336425664, 'Motorbike': 83.60971793387981, 'Mug': 95.16086512796423, 'Pistol': 83.89476550906923, 'Rocket': 68.01821281431867, 'Skateboard': 86.57890457979495, 'Table': 90.43854835065139} +test_miou_per_class = {'Airplane': 80.90386640559728, 'Bag': 74.13184645622901, 'Cap': 86.41665410319112, 'Car': 76.43615038240965, 'Chair': 84.79600685754114, 'Earphone': 67.1997963312802, 'Guitar': 90.24324230413062, 'Knife': 86.78349430302731, 'Lamp': 78.05820262082611, 'Laptop': 96.45024815397771, 'Motorbike': 69.19914299877775, 'Mug': 91.42708952485397, 'Pistol': 80.24717252047267, 'Rocket': 53.01248594386389, 'Skateboard': 77.68817049025442, 'Table': 79.75602003279515} +================================================== + +EPOCH 83 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.377, iteration=0.234, train_acc=95.55, train_loss_seg=0.127, train_macc=90.93, train_miou=86.11] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.05, test_loss_seg=0.123, test_macc=86.65, test_miou=79.98] +================================================== +test_loss_seg = 0.12374790012836456 +test_acc = 93.05331358686607 +test_macc = 86.65310503834752 +test_miou = 79.98882076384447 
+test_acc_per_class = {'Airplane': 90.32542017746519, 'Bag': 96.15253272623791, 'Cap': 92.65814487053426, 'Car': 91.11328915025358, 'Chair': 94.89553311582611, 'Earphone': 91.13500176242509, 'Guitar': 96.14740994886274, 'Knife': 93.01321357630192, 'Lamp': 90.71550393600104, 'Laptop': 98.08040069479145, 'Motorbike': 86.93899437917133, 'Mug': 99.20376747288046, 'Pistol': 95.65112345977289, 'Rocket': 82.79872444194335, 'Skateboard': 95.57606301314999, 'Table': 94.44789466423994} +test_macc_per_class = {'Airplane': 88.99095584796648, 'Bag': 82.20366729840869, 'Cap': 85.03816469745557, 'Car': 85.30548822311842, 'Chair': 91.01766578839442, 'Earphone': 72.69943398498793, 'Guitar': 94.73364107466665, 'Knife': 93.00934762291898, 'Lamp': 90.46960159541626, 'Laptop': 97.99937359180113, 'Motorbike': 78.24939304743891, 'Mug': 94.45057076804322, 'Pistol': 86.46757024199727, 'Rocket': 72.96027336546122, 'Skateboard': 87.28942153642035, 'Table': 85.56511192906467} +test_miou_per_class = {'Airplane': 80.07256724818052, 'Bag': 79.26270820297259, 'Cap': 80.48402595177618, 'Car': 76.43603036184643, 'Chair': 85.05919042565574, 'Earphone': 65.10377938481008, 'Guitar': 89.91906102934179, 'Knife': 86.9361357062451, 'Lamp': 78.9493889241436, 'Laptop': 96.20710540065383, 'Motorbike': 69.16428168441483, 'Mug': 92.77835565032666, 'Pistol': 81.34384554606643, 'Rocket': 62.22911214941872, 'Skateboard': 75.3363359436623, 'Table': 80.5392086119967} +================================================== + +EPOCH 84 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.379, iteration=0.230, train_acc=95.23, train_loss_seg=0.124, train_macc=90.72, train_miou=85.26] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.18, test_loss_seg=0.118, test_macc=87.23, test_miou=80.36] +================================================== +test_loss_seg = 0.11829501390457153 +test_acc = 93.18433750848855 
+test_macc = 87.235969583182 +test_miou = 80.36118939234017 +test_acc_per_class = {'Airplane': 90.94243305368033, 'Bag': 95.25529593257328, 'Cap': 93.35950278050376, 'Car': 91.11793013363285, 'Chair': 94.87956643492174, 'Earphone': 92.20425413013866, 'Guitar': 96.18212942050012, 'Knife': 92.15254813487034, 'Lamp': 90.09596649172961, 'Laptop': 98.10243161457772, 'Motorbike': 86.77929739876643, 'Mug': 99.03467905428084, 'Pistol': 95.49619719600476, 'Rocket': 84.23439445136049, 'Skateboard': 96.22729631551634, 'Table': 94.88547759275974} +test_macc_per_class = {'Airplane': 89.4191221977074, 'Bag': 80.79324698698814, 'Cap': 86.7337434045937, 'Car': 85.25519565657478, 'Chair': 90.53315453119866, 'Earphone': 72.91881730100708, 'Guitar': 93.85177421908341, 'Knife': 92.11117605203911, 'Lamp': 89.71125701022865, 'Laptop': 98.13128301918978, 'Motorbike': 79.76891829944078, 'Mug': 92.7652491519541, 'Pistol': 84.30340925605346, 'Rocket': 79.40371464627201, 'Skateboard': 90.83493098952337, 'Table': 89.24052060905728} +test_miou_per_class = {'Airplane': 81.32181479191446, 'Bag': 76.63636104672923, 'Cap': 82.46260631949124, 'Car': 76.53690882515936, 'Chair': 85.12279870256432, 'Earphone': 66.14504559398746, 'Guitar': 89.80160072593077, 'Knife': 85.40756376154748, 'Lamp': 77.7880727602585, 'Laptop': 96.2592035185952, 'Motorbike': 69.35262085322151, 'Mug': 91.12862090462717, 'Pistol': 80.31680308450917, 'Rocket': 65.78136296654814, 'Skateboard': 78.87764008214103, 'Table': 82.84000634021746} +================================================== + +EPOCH 85 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.377, iteration=0.230, train_acc=94.63, train_loss_seg=0.140, train_macc=90.44, train_miou=84.81] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.63it/s, test_acc=92.58, test_loss_seg=0.115, test_macc=86.70, test_miou=79.50] +================================================== +test_loss_seg 
= 0.1150493398308754 +test_acc = 92.58341196384566 +test_macc = 86.70767506819344 +test_miou = 79.5087678667701 +test_acc_per_class = {'Airplane': 91.12133774441686, 'Bag': 95.60684470970423, 'Cap': 95.13658841038935, 'Car': 91.21919444142873, 'Chair': 94.81828645102937, 'Earphone': 82.8890527868507, 'Guitar': 96.21389623544843, 'Knife': 91.83546586090698, 'Lamp': 89.67490756826585, 'Laptop': 98.15656082631982, 'Motorbike': 86.33925125434196, 'Mug': 99.2040932347925, 'Pistol': 95.75008329790778, 'Rocket': 83.51760579015142, 'Skateboard': 95.36183245902707, 'Table': 94.48959035054936} +test_macc_per_class = {'Airplane': 89.31776832759589, 'Bag': 85.01417097660435, 'Cap': 89.9265479957261, 'Car': 84.96326929497681, 'Chair': 90.09889541061102, 'Earphone': 68.71844442325111, 'Guitar': 94.7784965450369, 'Knife': 91.83517779955821, 'Lamp': 88.82348640432524, 'Laptop': 98.15861007606645, 'Motorbike': 75.03223067832802, 'Mug': 94.81612834964312, 'Pistol': 85.65934856068053, 'Rocket': 72.38651117207687, 'Skateboard': 90.45425612220762, 'Table': 87.3394589544071} +test_miou_per_class = {'Airplane': 81.4456440188381, 'Bag': 79.39806288126653, 'Cap': 86.52215206000265, 'Car': 76.72053866278033, 'Chair': 84.91424363748587, 'Earphone': 54.38697354329024, 'Guitar': 90.33304259304596, 'Knife': 84.90150392271529, 'Lamp': 76.5917658454053, 'Laptop': 96.35989230851025, 'Motorbike': 66.98234439133799, 'Mug': 92.79409886274688, 'Pistol': 81.13729786452099, 'Rocket': 62.617425446049616, 'Skateboard': 76.18723575173445, 'Table': 80.84806407859105} +================================================== + +EPOCH 86 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.378, iteration=0.239, train_acc=95.14, train_loss_seg=0.129, train_macc=90.11, train_miou=85.20] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:33<00:00, 2.65it/s, test_acc=93.00, test_loss_seg=0.111, test_macc=87.59, test_miou=80.18] 
+================================================== +test_loss_seg = 0.11165614426136017 +test_acc = 93.00872606869858 +test_macc = 87.59876800041448 +test_miou = 80.1856342105089 +test_acc_per_class = {'Airplane': 91.17744672500173, 'Bag': 96.07695776341501, 'Cap': 92.74903367019047, 'Car': 91.28363237410295, 'Chair': 94.80433094043097, 'Earphone': 90.4657184407477, 'Guitar': 95.82164362675655, 'Knife': 90.84700725757887, 'Lamp': 91.14650205466958, 'Laptop': 98.07448045558903, 'Motorbike': 86.17628592127505, 'Mug': 99.04791655348508, 'Pistol': 95.7701849345963, 'Rocket': 83.98329928044772, 'Skateboard': 95.8340197015122, 'Table': 94.88115739937808} +test_macc_per_class = {'Airplane': 89.7049342962199, 'Bag': 84.56883950499812, 'Cap': 85.65614317507017, 'Car': 86.05523206449341, 'Chair': 91.73124137045936, 'Earphone': 70.77288691157034, 'Guitar': 94.33666152764, 'Knife': 90.80823413640755, 'Lamp': 91.03189522719165, 'Laptop': 98.01066573910083, 'Motorbike': 81.78258931873013, 'Mug': 95.55748213499507, 'Pistol': 85.7799461324334, 'Rocket': 80.13453664141218, 'Skateboard': 86.74462785691604, 'Table': 88.90437196899353} +test_miou_per_class = {'Airplane': 81.60617968305299, 'Bag': 80.36780568788954, 'Cap': 80.98821086695254, 'Car': 77.11060561671765, 'Chair': 85.10936560988782, 'Earphone': 62.777024382504635, 'Guitar': 88.61576759992292, 'Knife': 83.15197760933935, 'Lamp': 79.24029121546793, 'Laptop': 96.19778215630494, 'Motorbike': 70.06915211788667, 'Mug': 91.70399616342795, 'Pistol': 81.29015983785308, 'Rocket': 66.15467611592058, 'Skateboard': 75.80995479369254, 'Table': 82.77719791132128} +================================================== +macc: 87.47376673758077 -> 87.59876800041448 + +EPOCH 87 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.379, iteration=0.236, train_acc=94.83, train_loss_seg=0.127, train_macc=89.01, train_miou=83.24] +100%|████████████████████████████████████████████████████████████████████████| 
90/90 [00:33<00:00, 2.65it/s, test_acc=93.16, test_loss_seg=2.787, test_macc=86.67, test_miou=80.09] +================================================== +test_loss_seg = 2.7872393131256104 +test_acc = 93.1652741091659 +test_macc = 86.6723169550004 +test_miou = 80.09818244041884 +test_acc_per_class = {'Airplane': 91.34345503683512, 'Bag': 96.01676357320848, 'Cap': 93.04629542680064, 'Car': 91.17030978147535, 'Chair': 94.78147202592343, 'Earphone': 93.70619418650158, 'Guitar': 96.2923548573709, 'Knife': 92.24896802108718, 'Lamp': 90.93381831654773, 'Laptop': 97.73743500349235, 'Motorbike': 86.57207129261117, 'Mug': 99.17181722674744, 'Pistol': 95.38110390267487, 'Rocket': 81.2573284026337, 'Skateboard': 96.38962211725479, 'Table': 94.59537657549014} +test_macc_per_class = {'Airplane': 89.6307285609302, 'Bag': 83.01295409263223, 'Cap': 86.2362102407157, 'Car': 85.76755556747358, 'Chair': 91.20202811736759, 'Earphone': 71.41222126632614, 'Guitar': 94.56623210465308, 'Knife': 92.20485538596823, 'Lamp': 89.70407337645747, 'Laptop': 97.83457507853257, 'Motorbike': 83.71648363570304, 'Mug': 95.93090301390477, 'Pistol': 82.04277333671773, 'Rocket': 67.48395845328513, 'Skateboard': 88.16568674777191, 'Table': 87.84583230156694} +test_miou_per_class = {'Airplane': 81.82459138815776, 'Bag': 79.12114648751862, 'Cap': 81.61921924572528, 'Car': 76.8944499069875, 'Chair': 84.6842322182774, 'Earphone': 67.06466840771067, 'Guitar': 90.39938043831134, 'Knife': 85.57335617714106, 'Lamp': 80.54172277510384, 'Laptop': 95.5604850234959, 'Motorbike': 71.31894549919066, 'Mug': 92.76407172402487, 'Pistol': 78.48940300380417, 'Rocket': 56.19662544509821, 'Skateboard': 78.22493914085896, 'Table': 81.29368216529528} +================================================== + +EPOCH 88 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.382, iteration=0.235, train_acc=94.82, train_loss_seg=0.132, train_macc=89.29, train_miou=84.13] 
+100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:33<00:00, 2.65it/s, test_acc=92.97, test_loss_seg=0.089, test_macc=87.10, test_miou=80.07] +================================================== +test_loss_seg = 0.08968085050582886 +test_acc = 92.97788522309538 +test_macc = 87.1099039389467 +test_miou = 80.07005470681347 +test_acc_per_class = {'Airplane': 90.93112745133509, 'Bag': 95.34629587760946, 'Cap': 92.67034657497058, 'Car': 91.22209151059076, 'Chair': 94.91245603515084, 'Earphone': 93.11621284926144, 'Guitar': 96.01013260332344, 'Knife': 92.05964411928824, 'Lamp': 89.67969102097028, 'Laptop': 98.10513047666046, 'Motorbike': 86.46257265702056, 'Mug': 99.05075424053729, 'Pistol': 96.02157556601504, 'Rocket': 80.94258629940977, 'Skateboard': 96.34697957979613, 'Table': 94.76856670758632} +test_macc_per_class = {'Airplane': 88.71213304781693, 'Bag': 78.98195728266914, 'Cap': 87.65504089737706, 'Car': 84.65568313726922, 'Chair': 92.40941076112681, 'Earphone': 74.53870410734899, 'Guitar': 93.88071608963243, 'Knife': 92.05471251993607, 'Lamp': 88.22858341970961, 'Laptop': 98.02070287495827, 'Motorbike': 74.4068323739551, 'Mug': 93.42229120923156, 'Pistol': 88.0090587201941, 'Rocket': 83.94957569527686, 'Skateboard': 88.18774320048661, 'Table': 86.6453176861583} +test_miou_per_class = {'Airplane': 80.81503776585586, 'Bag': 75.93791242692629, 'Cap': 81.86526179502349, 'Car': 76.43333120116607, 'Chair': 85.16314090595601, 'Earphone': 68.46597604833391, 'Guitar': 89.65877577832782, 'Knife': 85.28495059405257, 'Lamp': 75.24510089850685, 'Laptop': 96.2542908261269, 'Motorbike': 67.04448376277865, 'Mug': 91.2711846137616, 'Pistol': 83.54048800349281, 'Rocket': 64.37640198914215, 'Skateboard': 78.36782116867002, 'Table': 81.39671753089445} +================================================== +loss_seg: 0.09392178803682327 -> 0.08968085050582886 + +EPOCH 89 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, 
data_loading=0.383, iteration=0.238, train_acc=95.15, train_loss_seg=0.125, train_macc=90.75, train_miou=85.24] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:33<00:00, 2.65it/s, test_acc=92.60, test_loss_seg=0.114, test_macc=85.70, test_miou=79.19] +================================================== +test_loss_seg = 0.11469707638025284 +test_acc = 92.60705652755442 +test_macc = 85.70805008690316 +test_miou = 79.19103749140974 +test_acc_per_class = {'Airplane': 91.2799272565902, 'Bag': 96.06877745796092, 'Cap': 94.31395622499412, 'Car': 91.07563375505899, 'Chair': 94.92916306390416, 'Earphone': 88.60696340116175, 'Guitar': 96.1182366955942, 'Knife': 91.50463581094088, 'Lamp': 90.71447275370116, 'Laptop': 98.16451067604991, 'Motorbike': 86.6794496908106, 'Mug': 99.08128539337933, 'Pistol': 95.95210881890299, 'Rocket': 76.43491444661036, 'Skateboard': 95.93342344836572, 'Table': 94.85544554684576} +test_macc_per_class = {'Airplane': 89.57905844157743, 'Bag': 81.18271489787719, 'Cap': 89.23221787844449, 'Car': 84.62044056703934, 'Chair': 90.6264637873171, 'Earphone': 71.96969667925966, 'Guitar': 94.563986511869, 'Knife': 91.48175578253108, 'Lamp': 85.77278381796222, 'Laptop': 98.11770667976761, 'Motorbike': 72.1426036167948, 'Mug': 92.30443903626295, 'Pistol': 85.81080689769918, 'Rocket': 66.7549099143807, 'Skateboard': 88.15123216720994, 'Table': 89.01798471445787} +test_miou_per_class = {'Airplane': 81.60660418087954, 'Bag': 78.68716197473354, 'Cap': 85.02970411570178, 'Car': 76.06149250311822, 'Chair': 85.29067425977284, 'Earphone': 61.26220976929737, 'Guitar': 90.00133220880743, 'Knife': 84.30503972839553, 'Lamp': 77.59128816681124, 'Laptop': 96.37218307989922, 'Motorbike': 65.49960012039918, 'Mug': 91.43100755937431, 'Pistol': 81.56295082568434, 'Rocket': 52.83565088993818, 'Skateboard': 76.7970208659727, 'Table': 82.72267961377041} +================================================== + +EPOCH 90 / 100 
+100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.380, iteration=0.231, train_acc=94.85, train_loss_seg=0.119, train_macc=90.47, train_miou=84.63] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.21, test_loss_seg=0.106, test_macc=87.90, test_miou=80.55] +================================================== +test_loss_seg = 0.10670359432697296 +test_acc = 93.21122525991969 +test_macc = 87.90512718023592 +test_miou = 80.55905524886462 +test_acc_per_class = {'Airplane': 90.92713079772676, 'Bag': 96.16311469237141, 'Cap': 94.55587392550143, 'Car': 91.22016648089443, 'Chair': 94.68442027995975, 'Earphone': 93.0275687435358, 'Guitar': 96.31574071214601, 'Knife': 91.68879299087794, 'Lamp': 89.7405844522981, 'Laptop': 98.03113062725359, 'Motorbike': 87.30181525735294, 'Mug': 98.82296484890495, 'Pistol': 95.91094645123667, 'Rocket': 81.46236755571795, 'Skateboard': 96.42255134604369, 'Table': 95.10443499689349} +test_macc_per_class = {'Airplane': 89.55106641281898, 'Bag': 83.14725241079867, 'Cap': 89.47782267241138, 'Car': 87.70710296658754, 'Chair': 92.04627547826291, 'Earphone': 72.79498270645858, 'Guitar': 94.68263606763216, 'Knife': 91.66042584377135, 'Lamp': 89.33359237406437, 'Laptop': 97.9271682147707, 'Motorbike': 77.68227956271053, 'Mug': 91.42691485653043, 'Pistol': 89.32185538527267, 'Rocket': 80.32158629143574, 'Skateboard': 89.26007707285962, 'Table': 90.14099656738918} +test_miou_per_class = {'Airplane': 81.04211113700924, 'Bag': 79.70865061489636, 'Cap': 85.55018230256628, 'Car': 76.92854983853802, 'Chair': 84.75873456379614, 'Earphone': 67.6884651576405, 'Guitar': 90.52575695909142, 'Knife': 84.62786664736923, 'Lamp': 75.83369635761974, 'Laptop': 96.10978314491341, 'Motorbike': 69.47637019172448, 'Mug': 89.25673319759049, 'Pistol': 83.14836535201376, 'Rocket': 61.701946480635684, 'Skateboard': 78.95875093544012, 'Table': 83.62892110098925} 
+================================================== +macc: 87.59876800041448 -> 87.90512718023592 + +EPOCH 91 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.380, iteration=0.233, train_acc=94.82, train_loss_seg=0.127, train_macc=90.38, train_miou=85.39] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.08, test_loss_seg=0.143, test_macc=87.01, test_miou=80.04] +================================================== +test_loss_seg = 0.14328664541244507 +test_acc = 93.08578496784574 +test_macc = 87.01605375246184 +test_miou = 80.04563215539382 +test_acc_per_class = {'Airplane': 91.29300373107286, 'Bag': 95.48421095301795, 'Cap': 92.57903031426717, 'Car': 91.090709894073, 'Chair': 94.71026406855506, 'Earphone': 91.7924961985926, 'Guitar': 96.2468803696974, 'Knife': 92.68442775802481, 'Lamp': 90.0179843202573, 'Laptop': 97.87081725061596, 'Motorbike': 86.92842371323529, 'Mug': 99.01707610562039, 'Pistol': 95.97379242408623, 'Rocket': 83.96138419923514, 'Skateboard': 95.06798763256758, 'Table': 94.65407055261306} +test_macc_per_class = {'Airplane': 89.42093906618132, 'Bag': 80.48119048490318, 'Cap': 85.18386630706814, 'Car': 84.75430559403168, 'Chair': 90.37242933239979, 'Earphone': 69.47929630524298, 'Guitar': 94.54941387240972, 'Knife': 92.68696055112625, 'Lamp': 87.32075723014081, 'Laptop': 97.75568372546326, 'Motorbike': 84.16283264329289, 'Mug': 94.92940540055757, 'Pistol': 87.26501197008702, 'Rocket': 76.78195137835611, 'Skateboard': 89.08550935995429, 'Table': 88.02730681817467} +test_miou_per_class = {'Airplane': 81.74461642006929, 'Bag': 77.39952076908598, 'Cap': 80.55158705597569, 'Car': 75.95256497249883, 'Chair': 84.69022901271785, 'Earphone': 63.21350905842452, 'Guitar': 90.22122610057664, 'Knife': 86.36399427688588, 'Lamp': 78.36262659445775, 'Laptop': 95.79997277231566, 'Motorbike': 70.86012264776967, 'Mug': 91.41505393798681, 'Pistol': 
82.50159081587206, 'Rocket': 65.05584827506722, 'Skateboard': 74.60708576820782, 'Table': 81.99056600838945} +================================================== + +EPOCH 92 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.378, iteration=0.235, train_acc=94.87, train_loss_seg=0.127, train_macc=89.36, train_miou=83.36] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.27, test_loss_seg=0.099, test_macc=87.42, test_miou=80.53] +================================================== +test_loss_seg = 0.09936504065990448 +test_acc = 93.27866306128325 +test_macc = 87.42907139566856 +test_miou = 80.53359235730746 +test_acc_per_class = {'Airplane': 90.71603030460878, 'Bag': 95.5511486941555, 'Cap': 93.50286499029218, 'Car': 91.1799079945294, 'Chair': 94.91112648912083, 'Earphone': 92.91794585542841, 'Guitar': 96.14696416747509, 'Knife': 92.33005450786291, 'Lamp': 89.87701125034104, 'Laptop': 98.14570169827901, 'Motorbike': 86.72587590995848, 'Mug': 99.28965746591287, 'Pistol': 95.85655911270847, 'Rocket': 84.06296114117069, 'Skateboard': 96.1730421934001, 'Table': 95.07175720528801} +test_macc_per_class = {'Airplane': 90.54355775237532, 'Bag': 81.89054624532137, 'Cap': 87.12241597302746, 'Car': 85.08915114952632, 'Chair': 91.00489966976512, 'Earphone': 71.77936097721012, 'Guitar': 94.23820157897276, 'Knife': 92.32529287511251, 'Lamp': 88.39247729842309, 'Laptop': 98.11082816757803, 'Motorbike': 77.98041589904238, 'Mug': 95.26045878274016, 'Pistol': 85.3292717963067, 'Rocket': 78.61164455961568, 'Skateboard': 91.42550524952355, 'Table': 89.76111435615616} +test_miou_per_class = {'Airplane': 80.76822305319175, 'Bag': 77.9969258613679, 'Cap': 82.92681717584024, 'Car': 76.44574144451579, 'Chair': 85.19192384247685, 'Earphone': 66.81005493062231, 'Guitar': 90.15921201513262, 'Knife': 85.73623795144478, 'Lamp': 75.8240841464974, 'Laptop': 96.33878763378267, 'Motorbike': 
68.3160187472466, 'Mug': 93.45965864744312, 'Pistol': 81.24648323641985, 'Rocket': 64.8429431012464, 'Skateboard': 78.981751845811, 'Table': 83.49261408388004} +================================================== + +EPOCH 93 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.382, iteration=0.237, train_acc=94.46, train_loss_seg=0.129, train_macc=89.36, train_miou=83.89] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.63it/s, test_acc=92.81, test_loss_seg=0.146, test_macc=86.83, test_miou=79.70] +================================================== +test_loss_seg = 0.14603674411773682 +test_acc = 92.81652878626542 +test_macc = 86.83488444415204 +test_miou = 79.70096955284632 +test_acc_per_class = {'Airplane': 91.12508298625588, 'Bag': 95.83570156492743, 'Cap': 92.70348295772025, 'Car': 90.89882726598823, 'Chair': 94.95276428107951, 'Earphone': 87.84518680697789, 'Guitar': 96.25773933605053, 'Knife': 92.0160498888185, 'Lamp': 90.33859932205087, 'Laptop': 98.19201772251402, 'Motorbike': 86.5661339647784, 'Mug': 99.26117969453139, 'Pistol': 95.61619636465065, 'Rocket': 83.73073436083409, 'Skateboard': 95.19960936842283, 'Table': 94.52515469464622} +test_macc_per_class = {'Airplane': 90.35065402452231, 'Bag': 80.36752482717304, 'Cap': 85.39443953237055, 'Car': 82.4741693549508, 'Chair': 91.40772403814672, 'Earphone': 72.1652401683305, 'Guitar': 94.80406371023818, 'Knife': 91.97297448244399, 'Lamp': 87.16948595497323, 'Laptop': 98.1417406079363, 'Motorbike': 76.6361310158059, 'Mug': 95.20541582824119, 'Pistol': 84.56035437820468, 'Rocket': 83.62345807036942, 'Skateboard': 88.59923584200233, 'Table': 86.4855392707233} +test_miou_per_class = {'Airplane': 81.85993609946343, 'Bag': 77.59404926505628, 'Cap': 80.82144361782748, 'Car': 75.35211670643011, 'Chair': 85.56766275577736, 'Earphone': 60.73439617652405, 'Guitar': 90.32061437809688, 'Knife': 85.18183242362879, 'Lamp': 
78.24044431100249, 'Laptop': 96.42451851011934, 'Motorbike': 68.0012685243909, 'Mug': 93.25393919549525, 'Pistol': 80.0079606420931, 'Rocket': 66.42584044498216, 'Skateboard': 74.60135707907443, 'Table': 80.82813271557887} +================================================== + +EPOCH 94 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.383, iteration=0.230, train_acc=94.39, train_loss_seg=0.133, train_macc=89.48, train_miou=83.36] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.63it/s, test_acc=92.85, test_loss_seg=0.115, test_macc=86.97, test_miou=79.65] +================================================== +test_loss_seg = 0.11558126658201218 +test_acc = 92.85348434111364 +test_macc = 86.9709719790031 +test_miou = 79.6544670434743 +test_acc_per_class = {'Airplane': 91.08334098517874, 'Bag': 94.8794724075197, 'Cap': 90.49543617456065, 'Car': 91.11493726077963, 'Chair': 94.92446225852473, 'Earphone': 91.76175023552811, 'Guitar': 96.11971920158375, 'Knife': 91.48449982041342, 'Lamp': 89.06578530225237, 'Laptop': 98.12184221332207, 'Motorbike': 86.78540291443517, 'Mug': 99.31101835500615, 'Pistol': 95.61626718164263, 'Rocket': 84.14955946151437, 'Skateboard': 95.7025654382081, 'Table': 95.03969024734872} +test_macc_per_class = {'Airplane': 90.07764283424264, 'Bag': 85.81495778137818, 'Cap': 82.15098933644285, 'Car': 84.84020716214731, 'Chair': 90.61421967916878, 'Earphone': 73.1585086403548, 'Guitar': 94.28790648188645, 'Knife': 91.41851198465312, 'Lamp': 89.42258438367298, 'Laptop': 98.07212769633402, 'Motorbike': 77.71033042060644, 'Mug': 96.16295689637829, 'Pistol': 84.92512844734563, 'Rocket': 74.991049288561, 'Skateboard': 88.27556500838621, 'Table': 89.61286562249107} +test_miou_per_class = {'Airplane': 81.55551547472136, 'Bag': 77.18890252996175, 'Cap': 76.38674629096212, 'Car': 76.24955944952066, 'Chair': 84.97961156978468, 'Earphone': 66.18228483806475, 'Guitar': 
89.93809791957892, 'Knife': 84.25015070573426, 'Lamp': 74.85312108619114, 'Laptop': 96.28992663577411, 'Motorbike': 68.73858862750009, 'Mug': 93.75772987411179, 'Pistol': 80.37308707117795, 'Rocket': 64.25819767931713, 'Skateboard': 75.95197482857127, 'Table': 83.51797811461671} +================================================== + +EPOCH 95 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.381, iteration=0.233, train_acc=95.58, train_loss_seg=0.130, train_macc=90.19, train_miou=85.16] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.10, test_loss_seg=0.136, test_macc=86.78, test_miou=80.00] +================================================== +test_loss_seg = 0.13685207068920135 +test_acc = 93.10178777385265 +test_macc = 86.78739465025231 +test_miou = 80.00258100180159 +test_acc_per_class = {'Airplane': 91.34933720719086, 'Bag': 95.2391713747646, 'Cap': 93.4859886431836, 'Car': 91.37187122698755, 'Chair': 94.88479746764524, 'Earphone': 93.41892367217727, 'Guitar': 96.07985749419066, 'Knife': 91.55502199889851, 'Lamp': 89.30681842570269, 'Laptop': 98.17062266782705, 'Motorbike': 87.17818606566009, 'Mug': 99.01802311484684, 'Pistol': 95.47437484812367, 'Rocket': 81.8440853034148, 'Skateboard': 96.3421579315619, 'Table': 94.90936693946716} +test_macc_per_class = {'Airplane': 88.8548628020447, 'Bag': 78.39795363861309, 'Cap': 87.60275331607691, 'Car': 87.30896186182679, 'Chair': 91.60552124255385, 'Earphone': 73.10368376105573, 'Guitar': 93.80575418266034, 'Knife': 91.49712193197594, 'Lamp': 89.20608296841256, 'Laptop': 98.19315502289795, 'Motorbike': 82.17271052539813, 'Mug': 95.11769453058172, 'Pistol': 83.55762169846788, 'Rocket': 72.1653614085911, 'Skateboard': 87.5540655366732, 'Table': 88.45500997620702} +test_miou_per_class = {'Airplane': 81.8484199065403, 'Bag': 75.36943780752388, 'Cap': 83.22261563890746, 'Car': 77.56658066608361, 'Chair': 
85.34762291827893, 'Earphone': 68.82317828357358, 'Guitar': 89.81155674320021, 'Knife': 84.37460624780645, 'Lamp': 76.13985279035123, 'Laptop': 96.39008565254235, 'Motorbike': 70.61659994191245, 'Mug': 91.42375718937247, 'Pistol': 79.33559310486933, 'Rocket': 59.39635174241167, 'Skateboard': 77.79302005871838, 'Table': 82.58201733673296} +================================================== + +EPOCH 96 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.377, iteration=0.234, train_acc=94.94, train_loss_seg=0.128, train_macc=91.00, train_miou=85.61] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.01, test_loss_seg=0.104, test_macc=86.40, test_miou=79.90] +================================================== +test_loss_seg = 0.10471294075250626 +test_acc = 93.01601794612648 +test_macc = 86.40539439988467 +test_miou = 79.90635923921728 +test_acc_per_class = {'Airplane': 91.14787989186219, 'Bag': 95.6687898089172, 'Cap': 90.82066036443348, 'Car': 91.31835089894606, 'Chair': 94.72913194832154, 'Earphone': 93.31046312178388, 'Guitar': 96.07324825010511, 'Knife': 92.79677319267763, 'Lamp': 90.36074897409341, 'Laptop': 98.15521249329439, 'Motorbike': 84.93359397458747, 'Mug': 99.25112331502746, 'Pistol': 96.00179384572927, 'Rocket': 82.61155163515225, 'Skateboard': 96.3797549424435, 'Table': 94.69721048064922} +test_macc_per_class = {'Airplane': 89.15199990015873, 'Bag': 83.72409789470619, 'Cap': 82.60678299391935, 'Car': 86.84734592529931, 'Chair': 90.99776625112438, 'Earphone': 71.73459304480447, 'Guitar': 94.39935474558658, 'Knife': 92.76173494516907, 'Lamp': 86.11426431378592, 'Laptop': 98.1327597980705, 'Motorbike': 73.53013606817656, 'Mug': 94.58203217985393, 'Pistol': 87.34714620603383, 'Rocket': 71.83576783587885, 'Skateboard': 89.70504726118837, 'Table': 89.0154810343985} +test_miou_per_class = {'Airplane': 81.7091865087779, 'Bag': 78.29044790973634, 'Cap': 
76.9400331938652, 'Car': 77.09493054735182, 'Chair': 84.9577131723765, 'Earphone': 66.56869139620404, 'Guitar': 89.81608474985372, 'Knife': 86.53963834431178, 'Lamp': 77.57622605291934, 'Laptop': 96.35694301239293, 'Motorbike': 64.57930510050429, 'Mug': 93.12963810559552, 'Pistol': 82.61282681816687, 'Rocket': 61.393083544394, 'Skateboard': 78.6996877793615, 'Table': 82.23731159166512} +================================================== + +EPOCH 97 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.383, iteration=0.237, train_acc=94.98, train_loss_seg=0.118, train_macc=90.68, train_miou=85.16] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.37, test_loss_seg=0.168, test_macc=87.51, test_miou=80.80] +================================================== +test_loss_seg = 0.16858600080013275 +test_acc = 93.3726286675049 +test_macc = 87.51132029760808 +test_miou = 80.8068111630587 +test_acc_per_class = {'Airplane': 90.94032997130873, 'Bag': 95.7348748193571, 'Cap': 94.29349101318691, 'Car': 91.43918596508624, 'Chair': 94.88049803686154, 'Earphone': 91.57791822340165, 'Guitar': 96.15990237980051, 'Knife': 92.90471183215662, 'Lamp': 91.01394403261281, 'Laptop': 98.11570346870978, 'Motorbike': 86.96851034495965, 'Mug': 99.32558357802716, 'Pistol': 95.72660352695668, 'Rocket': 83.95226645293437, 'Skateboard': 96.16443877208303, 'Table': 94.76409626263563} +test_macc_per_class = {'Airplane': 90.32767934827336, 'Bag': 82.35025364524621, 'Cap': 89.37579090181377, 'Car': 86.74686852921086, 'Chair': 91.64992023048669, 'Earphone': 71.36439985686746, 'Guitar': 94.81667377787687, 'Knife': 92.88823894778969, 'Lamp': 89.42682353055869, 'Laptop': 98.04649087339428, 'Motorbike': 80.00942476890124, 'Mug': 95.91105419132795, 'Pistol': 84.15181376881749, 'Rocket': 75.44892509140352, 'Skateboard': 89.16510424534566, 'Table': 88.50166305441547} +test_miou_per_class = {'Airplane': 
81.48507094181876, 'Bag': 78.56555968423405, 'Cap': 85.16778558088515, 'Car': 77.30538530536867, 'Chair': 85.43198773332949, 'Earphone': 64.61889476435002, 'Guitar': 90.24490767424632, 'Knife': 86.74247081391024, 'Lamp': 80.07133629529048, 'Laptop': 96.27681261649958, 'Motorbike': 68.76280820765497, 'Mug': 93.86945618139819, 'Pistol': 80.14826124120047, 'Rocket': 64.25450041488746, 'Skateboard': 77.86338391474168, 'Table': 82.10035723912368} +================================================== +miou: 80.75561479818244 -> 80.8068111630587 + +EPOCH 98 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.379, iteration=0.231, train_acc=94.97, train_loss_seg=0.120, train_macc=91.28, train_miou=86.00] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:34<00:00, 2.64it/s, test_acc=93.21, test_loss_seg=0.126, test_macc=86.39, test_miou=80.15] +================================================== +test_loss_seg = 0.12605129182338715 +test_acc = 93.21017534107288 +test_macc = 86.39076702086211 +test_miou = 80.15821616366036 +test_acc_per_class = {'Airplane': 91.40600701680953, 'Bag': 95.52687680828384, 'Cap': 92.03519649858667, 'Car': 91.2845079723033, 'Chair': 94.89557004123985, 'Earphone': 93.98256729793117, 'Guitar': 96.06193885633797, 'Knife': 92.02952076112553, 'Lamp': 90.68358079242942, 'Laptop': 98.08393397162357, 'Motorbike': 86.29940257352942, 'Mug': 99.10680757470092, 'Pistol': 95.62704768838734, 'Rocket': 83.1454190340909, 'Skateboard': 96.2145215068925, 'Table': 94.97990706289427} +test_macc_per_class = {'Airplane': 89.6416448623498, 'Bag': 80.24838303963804, 'Cap': 84.96428238920176, 'Car': 87.55697061109828, 'Chair': 90.85785945523126, 'Earphone': 71.765866369458, 'Guitar': 93.50921017134773, 'Knife': 92.02317562649749, 'Lamp': 89.64039281530685, 'Laptop': 97.978564375121, 'Motorbike': 71.97188985305648, 'Mug': 95.53265762336156, 'Pistol': 84.34944412853754, 'Rocket': 76.22019898933345, 
'Skateboard': 87.4664340242488, 'Table': 88.5252980000058} +test_miou_per_class = {'Airplane': 81.93198868454705, 'Bag': 76.80253491037557, 'Cap': 79.84888951694897, 'Car': 77.26995028684135, 'Chair': 85.1896855003547, 'Earphone': 68.06403079945648, 'Guitar': 89.61422962282803, 'Knife': 85.23284612302389, 'Lamp': 81.37455891804913, 'Laptop': 96.21059379347913, 'Motorbike': 65.39987975604, 'Mug': 92.11442577262132, 'Pistol': 80.11358169699014, 'Rocket': 63.24042708465375, 'Skateboard': 77.3388542802266, 'Table': 82.78498187212945} +================================================== + +EPOCH 99 / 100 +100%|█████████████████████████████| 438/438 [05:15<00:00, 1.39it/s, data_loading=0.381, iteration=0.233, train_acc=95.16, train_loss_seg=0.117, train_macc=90.13, train_miou=84.97] +100%|████████████████████████████████████████████████████████████████████████| 90/90 [00:33<00:00, 2.65it/s, test_acc=90.95, test_loss_seg=0.145, test_macc=85.65, test_miou=78.11] +================================================== +test_loss_seg = 0.1452503651380539 +test_acc = 90.95297564684472 +test_macc = 85.65776756924717 +test_miou = 78.11274449046947 +test_acc_per_class = {'Airplane': 91.23435971464201, 'Bag': 95.17076683977105, 'Cap': 92.69182796692382, 'Car': 91.27479035199737, 'Chair': 94.95432327375337, 'Earphone': 60.35234606663178, 'Guitar': 96.3230757841226, 'Knife': 91.89853572884333, 'Lamp': 91.12655373722723, 'Laptop': 97.97885387024182, 'Motorbike': 85.44174687544893, 'Mug': 99.07868963188112, 'Pistol': 96.0812690085588, 'Rocket': 80.49982244318183, 'Skateboard': 96.08940430839766, 'Table': 95.05124474789305} +test_macc_per_class = {'Airplane': 88.25942547942684, 'Bag': 77.48168720099096, 'Cap': 86.12101404505337, 'Car': 85.18370125671044, 'Chair': 90.6221513472541, 'Earphone': 58.11280106850963, 'Guitar': 94.66731076596221, 'Knife': 91.89939621009773, 'Lamp': 88.7259710347473, 'Laptop': 97.89071223623822, 'Motorbike': 85.98362183299261, 'Mug': 94.73713875984299, 'Pistol': 
88.98826862494582, 'Rocket': 63.14551026531346, 'Skateboard': 89.51093897539162, 'Table': 89.19463200447773} +test_miou_per_class = {'Airplane': 81.49246129952866, 'Bag': 74.3862941399426, 'Cap': 81.30122292984763, 'Car': 76.60484812811767, 'Chair': 85.18206870127682, 'Earphone': 37.40464086586233, 'Guitar': 90.48232783792707, 'Knife': 85.01125471838384, 'Lamp': 80.28907756433219, 'Laptop': 96.009515440567, 'Motorbike': 70.72768977006673, 'Mug': 91.83625047863121, 'Pistol': 83.83845099881488, 'Rocket': 54.208046584411996, 'Skateboard': 77.90017930279588, 'Table': 83.12958308700505} + + +BEST: +* loss_seg: 0.08968085050582886 +* acc: 93.38952466037554 +* miou: 80.8068111630587 +* macc: 87.90512718023592 +* miou_per_class for best_miou: +{ + 'Airplane': 81.48507094181876, + 'Bag': 78.56555968423405, + 'Cap': 85.16778558088515, + 'Car': 77.30538530536867, + 'Chair': 85.43198773332949, + 'Earphone': 64.61889476435002, + 'Guitar': 90.24490767424632, + 'Knife': 86.74247081391024, + 'Lamp': 80.07133629529048, + 'Laptop': 96.27681261649958, + 'Motorbike': 68.76280820765497, + 'Mug': 93.86945618139819, + 'Pistol': 80.14826124120047, + 'Rocket': 64.25450041488746, + 'Skateboard': 77.86338391474168, + 'Table': 82.10035723912368 +} +``` \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/rscnn_original.md b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/rscnn_original.md new file mode 100644 index 00000000..f54360f8 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/benchmark/shapenet/rscnn_original.md @@ -0,0 +1,1229 @@ +(superpoint-graph-job-py3.6) ➜ deeppointcloud-benchmarks git:(RSConv_debug) ✗ poetry run python train.py experiment.model_name=RSConv_MSN experiment.dataset=shapenet wandb.log=False training.enable_cudnn=True training.batch_size=12 data.shapenet.normal=False +The down_conv_nn has a different size as radii. 
Make sure of have sharedMLP +Using category information for the predictions with 16 categories +SegmentationModel( + (down_modules): ModuleList( + (0): RSConvOriginalMSGDown: 2800 (ModuleList( + (0): OriginalRSConv(ModuleList( + (0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (1): OriginalRSConv(ModuleList( + (0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (2): OriginalRSConv(ModuleList( + (0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (3): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + ), shared: ModuleList( + (0): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1)) + (2): Conv1d(16, 64, kernel_size=(1,), stride=(1,)) + (3): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1)) + ) )) + (1): RSConvOriginalMSGDown: 34005 (ModuleList( + (0): OriginalRSConv(ModuleList( + (0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (1): OriginalRSConv(ModuleList( + 
(0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (2): OriginalRSConv(ModuleList( + (0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(195, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + ), shared: ModuleList( + (0): Conv2d(10, 32, kernel_size=(1, 1), stride=(1, 1)) + (1): Conv2d(32, 195, kernel_size=(1, 1), stride=(1, 1)) + (2): Conv1d(195, 128, kernel_size=(1,), stride=(1,)) + ) )) + (2): RSConvOriginalMSGDown: 129429 (ModuleList( + (0): OriginalRSConv(ModuleList( + (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (1): OriginalRSConv(ModuleList( + (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (2): OriginalRSConv(ModuleList( + (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(387, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + ), shared: ModuleList( + (0): Conv2d(10, 64, kernel_size=(1, 1), stride=(1, 1)) + (1): Conv2d(64, 387, kernel_size=(1, 1), stride=(1, 1)) + (2): Conv1d(387, 256, kernel_size=(1,), stride=(1,)) + ) )) + (3): RSConvOriginalMSGDown: 504597 (ModuleList( + (0): 
OriginalRSConv(ModuleList( + (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (1): OriginalRSConv(ModuleList( + (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + (2): OriginalRSConv(ModuleList( + (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (1): BatchNorm2d(771, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + (2): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + )) + ), shared: ModuleList( + (0): Conv2d(10, 128, kernel_size=(1, 1), stride=(1, 1)) + (1): Conv2d(128, 771, kernel_size=(1, 1), stride=(1, 1)) + (2): Conv1d(771, 512, kernel_size=(1,), stride=(1,)) + ) )) + ) + (inner_modules): ModuleList( + (0): GlobalDenseBaseModule: 197376 (aggr=mean, SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(1539, 128, kernel_size=(1, 1), stride=(1, 1)) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + )) + (1): GlobalDenseBaseModule: 99072 (aggr=mean, SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(771, 128, kernel_size=(1, 1), stride=(1, 1)) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + )) + ) + (up_modules): ModuleList( + (0): DenseFPModule: 1443840 (SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(2304, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, 
affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + )) + (1): DenseFPModule: 722944 (SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(896, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + )) + (2): DenseFPModule: 246784 (SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(704, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + )) + (3): DenseFPModule: 49664 (SharedMLP( + (layer0): Conv2d( + (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (layer1): Conv2d( + (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) + (normlayer): BatchNorm2d( + (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + )) + (4): DenseFPModule: 0 (SharedMLP()) 
+ ) + (FC_layer): Seq( + (0): Conv1d( + (conv): Conv1d(400, 128, kernel_size=(1,), stride=(1,), bias=False) + (normlayer): BatchNorm1d( + (bn): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) + ) + (activation): ReLU(inplace) + ) + (1): Dropout(p=0.5) + (2): Conv1d( + (conv): Conv1d(128, 50, kernel_size=(1,), stride=(1,)) + ) + ) +) +Adam ( +Parameter Group 0 + amsgrad: False + betas: (0.9, 0.999) + eps: 1e-08 + initial_lr: 0.001 + lr: 0.001 + weight_decay: 0 +) +Model size = 3488417 +Access tensorboard with the following command +EPOCH 1 / 100 + 0%| | 0/1168 [00:00 71.88677117909052, Imiou: 77.52860172866566 -> 80.26959477294649 +EPOCH 3 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:39<00:00, 3.44it/s, data_loading=0.006, iteration=0.133, train_Cmiou=71.45, train_Imiou=80.33, train_loss_seg=0.218] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.12it/s, test_Cmiou=74.38, test_Imiou=80.84, test_loss_seg=2.044] +================================================== + test_loss_seg = 2.0445027351379395 + test_Cmiou = 74.38502038921175 + test_Imiou = 80.84875734591189 + test_Imiou_per_class = {'Airplane': 0.7674508273467864, 'Bag': 0.5676537065008816, 'Cap': 0.7502212984227693, 'Car': 0.6730542422843496, 'Chair': 0.8725531961226138, 'Earphone': 0.6870072048777072, 'Guitar': 0.8786945310789914, 'Knife': 0.8284336832073766, 'Lamp': 0.8026789801449955, 'Laptop': 0.9511344857405444, 'Motorbike': 0.37458383869284667, 'Mug': 0.8976668330787444, 'Pistol': 0.7663134051553214, 'Rocket': 0.5511253044913195, 'Skateboard': 0.7253365250344729, 'Table': 0.8076952000941581} +================================================== +Cmiou: 71.88677117909052 -> 74.38502038921175, Imiou: 80.26959477294649 -> 80.84875734591189 +EPOCH 4 / 100 
+100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:42<00:00, 3.41it/s, data_loading=0.006, iteration=0.135, train_Cmiou=71.53, train_Imiou=81.38, train_loss_seg=0.201] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 9.96it/s, test_Cmiou=74.26, test_Imiou=81.53, test_loss_seg=0.183] +================================================== + test_loss_seg = 0.18360912799835205 + test_Cmiou = 74.26034444971782 + test_Imiou = 81.53296936116313 + test_Imiou_per_class = {'Airplane': 0.7514842414283773, 'Bag': 0.6409829213673703, 'Cap': 0.7821761482191179, 'Car': 0.7253725420040257, 'Chair': 0.8874418708600028, 'Earphone': 0.6991693064276119, 'Guitar': 0.8800044434651371, 'Knife': 0.8362970408843464, 'Lamp': 0.8208426496845715, 'Laptop': 0.9516511458289447, 'Motorbike': 0.39183747394113033, 'Mug': 0.9254703575880555, 'Pistol': 0.6754748116718252, 'Rocket': 0.3956599418735582, 'Skateboard': 0.7062565565119785, 'Table': 0.8115336601987986} +================================================== +loss_seg: 0.26787325739860535 -> 0.18360912799835205, Imiou: 80.84875734591189 -> 81.53296936116313 +EPOCH 5 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:39<00:00, 3.44it/s, data_loading=0.006, iteration=0.132, train_Cmiou=74.54, train_Imiou=80.89, train_loss_seg=0.211] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 9.95it/s, test_Cmiou=75.99, test_Imiou=80.81, test_loss_seg=2.657] +================================================== + test_loss_seg = 2.6575050354003906 + test_Cmiou = 75.99827049024069 + test_Imiou = 80.81525574609294 + test_Imiou_per_class = {'Airplane': 0.7763293515561746, 'Bag': 0.7762284597670567, 'Cap': 0.7173052283333056, 'Car': 0.7236475366578089, 'Chair': 
0.881958540330812, 'Earphone': 0.7388659937480995, 'Guitar': 0.8889777264722523, 'Knife': 0.8616837235217852, 'Lamp': 0.7421000652707677, 'Laptop': 0.9501962007073005, 'Motorbike': 0.4488839218285674, 'Mug': 0.8306115696392516, 'Pistol': 0.7893753705044979, 'Rocket': 0.5193188016267178, 'Skateboard': 0.7188746287734852, 'Table': 0.7953661597006253} +================================================== +Cmiou: 74.38502038921175 -> 75.99827049024069 +EPOCH 6 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:41<00:00, 3.42it/s, data_loading=0.006, iteration=0.134, train_Cmiou=78.51, train_Imiou=81.80, train_loss_seg=0.193] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.12it/s, test_Cmiou=78.24, test_Imiou=82.91, test_loss_seg=0.543] +================================================== + test_loss_seg = 0.5429779887199402 + test_Cmiou = 78.24885796696306 + test_Imiou = 82.9124972269167 + test_Imiou_per_class = {'Airplane': 0.786495198009139, 'Bag': 0.7570709632091378, 'Cap': 0.7819285681093746, 'Car': 0.7183936212684267, 'Chair': 0.8854132563656214, 'Earphone': 0.7357636212300427, 'Guitar': 0.8987665451027259, 'Knife': 0.857413096670798, 'Lamp': 0.8182716917612152, 'Laptop': 0.9514762098052532, 'Motorbike': 0.5631626676477176, 'Mug': 0.9184284573342963, 'Pistol': 0.7787380200388226, 'Rocket': 0.517346652356757, 'Skateboard': 0.7289977792122357, 'Table': 0.8221509265925264} +================================================== +Cmiou: 75.99827049024069 -> 78.24885796696306, Imiou: 81.53296936116313 -> 82.9124972269167 +EPOCH 7 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.133, train_Cmiou=76.09, train_Imiou=82.03, train_loss_seg=0.190] +Learning rate = 0.001000 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.12it/s, test_Cmiou=78.21, test_Imiou=82.35, test_loss_seg=0.120] +================================================== + test_loss_seg = 0.12039124220609665 + test_Cmiou = 78.21156296514759 + test_Imiou = 82.3568613750875 + test_Imiou_per_class = {'Airplane': 0.7797232533051404, 'Bag': 0.7407793969824407, 'Cap': 0.7965692730027004, 'Car': 0.7492881277601655, 'Chair': 0.8902414541990832, 'Earphone': 0.7651565548544763, 'Guitar': 0.889803380650947, 'Knife': 0.8075506232246399, 'Lamp': 0.764575965019347, 'Laptop': 0.9479549211695949, 'Motorbike': 0.6098580692028543, 'Mug': 0.9165535050688336, 'Pistol': 0.7582048706115967, 'Rocket': 0.5349234317899472, 'Skateboard': 0.744428145314425, 'Table': 0.8182391022674242} +================================================== +loss_seg: 0.18360912799835205 -> 0.12039124220609665 +EPOCH 8 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.132, train_Cmiou=77.24, train_Imiou=83.24, train_loss_seg=0.181] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.12it/s, test_Cmiou=78.36, test_Imiou=83.18, test_loss_seg=0.426] +================================================== + test_loss_seg = 0.4265753924846649 + test_Cmiou = 78.3621485930258 + test_Imiou = 83.18212531154978 + test_Imiou_per_class = {'Airplane': 0.7889784433994564, 'Bag': 0.7280971146701337, 'Cap': 0.8011821259429799, 'Car': 0.7409191906010333, 'Chair': 0.8935525882502223, 'Earphone': 0.7469496429188613, 'Guitar': 0.8961279765749962, 'Knife': 0.8496235901511952, 'Lamp': 0.8206871698408128, 'Laptop': 0.9515201799170039, 'Motorbike': 0.5366096189059334, 'Mug': 0.9212907928014193, 'Pistol': 0.7996045645545198, 'Rocket': 
0.5307769631358669, 'Skateboard': 0.7113934879731946, 'Table': 0.8206303252464969} +================================================== +Cmiou: 78.24885796696306 -> 78.3621485930258, Imiou: 82.9124972269167 -> 83.18212531154978 +EPOCH 9 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.132, train_Cmiou=78.44, train_Imiou=82.88, train_loss_seg=0.179] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=78.14, test_Imiou=83.61, test_loss_seg=0.114] +================================================== + test_loss_seg = 0.11409256607294083 + test_Cmiou = 78.14567102029041 + test_Imiou = 83.61323591851735 + test_Imiou_per_class = {'Airplane': 0.7965467379130337, 'Bag': 0.7243093143282097, 'Cap': 0.6671315385627674, 'Car': 0.7424826673487234, 'Chair': 0.892319508743801, 'Earphone': 0.7636338251491451, 'Guitar': 0.8958823725300923, 'Knife': 0.8646175237581384, 'Lamp': 0.8322834504677235, 'Laptop': 0.95262569932145, 'Motorbike': 0.5947551480452967, 'Mug': 0.9049798927412476, 'Pistol': 0.7909445710627434, 'Rocket': 0.5101860261313231, 'Skateboard': 0.7447865735936754, 'Table': 0.8258225135490956} +================================================== +loss_seg: 0.12039124220609665 -> 0.11409256607294083, Imiou: 83.18212531154978 -> 83.61323591851735 +EPOCH 10 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.130, train_Cmiou=78.23, train_Imiou=82.83, train_loss_seg=0.175] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.14it/s, test_Cmiou=79.62, test_Imiou=83.61, test_loss_seg=0.376] +================================================== + 
test_loss_seg = 0.3768831193447113 + test_Cmiou = 79.62705928648487 + test_Imiou = 83.61512871845541 + test_Imiou_per_class = {'Airplane': 0.8037846293340755, 'Bag': 0.8022576623545082, 'Cap': 0.7193409163141844, 'Car': 0.7498310034749296, 'Chair': 0.8900789325734144, 'Earphone': 0.7846210442714144, 'Guitar': 0.8847172585689239, 'Knife': 0.8518276988584231, 'Lamp': 0.8272541208260421, 'Laptop': 0.9470698776446872, 'Motorbike': 0.6591794498862853, 'Mug': 0.9355348017672703, 'Pistol': 0.7435836651476063, 'Rocket': 0.5614485249072299, 'Skateboard': 0.7570639777338524, 'Table': 0.8227359221747287} +================================================== +Cmiou: 78.3621485930258 -> 79.62705928648487, Imiou: 83.61323591851735 -> 83.61512871845541 +EPOCH 11 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.130, train_Cmiou=81.41, train_Imiou=83.92, train_loss_seg=0.174] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.14it/s, test_Cmiou=77.89, test_Imiou=83.35, test_loss_seg=0.179] +================================================== + test_loss_seg = 0.17958565056324005 + test_Cmiou = 77.89891977758248 + test_Imiou = 83.35557425087717 + test_Imiou_per_class = {'Airplane': 0.7939147829566964, 'Bag': 0.7259618622103458, 'Cap': 0.7358026964665918, 'Car': 0.7069893215027807, 'Chair': 0.890769117843113, 'Earphone': 0.7427333116183322, 'Guitar': 0.9022920605225498, 'Knife': 0.8501091276046167, 'Lamp': 0.835305779740743, 'Laptop': 0.9521186552277612, 'Motorbike': 0.5708998478606424, 'Mug': 0.926004399400188, 'Pistol': 0.7880252172734185, 'Rocket': 0.4891996942033587, 'Skateboard': 0.7274538547200936, 'Table': 0.8262474352619658} +================================================== +EPOCH 12 / 100 +100%|██████████████████████████████████████████████████████████████████| 
1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.132, train_Cmiou=78.59, train_Imiou=83.28, train_loss_seg=0.168] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=79.69, test_Imiou=83.69, test_loss_seg=0.119] +================================================== + test_loss_seg = 0.11908375471830368 + test_Cmiou = 79.69552925222621 + test_Imiou = 83.69810362091756 + test_Imiou_per_class = {'Airplane': 0.8087143050218527, 'Bag': 0.753909075514775, 'Cap': 0.8176454142814644, 'Car': 0.7488680195224408, 'Chair': 0.8878817165554534, 'Earphone': 0.7600604163577055, 'Guitar': 0.9029211733638155, 'Knife': 0.7307670480588075, 'Lamp': 0.8414834295586219, 'Laptop': 0.9484973071223448, 'Motorbike': 0.6638914643152091, 'Mug': 0.9253253904107973, 'Pistol': 0.8014697077020526, 'Rocket': 0.591284681958504, 'Skateboard': 0.7427264118506718, 'Table': 0.8258391187616775} +================================================== +Cmiou: 79.62705928648487 -> 79.69552925222621, Imiou: 83.61512871845541 -> 83.69810362091756 +EPOCH 13 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.133, train_Cmiou=77.01, train_Imiou=84.15, train_loss_seg=0.159] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=78.93, test_Imiou=83.62, test_loss_seg=1.539] +================================================== + test_loss_seg = 1.5393120050430298 + test_Cmiou = 78.93332007633876 + test_Imiou = 83.62831870658638 + test_Imiou_per_class = {'Airplane': 0.8084913564279467, 'Bag': 0.6911929204422939, 'Cap': 0.7696524447540068, 'Car': 0.7481061814360007, 'Chair': 0.8939065368319224, 'Earphone': 0.7331740661532764, 'Guitar': 
0.8836214550978481, 'Knife': 0.8709380248049854, 'Lamp': 0.8353999481616642, 'Laptop': 0.952522017893786, 'Motorbike': 0.6018332982449491, 'Mug': 0.9279313648725869, 'Pistol': 0.8056688652504956, 'Rocket': 0.5388942025059527, 'Skateboard': 0.7513149763455211, 'Table': 0.8166835529909637} +================================================== +EPOCH 14 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:49<00:00, 3.35it/s, data_loading=0.006, iteration=0.135, train_Cmiou=79.13, train_Imiou=83.37, train_loss_seg=0.171] +Learning rate = 0.001000 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.06it/s, test_Cmiou=80.22, test_Imiou=83.91, test_loss_seg=0.295] +================================================== + test_loss_seg = 0.2954549491405487 + test_Cmiou = 80.22990447260217 + test_Imiou = 83.91855145904098 + test_Imiou_per_class = {'Airplane': 0.8012859059118967, 'Bag': 0.7955366839754271, 'Cap': 0.8354432742108215, 'Car': 0.7578535590093275, 'Chair': 0.8950772688306533, 'Earphone': 0.7499732541298363, 'Guitar': 0.9000480626239846, 'Knife': 0.815373706154593, 'Lamp': 0.828750132255234, 'Laptop': 0.9459906139867795, 'Motorbike': 0.6060891803732622, 'Mug': 0.9318436609668541, 'Pistol': 0.7999163123627582, 'Rocket': 0.5937044396727758, 'Skateboard': 0.7520126559871194, 'Table': 0.8278860051650256} +================================================== +Cmiou: 79.69552925222621 -> 80.22990447260217, Imiou: 83.69810362091756 -> 83.91855145904098 +EPOCH 15 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.132, train_Cmiou=81.41, train_Imiou=84.57, train_loss_seg=0.15 ] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, 
test_Cmiou=81.12, test_Imiou=84.63, test_loss_seg=0.244] +================================================== + test_loss_seg = 0.24409419298171997 + test_Cmiou = 81.12462360060302 + test_Imiou = 84.63663479350241 + test_Imiou_per_class = {'Airplane': 0.8180254957356968, 'Bag': 0.7953288864011787, 'Cap': 0.7934103152503394, 'Car': 0.7629001003165631, 'Chair': 0.8984597240784905, 'Earphone': 0.7586289253096344, 'Guitar': 0.9075626672947285, 'Knife': 0.8621869579825054, 'Lamp': 0.8457027112901959, 'Laptop': 0.9515286913065066, 'Motorbike': 0.6779115577419915, 'Mug': 0.9244603006202886, 'Pistol': 0.8111456283105553, 'Rocket': 0.5983855390620869, 'Skateboard': 0.748756691037802, 'Table': 0.8255455843579186} +================================================== +Cmiou: 80.22990447260217 -> 81.12462360060302, Imiou: 83.91855145904098 -> 84.63663479350241 +EPOCH 16 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.134, train_Cmiou=80.18, train_Imiou=84.95, train_loss_seg=0.151] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=81.03, test_Imiou=84.41, test_loss_seg=0.246] +================================================== + test_loss_seg = 0.246812641620636 + test_Cmiou = 81.03889564443665 + test_Imiou = 84.41292209889264 + test_Imiou_per_class = {'Airplane': 0.801713142212039, 'Bag': 0.8632003111265174, 'Cap': 0.8187601987892907, 'Car': 0.7740617595543855, 'Chair': 0.9021615939393197, 'Earphone': 0.7351419708608992, 'Guitar': 0.9027901636871706, 'Knife': 0.8208805500730229, 'Lamp': 0.8408113377776257, 'Laptop': 0.9512863567956421, 'Motorbike': 0.6249248165655994, 'Mug': 0.9329285020937848, 'Pistol': 0.8152124859732722, 'Rocket': 0.6062447978222891, 'Skateboard': 0.7488499490974646, 'Table': 0.82725536674154} 
+================================================== +EPOCH 17 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:49<00:00, 3.34it/s, data_loading=0.005, iteration=0.360, train_Cmiou=80.76, train_Imiou=84.73, train_loss_seg=0.147] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:25<00:00, 9.42it/s, test_Cmiou=81.52, test_Imiou=84.65, test_loss_seg=0.520] +================================================== + test_loss_seg = 0.5203151106834412 + test_Cmiou = 81.52569012393795 + test_Imiou = 84.65452131646114 + test_Imiou_per_class = {'Airplane': 0.8199124718201742, 'Bag': 0.8214982670652088, 'Cap': 0.8241900922123911, 'Car': 0.772135429292009, 'Chair': 0.8976009221489408, 'Earphone': 0.7569344800035914, 'Guitar': 0.8986558041202389, 'Knife': 0.8563460061443866, 'Lamp': 0.8475917712714084, 'Laptop': 0.9530679985165784, 'Motorbike': 0.6892968355699574, 'Mug': 0.9283949328716289, 'Pistol': 0.8177333832633606, 'Rocket': 0.5787753628981419, 'Skateboard': 0.7582332677485698, 'Table': 0.8237433948834835} +================================================== +Cmiou: 81.12462360060302 -> 81.52569012393795, Imiou: 84.63663479350241 -> 84.65452131646114 +EPOCH 18 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:39<00:00, 3.44it/s, data_loading=0.006, iteration=0.131, train_Cmiou=81.69, train_Imiou=85.68, train_loss_seg=0.138] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=81.55, test_Imiou=84.90, test_loss_seg=0.324] +================================================== + test_loss_seg = 0.3242870271205902 + test_Cmiou = 81.55274699976489 + test_Imiou = 84.90127959663297 + test_Imiou_per_class = {'Airplane': 0.8191590619261371, 'Bag': 0.8027306620751807, 
'Cap': 0.8310465526983961, 'Car': 0.7736352054520587, 'Chair': 0.9021862104081422, 'Earphone': 0.7274188174620096, 'Guitar': 0.9082631340913178, 'Knife': 0.8379035302150246, 'Lamp': 0.8391340340742076, 'Laptop': 0.954799813537434, 'Motorbike': 0.7053785571090592, 'Mug': 0.9373601679446255, 'Pistol': 0.8067955333682982, 'Rocket': 0.6177992894657252, 'Skateboard': 0.7543891069138643, 'Table': 0.8304398432209026} +================================================== +Cmiou: 81.52569012393795 -> 81.55274699976489, Imiou: 84.65452131646114 -> 84.90127959663297 +EPOCH 19 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=82.87, train_Imiou=85.68, train_loss_seg=0.145] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=80.62, test_Imiou=84.70, test_loss_seg=0.252] +================================================== + test_loss_seg = 0.2520694434642792 + test_Cmiou = 80.62645306031288 + test_Imiou = 84.70454955887809 + test_Imiou_per_class = {'Airplane': 0.82251276999973, 'Bag': 0.8295466913570223, 'Cap': 0.8074211146691804, 'Car': 0.76824051873163, 'Chair': 0.9017092392928461, 'Earphone': 0.7191874392464864, 'Guitar': 0.9079677017037555, 'Knife': 0.8504417001862776, 'Lamp': 0.8404673517690898, 'Laptop': 0.9512243265677787, 'Motorbike': 0.6658829841980075, 'Mug': 0.9361588391107638, 'Pistol': 0.8222072585004238, 'Rocket': 0.49578291971545174, 'Skateboard': 0.7555713680407491, 'Table': 0.82591026656087} +================================================== +EPOCH 20 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:48<00:00, 3.35it/s, data_loading=0.006, iteration=0.131, train_Cmiou=82.24, train_Imiou=85.14, train_loss_seg=0.142] +Learning rate = 0.000500 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=81.41, test_Imiou=84.80, test_loss_seg=0.106] +================================================== + test_loss_seg = 0.10631060600280762 + test_Cmiou = 81.41646482159814 + test_Imiou = 84.80702058353035 + test_Imiou_per_class = {'Airplane': 0.8169236238248554, 'Bag': 0.8190462503490394, 'Cap': 0.8201504821100809, 'Car': 0.7643175387367237, 'Chair': 0.9017009586079845, 'Earphone': 0.7258679421914123, 'Guitar': 0.9023168842245136, 'Knife': 0.8559856621211516, 'Lamp': 0.838865246677382, 'Laptop': 0.9517645613135591, 'Motorbike': 0.6840154296499834, 'Mug': 0.9343892658566947, 'Pistol': 0.8051269593621657, 'Rocket': 0.6215775001047001, 'Skateboard': 0.7531107959667613, 'Table': 0.831475270358696} +================================================== +loss_seg: 0.11409256607294083 -> 0.10631060600280762 +EPOCH 21 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.129, train_Cmiou=82.94, train_Imiou=85.64, train_loss_seg=0.134] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=81.06, test_Imiou=84.86, test_loss_seg=3.372] +================================================== + test_loss_seg = 3.37199330329895 + test_Cmiou = 81.06493114825042 + test_Imiou = 84.86660942808463 + test_Imiou_per_class = {'Airplane': 0.8210149226994407, 'Bag': 0.7793822575116863, 'Cap': 0.8372754946764384, 'Car': 0.7750803734376505, 'Chair': 0.9040886277319987, 'Earphone': 0.7418449105340124, 'Guitar': 0.9075132412223345, 'Knife': 0.8016628136476387, 'Lamp': 0.848226331894293, 'Laptop': 0.9537725505595704, 'Motorbike': 0.6940234557006898, 'Mug': 0.931679454191615, 'Pistol': 0.8210191592681181, 'Rocket': 
0.5681850480534116, 'Skateboard': 0.7575026316573491, 'Table': 0.8281177109338205} +================================================== +EPOCH 22 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:54<00:00, 3.30it/s, data_loading=0.006, iteration=0.142, train_Cmiou=80.86, train_Imiou=85.67, train_loss_seg=0.145] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.03it/s, test_Cmiou=81.45, test_Imiou=84.68, test_loss_seg=0.164] +================================================== + test_loss_seg = 0.16402961313724518 + test_Cmiou = 81.4517951406267 + test_Imiou = 84.68718974333045 + test_Imiou_per_class = {'Airplane': 0.8161181348224613, 'Bag': 0.8205034528360088, 'Cap': 0.8300398828499524, 'Car': 0.7731112609478349, 'Chair': 0.9005092816092944, 'Earphone': 0.7557930381535956, 'Guitar': 0.9007886110129848, 'Knife': 0.8496466727364815, 'Lamp': 0.8272106983662908, 'Laptop': 0.9539579989753718, 'Motorbike': 0.6939101286304499, 'Mug': 0.9319765473535121, 'Pistol': 0.7849565768422433, 'Rocket': 0.6093968640009856, 'Skateboard': 0.7525735052106012, 'Table': 0.8317945681522042} +================================================== +EPOCH 23 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.132, train_Cmiou=82.47, train_Imiou=85.31, train_loss_seg=0.138] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=81.74, test_Imiou=84.74, test_loss_seg=0.241] +================================================== + test_loss_seg = 0.2414415031671524 + test_Cmiou = 81.74535774435638 + test_Imiou = 84.74932390045373 + test_Imiou_per_class = {'Airplane': 0.8065867278952064, 'Bag': 0.8411455948990716, 'Cap': 
0.8342588752115269, 'Car': 0.761299705308712, 'Chair': 0.9013748491366765, 'Earphone': 0.7539591137461847, 'Guitar': 0.9065585017207215, 'Knife': 0.837951404705424, 'Lamp': 0.850145406546514, 'Laptop': 0.9525918058692788, 'Motorbike': 0.6923560443052812, 'Mug': 0.9332980899627784, 'Pistol': 0.8245337595237665, 'Rocket': 0.595101968578932, 'Skateboard': 0.7588724275064169, 'Table': 0.8292229641805291} +================================================== +Cmiou: 81.55274699976489 -> 81.74535774435638 +EPOCH 24 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.47it/s, data_loading=0.006, iteration=0.134, train_Cmiou=83.45, train_Imiou=85.69, train_loss_seg=0.138] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=80.28, test_Imiou=84.59, test_loss_seg=0.164] +================================================== + test_loss_seg = 0.16421888768672943 + test_Cmiou = 80.28762852820694 + test_Imiou = 84.59977015515243 + test_Imiou_per_class = {'Airplane': 0.8227829210473409, 'Bag': 0.7945347531353929, 'Cap': 0.8284817132461195, 'Car': 0.7531754461960676, 'Chair': 0.9016993787731026, 'Earphone': 0.7589071206752723, 'Guitar': 0.9091306855515366, 'Knife': 0.8242475355199389, 'Lamp': 0.8352806869568299, 'Laptop': 0.9477460667215378, 'Motorbike': 0.7047905391086572, 'Mug': 0.937308000235622, 'Pistol': 0.8116368378424272, 'Rocket': 0.4308026960539803, 'Skateboard': 0.7574300736724557, 'Table': 0.82806610977683} +================================================== +EPOCH 25 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=83.01, train_Imiou=85.67, train_loss_seg=0.136] +Learning rate = 0.000500 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=80.38, test_Imiou=84.82, test_loss_seg=0.142] +================================================== + test_loss_seg = 0.14251427352428436 + test_Cmiou = 80.38086888503997 + test_Imiou = 84.82596033289583 + test_Imiou_per_class = {'Airplane': 0.8182821823709368, 'Bag': 0.8624006123685607, 'Cap': 0.6624350892263049, 'Car': 0.769166015327608, 'Chair': 0.904106522683711, 'Earphone': 0.7247271329119711, 'Guitar': 0.9098088292115941, 'Knife': 0.8346803006508715, 'Lamp': 0.8422780496927316, 'Laptop': 0.9443256544663488, 'Motorbike': 0.7030246619361273, 'Mug': 0.9336738162318771, 'Pistol': 0.8133055716613165, 'Rocket': 0.5457313524140787, 'Skateboard': 0.7636387319930656, 'Table': 0.8293544984592904} +================================================== +EPOCH 26 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:40<00:00, 3.43it/s, data_loading=0.006, iteration=0.132, train_Cmiou=82.93, train_Imiou=85.57, train_loss_seg=0.137] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=80.66, test_Imiou=84.53, test_loss_seg=1.716] +================================================== + test_loss_seg = 1.7164469957351685 + test_Cmiou = 80.66582721881734 + test_Imiou = 84.53151857047575 + test_Imiou_per_class = {'Airplane': 0.8097597166174304, 'Bag': 0.8124394240251451, 'Cap': 0.8644536227940539, 'Car': 0.7696071232392052, 'Chair': 0.9011034421091026, 'Earphone': 0.6626663040051403, 'Guitar': 0.9086065910015245, 'Knife': 0.8264444103015209, 'Lamp': 0.8424672776107792, 'Laptop': 0.9510937056602533, 'Motorbike': 0.7093503764307544, 'Mug': 0.9293923290712685, 'Pistol': 0.7867845149888162, 'Rocket': 0.5500977762824478, 'Skateboard': 0.7561477063755675, 'Table': 
0.8261180344977642} +================================================== +EPOCH 27 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.131, train_Cmiou=83.56, train_Imiou=85.84, train_loss_seg=0.133] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.13it/s, test_Cmiou=81.97, test_Imiou=84.85, test_loss_seg=0.148] +================================================== + test_loss_seg = 0.14824117720127106 + test_Cmiou = 81.9728507730735 + test_Imiou = 84.85913797345867 + test_Imiou_per_class = {'Airplane': 0.8199677796928738, 'Bag': 0.8251891941382109, 'Cap': 0.8467728452280882, 'Car': 0.7732311442241536, 'Chair': 0.9019476462038655, 'Earphone': 0.7488266533679369, 'Guitar': 0.9048766984809814, 'Knife': 0.8342971534644953, 'Lamp': 0.8441494694435537, 'Laptop': 0.9522864309377328, 'Motorbike': 0.702323334856813, 'Mug': 0.9347396238386579, 'Pistol': 0.8067301932813981, 'Rocket': 0.6286228201593531, 'Skateboard': 0.7643465575699347, 'Table': 0.8273485788037117} +================================================== +Cmiou: 81.74535774435638 -> 81.9728507730735 +EPOCH 28 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.132, train_Cmiou=84.03, train_Imiou=85.97, train_loss_seg=0.133] +Learning rate = 0.000500 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.14it/s, test_Cmiou=81.68, test_Imiou=84.85, test_loss_seg=0.233] +================================================== + test_loss_seg = 0.23347754776477814 + test_Cmiou = 81.68711006218525 + test_Imiou = 84.85890134329176 + test_Imiou_per_class = {'Airplane': 0.8232339720691833, 'Bag': 0.8105322110678245, 'Cap': 0.8447219934215862, 
'Car': 0.7718240042917669, 'Chair': 0.9034505735940347, 'Earphone': 0.757383036312956, 'Guitar': 0.9034120871095215, 'Knife': 0.835188290454186, 'Lamp': 0.8326072595410603, 'Laptop': 0.9502889532635976, 'Motorbike': 0.7044695995888535, 'Mug': 0.9344167208089713, 'Pistol': 0.8068394610825842, 'Rocket': 0.6132366503089146, 'Skateboard': 0.7481972204791664, 'Table': 0.8301355765554336} +================================================== +EPOCH 29 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=81.79, train_Imiou=86.39, train_loss_seg=0.126] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=82.12, test_Imiou=85.06, test_loss_seg=0.183] +================================================== + test_loss_seg = 0.18340271711349487 + test_Cmiou = 82.12534276132405 + test_Imiou = 85.06652596505626 + test_Imiou_per_class = {'Airplane': 0.8181968448804698, 'Bag': 0.8377829519060542, 'Cap': 0.8646582716402151, 'Car': 0.7785576678631543, 'Chair': 0.9042519162181292, 'Earphone': 0.7498865273078371, 'Guitar': 0.9110073985688287, 'Knife': 0.838920151501795, 'Lamp': 0.8455467720843652, 'Laptop': 0.9541229319926708, 'Motorbike': 0.6982469309753787, 'Mug': 0.9361693514850593, 'Pistol': 0.829446158531794, 'Rocket': 0.5811715293263314, 'Skateboard': 0.7628716915034786, 'Table': 0.8292177460262871} +================================================== +Cmiou: 81.9728507730735 -> 82.12534276132405, Imiou: 84.90127959663297 -> 85.06652596505626 +EPOCH 30 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.133, train_Cmiou=82.29, train_Imiou=86.23, train_loss_seg=0.122] +Learning rate = 0.000250 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=81.93, test_Imiou=85.01, test_loss_seg=0.136] +================================================== + test_loss_seg = 0.13666513562202454 + test_Cmiou = 81.93604168019515 + test_Imiou = 85.01444670682645 + test_Imiou_per_class = {'Airplane': 0.8183684760430658, 'Bag': 0.8219496503624624, 'Cap': 0.8524042250906182, 'Car': 0.7761288327524044, 'Chair': 0.9035306850264159, 'Earphone': 0.7559016685216049, 'Guitar': 0.9109108746129099, 'Knife': 0.8221725468272281, 'Lamp': 0.8477281793448156, 'Laptop': 0.9533107905748996, 'Motorbike': 0.7098978512978164, 'Mug': 0.9364863250595531, 'Pistol': 0.8093673992902236, 'Rocket': 0.6052850572625542, 'Skateboard': 0.756404115093152, 'Table': 0.8299199916714995} +================================================== +EPOCH 31 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:54<00:00, 3.29it/s, data_loading=0.006, iteration=0.132, train_Cmiou=83.60, train_Imiou=86.23, train_loss_seg=0.125] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=82.10, test_Imiou=85.18, test_loss_seg=0.197] +================================================== + test_loss_seg = 0.19716638326644897 + test_Cmiou = 82.10717626603528 + test_Imiou = 85.18960342396188 + test_Imiou_per_class = {'Airplane': 0.8247793412574551, 'Bag': 0.819976704384295, 'Cap': 0.8591871299557128, 'Car': 0.7771357319746545, 'Chair': 0.9049916301543083, 'Earphone': 0.76615468906727, 'Guitar': 0.9102510808992149, 'Knife': 0.8307105594478553, 'Lamp': 0.8484775525980444, 'Laptop': 0.9528356020982153, 'Motorbike': 0.6985597638214299, 'Mug': 0.9335534193260542, 'Pistol': 0.826380103779244, 'Rocket': 0.5738177914099213, 'Skateboard': 0.7800608134632374, 'Table': 
0.8302762889287312} +================================================== +Imiou: 85.06652596505626 -> 85.18960342396188 +EPOCH 32 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.30, train_Imiou=86.89, train_loss_seg=0.12 ] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=81.72, test_Imiou=84.97, test_loss_seg=0.403] +================================================== + test_loss_seg = 0.40336596965789795 + test_Cmiou = 81.72828454476465 + test_Imiou = 84.97845519287398 + test_Imiou_per_class = {'Airplane': 0.823198432648196, 'Bag': 0.8170507393880972, 'Cap': 0.8666634544442163, 'Car': 0.7788847830973966, 'Chair': 0.9030668650483306, 'Earphone': 0.754931798996123, 'Guitar': 0.9059398193263983, 'Knife': 0.8297050931249974, 'Lamp': 0.8403626373810258, 'Laptop': 0.9529277762615483, 'Motorbike': 0.70240265745866, 'Mug': 0.9351969093149608, 'Pistol': 0.7981401665563683, 'Rocket': 0.5794709159249636, 'Skateboard': 0.7578971479841548, 'Table': 0.8306863302069064} +================================================== +EPOCH 33 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:57<00:00, 3.27it/s, data_loading=0.006, iteration=0.132, train_Cmiou=84.40, train_Imiou=86.73, train_loss_seg=0.125] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.12it/s, test_Cmiou=81.98, test_Imiou=84.98, test_loss_seg=0.248] +================================================== + test_loss_seg = 0.24841059744358063 + test_Cmiou = 81.98512634355582 + test_Imiou = 84.98163435148734 + test_Imiou_per_class = {'Airplane': 0.8251345377382595, 'Bag': 0.8516226124136296, 'Cap': 0.8492051937242735, 
'Car': 0.7775206492748361, 'Chair': 0.9034314891933367, 'Earphone': 0.7438383457567179, 'Guitar': 0.9089025374879787, 'Knife': 0.8131981952386245, 'Lamp': 0.841220667401948, 'Laptop': 0.9533039138078535, 'Motorbike': 0.7076064236294481, 'Mug': 0.9362931156350316, 'Pistol': 0.812775828544723, 'Rocket': 0.6026695176478108, 'Skateboard': 0.7620161494221411, 'Table': 0.8288810380523186} +================================================== +EPOCH 34 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=84.47, train_Imiou=86.15, train_loss_seg=0.127] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.03, test_Imiou=85.06, test_loss_seg=0.161] +================================================== + test_loss_seg = 0.16102126240730286 + test_Cmiou = 82.03951947792422 + test_Imiou = 85.0657051785214 + test_Imiou_per_class = {'Airplane': 0.8236145529994575, 'Bag': 0.8509105289103001, 'Cap': 0.8393329050219677, 'Car': 0.7792342425081389, 'Chair': 0.9046599317553159, 'Earphone': 0.7580353348751675, 'Guitar': 0.9093695931451502, 'Knife': 0.8436195002295142, 'Lamp': 0.8383758000129177, 'Laptop': 0.9536237393382005, 'Motorbike': 0.7170924418132019, 'Mug': 0.9381170922513219, 'Pistol': 0.7954719917711842, 'Rocket': 0.5841280331700537, 'Skateboard': 0.761325684384739, 'Table': 0.8294117442812442} +================================================== +EPOCH 35 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:48<00:00, 3.35it/s, data_loading=0.006, iteration=0.14 , train_Cmiou=84.94, train_Imiou=86.32, train_loss_seg=0.124] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.06it/s, 
test_Cmiou=81.89, test_Imiou=85.00, test_loss_seg=0.073] +================================================== + test_loss_seg = 0.07321659475564957 + test_Cmiou = 81.89040765596434 + test_Imiou = 85.00873125828522 + test_Imiou_per_class = {'Airplane': 0.8205993187629219, 'Bag': 0.847981002697627, 'Cap': 0.8430485160621347, 'Car': 0.7809964304464027, 'Chair': 0.905068789181107, 'Earphone': 0.7553425649438535, 'Guitar': 0.9038681373645654, 'Knife': 0.837776087102967, 'Lamp': 0.83841102790515, 'Laptop': 0.95330690720915, 'Motorbike': 0.7003005442255339, 'Mug': 0.9358635567510944, 'Pistol': 0.8138375698852134, 'Rocket': 0.5799814705006886, 'Skateboard': 0.755999902446561, 'Table': 0.8300833994693227} +================================================== +loss_seg: 0.10631060600280762 -> 0.07321659475564957 +EPOCH 36 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.130, train_Cmiou=84.58, train_Imiou=86.61, train_loss_seg=0.121] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=81.76, test_Imiou=85.13, test_loss_seg=0.208] +================================================== + test_loss_seg = 0.2082234025001526 + test_Cmiou = 81.76138029467674 + test_Imiou = 85.13142836791715 + test_Imiou_per_class = {'Airplane': 0.8227898965649466, 'Bag': 0.8485881233941104, 'Cap': 0.8325403604855709, 'Car': 0.7772894199831996, 'Chair': 0.9064460840650616, 'Earphone': 0.6921795678619509, 'Guitar': 0.9087188426178205, 'Knife': 0.8341656245243749, 'Lamp': 0.8461556707152667, 'Laptop': 0.9538122625715864, 'Motorbike': 0.7175781506770537, 'Mug': 0.9347886362896223, 'Pistol': 0.8141552620446743, 'Rocket': 0.6061126903164281, 'Skateboard': 0.7570696525352757, 'Table': 0.8294306025013377} +================================================== +EPOCH 37 / 100 
+100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=84.33, train_Imiou=86.43, train_loss_seg=0.122] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=82.16, test_Imiou=85.12, test_loss_seg=0.080] +================================================== + test_loss_seg = 0.08080736547708511 + test_Cmiou = 82.1623866600803 + test_Imiou = 85.12253957328583 + test_Imiou_per_class = {'Airplane': 0.8185618059493257, 'Bag': 0.8400060433575975, 'Cap': 0.8382520810412276, 'Car': 0.7761994326873569, 'Chair': 0.904645168627015, 'Earphone': 0.7589472540949673, 'Guitar': 0.910264271718976, 'Knife': 0.8338491510603545, 'Lamp': 0.8400952546017002, 'Laptop': 0.9538767405943006, 'Motorbike': 0.7143192814337027, 'Mug': 0.9349819298808121, 'Pistol': 0.8163859532132087, 'Rocket': 0.6142860024469409, 'Skateboard': 0.7581233029852329, 'Table': 0.8331881919201294} +================================================== +Cmiou: 82.12534276132405 -> 82.1623866600803 +EPOCH 38 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.134, train_Cmiou=85.39, train_Imiou=86.71, train_loss_seg=0.116] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.13, test_Imiou=85.02, test_loss_seg=0.135] +================================================== + test_loss_seg = 0.13505111634731293 + test_Cmiou = 82.13082672139885 + test_Imiou = 85.02903124118843 + test_Imiou_per_class = {'Airplane': 0.8208631004955532, 'Bag': 0.8493747621437409, 'Cap': 0.8414630944235792, 'Car': 0.7815434659312678, 'Chair': 0.905783049824814, 'Earphone': 0.7624064733380882, 
'Guitar': 0.9112153770905912, 'Knife': 0.820976782061223, 'Lamp': 0.8436368771485049, 'Laptop': 0.9523079675755194, 'Motorbike': 0.6902181283925438, 'Mug': 0.9351922105762368, 'Pistol': 0.8217306194841677, 'Rocket': 0.6058728031965, 'Skateboard': 0.7706279126319983, 'Table': 0.8277196511094875} +================================================== +EPOCH 39 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=84.80, train_Imiou=86.66, train_loss_seg=0.118] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=81.98, test_Imiou=85.08, test_loss_seg=0.071] +================================================== + test_loss_seg = 0.071554996073246 + test_Cmiou = 81.9810687521441 + test_Imiou = 85.08168080592581 + test_Imiou_per_class = {'Airplane': 0.8122543476135131, 'Bag': 0.8458496107550068, 'Cap': 0.8425592403433271, 'Car': 0.7824509593417582, 'Chair': 0.9038929680757598, 'Earphone': 0.7600032886360953, 'Guitar': 0.9118769927844025, 'Knife': 0.832160559278412, 'Lamp': 0.8492425615648793, 'Laptop': 0.9543858040414142, 'Motorbike': 0.7027842914657167, 'Mug': 0.9205308695997773, 'Pistol': 0.8229403703431811, 'Rocket': 0.5871055014720928, 'Skateboard': 0.7571608778143017, 'Table': 0.8317727572134174} +================================================== +loss_seg: 0.07321659475564957 -> 0.071554996073246 +EPOCH 40 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=83.71, train_Imiou=86.35, train_loss_seg=0.120] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=82.54, test_Imiou=85.11, 
test_loss_seg=0.170] +================================================== + test_loss_seg = 0.17091111838817596 + test_Cmiou = 82.54043226503444 + test_Imiou = 85.10998151443786 + test_Imiou_per_class = {'Airplane': 0.8210442250269581, 'Bag': 0.8431229287414572, 'Cap': 0.8550897816924674, 'Car': 0.7839122981287232, 'Chair': 0.9030064449407746, 'Earphone': 0.7634198972899283, 'Guitar': 0.9119751179251371, 'Knife': 0.8346864170770312, 'Lamp': 0.8423750302663572, 'Laptop': 0.9528285292713342, 'Motorbike': 0.7152608438019269, 'Mug': 0.9405351309456697, 'Pistol': 0.8232917627396173, 'Rocket': 0.6178805798653135, 'Skateboard': 0.7688697354775696, 'Table': 0.8291704392152452} +================================================== +Cmiou: 82.1623866600803 -> 82.54043226503444 +EPOCH 41 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:54<00:00, 3.29it/s, data_loading=0.006, iteration=0.131, train_Cmiou=84.32, train_Imiou=86.56, train_loss_seg=0.116] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.18, test_Imiou=85.14, test_loss_seg=5.551] +================================================== + test_loss_seg = 5.551235198974609 + test_Cmiou = 82.1800768594926 + test_Imiou = 85.14962678667084 + test_Imiou_per_class = {'Airplane': 0.823148690231068, 'Bag': 0.8424739515861053, 'Cap': 0.8375201689353602, 'Car': 0.7801370666742162, 'Chair': 0.902408458789382, 'Earphone': 0.7565244488578413, 'Guitar': 0.9115204607936775, 'Knife': 0.842081624608394, 'Lamp': 0.8396902170272479, 'Laptop': 0.9539574358679748, 'Motorbike': 0.703548285270217, 'Mug': 0.9390142999544654, 'Pistol': 0.81199014365022, 'Rocket': 0.5985620721541904, 'Skateboard': 0.7733642194936238, 'Table': 0.8328707536248303} +================================================== +EPOCH 42 / 100 
+100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=84.68, train_Imiou=86.85, train_loss_seg=0.120] +Learning rate = 0.000250 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.06, test_Imiou=84.97, test_loss_seg=0.637] +================================================== + test_loss_seg = 0.637665331363678 + test_Cmiou = 82.06282151388848 + test_Imiou = 84.97374531901059 + test_Imiou_per_class = {'Airplane': 0.8243768481699044, 'Bag': 0.8426317513894028, 'Cap': 0.8636980450124359, 'Car': 0.7816550469699467, 'Chair': 0.9016161209763364, 'Earphone': 0.7535023672581452, 'Guitar': 0.9123025870097206, 'Knife': 0.8377068243742165, 'Lamp': 0.8384515716406048, 'Laptop': 0.9530337914824356, 'Motorbike': 0.6913227073552874, 'Mug': 0.9396203559301288, 'Pistol': 0.8101870649164629, 'Rocket': 0.5931542594960163, 'Skateboard': 0.7580823666393027, 'Table': 0.82870973360181} +================================================== +EPOCH 43 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:55<00:00, 3.29it/s, data_loading=0.006, iteration=0.132, train_Cmiou=84.09, train_Imiou=85.81, train_loss_seg=0.120] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=82.36, test_Imiou=85.25, test_loss_seg=0.126] +================================================== + test_loss_seg = 0.12619845569133759 + test_Cmiou = 82.36413788764033 + test_Imiou = 85.25959281022749 + test_Imiou_per_class = {'Airplane': 0.8276812015018369, 'Bag': 0.8300272190693331, 'Cap': 0.8610504513963018, 'Car': 0.7800997254468185, 'Chair': 0.9057435202873051, 'Earphone': 0.7516309237578718, 'Guitar': 0.9124385161155064, 'Knife': 
0.8353431151908366, 'Lamp': 0.8419442502196651, 'Laptop': 0.9539753011315014, 'Motorbike': 0.7211736977534381, 'Mug': 0.940527641373321, 'Pistol': 0.8104011656627231, 'Rocket': 0.6157980782019976, 'Skateboard': 0.7595143015919501, 'Table': 0.8309129533220453} +================================================== +Imiou: 85.18960342396188 -> 85.25959281022749 +EPOCH 44 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=84.95, train_Imiou=87.51, train_loss_seg=0.116] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.19it/s, test_Cmiou=82.64, test_Imiou=85.27, test_loss_seg=0.162] +================================================== + test_loss_seg = 0.162679985165596 + test_Cmiou = 82.64493595689592 + test_Imiou = 85.27760985360428 + test_Imiou_per_class = {'Airplane': 0.8269475951239537, 'Bag': 0.845311691194821, 'Cap': 0.8650740713031646, 'Car': 0.7812131337421516, 'Chair': 0.9053091115027251, 'Earphone': 0.7672628801644051, 'Guitar': 0.9133320414388001, 'Knife': 0.8353014197111971, 'Lamp': 0.8363567917163747, 'Laptop': 0.9541709851017998, 'Motorbike': 0.7132141679984769, 'Mug': 0.9425096100364724, 'Pistol': 0.8195363737774436, 'Rocket': 0.6276261208426223, 'Skateboard': 0.7570752303250291, 'Table': 0.8329485291239096} +================================================== +Cmiou: 82.54043226503444 -> 82.64493595689592, Imiou: 85.25959281022749 -> 85.27760985360428 +EPOCH 45 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.130, train_Cmiou=84.07, train_Imiou=87.19, train_loss_seg=0.111] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 
10.19it/s, test_Cmiou=82.52, test_Imiou=85.21, test_loss_seg=0.101] +================================================== + test_loss_seg = 0.10165390372276306 + test_Cmiou = 82.52356808557289 + test_Imiou = 85.21555739598313 + test_Imiou_per_class = {'Airplane': 0.8247102351954111, 'Bag': 0.8450456728102148, 'Cap': 0.8588548651932754, 'Car': 0.7836878848811009, 'Chair': 0.9046634886417153, 'Earphone': 0.7627830455036183, 'Guitar': 0.9125108612866619, 'Knife': 0.8365844368526341, 'Lamp': 0.837235966122656, 'Laptop': 0.9543541002596324, 'Motorbike': 0.715438229182684, 'Mug': 0.9414212104485439, 'Pistol': 0.8142055916644754, 'Rocket': 0.6121139841727821, 'Skateboard': 0.7684691391622244, 'Table': 0.8316921823140315} +================================================== +EPOCH 46 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.130, train_Cmiou=86.47, train_Imiou=87.69, train_loss_seg=0.111] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.43, test_Imiou=85.19, test_loss_seg=0.161] +================================================== + test_loss_seg = 0.1614244133234024 + test_Cmiou = 82.43155416219783 + test_Imiou = 85.19369532950064 + test_Imiou_per_class = {'Airplane': 0.8248818555509242, 'Bag': 0.8456148693494543, 'Cap': 0.8655530858329491, 'Car': 0.780494708887844, 'Chair': 0.9056165863139894, 'Earphone': 0.7642468006905551, 'Guitar': 0.9130370862286461, 'Knife': 0.8399035945800808, 'Lamp': 0.8387716945532692, 'Laptop': 0.9536920977289508, 'Motorbike': 0.7102945303401067, 'Mug': 0.9348899825941919, 'Pistol': 0.8136519011574698, 'Rocket': 0.597819033083916, 'Skateboard': 0.7701071408246234, 'Table': 0.8304736982346819} +================================================== +EPOCH 47 / 100 
+100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=85.68, train_Imiou=87.23, train_loss_seg=0.111] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.30, test_Imiou=85.08, test_loss_seg=0.048] +================================================== + test_loss_seg = 0.048263195902109146 + test_Cmiou = 82.30096758900982 + test_Imiou = 85.08604095531621 + test_Imiou_per_class = {'Airplane': 0.8228261853797931, 'Bag': 0.8421310298793402, 'Cap': 0.8639239351432786, 'Car': 0.7783147793641617, 'Chair': 0.9045455409962474, 'Earphone': 0.7541021176646762, 'Guitar': 0.912542816299337, 'Knife': 0.8321926341095403, 'Lamp': 0.8365054947796016, 'Laptop': 0.9548181550653159, 'Motorbike': 0.7041920072520442, 'Mug': 0.9392349194134993, 'Pistol': 0.8046539859110046, 'Rocket': 0.6164765492819693, 'Skateboard': 0.7706727301195627, 'Table': 0.8310219335821974} +================================================== +loss_seg: 0.071554996073246 -> 0.048263195902109146 +EPOCH 48 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=84.67, train_Imiou=86.90, train_loss_seg=0.116] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.53, test_Imiou=85.23, test_loss_seg=0.160] +================================================== + test_loss_seg = 0.16045145690441132 + test_Cmiou = 82.53630596304734 + test_Imiou = 85.230172342764 + test_Imiou_per_class = {'Airplane': 0.8230240145579925, 'Bag': 0.8487441862690731, 'Cap': 0.8657298492492841, 'Car': 0.7824151029527522, 'Chair': 0.904830657469504, 'Earphone': 
0.7652588085060394, 'Guitar': 0.9125412742517963, 'Knife': 0.8419221826044812, 'Lamp': 0.8388643020950115, 'Laptop': 0.9543653356792525, 'Motorbike': 0.7114332050450111, 'Mug': 0.940084214284794, 'Pistol': 0.8214759212466533, 'Rocket': 0.5930244508719169, 'Skateboard': 0.7702537258294723, 'Table': 0.8318417231745405} +================================================== +EPOCH 49 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.131, train_Cmiou=84.92, train_Imiou=87.12, train_loss_seg=0.112] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.53, test_Imiou=85.25, test_loss_seg=0.147] +================================================== + test_loss_seg = 0.1477843075990677 + test_Cmiou = 82.53501161642123 + test_Imiou = 85.25364462845859 + test_Imiou_per_class = {'Airplane': 0.8254727649058997, 'Bag': 0.8435894331769196, 'Cap': 0.8616647026797012, 'Car': 0.7843231569397856, 'Chair': 0.9042148910723499, 'Earphone': 0.7561257331928202, 'Guitar': 0.9136652803680992, 'Knife': 0.840438366257905, 'Lamp': 0.8376190367718969, 'Laptop': 0.9544502565980656, 'Motorbike': 0.7107524038457899, 'Mug': 0.937004390243141, 'Pistol': 0.8156307006776377, 'Rocket': 0.6293074876724837, 'Skateboard': 0.7585072910215722, 'Table': 0.8328359632033289} +================================================== +EPOCH 50 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.17, train_Imiou=87.72, train_loss_seg=0.107] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.26, test_Imiou=84.84, test_loss_seg=0.834] 
+================================================== + test_loss_seg = 0.8342617154121399 + test_Cmiou = 82.26705540959274 + test_Imiou = 84.84432457760656 + test_Imiou_per_class = {'Airplane': 0.8238082800475283, 'Bag': 0.8473452230984623, 'Cap': 0.8623071626418686, 'Car': 0.7771567772822713, 'Chair': 0.9047509290665823, 'Earphone': 0.752602723203809, 'Guitar': 0.9117119870884245, 'Knife': 0.803026988694868, 'Lamp': 0.8392573460922411, 'Laptop': 0.9531578876661705, 'Motorbike': 0.711418103993145, 'Mug': 0.9402374732828037, 'Pistol': 0.820891849742832, 'Rocket': 0.6232567596207611, 'Skateboard': 0.768558653247855, 'Table': 0.8232407207652168} +================================================== +EPOCH 51 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.32, train_Imiou=87.10, train_loss_seg=0.112] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.56, test_Imiou=85.09, test_loss_seg=0.112] +================================================== + test_loss_seg = 0.1127411425113678 + test_Cmiou = 82.569356756156 + test_Imiou = 85.09045484110958 + test_Imiou_per_class = {'Airplane': 0.8207337716063577, 'Bag': 0.8418766827411363, 'Cap': 0.8697097578179335, 'Car': 0.7820315300986355, 'Chair': 0.9055519951380065, 'Earphone': 0.7616132822035361, 'Guitar': 0.9103532403169509, 'Knife': 0.8390426703881626, 'Lamp': 0.8370375275716607, 'Laptop': 0.9541049020106422, 'Motorbike': 0.708174024659518, 'Mug': 0.9377563352368155, 'Pistol': 0.8035750246853219, 'Rocket': 0.6374743717966975, 'Skateboard': 0.7726021681487626, 'Table': 0.8294597965648237} +================================================== +EPOCH 52 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, 
data_loading=0.006, iteration=0.133, train_Cmiou=85.26, train_Imiou=87.30, train_loss_seg=0.113] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.45, test_Imiou=85.12, test_loss_seg=0.234] +================================================== + test_loss_seg = 0.2345266342163086 + test_Cmiou = 82.45846173875232 + test_Imiou = 85.1233579676652 + test_Imiou_per_class = {'Airplane': 0.8261006466035566, 'Bag': 0.8471064582479207, 'Cap': 0.8598101564534038, 'Car': 0.7826311734583656, 'Chair': 0.9029271826739182, 'Earphone': 0.763189599631006, 'Guitar': 0.9125247194519808, 'Knife': 0.8333412033154713, 'Lamp': 0.838101463190862, 'Laptop': 0.9542651880028612, 'Motorbike': 0.7041384473917217, 'Mug': 0.9385538266686348, 'Pistol': 0.8120484564433522, 'Rocket': 0.6174246123008748, 'Skateboard': 0.7708180803677358, 'Table': 0.8303726639987057} +================================================== +EPOCH 53 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:39<00:00, 3.44it/s, data_loading=0.006, iteration=0.132, train_Cmiou=86.95, train_Imiou=87.14, train_loss_seg=0.115] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.58, test_Imiou=85.14, test_loss_seg=0.042] +================================================== + test_loss_seg = 0.04246433079242706 + test_Cmiou = 82.5801561149453 + test_Imiou = 85.14826070324307 + test_Imiou_per_class = {'Airplane': 0.8248227498147556, 'Bag': 0.8475490218514172, 'Cap': 0.872431296445079, 'Car': 0.7798639322941646, 'Chair': 0.90494076865624, 'Earphone': 0.7603444196953207, 'Guitar': 0.912694111347077, 'Knife': 0.8430382313736262, 'Lamp': 0.837746346571032, 'Laptop': 0.9543778300104985, 'Motorbike': 0.7113918914710661, 'Mug': 
0.940782957091326, 'Pistol': 0.8100853394480351, 'Rocket': 0.6091131707635179, 'Skateboard': 0.7744808788244056, 'Table': 0.8291620327336844} +================================================== +loss_seg: 0.048263195902109146 -> 0.04246433079242706 +EPOCH 54 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.72, train_Imiou=87.45, train_loss_seg=0.108] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.58, test_Imiou=84.95, test_loss_seg=0.078] +================================================== + test_loss_seg = 0.07859033346176147 + test_Cmiou = 82.58082787612588 + test_Imiou = 84.95796955564524 + test_Imiou_per_class = {'Airplane': 0.8207852452008159, 'Bag': 0.8376088447491757, 'Cap': 0.8689341267231612, 'Car': 0.7819205275896863, 'Chair': 0.9036590942339177, 'Earphone': 0.7661542666601951, 'Guitar': 0.9131816961549476, 'Knife': 0.8415011179090879, 'Lamp': 0.833790219068121, 'Laptop': 0.9534081472715126, 'Motorbike': 0.71083258303592, 'Mug': 0.9408688670245331, 'Pistol': 0.8199920223180525, 'Rocket': 0.6240355760025299, 'Skateboard': 0.7701849790234713, 'Table': 0.8260751472150155} +================================================== +EPOCH 55 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=85.03, train_Imiou=86.84, train_loss_seg=0.115] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.48, test_Imiou=85.15, test_loss_seg=0.195] +================================================== + test_loss_seg = 0.19502870738506317 + test_Cmiou = 82.48612100145785 + test_Imiou = 
85.15519367004927 + test_Imiou_per_class = {'Airplane': 0.8268755743610414, 'Bag': 0.8445790531281839, 'Cap': 0.8673201949620739, 'Car': 0.7789737321931949, 'Chair': 0.9036520699738859, 'Earphone': 0.7652682248746233, 'Guitar': 0.9123182024639879, 'Knife': 0.8323842872767131, 'Lamp': 0.8406160920509824, 'Laptop': 0.9540243851112498, 'Motorbike': 0.7079483984406795, 'Mug': 0.9380336715092037, 'Pistol': 0.8170989588710204, 'Rocket': 0.62123989790104, 'Skateboard': 0.7570303198851758, 'Table': 0.8304162972302005} +================================================== +EPOCH 56 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:52<00:00, 3.31it/s, data_loading=0.006, iteration=0.145, train_Cmiou=85.10, train_Imiou=86.88, train_loss_seg=0.116] +Learning rate = 0.000125 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 10.00it/s, test_Cmiou=82.23, test_Imiou=85.03, test_loss_seg=0.458] +================================================== + test_loss_seg = 0.45887503027915955 + test_Cmiou = 82.23739340843235 + test_Imiou = 85.03281422288293 + test_Imiou_per_class = {'Airplane': 0.8269324097698488, 'Bag': 0.8500618575400327, 'Cap': 0.8656164854450036, 'Car': 0.7819963152516711, 'Chair': 0.9039874961585884, 'Earphone': 0.7675269567995772, 'Guitar': 0.9113657947248368, 'Knife': 0.828993678613385, 'Lamp': 0.8380181806455695, 'Laptop': 0.9550194208337126, 'Motorbike': 0.7109433048071855, 'Mug': 0.9399147846385588, 'Pistol': 0.8160439346589665, 'Rocket': 0.5758376359780107, 'Skateboard': 0.7587801344637224, 'Table': 0.8269445550205053} +================================================== +EPOCH 57 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.130, train_Cmiou=85.39, train_Imiou=86.80, train_loss_seg=0.109] +Learning rate = 0.000125 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.53, test_Imiou=85.06, test_loss_seg=0.096] +================================================== + test_loss_seg = 0.09691200405359268 + test_Cmiou = 82.53847681247035 + test_Imiou = 85.06523141262146 + test_Imiou_per_class = {'Airplane': 0.825468480777825, 'Bag': 0.8481777706001787, 'Cap': 0.8735962285462272, 'Car': 0.7822874736403007, 'Chair': 0.9038958382993456, 'Earphone': 0.7684152288524068, 'Guitar': 0.9123835490194436, 'Knife': 0.8361537026352213, 'Lamp': 0.8342955319347284, 'Laptop': 0.9544551152923629, 'Motorbike': 0.7046672285706502, 'Mug': 0.9412833505199032, 'Pistol': 0.8225281288319938, 'Rocket': 0.6134448776660448, 'Skateboard': 0.7565641512281883, 'Table': 0.8285396335804363} +================================================== +EPOCH 58 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=85.67, train_Imiou=87.68, train_loss_seg=0.106] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.70, test_Imiou=85.12, test_loss_seg=0.295] +================================================== + test_loss_seg = 0.29586726427078247 + test_Cmiou = 82.70750182461093 + test_Imiou = 85.12714920766857 + test_Imiou_per_class = {'Airplane': 0.8274670066993983, 'Bag': 0.8460735334614952, 'Cap': 0.8730895532296974, 'Car': 0.7848469049366611, 'Chair': 0.9055662007063241, 'Earphone': 0.7727800290944159, 'Guitar': 0.911263115931843, 'Knife': 0.8321923280155004, 'Lamp': 0.8352484454435718, 'Laptop': 0.9548260114744519, 'Motorbike': 0.7155025725930507, 'Mug': 0.9401620424798585, 'Pistol': 0.8055940972484973, 'Rocket': 0.6304405424984665, 'Skateboard': 0.7704526438351091, 
'Table': 0.8276952642894079} +================================================== +Cmiou: 82.64493595689592 -> 82.70750182461093 +EPOCH 59 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:58<00:00, 3.26it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.40, train_Imiou=87.54, train_loss_seg=0.105] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.39, test_Imiou=85.11, test_loss_seg=0.086] +================================================== + test_loss_seg = 0.0867200568318367 + test_Cmiou = 82.39362602158235 + test_Imiou = 85.1178248801256 + test_Imiou_per_class = {'Airplane': 0.8270082768992354, 'Bag': 0.8479755981540691, 'Cap': 0.869413510636576, 'Car': 0.7796827939922274, 'Chair': 0.9038827490105592, 'Earphone': 0.7754018780427813, 'Guitar': 0.9114471648383506, 'Knife': 0.8339723078390439, 'Lamp': 0.8414684285436868, 'Laptop': 0.954670587169825, 'Motorbike': 0.7152441040788752, 'Mug': 0.9406780891462337, 'Pistol': 0.8109414997019214, 'Rocket': 0.5761706208783236, 'Skateboard': 0.7667944123721097, 'Table': 0.8282281421493586} +================================================== +EPOCH 60 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.134, train_Cmiou=85.75, train_Imiou=86.54, train_loss_seg=0.112] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.47, test_Imiou=84.98, test_loss_seg=0.452] +================================================== + test_loss_seg = 0.45247364044189453 + test_Cmiou = 82.47344890206038 + test_Imiou = 84.98537617206765 + test_Imiou_per_class = {'Airplane': 0.8238958254740856, 'Bag': 0.8449547918321721, 'Cap': 
0.8745177961536972, 'Car': 0.7825583953173448, 'Chair': 0.9056150783333855, 'Earphone': 0.772312392942772, 'Guitar': 0.913415073321381, 'Knife': 0.8337690520603876, 'Lamp': 0.8375127992762949, 'Laptop': 0.9548052169042042, 'Motorbike': 0.7110981130904338, 'Mug': 0.9425138411400326, 'Pistol': 0.821470522119992, 'Rocket': 0.5941530679936134, 'Skateboard': 0.759504390562551, 'Table': 0.8236554678073131} +================================================== +EPOCH 61 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.75, train_Imiou=87.32, train_loss_seg=0.109] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.78, test_Imiou=85.11, test_loss_seg=0.133] +================================================== + test_loss_seg = 0.13344667851924896 + test_Cmiou = 82.78002739702927 + test_Imiou = 85.1133523225192 + test_Imiou_per_class = {'Airplane': 0.8232188299718632, 'Bag': 0.8436591729871242, 'Cap': 0.8671774859245496, 'Car': 0.7823451702693675, 'Chair': 0.9051276731324536, 'Earphone': 0.7727810338300648, 'Guitar': 0.9131914973015712, 'Knife': 0.8384633199709256, 'Lamp': 0.8333939551690774, 'Laptop': 0.9543396165115755, 'Motorbike': 0.7154859680751051, 'Mug': 0.9407750152270188, 'Pistol': 0.8168719519525588, 'Rocket': 0.6309439887947459, 'Skateboard': 0.7783340273633256, 'Table': 0.8286956770433583} +================================================== +Cmiou: 82.70750182461093 -> 82.78002739702927 +EPOCH 62 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:55<00:00, 3.29it/s, data_loading=0.006, iteration=0.131, train_Cmiou=85.05, train_Imiou=87.05, train_loss_seg=0.112] +Learning rate = 0.000063 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=82.64, test_Imiou=85.12, test_loss_seg=0.240] +================================================== + test_loss_seg = 0.24039560556411743 + test_Cmiou = 82.64398327415414 + test_Imiou = 85.12059281873935 + test_Imiou_per_class = {'Airplane': 0.8248124676499672, 'Bag': 0.8466828678248076, 'Cap': 0.8643899222591386, 'Car': 0.7825616399672509, 'Chair': 0.9044200301612755, 'Earphone': 0.7603397677132678, 'Guitar': 0.9123895262853672, 'Knife': 0.839614141519316, 'Lamp': 0.8364191498254285, 'Laptop': 0.9543224352603756, 'Motorbike': 0.7179690709416061, 'Mug': 0.9407985850092166, 'Pistol': 0.8217414112788951, 'Rocket': 0.6181121720353259, 'Skateboard': 0.770330831917541, 'Table': 0.8281333042158839} +================================================== +EPOCH 63 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=86.95, train_Imiou=87.39, train_loss_seg=0.108] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.58, test_Imiou=85.01, test_loss_seg=0.089] +================================================== + test_loss_seg = 0.08992281556129456 + test_Cmiou = 82.58960137298311 + test_Imiou = 85.01671727136589 + test_Imiou_per_class = {'Airplane': 0.828818127818797, 'Bag': 0.8522062299484382, 'Cap': 0.8655506294936072, 'Car': 0.7810083406183701, 'Chair': 0.9023940921901428, 'Earphone': 0.7647884777042727, 'Guitar': 0.9135229106911235, 'Knife': 0.8326229739951799, 'Lamp': 0.8386818417993923, 'Laptop': 0.9554001033834169, 'Motorbike': 0.7190919956814515, 'Mug': 0.9405200275924585, 'Pistol': 0.8128589269039, 'Rocket': 0.6057154624199874, 'Skateboard': 0.7764255218319339, 'Table': 
0.824730557604827} +================================================== +EPOCH 64 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.48it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.96, train_Imiou=87.54, train_loss_seg=0.107] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.69, test_Imiou=85.10, test_loss_seg=0.070] +================================================== + test_loss_seg = 0.07091207057237625 + test_Cmiou = 82.69729293641873 + test_Imiou = 85.1052857402083 + test_Imiou_per_class = {'Airplane': 0.8261941298091113, 'Bag': 0.8463004737014447, 'Cap': 0.8628275245667184, 'Car': 0.7831585129893975, 'Chair': 0.9047103538044753, 'Earphone': 0.7664798496285581, 'Guitar': 0.9120544654748601, 'Knife': 0.8365901878710353, 'Lamp': 0.8350976619797179, 'Laptop': 0.9551118455342194, 'Motorbike': 0.7157348493311629, 'Mug': 0.9396947198775045, 'Pistol': 0.8175930512738688, 'Rocket': 0.629557973584186, 'Skateboard': 0.7729722900547066, 'Table': 0.827488980346031} +================================================== +EPOCH 65 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=84.86, train_Imiou=87.05, train_loss_seg=0.107] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.63, test_Imiou=84.96, test_loss_seg=0.090] +================================================== + test_loss_seg = 0.09053408354520798 + test_Cmiou = 82.63967126568727 + test_Imiou = 84.96929271246388 + test_Imiou_per_class = {'Airplane': 0.8241217161381905, 'Bag': 0.85064469148448, 'Cap': 0.8674653842199628, 'Car': 0.7816437445112694, 'Chair': 
0.9029020491201142, 'Earphone': 0.774994567528407, 'Guitar': 0.9127525976848138, 'Knife': 0.831074125861463, 'Lamp': 0.8399659210204411, 'Laptop': 0.9544977515135628, 'Motorbike': 0.7087279385636823, 'Mug': 0.9418834384796249, 'Pistol': 0.813307804611479, 'Rocket': 0.6219955894327281, 'Skateboard': 0.7716365133785822, 'Table': 0.8247335689611632} +================================================== +EPOCH 66 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.130, train_Cmiou=84.62, train_Imiou=87.32, train_loss_seg=0.106] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.19it/s, test_Cmiou=82.56, test_Imiou=85.14, test_loss_seg=0.276] +================================================== + test_loss_seg = 0.276111900806427 + test_Cmiou = 82.5678743323288 + test_Imiou = 85.14280202763318 + test_Imiou_per_class = {'Airplane': 0.828133226669318, 'Bag': 0.8458053721277341, 'Cap': 0.8690677373148499, 'Car': 0.7820734662427504, 'Chair': 0.9048302063390442, 'Earphone': 0.764980929807357, 'Guitar': 0.9130096718178214, 'Knife': 0.8426661038336627, 'Lamp': 0.8363448154485463, 'Laptop': 0.9539399776891817, 'Motorbike': 0.7049017363665693, 'Mug': 0.9396189461726007, 'Pistol': 0.8151291246861827, 'Rocket': 0.6236678783637557, 'Skateboard': 0.7583115130518262, 'Table': 0.8283791872414072} +================================================== +EPOCH 67 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=85.66, train_Imiou=87.05, train_loss_seg=0.110] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=82.63, test_Imiou=84.94, 
test_loss_seg=0.158] +================================================== + test_loss_seg = 0.15830697119235992 + test_Cmiou = 82.63261634079821 + test_Imiou = 84.94210537633981 + test_Imiou_per_class = {'Airplane': 0.8250653180787804, 'Bag': 0.8521547864949437, 'Cap': 0.8705755334475295, 'Car': 0.7797041730473515, 'Chair': 0.9030963471711911, 'Earphone': 0.7711205016294367, 'Guitar': 0.9116596574334028, 'Knife': 0.8330237876026695, 'Lamp': 0.8358104488558995, 'Laptop': 0.955219233360935, 'Motorbike': 0.7155950011304446, 'Mug': 0.9398340842427558, 'Pistol': 0.8150751748902887, 'Rocket': 0.6193059884095159, 'Skateboard': 0.769284045853823, 'Table': 0.8246945328787448} +================================================== +EPOCH 68 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=84.17, train_Imiou=88.19, train_loss_seg=0.107] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.20it/s, test_Cmiou=82.67, test_Imiou=84.98, test_loss_seg=0.179] +================================================== + test_loss_seg = 0.17899662256240845 + test_Cmiou = 82.67545460045656 + test_Imiou = 84.98757136110535 + test_Imiou_per_class = {'Airplane': 0.8209005466254431, 'Bag': 0.8469564979293546, 'Cap': 0.8704453994275357, 'Car': 0.7829942460145092, 'Chair': 0.9041351512834257, 'Earphone': 0.7675422631117559, 'Guitar': 0.9130913095765736, 'Knife': 0.8376908038602362, 'Lamp': 0.8345337017469099, 'Laptop': 0.9553269234446007, 'Motorbike': 0.7214532094088036, 'Mug': 0.93920709951812, 'Pistol': 0.8180871455495136, 'Rocket': 0.616959829644847, 'Skateboard': 0.7730405348537609, 'Table': 0.8257080740776606} +================================================== +EPOCH 69 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 
3.45it/s, data_loading=0.006, iteration=0.138, train_Cmiou=86.14, train_Imiou=88.01, train_loss_seg=0.103] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.09it/s, test_Cmiou=82.49, test_Imiou=84.98, test_loss_seg=0.358] +================================================== + test_loss_seg = 0.35848501324653625 + test_Cmiou = 82.4991653796212 + test_Imiou = 84.9878925307918 + test_Imiou_per_class = {'Airplane': 0.8231311376963037, 'Bag': 0.8457461837643366, 'Cap': 0.8726625745874382, 'Car': 0.7839492294307807, 'Chair': 0.9028706370233592, 'Earphone': 0.7698377515781974, 'Guitar': 0.9126910145823824, 'Knife': 0.8277418463231072, 'Lamp': 0.8357454884481328, 'Laptop': 0.9549737737977909, 'Motorbike': 0.716034151954668, 'Mug': 0.9399359890681566, 'Pistol': 0.8060125039320719, 'Rocket': 0.610697925611496, 'Skateboard': 0.7704463718528989, 'Table': 0.8273898810882713} +================================================== +EPOCH 70 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.007, iteration=0.132, train_Cmiou=84.48, train_Imiou=87.26, train_loss_seg=0.110] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.70, test_Imiou=85.18, test_loss_seg=0.126] +================================================== + test_loss_seg = 0.12671466171741486 + test_Cmiou = 82.70833060691028 + test_Imiou = 85.18233270590699 + test_Imiou_per_class = {'Airplane': 0.8263184206150198, 'Bag': 0.8490273863672915, 'Cap': 0.8652524481079822, 'Car': 0.781245390927671, 'Chair': 0.9041682343463474, 'Earphone': 0.7774749802976116, 'Guitar': 0.9126731892589747, 'Knife': 0.8377477734974054, 'Lamp': 0.8346492937354345, 'Laptop': 0.9551088170009182, 'Motorbike': 
0.7268443481410057, 'Mug': 0.9407298728118206, 'Pistol': 0.8036525923996244, 'Rocket': 0.615902665053166, 'Skateboard': 0.7717663120256666, 'Table': 0.8307711725197057} +================================================== +EPOCH 71 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.15, train_Imiou=87.56, train_loss_seg=0.105] +Learning rate = 0.000063 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.19it/s, test_Cmiou=82.60, test_Imiou=85.17, test_loss_seg=1.663] +================================================== + test_loss_seg = 1.6638153791427612 + test_Cmiou = 82.60024096801494 + test_Imiou = 85.17808315805743 + test_Imiou_per_class = {'Airplane': 0.8241124222076709, 'Bag': 0.8490973652181723, 'Cap': 0.8732909883217291, 'Car': 0.782081023192087, 'Chair': 0.9050277666555573, 'Earphone': 0.772414365505756, 'Guitar': 0.912854069473607, 'Knife': 0.8320775974500265, 'Lamp': 0.8351527321930338, 'Laptop': 0.9550676191411759, 'Motorbike': 0.7152519445170048, 'Mug': 0.9403672229311593, 'Pistol': 0.8081925965217732, 'Rocket': 0.6090535917578977, 'Skateboard': 0.770414722101105, 'Table': 0.831582527694634} +================================================== +EPOCH 72 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.20, train_Imiou=87.67, train_loss_seg=0.106] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.72, test_Imiou=85.02, test_loss_seg=0.163] +================================================== + test_loss_seg = 0.16368243098258972 + test_Cmiou = 82.72383851950104 + test_Imiou = 85.0204432049671 + 
test_Imiou_per_class = {'Airplane': 0.8258792886510886, 'Bag': 0.8493383830312313, 'Cap': 0.868710334531454, 'Car': 0.7840353365051917, 'Chair': 0.9044417167030196, 'Earphone': 0.7641429226151298, 'Guitar': 0.912853008052252, 'Knife': 0.8404788920949475, 'Lamp': 0.8355219883500247, 'Laptop': 0.9554519810035792, 'Motorbike': 0.7174699932901594, 'Mug': 0.9407924600513572, 'Pistol': 0.8140247874190135, 'Rocket': 0.6283779250317078, 'Skateboard': 0.7701237927303713, 'Table': 0.8241713530596411} +================================================== +EPOCH 73 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.132, train_Cmiou=86.67, train_Imiou=87.55, train_loss_seg=0.103] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.72, test_Imiou=85.13, test_loss_seg=0.090] +================================================== + test_loss_seg = 0.09034768491983414 + test_Cmiou = 82.72526925695563 + test_Imiou = 85.13573732544242 + test_Imiou_per_class = {'Airplane': 0.8257875045564645, 'Bag': 0.8460116955700936, 'Cap': 0.8732213521653659, 'Car': 0.7830536249759682, 'Chair': 0.9050482318758342, 'Earphone': 0.7681420716574051, 'Guitar': 0.9132965474225593, 'Knife': 0.8338423704511992, 'Lamp': 0.8374221953657265, 'Laptop': 0.9552910163958849, 'Motorbike': 0.7180077864604186, 'Mug': 0.940121677312979, 'Pistol': 0.812183682868475, 'Rocket': 0.6154477785850322, 'Skateboard': 0.7816680117648109, 'Table': 0.8274975336846819} +================================================== +EPOCH 74 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.48it/s, data_loading=0.006, iteration=0.132, train_Cmiou=85.34, train_Imiou=87.35, train_loss_seg=0.103] +Learning rate = 0.000031 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.57, test_Imiou=85.16, test_loss_seg=0.124] +================================================== + test_loss_seg = 0.12403357028961182 + test_Cmiou = 82.57580855514742 + test_Imiou = 85.16994279658276 + test_Imiou_per_class = {'Airplane': 0.8252667034679838, 'Bag': 0.847839383876302, 'Cap': 0.8687830368774688, 'Car': 0.7832829052581407, 'Chair': 0.9042756921837025, 'Earphone': 0.7710927705351436, 'Guitar': 0.9124433980438913, 'Knife': 0.8355102158184184, 'Lamp': 0.8381103078734048, 'Laptop': 0.9544851122438799, 'Motorbike': 0.711188354938016, 'Mug': 0.9398597532885408, 'Pistol': 0.8133423182219365, 'Rocket': 0.6048605111609623, 'Skateboard': 0.7716169229616415, 'Table': 0.8301719820741528} +================================================== +EPOCH 75 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:55<00:00, 3.29it/s, data_loading=0.006, iteration=0.131, train_Cmiou=87.17, train_Imiou=87.01, train_loss_seg=0.107] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.52, test_Imiou=85.10, test_loss_seg=0.412] +================================================== + test_loss_seg = 0.41287198662757874 + test_Cmiou = 82.52336383307744 + test_Imiou = 85.10262960770295 + test_Imiou_per_class = {'Airplane': 0.8287164012510657, 'Bag': 0.8467686680115805, 'Cap': 0.8712794189215977, 'Car': 0.7831488577597485, 'Chair': 0.9052294618666162, 'Earphone': 0.7731129903838011, 'Guitar': 0.9131495014105261, 'Knife': 0.8349543097997472, 'Lamp': 0.839437635496129, 'Laptop': 0.9550648843431436, 'Motorbike': 0.7145588342620811, 'Mug': 0.939592038630022, 'Pistol': 0.8146080173377285, 'Rocket': 0.5909689201301395, 'Skateboard': 0.7679705923809236, 'Table': 
0.8251776813075399} +================================================== +EPOCH 76 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.21, train_Imiou=87.17, train_loss_seg=0.108] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.56, test_Imiou=84.99, test_loss_seg=0.088] +================================================== + test_loss_seg = 0.08817765861749649 + test_Cmiou = 82.56179279171461 + test_Imiou = 84.99887431020127 + test_Imiou_per_class = {'Airplane': 0.8220708487277121, 'Bag': 0.8486916860181386, 'Cap': 0.8687583724926712, 'Car': 0.783244907608814, 'Chair': 0.9047427229366805, 'Earphone': 0.7699568390710086, 'Guitar': 0.9121072208180487, 'Knife': 0.8341852656290637, 'Lamp': 0.833652071403557, 'Laptop': 0.9552798589077913, 'Motorbike': 0.7195323321216778, 'Mug': 0.9409562175857256, 'Pistol': 0.8107878432081546, 'Rocket': 0.6082308596501873, 'Skateboard': 0.771244757715645, 'Table': 0.8264450427794615} +================================================== +EPOCH 77 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:35<00:00, 3.48it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.62, train_Imiou=87.67, train_loss_seg=0.108] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.72, test_Imiou=85.14, test_loss_seg=0.908] +================================================== + test_loss_seg = 0.9085749983787537 + test_Cmiou = 82.7271387627066 + test_Imiou = 85.14121259286185 + test_Imiou_per_class = {'Airplane': 0.8256999293025822, 'Bag': 0.8474703605285809, 'Cap': 0.8723767428918049, 'Car': 0.7836772336029252, 'Chair': 
0.9051650620599786, 'Earphone': 0.7774685538063215, 'Guitar': 0.913594396072443, 'Knife': 0.8384728500662362, 'Lamp': 0.8359200114707925, 'Laptop': 0.9545290088557815, 'Motorbike': 0.7162074589577506, 'Mug': 0.9413654920934128, 'Pistol': 0.8112545634045089, 'Rocket': 0.6132051184411768, 'Skateboard': 0.7720238156980253, 'Table': 0.8279116047807352} +================================================== +EPOCH 78 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.48it/s, data_loading=0.006, iteration=0.132, train_Cmiou=86.52, train_Imiou=87.64, train_loss_seg=0.103] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.78, test_Imiou=85.13, test_loss_seg=0.046] +================================================== + test_loss_seg = 0.0462506003677845 + test_Cmiou = 82.78019852222316 + test_Imiou = 85.13713406330208 + test_Imiou_per_class = {'Airplane': 0.8259864639460615, 'Bag': 0.8512609850412537, 'Cap': 0.8723287752096841, 'Car': 0.7835308261605867, 'Chair': 0.9059411321095912, 'Earphone': 0.7675746489454057, 'Guitar': 0.9135948213547299, 'Knife': 0.8333777650442766, 'Lamp': 0.8370519202925176, 'Laptop': 0.9552833106161185, 'Motorbike': 0.7192223854777942, 'Mug': 0.9420391056305477, 'Pistol': 0.81629936372763, 'Rocket': 0.6233936527399382, 'Skateboard': 0.7713727079780051, 'Table': 0.8265738992815643} +================================================== +Cmiou: 82.78002739702927 -> 82.78019852222316 +EPOCH 79 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=85.44, train_Imiou=86.80, train_loss_seg=0.110] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:26<00:00, 
8.98it/s, test_Cmiou=82.65, test_Imiou=85.14, test_loss_seg=0.217] +================================================== + test_loss_seg = 0.21724724769592285 + test_Cmiou = 82.65479208310397 + test_Imiou = 85.14457932293135 + test_Imiou_per_class = {'Airplane': 0.8260990774161929, 'Bag': 0.8468347540605298, 'Cap': 0.8621814377892965, 'Car': 0.7845749340734818, 'Chair': 0.9056449057811226, 'Earphone': 0.7627165600542226, 'Guitar': 0.913182951975813, 'Knife': 0.8356457445076971, 'Lamp': 0.838147006658288, 'Laptop': 0.9544054122414832, 'Motorbike': 0.7171766493769086, 'Mug': 0.9420443091124545, 'Pistol': 0.8145235380019588, 'Rocket': 0.6236182886615947, 'Skateboard': 0.7710505261822138, 'Table': 0.8269206374033763} +================================================== +EPOCH 80 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:49<00:00, 3.35it/s, data_loading=0.006, iteration=0.132, train_Cmiou=87.04, train_Imiou=87.31, train_loss_seg=0.103] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.64, test_Imiou=85.07, test_loss_seg=0.194] +================================================== + test_loss_seg = 0.1941176801919937 + test_Cmiou = 82.6434829249035 + test_Imiou = 85.0749505288652 + test_Imiou_per_class = {'Airplane': 0.8251147093200122, 'Bag': 0.8466403861141896, 'Cap': 0.867663034325489, 'Car': 0.7831013565383016, 'Chair': 0.9048333971262249, 'Earphone': 0.7701486257707748, 'Guitar': 0.9126537723874131, 'Knife': 0.8333010493508144, 'Lamp': 0.8363955719572312, 'Laptop': 0.9553172733269962, 'Motorbike': 0.7149809538003369, 'Mug': 0.9404382788797135, 'Pistol': 0.8151894607815944, 'Rocket': 0.6192024253995644, 'Skateboard': 0.7712148418612381, 'Table': 0.8267621310446646} +================================================== +EPOCH 81 / 100 
+100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:56<00:00, 3.28it/s, data_loading=0.006, iteration=0.134, train_Cmiou=85.85, train_Imiou=87.95, train_loss_seg=0.103] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.15it/s, test_Cmiou=82.70, test_Imiou=85.07, test_loss_seg=0.135] +================================================== + test_loss_seg = 0.13560687005519867 + test_Cmiou = 82.70631954116405 + test_Imiou = 85.07518851532492 + test_Imiou_per_class = {'Airplane': 0.8255980295440515, 'Bag': 0.8451628656820878, 'Cap': 0.874831088931181, 'Car': 0.782979545984898, 'Chair': 0.9046337586285058, 'Earphone': 0.7643845845158133, 'Guitar': 0.9132812591719675, 'Knife': 0.8302286589838838, 'Lamp': 0.8404393151188198, 'Laptop': 0.9553140796028733, 'Motorbike': 0.7197120705772343, 'Mug': 0.9410970885983257, 'Pistol': 0.8101057816539184, 'Rocket': 0.6301260049166929, 'Skateboard': 0.7696656784901414, 'Table': 0.8254513161858537} +================================================== +EPOCH 82 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.96, train_Imiou=87.65, train_loss_seg=0.102] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.59, test_Imiou=85.07, test_loss_seg=0.100] +================================================== + test_loss_seg = 0.10063064843416214 + test_Cmiou = 82.59410726870632 + test_Imiou = 85.07709915417814 + test_Imiou_per_class = {'Airplane': 0.8250235432306899, 'Bag': 0.8482030008379499, 'Cap': 0.8732145260561847, 'Car': 0.7846307023988695, 'Chair': 0.9038166518609819, 'Earphone': 0.7690891991760245, 'Guitar': 0.9140074559066009, 'Knife': 
0.8275618871767462, 'Lamp': 0.8363599673911473, 'Laptop': 0.9549843455739296, 'Motorbike': 0.7139710198379611, 'Mug': 0.9408902553086774, 'Pistol': 0.8164012450692578, 'Rocket': 0.6078830718499711, 'Skateboard': 0.7711992916067782, 'Table': 0.8278209997112402} +================================================== +EPOCH 83 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.18, train_Imiou=86.70, train_loss_seg=0.108] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.75, test_Imiou=85.09, test_loss_seg=0.559] +================================================== + test_loss_seg = 0.5592495203018188 + test_Cmiou = 82.75552201538142 + test_Imiou = 85.09916871042347 + test_Imiou_per_class = {'Airplane': 0.8265845138296957, 'Bag': 0.8475581760989053, 'Cap': 0.8748224236446671, 'Car': 0.7833417227984101, 'Chair': 0.9032348216712496, 'Earphone': 0.7695328957440143, 'Guitar': 0.9116334591582769, 'Knife': 0.8307682824880308, 'Lamp': 0.8354658054765667, 'Laptop': 0.9551576514685347, 'Motorbike': 0.7201015201380491, 'Mug': 0.9398638895191199, 'Pistol': 0.8134674385308552, 'Rocket': 0.624370028022824, 'Skateboard': 0.776504452643892, 'Table': 0.8284764412279348} +================================================== +EPOCH 84 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.95, train_Imiou=87.64, train_loss_seg=0.107] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.60, test_Imiou=85.08, test_loss_seg=0.953] +================================================== + test_loss_seg = 
0.9530434012413025 + test_Cmiou = 82.6044515077152 + test_Imiou = 85.08433382551357 + test_Imiou_per_class = {'Airplane': 0.8233566902317797, 'Bag': 0.8465981922879398, 'Cap': 0.8723362586270685, 'Car': 0.7825804829356513, 'Chair': 0.9049623166584098, 'Earphone': 0.7727117107979308, 'Guitar': 0.9137654429498838, 'Knife': 0.8342059138729138, 'Lamp': 0.8331491049403285, 'Laptop': 0.9542567260829449, 'Motorbike': 0.7093288875402968, 'Mug': 0.9409862906910007, 'Pistol': 0.8149175333291246, 'Rocket': 0.6195013678349283, 'Skateboard': 0.76492017918288, 'Table': 0.8291351432713491} +================================================== +EPOCH 85 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.48it/s, data_loading=0.006, iteration=0.131, train_Cmiou=85.88, train_Imiou=87.72, train_loss_seg=0.106] +Learning rate = 0.000031 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.66, test_Imiou=85.02, test_loss_seg=0.126] +================================================== + test_loss_seg = 0.12666727602481842 + test_Cmiou = 82.66305424772878 + test_Imiou = 85.02519514157181 + test_Imiou_per_class = {'Airplane': 0.8247770707477936, 'Bag': 0.8485571502158175, 'Cap': 0.8706191846451543, 'Car': 0.7832073724340496, 'Chair': 0.9046357733520374, 'Earphone': 0.7695734524836119, 'Guitar': 0.9140004057360962, 'Knife': 0.8371331656389656, 'Lamp': 0.83744677510884, 'Laptop': 0.9546432375273709, 'Motorbike': 0.714202570386211, 'Mug': 0.9408070745567411, 'Pistol': 0.8134355545535132, 'Rocket': 0.620572256721676, 'Skateboard': 0.7678602350071115, 'Table': 0.8246174005216138} +================================================== +EPOCH 86 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.47, train_Imiou=87.45, 
train_loss_seg=0.105] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.64, test_Imiou=85.00, test_loss_seg=0.286] +================================================== + test_loss_seg = 0.28665098547935486 + test_Cmiou = 82.64772638593001 + test_Imiou = 85.00768100074848 + test_Imiou_per_class = {'Airplane': 0.8196097329610204, 'Bag': 0.8473284995842806, 'Cap': 0.8667655552534558, 'Car': 0.7850182151572064, 'Chair': 0.9037618839883602, 'Earphone': 0.7673156494907348, 'Guitar': 0.9134737226936066, 'Knife': 0.8417614484034726, 'Lamp': 0.8383819461301835, 'Laptop': 0.9543891811035125, 'Motorbike': 0.714047262889869, 'Mug': 0.9409423703210152, 'Pistol': 0.8153958123490301, 'Rocket': 0.6187016752725942, 'Skateboard': 0.770959575180987, 'Table': 0.825783690969474} +================================================== +EPOCH 87 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.132, train_Cmiou=86.89, train_Imiou=87.35, train_loss_seg=0.102] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.80, test_Imiou=85.10, test_loss_seg=0.309] +================================================== + test_loss_seg = 0.3096754252910614 + test_Cmiou = 82.80514848345668 + test_Imiou = 85.10895649398199 + test_Imiou_per_class = {'Airplane': 0.8232152489010717, 'Bag': 0.8499359258976871, 'Cap': 0.8723617042772638, 'Car': 0.78544069628007, 'Chair': 0.904022752878463, 'Earphone': 0.7690024460982912, 'Guitar': 0.9135780407586203, 'Knife': 0.8373616144188236, 'Lamp': 0.8350432206678389, 'Laptop': 0.9550457716810986, 'Motorbike': 0.7176093155186886, 'Mug': 0.941836076559161, 'Pistol': 0.8127038222467481, 'Rocket': 0.622146863271899, 
'Skateboard': 0.7812760186809471, 'Table': 0.8282442392163964} +================================================== +Cmiou: 82.78019852222316 -> 82.80514848345668 +EPOCH 88 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=85.31, train_Imiou=87.80, train_loss_seg=0.105] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.69, test_Imiou=85.15, test_loss_seg=0.167] +================================================== + test_loss_seg = 0.1677943468093872 + test_Cmiou = 82.6975254038229 + test_Imiou = 85.15494592307996 + test_Imiou_per_class = {'Airplane': 0.8246374888720013, 'Bag': 0.8487443352496399, 'Cap': 0.8691635491270049, 'Car': 0.7847932333887747, 'Chair': 0.9050007639230039, 'Earphone': 0.7697014837164353, 'Guitar': 0.9137660396457301, 'Knife': 0.8359591578765606, 'Lamp': 0.8379627288577824, 'Laptop': 0.9551952476208807, 'Motorbike': 0.7139543456044924, 'Mug': 0.9420014140657762, 'Pistol': 0.8168877068970001, 'Rocket': 0.6163595910657583, 'Skateboard': 0.7692739332848387, 'Table': 0.8282030454159817} +================================================== +EPOCH 89 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:53<00:00, 3.31it/s, data_loading=0.006, iteration=0.145, train_Cmiou=85.87, train_Imiou=87.61, train_loss_seg=0.102] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 10.00it/s, test_Cmiou=82.59, test_Imiou=85.03, test_loss_seg=0.230] +================================================== + test_loss_seg = 0.230404332280159 + test_Cmiou = 82.59378879617375 + test_Imiou = 85.03064031698713 + test_Imiou_per_class = {'Airplane': 0.8227500039459246, 'Bag': 
0.8505853689203363, 'Cap': 0.869011966442876, 'Car': 0.7811712932360583, 'Chair': 0.9040094306677381, 'Earphone': 0.7634695201172885, 'Guitar': 0.9133028029667614, 'Knife': 0.8316931300849539, 'Lamp': 0.8356670754453634, 'Laptop': 0.9549438981797294, 'Motorbike': 0.7180662581433988, 'Mug': 0.941755694933712, 'Pistol': 0.8158591847987936, 'Rocket': 0.6161087554816344, 'Skateboard': 0.7691780991191667, 'Table': 0.8274337249040669} +================================================== +EPOCH 90 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:37<00:00, 3.46it/s, data_loading=0.006, iteration=0.132, train_Cmiou=84.68, train_Imiou=86.81, train_loss_seg=0.106] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.74, test_Imiou=85.07, test_loss_seg=3.705] +================================================== + test_loss_seg = 3.7053720951080322 + test_Cmiou = 82.74649594702979 + test_Imiou = 85.07728356236642 + test_Imiou_per_class = {'Airplane': 0.823434592581187, 'Bag': 0.8504392556823656, 'Cap': 0.8719287559068337, 'Car': 0.7839727349780081, 'Chair': 0.9035317816879569, 'Earphone': 0.7716085601854221, 'Guitar': 0.9132468265570481, 'Knife': 0.8359828709239483, 'Lamp': 0.8349761939368867, 'Laptop': 0.9552694906595685, 'Motorbike': 0.7187598596673102, 'Mug': 0.9419296130807101, 'Pistol': 0.8160707470630729, 'Rocket': 0.6200077356511854, 'Skateboard': 0.7701817125317745, 'Table': 0.8280986204314862} +================================================== +EPOCH 91 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:38<00:00, 3.45it/s, data_loading=0.006, iteration=0.130, train_Cmiou=82.98, train_Imiou=86.90, train_loss_seg=0.106] +Learning rate = 0.000016 
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.73, test_Imiou=85.12, test_loss_seg=0.295] +================================================== + test_loss_seg = 0.2952561378479004 + test_Cmiou = 82.73552012603481 + test_Imiou = 85.12324729717999 + test_Imiou_per_class = {'Airplane': 0.8262798972401254, 'Bag': 0.8474915093924315, 'Cap': 0.8709346074709338, 'Car': 0.7840882794361886, 'Chair': 0.9038439472812617, 'Earphone': 0.7702419387372784, 'Guitar': 0.9143639570634343, 'Knife': 0.8340276752689852, 'Lamp': 0.8379844136034185, 'Laptop': 0.9553430670716307, 'Motorbike': 0.7189620357680638, 'Mug': 0.9410787458394531, 'Pistol': 0.8151031568563806, 'Rocket': 0.6204011870138155, 'Skateboard': 0.7701999598463841, 'Table': 0.8273388422757841} +================================================== +EPOCH 92 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.45, train_Imiou=87.48, train_loss_seg=0.104] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.16it/s, test_Cmiou=82.61, test_Imiou=85.08, test_loss_seg=0.118] +================================================== + test_loss_seg = 0.11881987005472183 + test_Cmiou = 82.6139196134149 + test_Imiou = 85.08869393518103 + test_Imiou_per_class = {'Airplane': 0.8236434957437907, 'Bag': 0.8462932620362235, 'Cap': 0.8745579876629315, 'Car': 0.7833747535106542, 'Chair': 0.9037436433480951, 'Earphone': 0.7659333264710905, 'Guitar': 0.9150121977285747, 'Knife': 0.8309594880855815, 'Lamp': 0.8364023801909087, 'Laptop': 0.9544451142170134, 'Motorbike': 0.7147097392623579, 'Mug': 0.9416451722662413, 'Pistol': 0.8115406774799779, 'Rocket': 0.6202077103982866, 'Skateboard': 0.7669437926717642, 
'Table': 0.8288143970728898} +================================================== +EPOCH 93 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:45<00:00, 3.38it/s, data_loading=0.006, iteration=0.133, train_Cmiou=85.80, train_Imiou=88.12, train_loss_seg=0.100] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 9.98it/s, test_Cmiou=82.67, test_Imiou=85.05, test_loss_seg=0.211] +================================================== + test_loss_seg = 0.21106493473052979 + test_Cmiou = 82.67050378943969 + test_Imiou = 85.05382458412521 + test_Imiou_per_class = {'Airplane': 0.8264957474664263, 'Bag': 0.8477764018503745, 'Cap': 0.8758259832481847, 'Car': 0.7828318036098725, 'Chair': 0.9047980688050952, 'Earphone': 0.7587258564757986, 'Guitar': 0.9136860985165969, 'Knife': 0.834687817336323, 'Lamp': 0.8351338755724254, 'Laptop': 0.955406425010649, 'Motorbike': 0.7155808170386472, 'Mug': 0.9403337601087801, 'Pistol': 0.8139277911796223, 'Rocket': 0.6271001985724561, 'Skateboard': 0.7692490918438831, 'Table': 0.8257208696752163} +================================================== +EPOCH 94 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:56<00:00, 3.28it/s, data_loading=0.005, iteration=0.368, train_Cmiou=87.30, train_Imiou=87.51, train_loss_seg=0.103] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:28<00:00, 8.45it/s, test_Cmiou=82.63, test_Imiou=84.99, test_loss_seg=0.179] +================================================== + test_loss_seg = 0.17907877266407013 + test_Cmiou = 82.63737516125934 + test_Imiou = 84.99719726653137 + test_Imiou_per_class = {'Airplane': 0.822379885350051, 'Bag': 0.8455399591234638, 'Cap': 0.8719019523551864, 'Car': 0.783691464266464, 'Chair': 
0.9044452376391745, 'Earphone': 0.7696762023278912, 'Guitar': 0.9139397809377187, 'Knife': 0.8318837880778147, 'Lamp': 0.8369953374146943, 'Laptop': 0.9551514992502209, 'Motorbike': 0.711926471502944, 'Mug': 0.9402578473498852, 'Pistol': 0.8133877387122375, 'Rocket': 0.6246312014459745, 'Skateboard': 0.7708327513995146, 'Table': 0.8253389086482584} +================================================== +EPOCH 95 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:40<00:00, 3.43it/s, data_loading=0.006, iteration=0.130, train_Cmiou=86.65, train_Imiou=87.62, train_loss_seg=0.103] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.74, test_Imiou=85.10, test_loss_seg=0.068] +================================================== + test_loss_seg = 0.0680479109287262 + test_Cmiou = 82.74686046598158 + test_Imiou = 85.10331101455691 + test_Imiou_per_class = {'Airplane': 0.8253985387869082, 'Bag': 0.8485253127462693, 'Cap': 0.8729444089880797, 'Car': 0.7848942122540937, 'Chair': 0.9043464599078106, 'Earphone': 0.7651351198059506, 'Guitar': 0.9139789971820858, 'Knife': 0.8378481835402003, 'Lamp': 0.8378285385838813, 'Laptop': 0.9549302353677023, 'Motorbike': 0.7139678800588627, 'Mug': 0.9409699026783485, 'Pistol': 0.8130088285869341, 'Rocket': 0.6282079160895542, 'Skateboard': 0.7709406391414054, 'Table': 0.8265725008389678} +================================================== +EPOCH 96 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=85.69, train_Imiou=87.68, train_loss_seg=0.105] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.18it/s, test_Cmiou=82.81, test_Imiou=85.14, 
test_loss_seg=0.216] +================================================== + test_loss_seg = 0.21643976867198944 + test_Cmiou = 82.81135875470305 + test_Imiou = 85.14380953895215 + test_Imiou_per_class = {'Airplane': 0.8270061230080823, 'Bag': 0.8493233084271087, 'Cap': 0.8650182085485778, 'Car': 0.7855429882292723, 'Chair': 0.9047852480620399, 'Earphone': 0.7790459832036333, 'Guitar': 0.9139993705391923, 'Knife': 0.8409024010661648, 'Lamp': 0.8357014645416262, 'Laptop': 0.9550579474624091, 'Motorbike': 0.7110050748636725, 'Mug': 0.9412701351201642, 'Pistol': 0.8192732945852339, 'Rocket': 0.6235927662781297, 'Skateboard': 0.771315421271747, 'Table': 0.8269776655454337} +================================================== +Cmiou: 82.80514848345668 -> 82.81135875470305 +EPOCH 97 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:36<00:00, 3.47it/s, data_loading=0.006, iteration=0.131, train_Cmiou=86.53, train_Imiou=86.93, train_loss_seg=0.108] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:23<00:00, 10.17it/s, test_Cmiou=82.72, test_Imiou=85.07, test_loss_seg=0.689] +================================================== + test_loss_seg = 0.6891748905181885 + test_Cmiou = 82.72106861984827 + test_Imiou = 85.07638129606217 + test_Imiou_per_class = {'Airplane': 0.8249401147930723, 'Bag': 0.8491882646630724, 'Cap': 0.8716480952963847, 'Car': 0.7801891497555686, 'Chair': 0.904868380252686, 'Earphone': 0.7697938436931862, 'Guitar': 0.9124537583262219, 'Knife': 0.8386233329215178, 'Lamp': 0.8334797850171515, 'Laptop': 0.9549707506372705, 'Motorbike': 0.7132215031731942, 'Mug': 0.9419259227640239, 'Pistol': 0.8178557127398761, 'Rocket': 0.623448011454023, 'Skateboard': 0.7710587157872879, 'Table': 0.8277056379011863} +================================================== +EPOCH 98 / 100 
+100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:47<00:00, 3.36it/s, data_loading=0.006, iteration=0.136, train_Cmiou=85.81, train_Imiou=87.96, train_loss_seg=0.102] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 9.94it/s, test_Cmiou=82.72, test_Imiou=85.07, test_loss_seg=0.119] +================================================== + test_loss_seg = 0.11923985928297043 + test_Cmiou = 82.72254609967693 + test_Imiou = 85.07893879176963 + test_Imiou_per_class = {'Airplane': 0.825863430932043, 'Bag': 0.8478520521894607, 'Cap': 0.8696218830384019, 'Car': 0.7836738043621558, 'Chair': 0.9044058765524128, 'Earphone': 0.7677265239339407, 'Guitar': 0.9126360876128935, 'Knife': 0.8367805442080704, 'Lamp': 0.8367005620738912, 'Laptop': 0.9547663185089644, 'Motorbike': 0.7179131534663832, 'Mug': 0.9422900147122635, 'Pistol': 0.8133819556314097, 'Rocket': 0.6259416128374069, 'Skateboard': 0.7697974625108119, 'Table': 0.8262560933777987} +================================================== +EPOCH 99 / 100 +100%|██████████████████████████████████████████████████████████████████| 1168/1168 [05:51<00:00, 3.32it/s, data_loading=0.006, iteration=0.135, train_Cmiou=86.11, train_Imiou=87.51, train_loss_seg=0.102] +Learning rate = 0.000016 +100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 240/240 [00:24<00:00, 9.86it/s, test_Cmiou=82.57, test_Imiou=85.13, test_loss_seg=0.512] +================================================== + test_loss_seg = 0.5127086043357849 + test_Cmiou = 82.57370585001053 + test_Imiou = 85.13352229711745 + test_Imiou_per_class = {'Airplane': 0.8244139668339339, 'Bag': 0.8472346639162536, 'Cap': 0.8561745670579584, 'Car': 0.7847442837550577, 'Chair': 0.9050730803276622, 'Earphone': 0.7696407940832379, 'Guitar': 0.9132956973045946, 'Knife': 
0.8336280980142263, 'Lamp': 0.837400636098979, 'Laptop': 0.9552409111879531, 'Motorbike': 0.7175058991993183, 'Mug': 0.941834690137604, 'Pistol': 0.8165710265029233, 'Rocket': 0.6111093354014496, 'Skateboard': 0.769858499543118, 'Table': 0.8280667866374137} +================================================== \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/config.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/config.yaml new file mode 100644 index 00000000..072bd98b --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/config.yaml @@ -0,0 +1,38 @@ +defaults: # for loading the default config + - task: segmentation # Task performed (segmentation, classification, etc.) + optional: True +# Type of model name (conf/models/segmentation/*.yaml) to use: +# pointnet, pointnet2, kpconv + - model_type: pointnet2 + optional: True + - dataset: urbanmeshused #s3disfused + optional: True + + - visualization: default #yaml file name + - lr_scheduler: exponential +# Types of training hyperparameters (conf/training/*.yaml): +# PointNet, PointNet++: default; KPConv: kpconv + - training: default + + - debugging: default.yaml + - models: ${defaults.0.task}/${defaults.1.model_type} + - data: ${defaults.0.task}/${defaults.2.dataset} + - sota # Contains current SOTA results on different datasets (extracted from papers!). + - hydra/job_logging: custom + - hydra/output: custom # add the support for user-defined experiment folder (where to save the experiment files) + +job_name: benchmark # prefix name for saving the experiment file.
+ +# models in conf/models/segmentation/*.yaml: +# PointNet: PointNet; PointNet++: pointnet2_charlesmsg, pointnet2_largemsg; KPConv: KPConvPaper; +model_name: pointnet2_largemsg # Name of the specific model to load + +update_lr_scheduler_on: "on_epoch" # ["on_epoch", "on_num_batch", "on_num_sample"] +selection_stage: "" +pretty_print: False +eval_frequency: 1 + +tracker_options: # Extra options for the tracker + full_res: False + make_submission: False + track_boxes: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/classification/modelnet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/classification/modelnet.yaml new file mode 100644 index 00000000..95c39815 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/classification/modelnet.yaml @@ -0,0 +1,16 @@ +data: + task: classification + class: modelnet.ModelNetDataset + name: modelnet + dataroot: data + number: 10 + pre_transforms: + - transform: NormalizeScale + - transform: GridSampling3D + lparams: [0.02] + train_transforms: + - transform: FixedPoints + lparams: [2048] + test_transforms: + - transform: FixedPoints + lparams: [2048] \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet-fixed.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet-fixed.yaml new file mode 100644 index 00000000..3cb48203 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet-fixed.yaml @@ -0,0 +1,41 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: object_detection + dataroot: data + version: "v2" + use_instance_labels: True + use_instance_bboxes: True + donotcare_class_ids: [] + process_workers: 8 + + pre_transform: + - transform: GridSampling3D + lparams: [0.02] + + train_transform: + - 
transform: GridSampling3D + lparams: [0.05] + - transform: FixedPoints + lparams: [50000] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: Random3AxisRotation + params: + rot_x: 2 + rot_y: 2 + rot_z: 180 + + val_transform: + - transform: GridSampling3D + lparams: [0.05] + - transform: FixedPoints + lparams: [50000] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet-sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet-sparse.yaml new file mode 100644 index 00000000..d8df13eb --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet-sparse.yaml @@ -0,0 +1,75 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: object_detection + dataroot: data + version: "v2" + use_instance_labels: True + use_instance_bboxes: True + donotcare_class_ids: [] + process_workers: 8 + + grid_size: 0.05 + mode: last + + pre_transform: + - transform: GridSampling3D + lparams: [0.02] + + train_transform: + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: Random3AxisRotation + params: + rot_x: 2 + rot_y: 2 + rot_z: 180 + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + - transform: FixedPoints + lparams: [50000] + params: + replace: False + - transform: ShiftVoxels + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True, True] + feat_names: ["pos_x", "pos_y", "pos_z"] + delete_feats: [True, True, True] + + 
val_transform: + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + - transform: FixedPoints + lparams: [50000] + params: + replace: False + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True, True] + feat_names: ["pos_x", "pos_y", "pos_z"] + delete_feats: [True, True, True] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet.yaml new file mode 100644 index 00000000..644cfbdd --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/object_detection/scannet.yaml @@ -0,0 +1,68 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: object_detection + dataroot: data + version: "v2" + use_instance_labels: True + use_instance_bboxes: True + donotcare_class_ids: [] + process_workers: 8 + grid_size: 0.05 + + pre_transform: + - transform: GridSampling3D + lparams: [0.02] + + train_transform: + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.grid_size} + - transform: FixedPoints + lparams: [50000] + params: + replace: False + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: Random3AxisRotation + params: + rot_x: 2 + rot_y: 2 + rot_z: 180 + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True, True] + feat_names: ["pos_x", "pos_y", "pos_z"] + delete_feats: [True, True, True] + + val_transform: + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.grid_size} + - 
transform: FixedPoints + lparams: [50000] + params: + replace: False + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True, True] + feat_names: ["pos_x", "pos_y", "pos_z"] + delete_feats: [True, True, True] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/panoptic/s3disfused.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/panoptic/s3disfused.yaml new file mode 100644 index 00000000..2c9ae34d --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/panoptic/s3disfused.yaml @@ -0,0 +1,67 @@ +data: + task: panoptic + class: s3dis.S3DISFusedDataset + dataroot: data + fold: 5 + first_subsampling: 0.04 + grid_size: ${data.first_subsampling} + keep_instance: True + use_category: False + sampling_format: "cylinder" # 'sphere' + mode: last + pre_collate_transform: + - transform: PointCloudFusion # One point cloud per area + - transform: SaveOriginalPosId # Required so that one can recover the original point in the fused point cloud + - transform: GridSampling3D # Samples on a grid + params: + size: ${data.first_subsampling} + train_transforms: + - transform: RandomNoise + params: + sigma: 0.001 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.8, 1.2] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: DropFeature + params: + drop_proba: 0.2 + feature_name: rgb + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: ${data.mode} + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + test_transform: + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + 
quantize_coords: True + mode: ${data.mode} + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + val_transform: ${data.test_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/panoptic/scannet-sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/panoptic/scannet-sparse.yaml new file mode 100644 index 00000000..4aabf0f1 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/panoptic/scannet-sparse.yaml @@ -0,0 +1,63 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: panoptic + dataroot: data + version: "v2" + use_instance_labels: True + donotcare_class_ids: [] + process_workers: 8 + + grid_size: 0.05 + mode: last + + pre_transform: + - transform: GridSampling3D + lparams: [0.02] + + train_transform: + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: Random3AxisRotation + params: + rot_x: 2 + rot_y: 2 + rot_z: 180 + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + - transform: FixedPoints + lparams: [100000] + params: + replace: False + - transform: ShiftVoxels + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + val_transform: + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + - transform: FixedPoints + lparams: [100000] + params: + replace: False + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + test_transform: ${data.val_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch.yaml 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch.yaml new file mode 100644 index 00000000..2ff84426 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch.yaml @@ -0,0 +1,78 @@ +data: + task: registration + class: general3dmatch.General3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: False + num_frame_per_fragment: 101 + max_dist_overlap: 0.04 + min_overlap_ratio: 0.2 + tsdf_voxel_size: 0.006 + limit_size: 850 + depth_thresh: 4.5 + first_subsampling: 0.025 + is_online_matching: False + num_pos_pairs: 30000 + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + train_transform: + - transform: SaveOriginalPosId + - transform: RandomNoise + params: + sigma: 0.005 + clip: 0.05 + - transform: RandomRotate + params: + degrees: 180 + axis: 0 + - transform: RandomRotate + params: + degrees: 180 + axis: 1 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: "mean" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + - transform: Jitter + + test_transform: + - transform: SaveOriginalPosId + - transform: RandomRotate + params: + degrees: 180 + axis: 0 + - transform: RandomRotate + params: + degrees: 180 + axis: 1 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: "mean" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_dense.yaml 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_dense.yaml new file mode 100644 index 00000000..66d0803a --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_dense.yaml @@ -0,0 +1,63 @@ +data: + task: registration + class: general3dmatch.General3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: False + num_frame_per_fragment: 101 + max_dist_overlap: 0.04 + min_overlap_ratio: 0.2 + tsdf_voxel_size: 0.006 + limit_size: 850 + depth_thresh: 4.5 + first_subsampling: 0.025 + is_online_matching: False + num_pos_pairs: 30000 + tau_1: 0.1 + tau_2: 0.05 + use_ransac: False + use_teaser: False + sym: True + rot_thresh: 2 + trans_thresh: 0.1 + ransac_thresh: 0.02 + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [16384] + - transform: Center + - transform: RandomNoise + params: + sigma: 0.005 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + + val_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [16384] + - transform: Center + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + + test_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [16384] + - transform: Center diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_partial.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_partial.yaml new file mode 100644 index 00000000..0a9b6fb1 --- /dev/null +++ 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_partial.yaml @@ -0,0 +1,92 @@ +data: + task: registration + class: general3dmatch.General3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: False + num_frame_per_fragment: 101 + max_dist_overlap: 0.04 + min_overlap_ratio: 0.2 + tsdf_voxel_size: 0.006 + limit_size: 850 + depth_thresh: 4.5 + first_subsampling: 0.03 + is_online_matching: False + num_pos_pairs: 30000 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + use_ransac: False + use_teaser: True + sym: True + rot_thresh: 2 + trans_thresh: 0.1 + ransac_thresh: 0.02 + noise_bound_teaser: 0.1 + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + mode: "mean" + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: "last" + - transform: RandomNoise + params: + sigma: 0.005 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + - transform: Jitter + + val_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: "last" + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + + test_transform: + - transform: SaveOriginalPosId + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True 
+ feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_sparse.yaml new file mode 100644 index 00000000..568117d9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_sparse.yaml @@ -0,0 +1,101 @@ +data: + task: registration + class: general3dmatch.General3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: False + num_frame_per_fragment: 101 + max_dist_overlap: 0.05 + min_overlap_ratio: 0.3 + tsdf_voxel_size: 0.006 + limit_size: 850 + depth_thresh: 4.5 + first_subsampling: 0.02 + is_online_matching: False + self_supervised: False + min_size_block: 0.5 + max_size_block: 1 + num_pos_pairs: 400 + min_points: 400 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + use_ransac: False + use_teaser: True + sym: True + rot_thresh: 15 + trans_thresh: 0.3 + ransac_thresh: 0.02 + noise_bound_teaser: 0.1 + use_fps: True + pre_transform: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.005 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + - transform: Jitter + + val_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: GridSampling3D + params: + size: 
${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + + test_transform: + - transform: SaveOriginalPosId + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_sparse_ss.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_sparse_ss.yaml new file mode 100644 index 00000000..c74920f0 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragment3dmatch_sparse_ss.yaml @@ -0,0 +1,133 @@ +data: + task: registration + class: general3dmatch.General3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: False + num_frame_per_fragment: 101 + max_dist_overlap: 0.05 + min_overlap_ratio: 0.3 + tsdf_voxel_size: 0.006 + limit_size: 850 + depth_thresh: 4.5 + first_subsampling: 0.02 + is_online_matching: False + self_supervised: true + min_size_block: 1.5 + max_size_block: 2 + num_pos_pairs: 30000 + min_points: 700 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + use_ransac: False + use_teaser: True + sym: True + rot_thresh: 4 + trans_thresh: 0.15 + ransac_thresh: 0.02 + noise_bound_teaser: 0.1 + pre_transform: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + ss_transform: + - transform: CubeCrop + params: + c: 1 + - transform: CubeCrop + params: + c: 1.5 + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomWalkDropout + params: + dropout_ratio: 2e-5 + num_iter: 6e3 + radius: 0.035 + - transform: DensityFilter + params: + 
radius_nn: 0.08 + min_num: 16 + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: ShiftVoxels + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + - transform: Jitter + + val_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + + test_transform: + - transform: SaveOriginalPosId + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragmentkitti_sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragmentkitti_sparse.yaml new file mode 100644 index 00000000..cd9e68b3 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/fragmentkitti_sparse.yaml @@ -0,0 +1,114 @@ +data: + task: 
registration + class: kitti.KittiDataset + dataroot: data + name: kitti + self_supervised: false + max_time_distance: 10 + min_dist: 20 + max_dist_overlap: 0.45 + min_size_block: 3 + max_size_block: 20 + first_subsampling: 0.3 + is_online_matching: False + num_pos_pairs: 2048 + num_points: 5000 + tau_1: 0.5 + tau_2: 0.05 + use_ransac: False + use_teaser: True + sym: True + rot_thresh: 5 + trans_thresh: 0.6 + ransac_thresh: 0.3 + noise_bound_teaser: 0.1 + ss_transform: + - transform: Center + pre_transform: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.05 + clip: 0.5 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 2 + rot_y: 360 + rot_z: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + - transform: Jitter + + val_transform: + - transform: SaveOriginalPosId + - transform: Center + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 2 + rot_y: 360 + rot_z: 2 + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + + test_transform: + - transform: SaveOriginalPosId + - transform: Center + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 2 + rot_y: 360 + rot_z: 2 + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + 
size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/modelnet_sparse_ss.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/modelnet_sparse_ss.yaml new file mode 100644 index 00000000..42e1e01f --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/modelnet_sparse_ss.yaml @@ -0,0 +1,133 @@ +data: + task: registration + class: modelnet.SiameseModelNetDataset + dataroot: data + name: modelnet + name_modelnet: "10" + is_patch: False + max_dist_overlap: 0.05 + min_overlap_ratio: 0.3 + first_subsampling: 0.02 + + self_supervised: true + min_size_block: 1.5 + max_size_block: 2 + num_pos_pairs: 30000 + min_points: 500 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + use_ransac: False + use_teaser: True + sym: True + rot_thresh: 4 + trans_thresh: 0.15 + ransac_thresh: 0.02 + noise_bound_teaser: 0.1 + pre_transform: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + ss_transform: + - transform: CubeCrop + params: + c: 1 + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [2048] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + - transform: Jitter + + val_transform: + - transform: SaveOriginalPosId + - transform: Center + - transform: 
FixedPoints + lparams: [2048] + - transform: CubeCrop + params: + c: 1 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 180 + rot_y: 180 + rot_z: 180 + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + + test_transform: + - transform: SaveOriginalPosId + - transform: Center + - transform: CubeCrop + params: + c: 1 + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 45 + rot_y: 45 + rot_z: 45 + - transform: RandomTranslation + params: + delta_min: [-0.5, -0.5, -0.5] + delta_max: [0.5, 0.5, 0.5] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/patch3dmatch.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/patch3dmatch.yaml new file mode 100644 index 00000000..608a980a --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/patch3dmatch.yaml @@ -0,0 +1,40 @@ +data: + task: registration + class: general3dmatch.General3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: True + num_frame_per_fragment: 101 + max_dist_overlap: 0.04 + min_overlap_ratio: 0.2 + tsdf_voxel_size: 0.006 + limit_size: 850 + depth_thresh: 4.5 + radius_patch: 0.3 + first_subsampling: 0.02 + num_random_pt: 300 + is_offline: True + pre_filters: + - filter: PlanarityFilter + params: + thresh: 0.3 + test_pre_filters: + - 
filter: RandomFilter + params: + thresh: 0.1 + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + train_transforms: + - transform: FixedPoints + lparams: [1024] + - transform: Center + - transform: RandomNoise + params: + sigma: 0.0005 + clip: 0.05 + test_transforms: + - transform: FixedPoints + lparams: [1024] + - transform: Center diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/test3dmatch.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/test3dmatch.yaml new file mode 100644 index 00000000..ee39797a --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/test3dmatch.yaml @@ -0,0 +1,18 @@ +data: + task: registration + class: test3dmatch.Test3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: True + radius_patch: 0.3 + first_subsampling: 0.02 + num_random_pt: 5000 + is_offline: True + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + test_transforms: + - transform: FixedPoints + lparams: [1024] + - transform: Center diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/test3dmatch_sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/test3dmatch_sparse.yaml new file mode 100644 index 00000000..1987a3e6 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/test3dmatch_sparse.yaml @@ -0,0 +1,26 @@ +data: + task: registration + class: test3dmatch.Test3DMatchDataset + dataroot: data + name: 3DMatch + is_patch: False + radius_patch: 0.3 + first_subsampling: 0.025 + num_random_pt: 5000 + is_offline: True + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + test_transforms: + - transform: SaveOriginalPosId + - transform: GridSampling3D + params: + size: 
${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testeth.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testeth.yaml new file mode 100644 index 00000000..466bde0c --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testeth.yaml @@ -0,0 +1,90 @@ +data: + task: registration + class: testeth.ETHDataset + dataroot: data + first_subsampling: 0.02 + max_dist_overlap: 0.05 + min_size_block: 5 + max_size_block: 7 + num_pos_pairs: 30000 + min_points: 300 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + rot_thresh: 4 + trans_thresh: 0.15 + sym: True + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + ss_transform: + - transform: CubeCrop + params: + c: 5 + - transform: CubeCrop + params: + c: 5.5 + + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 360 + rot_y: 360 + rot_z: 360 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: LotteryTransform + params: + transform_options: + - transform: GridSampling3D + params: + size: 0.02 + - transform: GridSampling3D + params: + size: 0.03 + - transform: GridSampling3D + params: + size: 0.04 + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: ShiftVoxels + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + test_transform: + - transform: SaveOriginalPosId + - transform: 
XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testkaist.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testkaist.yaml new file mode 100644 index 00000000..3b70f1b1 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testkaist.yaml @@ -0,0 +1,78 @@ +data: + task: registration + class: testkaist.KaistDataset + dataroot: data + first_subsampling: 0.02 + max_dist_overlap: 0.05 + min_size_block: 1.5 + max_size_block: 2 + num_pos_pairs: 30000 + min_points: 300 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + rot_thresh: 4 + trans_thresh: 0.15 + sym: True + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + ss_transform: + - transform: CubeCrop + params: + c: 1 + - transform: CubeCrop + params: + c: 1.5 + + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 360 + rot_y: 360 + rot_z: 360 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: ShiftVoxels + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + test_transform: + - transform: SaveOriginalPosId + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + 
size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testplanetary.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testplanetary.yaml new file mode 100644 index 00000000..0b6f2552 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testplanetary.yaml @@ -0,0 +1,78 @@ +data: + task: registration + class: testplanetary.PlanetaryDataset + dataroot: data + first_subsampling: 0.02 + max_dist_overlap: 0.05 + min_size_block: 1.5 + max_size_block: 2 + num_pos_pairs: 30000 + min_points: 300 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + rot_thresh: 4 + trans_thresh: 0.15 + sym: True + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + ss_transform: + - transform: CubeCrop + params: + c: 1 + - transform: CubeCrop + params: + c: 1.5 + + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 360 + rot_y: 360 + rot_z: 360 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: ShiftVoxels + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + test_transform: + - transform: SaveOriginalPosId + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: 
AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testtum.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testtum.yaml new file mode 100644 index 00000000..fd67b292 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/registration/testtum.yaml @@ -0,0 +1,78 @@ +data: + task: registration + class: testtum.TUMDataset + dataroot: data + first_subsampling: 0.02 + max_dist_overlap: 0.05 + min_size_block: 1.5 + max_size_block: 2 + num_pos_pairs: 30000 + min_points: 300 + num_points: 5000 + tau_1: 0.1 + tau_2: 0.05 + rot_thresh: 4 + trans_thresh: 0.15 + sym: True + pre_transforms: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + ss_transform: + - transform: CubeCrop + params: + c: 1 + - transform: CubeCrop + params: + c: 1.5 + + train_transform: + - transform: SaveOriginalPosId + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: True + rot_x: 360 + rot_y: 360 + rot_z: 360 + - transform: RandomScaleAnisotropic + params: + scales: [0.9,1.2] + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: ShiftVoxels + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' + test_transform: + - transform: SaveOriginalPosId + - transform: XYZFeature + params: + add_x: True + add_y: True + add_z: True + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: AddOnes + - transform: AddFeatByKey + params: + add_to_x: True + feat_name: 'ones' diff --git 
a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3dis1x1.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3dis1x1.yaml new file mode 100644 index 00000000..692b4830 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3dis1x1.yaml @@ -0,0 +1,21 @@ +data: + task: segmentation + class: s3dis.S3DIS1x1Dataset + dataroot: data + fold: 5 + class_weight_method: "sqrt" + use_category: False + train_transforms: + - transform: FixedPoints + lparams: [4096] + - transform: RandomTranslate + params: + translate: 0.01 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + + test_transforms: + - transform: FixedPoints + lparams: [4096] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-addones.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-addones.yaml new file mode 100644 index 00000000..18820886 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-addones.yaml @@ -0,0 +1,57 @@ +data: + task: segmentation + class: s3dis.S3DISFusedDataset + dataroot: data + fold: 5 + first_subsampling: 0.04 + use_category: False + pre_collate_transform: + - transform: PointCloudFusion # One point cloud per area + - transform: SaveOriginalPosId # Required so that one can recover the original point in the fused point cloud + - transform: GridSampling3D # Samples on a grid + params: + size: ${data.first_subsampling} + train_transforms: + - transform: RandomNoise + params: + sigma: 0.001 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.8, 1.2] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: DropFeature + params: + drop_proba: 0.2 + feature_name: rgb + - transform: XYZFeature 
+ params: + add_x: False + add_y: False + add_z: True + - transform: AddOnes + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True, True] + feat_names: [rgb, pos_z, ones] + delete_feats: [True, True, True] + - transform: Center + test_transform: + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddOnes + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True, True] + feat_names: [rgb, pos_z, ones] + delete_feats: [True, True, True] + - transform: Center + val_transform: ${data.test_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-fixed.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-fixed.yaml new file mode 100644 index 00000000..f08b3270 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-fixed.yaml @@ -0,0 +1,65 @@ +data: + task: segmentation + class: s3dis.S3DISFusedDataset + dataroot: data + fold: 5 + first_subsampling: 0.04 + use_category: False + pre_collate_transform: + - transform: PointCloudFusion # One point cloud per area + - transform: SaveOriginalPosId # Required so that one can recover the original point in the fused point cloud + - transform: GridSampling3D # Samples on a grid + params: + size: ${data.first_subsampling} + train_transforms: + - transform: FixedPoints + lparams: [20000] + - transform: RandomNoise + params: + sigma: 0.001 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.8, 1.2] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: DropFeature + params: + drop_proba: 0.2 + feature_name: rgb + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + 
delete_feats: [True, True] + - transform: Center + - transform: ScalePos + params: + scale: 0.5 + test_transform: + - transform: FixedPoints + lparams: [20000] + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + - transform: Center + - transform: ScalePos + params: + scale: 0.5 + val_transform: ${data.test_transform} \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-sparse.yaml new file mode 100644 index 00000000..ebdd6e48 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused-sparse.yaml @@ -0,0 +1,66 @@ +data: + task: segmentation + class: s3dis.S3DISFusedDataset + dataroot: data + fold: 5 + first_subsampling: 0.04 + use_category: False + pre_collate_transform: + - transform: PointCloudFusion # One point cloud per area + - transform: SaveOriginalPosId # Required so that one can recover the original point in the fused point cloud + - transform: GridSampling3D # Samples on a grid + params: + size: ${data.first_subsampling} + train_transforms: + - transform: RandomNoise + params: + sigma: 0.001 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.8, 1.2] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: DropFeature + params: + drop_proba: 0.2 + feature_name: rgb + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + - transform: Center + - transform: GridSampling3D + params: + size: 
${data.first_subsampling} + quantize_coords: True + mode: "last" + - transform: ShiftVoxels + test_transform: + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + - transform: Center + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: True + mode: "last" + val_transform: ${data.test_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused.yaml new file mode 100644 index 00000000..ad09a61d --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/s3disfused.yaml @@ -0,0 +1,55 @@ +data: + task: segmentation + class: s3dis.S3DISFusedDataset + dataroot: data + fold: 5 + first_subsampling: 0.04 + use_category: False + pre_collate_transform: + - transform: PointCloudFusion # One point cloud per area + - transform: SaveOriginalPosId # Required so that one can recover the original point in the fused point cloud + - transform: GridSampling3D # Samples on a grid + params: + size: ${data.first_subsampling} + train_transforms: + - transform: RandomNoise + params: + sigma: 0.001 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.8, 1.2] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: DropFeature + params: + drop_proba: 0.2 + feature_name: rgb + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + - transform: Center + test_transform: + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - 
transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + delete_feats: [True, True] + - transform: Center + val_transform: ${data.test_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-fixed.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-fixed.yaml new file mode 100644 index 00000000..bd8da22f --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-fixed.yaml @@ -0,0 +1,51 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: segmentation + dataroot: data + grid_size: 0.02 + version: "v2" + use_instance_labels: False + use_instance_bboxes: False + donotcare_class_ids: [] + process_workers: 0 + apply_rotation: True + use_category: False + mode: "mean" + + pre_transform: + - transform: GridSampling3D + lparams: [0.02] + + train_transform: + - transform: FixedPoints + lparams: [40000] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: ${data.apply_rotation} + rot_x: 2 + rot_y: 2 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + val_transform: + - transform: FixedPoints + lparams: [40000] + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + test_transform: ${data.val_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-pvcnn.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-pvcnn.yaml new file mode 100644 index 00000000..0fa1e934 --- /dev/null +++ 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-pvcnn.yaml @@ -0,0 +1,46 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: segmentation + dataroot: data + grid_size: 0.05 + version: "v2" + use_instance_labels: False + use_instance_bboxes: False + donotcare_class_ids: [] + process_workers: 8 + apply_rotation: True + + train_pre_batch_collate_transform: + - transform: ClampBatchSize + params: + num_points: 600000 + + train_transform: + - transform: ElasticDistortion + - transform: Random3AxisRotation + params: + apply_rotation: ${data.apply_rotation} + rot_x: 8 + rot_y: 8 + rot_z: 180 + - transform: RandomSymmetry + params: + axis: [True, True, False] + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + val_transform: + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + test_transform: ${data.val_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-sparse.yaml new file mode 100644 index 00000000..6c2fb786 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet-sparse.yaml @@ -0,0 +1,57 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: segmentation + dataroot: data + grid_size: 0.05 + version: "v2" + use_instance_labels: False + use_instance_bboxes: False + donotcare_class_ids: [] + process_workers: 8 + apply_rotation: True + mode: "last" + + train_pre_batch_collate_transform: + - transform: ClampBatchSize + params: + num_points: 400000 + + train_transform: + - transform: ElasticDistortion + - transform: Random3AxisRotation + params: + 
apply_rotation: ${data.apply_rotation} + rot_x: 8 + rot_y: 8 + rot_z: 180 + - transform: RandomSymmetry + params: + axis: [True, True, False] + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + val_transform: + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + test_transform: ${data.val_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet.yaml new file mode 100644 index 00000000..df7f6e56 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/scannet.yaml @@ -0,0 +1,62 @@ +data: + class: scannet.ScannetDataset + dataset_name: "scannet-sparse" + task: segmentation + dataroot: data + version: v2 + use_instance_labels: False + use_instance_bboxes: False + donotcare_class_ids: [] + process_workers: 4 + use_category: False + + apply_rotation: True + mode: mean + first_subsampling: 0.05 + + pre_transform: + - transform: GridSampling3D + lparams: [0.02] + + train_transform: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: ${data.mode} + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: Random3AxisRotation + params: + apply_rotation: ${data.apply_rotation} + rot_x: 2 + rot_y: 2 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: FixedPoints + lparams: [100000] + params: + replace: False + - transform: AddFeatsByKeys + params: + 
list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + val_transform: + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + quantize_coords: False + mode: ${data.mode} + - transform: AddFeatsByKeys + params: + list_add_to_x: [True] + feat_names: ["rgb"] + delete_feats: [True] + + test_transform: ${data.val_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/semanticKitti.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/semanticKitti.yaml new file mode 100644 index 00000000..b5916ebc --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/semanticKitti.yaml @@ -0,0 +1,35 @@ +data: + class: semantickitti.SemanticKittiDataset + dataset_name: "kitti" + task: segmentation + dataroot: data + grid_size: 0.1 + process_workers: 8 + apply_rotation: True + mode: "last" + + train_transform: + - transform: ElasticDistortion + - transform: Random3AxisRotation + params: + apply_rotation: ${data.apply_rotation} + rot_x: 2 + rot_y: 2 + rot_z: 180 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + + val_transform: + - transform: GridSampling3D + params: + size: ${data.grid_size} + quantize_coords: True + mode: ${data.mode} + + test_transform: ${data.val_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet-fixed.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet-fixed.yaml new file mode 100644 index 00000000..7df54116 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet-fixed.yaml @@ -0,0 +1,25 @@ +data: + class: shapenet.ShapeNetDataset + task: segmentation + dataroot: data + normal: True + 
train_size: 14016 # Number of shapes in the whole training set + use_category: True + category: 'Cap' + first_subsampling: 0.02 + pre_transforms: + - transform: NormalizeScale + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + train_transforms: + - transform: FixedPoints + lparams: [2048] + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + test_transforms: + - transform: FixedPoints + lparams: [2048] + val_transforms: ${data.test_transforms} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet.yaml new file mode 100644 index 00000000..2f38eb2c --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet.yaml @@ -0,0 +1,20 @@ +data: + class: shapenet.ShapeNetDataset + task: segmentation + dataroot: data + normal: True # Use normal vectors as features + first_subsampling: 0.02 # Grid size of the input data + use_category: True # Use object category information + pre_transforms: # Offline transforms, done only once + - transform: NormalizeScale + - transform: GridSampling3D + params: + size: ${data.first_subsampling} + train_transforms: # Data augmentation pipeline + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: RandomScaleAnisotropic + params: + scales: [0.9, 1.1] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet_sparse.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet_sparse.yaml new file mode 100644 index 00000000..23c2cde8 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/shapenet_sparse.yaml @@ -0,0 +1,32 @@ +data: + class: shapenet.ShapeNetDataset + dataset_name: "shapenet_sparse" + task: segmentation + dataroot: data + normal: 
True + use_category: True + grid_size: 0.02 + pre_transforms: + - transform: NormalizeScale + train_transforms: + - transform: RandomNoise + params: + sigma: 0.01 + clip: 0.05 + - transform: GridSampling3D + params: + grid_size: ${data.grid_size} + mode: "mean" + quantize_coords: True + test_transforms: + - transform: GridSampling3D + params: + grid_size: ${data.grid_size} + quantize_coords: True + mode: "mean" + val_transforms: + - transform: GridSampling3D + params: + grid_size: ${data.grid_size} + quantize_coords: True + mode: "mean" diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/urbanmeshused.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/urbanmeshused.yaml new file mode 100644 index 00000000..17ccef3a --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/data/segmentation/urbanmeshused.yaml @@ -0,0 +1,57 @@ +data: + task: segmentation + class: urbanmesh.UrbanMeshDataset + dataroot: data + fold: 5 + first_subsampling: 0.01 #PointNet, PointNet++: 0.01; KPConv: GPU 0.5 + use_category: False + pre_collate_transform: + - transform: PointCloudFusion # One point cloud per area + - transform: SaveOriginalPosId # Required so that one can recover the original point in the fused point cloud + - transform: GridSampling3D # Samples on a grid + params: + size: ${data.first_subsampling} + train_transforms: + - transform: RandomNoise + params: + sigma: 0.001 + - transform: RandomRotate + params: + degrees: 180 + axis: 2 + - transform: RandomScaleAnisotropic + params: + scales: [0.8, 1.2] + - transform: RandomSymmetry + params: + axis: [True, False, False] + - transform: DropFeature + params: + drop_proba: 0.2 + feature_name: rgb + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + input_nc_feats: [3, 1] + delete_feats: [True, 
True] + - transform: Center + test_transform: + - transform: XYZFeature + params: + add_x: False + add_y: False + add_z: True + - transform: AddFeatsByKeys + params: + list_add_to_x: [True, True] + feat_names: [rgb, pos_z] + input_nc_feats: [3, 1] + delete_feats: [True, True] + - transform: Center + val_transform: ${data.test_transform} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/default.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/default.yaml new file mode 100644 index 00000000..108cd7dc --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/default.yaml @@ -0,0 +1,5 @@ +debugging: + find_neighbour_dist: False + num_batches: 50 + early_break: False + profiling: False \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/early_break.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/early_break.yaml new file mode 100644 index 00000000..4a0a269e --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/early_break.yaml @@ -0,0 +1,2 @@ +debugging: + early_break: True \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/find_neighbour_dist.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/find_neighbour_dist.yaml new file mode 100644 index 00000000..9380e11e --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/debugging/find_neighbour_dist.yaml @@ -0,0 +1,3 @@ +debugging: + find_neighbour_dist: True + num_batches: 20 \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/eval.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/eval.yaml new file mode 100644 index 00000000..9d0e7acb --- /dev/null +++ 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/eval.yaml @@ -0,0 +1,21 @@ +num_workers: 1 +batch_size: 1 +cuda: 0 #0 +weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] +enable_cudnn: True +checkpoint_dir: "C:/data/Algorithms_comparision/my_torch_point3d_comp/outputs/benchmark/" # "{your_path}/outputs/2020-01-28/11-04-13" for example +model_name: pointnet2_charlesmsg # PointNet # pointnet2_largemsg # Res16UNet34C +precompute_multi_scale: False #True # Compute multiscale features on CPU for faster training / inference +enable_dropout: False +voting_runs: 1 + +defaults: + - visualization: default # yaml file name + +tracker_options: # Extra options for the tracker + full_res: False + make_submission: True # False + +hydra: + run: + dir: ${checkpoint_dir}/eval/${now:%Y-%m-%d_%H-%M-%S} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/hydra/job_logging/custom.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/hydra/job_logging/custom.yaml new file mode 100644 index 00000000..92cf00c4 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/hydra/job_logging/custom.yaml @@ -0,0 +1,20 @@ +hydra: + job_logging: + formatters: + simple: + format: "%(message)s" + root: + handlers: [debug_console_handler, file_handler] + version: 1 + handlers: + debug_console_handler: + level: DEBUG + formatter: simple + class: logging.StreamHandler + stream: ext://sys.stdout + file_handler: + level: DEBUG + formatter: simple + class: logging.FileHandler + filename: train.log + disable_existing_loggers: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/hydra/output/custom.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/hydra/output/custom.yaml new file mode 100644 index 00000000..f7d90993 --- /dev/null +++ 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/hydra/output/custom.yaml @@ -0,0 +1,4 @@ +# @package _global_ +hydra: + run: + dir: ./outputs/${job_name}/${job_name}-${model_name}-${now:%Y%m%d_%H%M%S} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/cosine.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/cosine.yaml new file mode 100644 index 00000000..36af4ef9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/cosine.yaml @@ -0,0 +1,4 @@ +lr_scheduler: + class: CosineAnnealingLR + params: + T_max: 10 \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/cyclic.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/cyclic.yaml new file mode 100644 index 00000000..52919ead --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/cyclic.yaml @@ -0,0 +1,5 @@ +lr_scheduler: + class: CyclicLR + params: + base_lr: ${training.optim.base_lr} + max_lr: 0.1 \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/exponential.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/exponential.yaml new file mode 100644 index 00000000..6f5234b3 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/exponential.yaml @@ -0,0 +1,4 @@ +lr_scheduler: + class: ExponentialLR + params: + gamma: 0.9885 # = 0.1**(1/200.) 
divide by 10 every 200 epochs diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/multi_step.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/multi_step.yaml new file mode 100644 index 00000000..a35c9d08 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/multi_step.yaml @@ -0,0 +1,5 @@ +lr_scheduler: + class: MultiStepLR + params: + milestones: [80,120,160] + gamma: 0.2 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/multi_step_reg.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/multi_step_reg.yaml new file mode 100644 index 00000000..4e658d1f --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/multi_step_reg.yaml @@ -0,0 +1,5 @@ +lr_scheduler: + class: MultiStepLR + params: + milestones: [600, 1200, 1800, 3000] + gamma: 0.5 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/plateau.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/plateau.yaml new file mode 100644 index 00000000..709d641b --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/plateau.yaml @@ -0,0 +1,4 @@ +lr_scheduler: + class: ReduceLROnPlateau + params: + mode: "min" \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/poly_lr.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/poly_lr.yaml new file mode 100644 index 00000000..64dd760b --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/poly_lr.yaml @@ -0,0 +1,9 @@ +lr_scheduler: + class: PolyLR + params: + on_epoch: + max_iter: 150 + power: 0.9 + on_num_batch: + max_iter: 60000 + power: 2 diff --git 
a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/step.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/step.yaml new file mode 100644 index 00000000..7bf09cf5 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/lr_scheduler/step.yaml @@ -0,0 +1,6 @@ +lr_scheduler: + class: StepLR + params: + step_size: 10 + gamma: 0.9 + last_epoch: -1 \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/object_detection/votenet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/object_detection/votenet.yaml new file mode 100644 index 00000000..a1067c93 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/object_detection/votenet.yaml @@ -0,0 +1,52 @@ +models: + VoteNetPaper: + class: votenet.VoteNetModel + conv_type: "DENSE" + define_constants: + in_feat: 64 + num_layers_down: 4 + num_layers_up: 2 + num_proposal: 256 + num_features: 256 + backbone: + model_type: "PointNet2" + down_conv: + module_name: PointNetMSGDown + npoint: [2048, 1024, 512, 256] + radii: [[0.2], [0.4], [0.8], [1.2]] + nsamples: [[64], [32], [16], [16]] + down_conv_nn: [[[FEAT + 3, in_feat, in_feat, in_feat * 2]], + [[in_feat * 2 + 3, in_feat * 2, in_feat * 2, in_feat * 4]], + [[in_feat * 4 + 3, in_feat * 2, in_feat * 2, in_feat * 4]], + [[in_feat * 4 + 3, in_feat * 2, in_feat * 2, in_feat * 4]]] + save_sampling_id: [True, False, False, False] + normalize_xyz: [True, True, True, True] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [in_feat * 4 + in_feat * 4, in_feat * 4, in_feat * 4], + [in_feat * 4 + in_feat * 4, in_feat * 4, num_features] + ] + skip: True + voting: + module_name: VotingModule + vote_factor: 1 + feat_dim: num_features + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] 
+ nsample: [16] + down_conv_nn: [[num_features + 3, in_feat * 2, in_feat * 2, in_feat * 2]] + normalize_xyz: True + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/object_detection/votenet2.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/object_detection/votenet2.yaml new file mode 100644 index 00000000..5e466fce --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/object_detection/votenet2.yaml @@ -0,0 +1,442 @@ +models: + VoteNetPaper: + class: votenet2.VoteNet2 + conv_type: "DENSE" + define_constants: + in_feat: 64 + num_layers_down: 4 + num_layers_up: 2 + num_proposal: 256 + num_features: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "PointNet2" + config: + down_conv: + module_name: PointNetMSGDown + npoint: [2048, 1024, 512, 256] + radii: [[0.2], [0.4], [0.8], [1.2]] + nsamples: [[64], [32], [16], [16]] + down_conv_nn: + [ + [[FEAT + 3, in_feat, in_feat, in_feat * 2]], + [ + [ + in_feat * 2 + 3, + in_feat * 2, + in_feat * 2, + in_feat * 4, + ], + ], + [ + [ + in_feat * 4 + 3, + in_feat * 2, + in_feat * 2, + in_feat * 4, + ], + ], + [ + [ + in_feat * 4 + 3, + in_feat * 2, + in_feat * 2, + in_feat * 4, + ], + ], + ] + save_sampling_id: [True, False, False, False] + normalize_xyz: [True, True, True, True] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [ + in_feat * 4 + in_feat * 4, + in_feat * 4, + in_feat * 4, + ], + [ + in_feat * 4 + in_feat * 4, + in_feat * 4, + num_features, + ], + ] + skip: True + voting: + module_name: VotingModule + vote_factor: 1 + feat_dim: num_features + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + 
vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + down_conv_nn: + [[num_features + 3, in_feat * 2, in_feat * 2, in_feat * 2]] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] + + VoteNetMink: + class: votenet2.VoteNet2 + conv_type: "SPARSE" + define_constants: + num_proposal: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "Minkowski" + config: + class: minkowski.Minkowski_Model + conv_type: "SPARSE" + define_constants: + in_feat: 32 + in_feat_tr: 128 + down_conv: + module_name: ResNetDown + dimension: 3 + bn_momentum: 0.2 + down_conv_nn: + [ + [FEAT, in_feat], + [in_feat, in_feat], + [in_feat, 2*in_feat], + [2*in_feat, 4*in_feat], + [4*in_feat, 8*in_feat], + ] + kernel_size: 3 + stride: [1, 2, 2, 2, 2] + dilation: 1 + N: [0, 1, 1, 1, 1, 1] + up_conv: + module_name: ResNetUp + dimension: 3 + bn_momentum: 0.2 + up_conv_nn: + [ + [8*in_feat, in_feat_tr], + [in_feat_tr + 4*in_feat, in_feat_tr], + [in_feat_tr + 2*in_feat, in_feat_tr], + [in_feat_tr + in_feat, in_feat_tr], + [in_feat_tr + in_feat, in_feat_tr], + ] + kernel_size: 3 + stride: [2, 2, 2, 2, 1] + dilation: 1 + N: [1, 1, 1, 1, 1, 0] + voting: + module_name: VotingModule + vote_factor: 1 + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] + + VoteNetKPConv: + class: votenet2.VoteNet2 + conv_type: "PARTIAL_DENSE" + 
define_constants: + num_proposal: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "KPConv" + extra_options: + in_grid_size: ${data.grid_size} + voting: + module_name: VotingModule + vote_factor: 1 + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] + + VoteNetPN2: + class: votenet2.VoteNet2 + conv_type: "DENSE" + define_constants: + num_proposal: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "PointNet2" + voting: + module_name: VotingModule + vote_factor: 1 + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] + + VoteNetRSConv: + class: votenet2.VoteNet2 + conv_type: "DENSE" + define_constants: + num_proposal: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "RSConv" + config: + down_conv: + save_sampling_id: [True, False, False, False] + module_name: RSConvOriginalMSGDown + npoint: [2048, 1024, 512, 256] + radii: + [ + [0.125, 0.2, 0.25], + [0.2, 0.3, 0.4], + [0.4, 0.6, 0.8], + [0.8, 1.2, 1.6], + ] + nsamples: + [[16, 32, 64], [16, 32, 48], [16, 32, 48], [16, 24, 32]] + down_conv_nn: + [ + [[10, 64//2, 16], [FEAT + 3, 16]], + [10, 128//4, 64 * 3 + 3], + [10, 256//4, 128 * 3 + 3], + [10, 512//4, 256 * 3 + 
3], + ] + channel_raising_nn: + [ + [16, 64], + [64 * 3 + 3, 128], + [128 * 3 + 3, 256], + [256 * 3 + 3, 512], + ] + innermost: + - module_name: GlobalDenseBaseModule + nn: [512 * 3 + 3, 128] + aggr: "mean" + - module_name: GlobalDenseBaseModule + nn: [256 * 3 + 3, 128] + aggr: "mean" + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [512 * 3 + 256 * 3, 512, 512], + [128 * 3 + 512, 512, 512], + [64 * 3 + 512, 256, 256], + [256 + FEAT, 128, 128], + ] + skip: True + voting: + module_name: VotingModule + vote_factor: 1 + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + down_conv_nn: + [[num_features + 3, in_feat * 2, in_feat * 2, in_feat * 2]] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] + + VoteNetRSConvTruncated: + class: votenet2.VoteNet2 + conv_type: "DENSE" + dropout: 0.5 + define_constants: + num_proposal: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "RSConv" + config: + down_conv: + save_sampling_id: [True, False, False, False] + module_name: RSConvOriginalMSGDown + npoint: [2048, 1024, 512, 256] + radii: + [ + [0.125, 0.2, 0.25], + [0.2, 0.3, 0.4], + [0.4, 0.6, 0.8], + [0.8, 1.2, 1.6], + ] + nsamples: + [[16, 32, 64], [16, 32, 48], [16, 32, 48], [16, 24, 32]] + down_conv_nn: + [ + [[10, 64//2, 16], [FEAT + 3, 16]], + [10, 128//4, 64 * 3 + 3], + [10, 256//4, 128 * 3 + 3], + [10, 512//4, 256 * 3 + 3], + ] + channel_raising_nn: + [ + [16, 64], + [64 * 3 + 3, 128], + [128 * 3 + 3, 256], + [256 * 3 + 3, 512], + ] + innermost: + - module_name: GlobalDenseBaseModule + nn: [512 * 3 + 3, 128] + aggr: "mean" + - module_name: GlobalDenseBaseModule + nn: [256 * 3 + 3, 128] + aggr: 
"mean" + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [512 * 3 + 256 * 3, 512, 512], + [128 * 3 + 512, 512, 256], + ] + skip: True + voting: + module_name: VotingModule + vote_factor: 1 + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + down_conv_nn: + [[num_features + 3, in_feat * 2, in_feat * 2, in_feat * 2]] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] + + VoteNetRSConvSmall: + class: votenet2.VoteNet2 + conv_type: "DENSE" + define_constants: + num_proposal: 256 + num_classes: 18 # Should be $(data.num_classes) + backbone: + model_type: "RSConv" + config: + down_conv: + save_sampling_id: [True, False, False, False] + module_name: RSConvOriginalMSGDown + npoint: [2048, 1024, 512, 256] + radii: [[0.25], [0.4], [0.8], [1.6]] + nsamples: [[64], [48], [48], [32]] + down_conv_nn: + [ + [[10, 64//2, 16], [FEAT + 3, 16]], + [10, 64//4, 64 + 3], + [10, 256//4, 128 + 3], + [10, 512//4, 256 + 3], + ] + channel_raising_nn: + [ + [16, 64], + [64 + 3, 128], + [128 + 3, 256], + [256 + 3, 256], + ] + innermost: + - module_name: GlobalDenseBaseModule + nn: [256 + 3, 128] + aggr: "mean" + - module_name: GlobalDenseBaseModule + nn: [256 + 3, 128] + aggr: "mean" + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [256 + 256, 256, 256], + [128 + 256, 256, 256], + [64 + 256, 256, 256], + [256 + FEAT, 128, 128], + ] + skip: True + voting: + module_name: VotingModule + vote_factor: 1 + num_points_to_sample: 1024 + proposal: + module_name: ProposalModule + vote_aggregation: + module_name: PointNetMSGDown + npoint: num_proposal + radii: [0.3] + nsample: [16] + down_conv_nn: + [[num_features + 3, in_feat * 2, in_feat * 2, 
in_feat * 2]] + normalize_xyz: True + num_class: num_classes + num_heading_bin: 1 + num_proposal: num_proposal + sampling: "seed_fps" + loss_params: + far_threshold: 0.6 + near_threshold: 0.3 + gt_vote_factor: 3 # number of GT votes per point + objectness_cls_weights: [0.2, 0.8] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/panoptic/pointgroup.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/panoptic/pointgroup.yaml new file mode 100644 index 00000000..3c661eff --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/panoptic/pointgroup.yaml @@ -0,0 +1,158 @@ +models: + PointGroup: + class: pointgroup.PointGroup + conv_type: "SPARSE" + scorer_type: "unet" + loss_weights: + semantic: 1 + offset_norm_loss: 1 + offset_dir_loss: 1 + score_loss: 1 + backbone: + architecture: "unet" + + scorer_unet: + class: minkowski.Minkowski_Model + conv_type: "SPARSE" + define_constants: + in_feat: 96 + down_conv: + module_name: ResNetDown + dimension: 3 + down_conv_nn: [[in_feat, 2*in_feat], [2*in_feat, 4*in_feat]] + kernel_size: 3 + stride: 2 + N: 1 + up_conv: + module_name: ResNetUp + dimension: 3 + up_conv_nn: + [[4*in_feat, 2*in_feat], [2*in_feat+ 2*in_feat, in_feat]] + kernel_size: 3 + stride: 2 + N: 1 + scorer_encoder: + class: minkowski.Minkowski_Model + conv_type: "SPARSE" + define_constants: + in_feat: 96 + down_conv: + module_name: ResNetDown + dimension: 3 + down_conv_nn: [[in_feat, 2*in_feat], [2*in_feat, 4*in_feat]] + kernel_size: 3 + stride: 2 + N: 1 + innermost: + module_name: GlobalBaseModule + activation: + name: LeakyReLU + negative_slope: 0.2 + aggr: "max" + nn: [4*in_feat, in_feat] + + prepare_epoch: 120 + cluster_radius_search: 1.5 * ${data.grid_size} + + min_iou_threshold: 0.25 + max_iou_threshold: 0.75 + + vizual_ratio: 0 + + PointGroup-PAPER: + class: pointgroup.PointGroup + conv_type: "SPARSE" + scorer_type: "unet" + loss_weights: + semantic: 1 + 
offset_norm_loss: 1 + offset_dir_loss: 1 + score_loss: 1 + + feat_size: 16 + backbone: + architecture: "unet" + config: + class: minkowski.Minkowski_Model + conv_type: "SPARSE" + define_constants: + in_feat: ${models.PointGroup-PAPER.feat_size} + down_conv: + module_name: ResNetDown + dimension: 3 + down_conv_nn: + [ + [FEAT, in_feat], + [in_feat, 2*in_feat], + [2*in_feat, 3*in_feat], + [3*in_feat, 4*in_feat], + [4*in_feat, 5*in_feat], + [5*in_feat, 6*in_feat], + [6*in_feat, 7*in_feat], + ] + kernel_size: 3 + stride: [1, 2, 2, 2, 2, 2, 2] + N: 2 + up_conv: + module_name: ResNetUp + dimension: 3 + up_conv_nn: + [ + [7*in_feat, 6*in_feat], + [2*6*in_feat, 5*in_feat], + [2*5*in_feat, 4*in_feat], + [2*4*in_feat, 3*in_feat], + [2*3*in_feat, 2*in_feat], + [2*2*in_feat, in_feat], + [2*in_feat, in_feat], + ] + kernel_size: 3 + stride: [2, 2, 2, 2, 2, 2, 1] + N: 2 + + scorer_unet: + class: minkowski.Minkowski_Model + conv_type: "SPARSE" + define_constants: + in_feat: ${models.PointGroup-PAPER.feat_size} + down_conv: + module_name: ResNetDown + dimension: 3 + down_conv_nn: [[in_feat, 2*in_feat], [2*in_feat, 4*in_feat]] + kernel_size: 3 + stride: 2 + N: 2 + up_conv: + module_name: ResNetUp + dimension: 3 + up_conv_nn: [[4*in_feat, 2*in_feat], [4*in_feat, in_feat]] + kernel_size: 3 + stride: 2 + N: 2 + scorer_encoder: + class: minkowski.Minkowski_Model + conv_type: "SPARSE" + define_constants: + in_feat: ${models.PointGroup-PAPER.feat_size} + down_conv: + module_name: ResNetDown + dimension: 3 + down_conv_nn: [[in_feat, 2*in_feat], [2*in_feat, 4*in_feat]] + kernel_size: 3 + stride: 2 + N: 2 + innermost: + module_name: GlobalBaseModule + activation: + name: LeakyReLU + negative_slope: 0.2 + aggr: "max" + nn: [4*in_feat, in_feat] + + prepare_epoch: 120 + cluster_radius_search: 1.5 * ${data.grid_size} + + min_iou_threshold: 0.25 + max_iou_threshold: 0.75 + + vizual_ratio: 0 diff --git 
a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/kpconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/kpconv.yaml new file mode 100644 index 00000000..ac46ca6a --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/kpconv.yaml @@ -0,0 +1,95 @@ +models: + KPFCNN: + class: kpconv.FragmentKPConv + conv_type: "PARTIAL_DENSE" + out_channels: 32 + loss_mode: "match" + normalize_feature: True + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 64 + bn_momentum: 0.02 + down_conv: + down_conv_nn: + [ + [[FEAT, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 2*in_feat], [2*in_feat, 4*in_feat]], + [[4*in_feat, 4*in_feat], [4*in_feat, 8*in_feat]], + [[8*in_feat, 8*in_feat], [8*in_feat, 16*in_feat]], + [[16*in_feat, 16*in_feat], [16*in_feat, 32*in_feat]], + ] + grid_size: + [ + [in_grid_size, in_grid_size], + [2*in_grid_size, 2*in_grid_size], + [4*in_grid_size, 4*in_grid_size], + [8*in_grid_size, 8*in_grid_size], + [16*in_grid_size, 16*in_grid_size], + ] + prev_grid_size: + [ + [in_grid_size, in_grid_size], + [in_grid_size, 2*in_grid_size], + [2*in_grid_size, 4*in_grid_size], + [4*in_grid_size, 8*in_grid_size], + [8*in_grid_size, 16*in_grid_size], + ] + block_names: + [ + ["SimpleBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ] + has_bottleneck: + [ + [False, True], + [True, True], + [True, True], + [True, True], + [True, True], + ] + deformable: + [ + [False, False], + [False, False], + [False, False], + [False, False], + [False, False], + ] + max_num_neighbors: + [[20, 20], [20, 20], [20, 32], [32, 32], [32, 32]] + module_name: KPDualBlock + up_conv: + module_name: FPModule_PD + up_conv_nn: + [ + [32*in_feat + 16*in_feat, 8*in_feat], + [8*in_feat + 8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 
2*in_feat], + [2*in_feat + 2*in_feat, in_feat], + ] + skip: True + up_k: [1,1,1,1] + bn_momentum: + [ + bn_momentum, + bn_momentum, + bn_momentum, + bn_momentum, + bn_momentum, + ] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0 + bn_momentum: bn_momentum + loss_weights: + lambda_reg: 0 + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/minkowski.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/minkowski.yaml new file mode 100644 index 00000000..585bde85 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/minkowski.yaml @@ -0,0 +1,177 @@ +models: + MinkUNet_Fragment: + class: minkowski.MinkowskiFragment + conv_type: "SPARSE" + loss_mode: "match" + down_conv: + module_name: Res2BlockDown + dimension: 3 + bn_momentum: 0.05 + down_conv_nn: + [ + [FEAT, 32], + [32, 64], + [64, 128], + [128, 256] + ] + kernel_size: [5, 3, 3, 3] + stride: [1, 2, 2, 2] + dilation: [1, 1, 1, 1] + up_conv: + module_name: Res2BlockUp + dimension: 3 + bn_momentum: 0.05 + up_conv_nn: + [ + [256, 64], + [64 + 128, 64], + [64 + 64, 64], + [64 + 32, 64, 32] + ] + kernel_size: [3, 3, 3, 1] + stride: [2, 2, 2, 1] + dilation: [1, 1, 1, 1] + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + MinkUNet_Fragment_pretrained: + class: minkowski.MinkowskiFragment + path_pretrained: "INSERT PATH" + weight_name: "latest" + conv_type: "SPARSE" + loss_mode: "match" + down_conv: + module_name: Res2BlockDown + dimension: 3 + bn_momentum: 0.05 + down_conv_nn: + [ + [FEAT, 32], + [32, 64], + [64, 128], + [128, 256] + ] + kernel_size: [5, 3, 3, 3] + stride: [1, 2, 2, 2] + dilation: [1, 1, 1, 1] + up_conv: + 
module_name: Res2BlockUp + dimension: 3 + bn_momentum: 0.05 + up_conv_nn: + [ + [256, 64], + [64 + 128, 64], + [64 + 64, 64], + [64 + 32, 64, 32] + ] + kernel_size: [3, 3, 3, 1] + stride: [2, 2, 2, 1] + dilation: [1, 1, 1, 1] + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + + + Res16UNet32B: + class: minkowski.Minkowski_Baseline_Model_Fragment + conv_type: "SPARSE" + model_name: "Res16UNet32B" + loss_mode: "match" + conv1_kernel_size: 5 + out_channels: 32 + D: 3 + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + + Res16UNet32B_FC: + class: minkowski.Minkowski_Baseline_Model_Fragment + conv_type: "SPARSE" + model_name: "Res16UNet32B" + loss_mode: "match" + conv1_kernel_size: 5 + out_channels: 32 + D: 3 + normalize_feature: True + mlp_cls: + nn: [32] + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + + Res16UNet34A: + class: minkowski.Minkowski_Baseline_Model_Fragment + conv_type: "SPARSE" + model_name: "Res16UNet14A" + loss_mode: "match" + conv1_kernel_size: 5 + out_channels: 32 + D: 3 + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + + + Res16UNet34B: + # to big... 
+ class: minkowski.Minkowski_Baseline_Model_Fragment + conv_type: "SPARSE" + model_name: "Res16UNet14B" + loss_mode: "match" + conv1_kernel_size: 5 + out_channels: 32 + D: 3 + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + + + + ResUNetBN2B: + class: minkowski.Minkowski_Baseline_Model_Fragment + conv_type: "SPARSE" + model_name: "ResUNetBN2B" + loss_mode: "match" + in_channels: 1 + conv1_kernel_size: 5 + out_channels: 32 + D: 3 + num_pos_pairs: 1024 + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/pointnet2.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/pointnet2.yaml new file mode 100644 index 00000000..4eabdc29 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/pointnet2.yaml @@ -0,0 +1,40 @@ +models: + pointnet2_charlesmsg: + class: pointnet2.FragmentPointNet2_D + conv_type: "DENSE" + loss_mode: "match" + out_channels: 32 + down_conv: + module_name: PointNetMSGDown + npoint: [2048, 256] + radii: [[0.4, 0.8, 1.2], [0.8, 1.2]] + nsamples: [[32, 64, 128], [64, 128]] + down_conv_nn: + [[[FEAT+3, 32, 32, 64], + [FEAT+3, 64, 64, 128], + [FEAT+3, 64, 96, 128],], + [[64 + 128 + 128+3, 128, 128, 256], + [64 + 128 + 128+3, 128, 196, 256],],] + innermost: + module_name: GlobalDenseBaseModule + nn: [256 * 2 + 3, 256, 512, 1024] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [1024 + 256*2, 256, 256], + [256 + 128 * 2 + 64, 256, 128], + [128 + FEAT, 128, 128], + ] + skip: True + mlp_cls: + nn: [128, 128] + dropout: 0.5 + + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + num_hn_samples: 256 + 
pos_thresh: 0.1 + neg_thresh: 1.4 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/pointnet2_patch.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/pointnet2_patch.yaml new file mode 100644 index 00000000..cf30dd11 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/pointnet2_patch.yaml @@ -0,0 +1,45 @@ +models: + minipointnet2: + class: pointnet2.PatchPointNet2_D + conv_type: "DENSE" + down_conv: + module_name: PointNetMSGDown + npoint: [256, 128] + radii: [[0.08], [0.15]] + nsamples: [[64], [64]] + use_xyz: True + down_conv_nn: + [[[FEAT+3, 32, 32, 64]], [[64+3, 64, 128, 128]]] + mlp_cls: + nn: [128, 128, 32] + dropout: 0.5 + metric_loss: + class: "TripletMarginLoss" + params: + smooth_loss: True + triplets_per_anchor: 'all' + miner: + class: "BatchHardMiner" + + + minipointnet2_small_0: + class: pointnet2.PatchPointNet2_D + conv_type: "DENSE" + down_conv: + module_name: PointNetMSGDown + npoint: [128, 32] + radii: [[0.08], [0.15]] + nsamples: [[64], [64]] + use_xyz: True + down_conv_nn: + [[[FEAT+3, 32, 32, 32]], [[32+3, 64, 64, 64]]] + mlp_cls: + nn: [64, 64, 32] + dropout: 0.5 + metric_loss: + class: "TripletMarginLoss" + params: + smooth_loss: True + triplets_per_anchor: 'all' + miner: + class: "BatchHardMiner" diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/spconv3d.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/spconv3d.yaml new file mode 100644 index 00000000..c92f6d8d --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/registration/spconv3d.yaml @@ -0,0 +1,47 @@ +models: + ResUnet32: + class: spconv3d.SparseConv3d + conv_type: "SPARSE" + backend: "torchsparse" + normalize_feature: True + metric_loss: + class: "ContrastiveHardestNegativeLoss" + params: + num_pos: 1024 + 
num_hn_samples: 256 + pos_thresh: 0.1 + neg_thresh: 1.4 + define_constants: + in_feat: 32 + block: ResBlock + backbone: + down_conv: + module_name: ResNetDown + block: block + N: [0, 1, 2, 2, 3] + down_conv_nn: + [ + [FEAT, in_feat], + [in_feat, in_feat], + [in_feat, 2*in_feat], + [2*in_feat, 4*in_feat], + [4*in_feat, 8*in_feat], + ] + kernel_size: 3 + stride: [1, 2, 2, 2, 2] + up_conv: + block: block + module_name: ResNetUp + N: [1, 1, 1, 1, 0] + up_conv_nn: + [ + [8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 4*in_feat], + [4*in_feat + 2*in_feat, 3*in_feat], + [3*in_feat + in_feat, 3*in_feat], + [3*in_feat + in_feat, 3*in_feat], + ] + kernel_size: 3 + stride: [2, 2, 2, 2, 1] + mlp_cls: + nn: [3*in_feat, 2*in_feat, in_feat] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/kpconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/kpconv.yaml new file mode 100644 index 00000000..68d03a84 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/kpconv.yaml @@ -0,0 +1,267 @@ +models: + # KPConv: Flexible and Deformable Convolution for Point Clouds (https://arxiv.org/abs/1904.08889) + # KPConv support format_type="[PARTIAL_DENSE | MESSAGE_PASSING]" + KPConvPaper: + class: kpconv.KPConvPaper + conv_type: "PARTIAL_DENSE" + use_category: ${data.use_category} + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 64 + bn_momentum: 0.02 + down_conv: + n_kernel_points: 15 + down_conv_nn: + [ + [[FEAT + 1, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 2*in_feat], [2*in_feat, 4*in_feat]], + [[4*in_feat, 4*in_feat], [4*in_feat, 8*in_feat]], + [[8*in_feat, 8*in_feat], [8*in_feat, 16*in_feat]], + [[16*in_feat, 16*in_feat], [16*in_feat, 32*in_feat]], + ] + grid_size: + [ + [in_grid_size, in_grid_size], + [2*in_grid_size, 2*in_grid_size], + [4*in_grid_size, 4*in_grid_size], + [8*in_grid_size, 8*in_grid_size], + 
[16*in_grid_size, 16*in_grid_size], + ] + prev_grid_size: + [ + [in_grid_size, in_grid_size], + [in_grid_size, 2*in_grid_size], + [2*in_grid_size, 4*in_grid_size], + [4*in_grid_size, 8*in_grid_size], + [8*in_grid_size, 16*in_grid_size], + ] + block_names: + [ + ["SimpleBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ] + has_bottleneck: + [[False, True], [True, True], [True, True], [True, True], [True, True]] + deformable: + [ + [False, False], + [False, False], + [False, False], + [False, False], + [False, False], + ] + max_num_neighbors: [[25, 25], [25, 30], [30, 38], [38, 38], [38, 38]] + module_name: KPDualBlock + up_conv: + module_name: FPModule_PD + up_conv_nn: + [ + [32*in_feat + 16*in_feat, 8*in_feat], + [8*in_feat + 8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 2*in_feat], + [2*in_feat + 2*in_feat, in_feat], + ] + skip: True + up_k: [1, 1, 1, 1] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0 + bn_momentum: bn_momentum + loss_weights: + lambda_reg: 0 + + KPDeformableConvPaper: + class: kpconv.KPConvPaper + conv_type: "PARTIAL_DENSE" + use_category: ${data.use_category} + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 64 + bn_momentum: 0.02 + down_conv: + down_conv_nn: + [ + [[FEAT + 1, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 2*in_feat], [2*in_feat, 4*in_feat]], + [[4*in_feat, 4*in_feat], [4*in_feat, 8*in_feat]], + [[8*in_feat, 8*in_feat], [8*in_feat, 16*in_feat]], + [[16*in_feat, 16*in_feat], [16*in_feat, 32*in_feat]], + ] + grid_size: + [ + [in_grid_size, in_grid_size], + [2*in_grid_size, 2*in_grid_size], + [4*in_grid_size, 4*in_grid_size], + [8*in_grid_size, 8*in_grid_size], + [16*in_grid_size, 16*in_grid_size], + ] + prev_grid_size: + [ + [in_grid_size, in_grid_size], + [in_grid_size, 2*in_grid_size], + 
[2*in_grid_size, 4*in_grid_size], + [4*in_grid_size, 8*in_grid_size], + [8*in_grid_size, 16*in_grid_size], + ] + block_names: + [ + ["SimpleBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ] + has_bottleneck: + [[False, True], [True, True], [True, True], [True, True], [True, True]] + deformable: + [ + [False, False], + [False, False], + [False, True], + [True, True], + [True, True], + ] + max_num_neighbors: [[20, 20], [20, 35], [35, 45], [45, 40], [40, 40]] + module_name: KPDualBlock + up_conv: + module_name: FPModule_PD + up_conv_nn: + [ + [32*in_feat + 16*in_feat, 8*in_feat], + [8*in_feat + 8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 2*in_feat], + [2*in_feat + 2*in_feat, in_feat], + ] + skip: True + up_k: [3, 3, 3, 3] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0.5 + bn_momentum: bn_momentum + loss_weights: + lambda_reg: 0 + + KPConvPaper_innermost: + class: kpconv.KPConvPaper + conv_type: "PARTIAL_DENSE" + in_grid_size: ${data.first_subsampling} + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 64 + bn_momentum: 0.02 + down_conv: + down_conv_nn: + [ + [[FEAT + 1, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 2*in_feat], [2*in_feat, 4*in_feat]], + [[4*in_feat, 4*in_feat], [4*in_feat, 8*in_feat]], + [[8*in_feat, 8*in_feat], [8*in_feat, 16*in_feat]], + [[16*in_feat, 16*in_feat], [16*in_feat, 32*in_feat]], + ] + grid_size: + [ + [in_grid_size, in_grid_size], + [2*in_grid_size, 2*in_grid_size], + [4*in_grid_size, 4*in_grid_size], + [8*in_grid_size, 8*in_grid_size], + [16*in_grid_size, 16*in_grid_size], + ] + prev_grid_size: + [ + [in_grid_size, in_grid_size], + [in_grid_size, 2*in_grid_size], + [2*in_grid_size, 4*in_grid_size], + [4*in_grid_size, 8*in_grid_size], + [8*in_grid_size, 16*in_grid_size], + ] + block_names: + [ + 
["SimpleBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ["ResnetBBlock", "ResnetBBlock"], + ] + has_bottleneck: + [[False, True], [True, True], [True, True], [True, True], [True, True]] + deformable: + [ + [False, False], + [False, False], + [False, False], + [False, False], + [False, False], + ] + max_num_neighbors: [[20, 20], [35, 35], [50, 50], [50, 50], [50, 50]] + module_name: KPDualBlock + innermost: + module_name: GlobalBaseModule + activation: + name: LeakyReLU + negative_slope: 0.2 + aggr: "mean" + nn: [32*in_feat + 3, 8*in_feat, 32*in_feat] + up_conv: + module_name: FPModule_PD + up_conv_nn: + [ + [32*in_feat + 32*in_feat, 32*in_feat], + [32*in_feat + 16*in_feat, 8*in_feat], + [8*in_feat + 8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 2*in_feat], + [2*in_feat + 2*in_feat, in_feat], + ] + skip: True + up_k: [3, 3, 3, 3, 5] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0.5 + bn_momentum: bn_momentum + loss_weights: + lambda_reg: 1e-6 + + KPConvPaper2LDMini: + class: kpconv.KPConvPaper + conv_type: "PARTIAL_DENSE" + use_category: ${data.use_category} + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 64 + bn_momentum: 0.02 + down_conv: + down_conv_nn: + [ + [[FEAT + 1, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 2*in_feat], [2*in_feat, 4*in_feat]], + ] + grid_size: + [[in_grid_size, in_grid_size], [2*in_grid_size, 2*in_grid_size]] + prev_grid_size: + [[in_grid_size, in_grid_size], [in_grid_size, 2*in_grid_size]] + block_names: + [["SimpleBlock", "ResnetBBlock"], ["ResnetBBlock", "ResnetBBlock"]] + has_bottleneck: [[False, True], [True, True]] + deformable: [[False, False], [False, False]] + max_num_neighbors: [[15, 15], [15, 15]] + module_name: KPDualBlock + up_conv: + module_name: FPModule_PD + up_conv_nn: [[4*in_feat + 2*in_feat, in_feat]] + skip: True + up_k: 
[1, 1] + bn_momentum: [bn_momentum, bn_momentum] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0 + bn_momentum: bn_momentum + loss_weights: + lambda_reg: 0 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/minkowski_baseline.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/minkowski_baseline.yaml new file mode 100644 index 00000000..1b44f252 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/minkowski_baseline.yaml @@ -0,0 +1,158 @@ +models: + # Minkowski Engine: https://github.com/StanfordVL/MinkowskiEngine/blob/master/examples/minkunet.py + MinkUNet14A: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet14A" + D: 3 + + MinkUNet14B: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet14B" + D: 3 + + MinkUNet14C: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet14C" + D: 3 + + MinkUNet14D: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet14D" + D: 3 + + MinkUNet18A: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet18A" + D: 3 + + MinkUNet18B: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet18B" + D: 3 + + MinkUNet18D: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet18D" + D: 3 + + MinkUNet34A: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet34A" + D: 3 + + MinkUNet34B: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet34B" + D: 3 + + MinkUNet34C: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "MinkUNet34C" + D: 3 + + Res16UNet14: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14" + D: 3 + + 
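The `minkowski_baseline.yaml` entries in this hunk repeat an identical four-key stanza per architecture, varying only `model_name` (with one entry, `Res16UNet34C`, adding an `extra_options` block instead of `D`). As a purely illustrative sketch of how such boilerplate could be generated — this helper is not part of torch-points3d:

```python
def baseline_entry(model_name, extra_options=None):
    """Build one entry of the minkowski_baseline.yaml `models:` map.

    Every baseline shares the same class and conv_type; only the
    model_name (and, rarely, extra_options) changes.
    """
    entry = {
        "class": "minkowski.Minkowski_Baseline_Model",
        "conv_type": "SPARSE",
        "model_name": model_name,
        "D": 3,
    }
    if extra_options is not None:
        entry["extra_options"] = extra_options
    return entry

# A few of the names from the file above; the full file lists ~25 such entries.
models = {name: baseline_entry(name)
          for name in ["MinkUNet14A", "MinkUNet18A", "Res16UNet14B"]}
# Res16UNet34C is the one entry that carries an extra option in the file.
models["Res16UNet34C"] = baseline_entry(
    "Res16UNet34C", extra_options={"conv1_kernel_size": 5})
```

Generating the stanzas this way (then dumping with a YAML emitter) keeps the per-model differences in one place instead of duplicated blocks.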
Res16UNet18: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet18" + D: 3 + + Res16UNet34: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet34" + D: 3 + + Res16UNet14A: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14A" + D: 3 + + Res16UNet14A2: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14A2" + D: 3 + + Res16UNet14B: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14B" + D: 3 + + Res16UNet14B2: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14B2" + D: 3 + + Res16UNet14B3: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14B3" + D: 3 + + Res16UNet14C: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14C" + D: 3 + + Res16UNet14D: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet14D" + D: 3 + + Res16UNet18A: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet18A" + D: 3 + + Res16UNet18B: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet18B" + D: 3 + + Res16UNet18D: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet18D" + D: 3 + + Res16UNet34A: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet34A" + D: 3 + + Res16UNet34B: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet34B" + D: 3 + + Res16UNet34C: + class: minkowski.Minkowski_Baseline_Model + conv_type: "SPARSE" + model_name: "Res16UNet34C" + extra_options: + conv1_kernel_size: 5 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointcnn.yaml 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointcnn.yaml new file mode 100644 index 00000000..f83a6670 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointcnn.yaml @@ -0,0 +1,81 @@ +models: + pointcnn_small: + class: pointcnn.PointCNNSeg + conv_type: "MESSAGE_PASSING" + define_constants: + L1_OUT: 32 + L2_OUT: 32 + INNER_OUT: 64 + down_conv: + module_name: PointCNNConvDown + inN: [2048, 768] + outN: [768, 384] + K: [8, 12] + D: [1, 2] + C1: [0, L1_OUT] + C2: [L1_OUT, L2_OUT] + hidden_channel: [64, None] + innermost: + module_name: PointCNNConvDown + inN: 384 + outN: 128 + K: 16 + D: 2 + C1: L2_OUT + C2: INNER_OUT + up_conv: + module_name: PointCNNConvUp + K: [16, 12, 8] + D: [6, 6, 6] + C1: [INNER_OUT, 32 + L2_OUT, 32 + L1_OUT] + C2: [32, 32, 32] + mlp_cls: + nn: [32, 32, 32, 32, 32] + dropout: 0.5 + + pointcnn_shapenet: + class: pointcnn.PointCNNSeg + conv_type: "MESSAGE_PASSING" + down_conv: + - module_name: PointCNNConvDown + inN: 2048 + outN: 768 + K: 8 + D: 1 + C1: 0 + C2: 256 + hidden_channel: 64 + - module_name: PointCNNConvDown + inN: 768 + outN: 384 + K: 12 + D: 2 + C1: 256 + C2: 256 + innermost: + module_name: PointCNNConvDown + inN: 384 + outN: 128 + K: 16 + D: 2 + C1: 256 + C2: 512 + up_conv: + - module_name: PointCNNConvUp + K: 16 + D: 6 + C1: 512 + C2: 256 + - module_name: PointCNNConvUp + K: 12 + D: 6 + C1: 256 + C2: 256 + - module_name: PointCNNConvUp + K: 8 + D: 6 + C1: 256 + C2: 64 + mlp_cls: + nn: [64, 64, 64, 64, 64] + dropout: 0.5 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointnet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointnet.yaml new file mode 100644 index 00000000..487909b9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointnet.yaml @@ -0,0 +1,25 @@ +models: + 
PointNet: + class: pointnet.PointNet + conv_type: "PARTIAL_DENSE" + input_nc: FEAT + 3 + input_stn: + local_nn: [64, 128, 1024] + global_nn: [1024, 512, 256] + local_nn_1: [64, 64] + feat_stn: + k: 64 + local_nn: [64, 64, 128, 1024] + global_nn: [1024, 512, 256] + local_nn_2: [64, 64, 128, 1024] + seg_nn: [1024 + 64, 512, 256, 128, N_CLS] + + PointNet_Features: + class: pointnet.SegPointNetModel + conv_type: "PARTIAL_DENSE" + pointnet: + local_nn: [3, 32, 64, 4] + global_nn: [4, 2] + aggr: "mean" + return_local_out: True + seg_nn: [4 + 2, 50, N_CLS] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointnet2.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointnet2.yaml new file mode 100644 index 00000000..af2966f8 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pointnet2.yaml @@ -0,0 +1,189 @@ +models: + # PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space (https://arxiv.org/abs/1706.02413) + pointnet2: + class: pointnet2.PointNet2_MP + conv_type: "MESSAGE_PASSING" + down_conv: + module_name: SAModule + ratios: [0.2, 0.25] + radius: [0.2, 0.4] + down_conv_nn: [[FEAT + 3, 64, 64, 128], [128 + 3, 128, 128, 256]] + radius_num_points: [64, 64] + up_conv: + module_name: FPModule + up_conv_nn: + [ + [1024 + 256, 256, 256], + [256 + 128, 256, 128], + [128 + FEAT, 128, 128, 128], + ] + up_k: [1, 3, 3] + skip: True + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [256 + 3, 256, 512, 1024] + mlp_cls: + nn: [128, 128, 128, 128, 128] + dropout: 0.5 + + pointnet2ms: + class: pointnet2.PointNet2_MP + conv_type: "MESSAGE_PASSING" + down_conv: + module_name: SAModule + ratios: [0.25, 0.25] + radius: [[0.1, 0.2, 0.4], [0.4, 0.8]] + radius_num_points: [[32, 64, 128], [64, 128]] + down_conv_nn: [[FEAT+3, 64, 96, 128], [128 * 3 + 3, 128, 196, 256]] + up_conv: + module_name: FPModule + 
up_conv_nn: + [ + [1024 + 256 * 2, 256, 256], + [256 + 128 * 3, 128, 128], + [128 + FEAT, 128, 128], + ] + up_k: [1, 3, 3] + skip: True + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [256* 2 + 3, 256, 512, 1024] + mlp_cls: + nn: [128, 128, 128, 128, 128] + dropout: 0.5 + + pointnet2_largemsg: + class: pointnet2.PointNet2_D + conv_type: "DENSE" + use_category: ${data.use_category} + down_conv: + module_name: PointNetMSGDown + npoint: [1024, 256, 64, 16] + radii: [[0.05, 0.1], [0.1, 0.2], [0.2, 0.4], [0.4, 0.8]] + nsamples: [[16, 32], [16, 32], [16, 32], [16, 32]] + down_conv_nn: + [ + [[FEAT+3, 16, 16, 32], [FEAT+3, 32, 32, 64]], + [[32 + 64+3, 64, 64, 128], [32 + 64+3, 64, 96, 128]], + [ + [128 + 128+3, 128, 196, 256], + [128 + 128+3, 128, 196, 256], + ], + [ + [256 + 256+3, 256, 256, 512], + [256 + 256+3, 256, 384, 512], + ], + ] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [512 + 512 + 256 + 256, 512, 512], + [512 + 128 + 128, 512, 512], + [512 + 64 + 32, 256, 256], + [256 + FEAT, 128, 128], + ] + skip: True + mlp_cls: + nn: [128, 128] + dropout: 0.5 + + pointnet2_charlesmsg: + class: pointnet2.PointNet2_D + conv_type: "DENSE" + use_category: ${data.use_category} + down_conv: + module_name: PointNetMSGDown + npoint: [512, 128] + radii: [[0.1, 0.2, 0.4], [0.4, 0.8]] + nsamples: [[32, 64, 128], [64, 128]] + down_conv_nn: + [ + [ + [FEAT+3, 32, 32, 64], + [FEAT+3, 64, 64, 128], + [FEAT+3, 64, 96, 128], + ], + [ + [64 + 128 + 128+3, 128, 128, 256], + [64 + 128 + 128+3, 128, 196, 256], + ], + ] + innermost: + module_name: GlobalDenseBaseModule + nn: [256 * 2 + 3, 256, 512, 1024] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [1024 + 256*2, 256, 256], + [256 + 128 * 2 + 64, 256, 128], + [128 + FEAT, 128, 128], + ] + skip: True + mlp_cls: + nn: [128, 128] + dropout: 0.5 + + pointnet2_charlesssg: + class: pointnet2.PointNet2_D + conv_type: "DENSE" + use_category: ${data.use_category} + down_conv: + module_name: PointNetMSGDown 
+ npoint: [512, 128] + radii: [[0.2], [0.4]] + nsamples: [[64], [64]] + down_conv_nn: [[[FEAT + 3, 64, 64, 128]], [[128+3, 128, 128, 256]]] + innermost: + module_name: GlobalDenseBaseModule + nn: [256 + 3, 256, 512, 1024] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [1024 + 256, 256, 256], + [256 + 128, 256, 128], + [128 + FEAT, 128, 128, 128], + ] + skip: True + mlp_cls: + nn: [128, 128] + dropout: 0.5 + + pointnet2_indoor: + class: pointnet2.PointNet2_D + conv_type: "DENSE" + down_conv: + module_name: PointNetMSGDown + npoint: [2048, 1024, 512, 256] + radii: [[0.1, 0.2], [0.2, 0.4], [0.4, 0.8], [0.8, 1.6]] + nsamples: [[32, 64], [16, 32], [16, 32], [16, 32]] + down_conv_nn: + [ + [[FEAT+3, 16, 16, 32], [FEAT+3, 32, 32, 64]], + [[32 + 64+3, 64, 64, 128], [32 + 64+3, 64, 96, 128]], + [ + [128 + 128+3, 128, 196, 256], + [128 + 128+3, 128, 196, 256], + ], + [ + [256 + 256+3, 256, 256, 512], + [256 + 256+3, 256, 384, 512], + ], + ] + up_conv: + module_name: DenseFPModule + up_conv_nn: + [ + [512 + 512 + 256 + 256, 512, 512], + [512 + 128 + 128, 512, 512], + [512 + 64 + 32, 256, 256], + [256 + FEAT, 128, 128], + ] + skip: True + mlp_cls: + nn: [128, 128] + dropout: 0.5 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/ppnet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/ppnet.yaml new file mode 100644 index 00000000..408f69e2 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/ppnet.yaml @@ -0,0 +1,155 @@ +models: + # PPNet (PosPool): A Closer Look at Local Aggregation Operators in Point Cloud Analysis (https://arxiv.org/pdf/2007.01294.pdf) + PPNet: + class: ppnet.PPNet + conv_type: 'PARTIAL_DENSE' + use_category: ${data.use_category} + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 72 + bn_momentum: 0.01 + position_embedding: 'sin_cos' + reduction: 'avg' + output_conv: False + 
bottleneck_ratio: 2 + down_conv: + down_conv_nn: + [ + [[FEAT, in_feat, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 4*in_feat], [4*in_feat, 4*in_feat]], + [[4*in_feat, 8*in_feat], [8*in_feat, 8*in_feat]], + [[8*in_feat, 16*in_feat], [16*in_feat, 16*in_feat]], + [[16*in_feat, 32*in_feat], [32*in_feat, 32*in_feat]], + ] + grid_size: + [ + [in_grid_size, in_grid_size], + [2*in_grid_size, 2*in_grid_size], + [4*in_grid_size, 4*in_grid_size], + [8*in_grid_size, 8*in_grid_size], + [16*in_grid_size, 16*in_grid_size], + ] + prev_grid_size: + [ + [in_grid_size, in_grid_size], + [in_grid_size, 2*in_grid_size], + [2*in_grid_size, 4*in_grid_size], + [4*in_grid_size, 8*in_grid_size], + [8*in_grid_size, 16*in_grid_size], + ] + block_names: + [ + ['SimpleInputBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ] + has_bottleneck: + [[False, True], [True, True], [True, True], [True, True], [True, True]] + max_num_neighbors: [[26, 26], [26, 31], [31, 38], [38, 41], [41, 39]] + position_embedding: + [position_embedding, position_embedding, position_embedding, position_embedding, position_embedding] + reduction: + [reduction, reduction, reduction, reduction, reduction] + output_conv: + [output_conv, output_conv, output_conv, output_conv, output_conv] + bottleneck_ratio: + [bottleneck_ratio, bottleneck_ratio, bottleneck_ratio, bottleneck_ratio, bottleneck_ratio, bottleneck_ratio] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + module_name: PPStageBlock + up_conv: + module_name: FPModule_PD + up_conv_nn: + [ + [32*in_feat + 16*in_feat, 8*in_feat], + [8*in_feat + 8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 2*in_feat], + [2*in_feat + 2*in_feat, in_feat], + ] + skip: True + up_k: [1, 1, 1, 1] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0 + 
bn_momentum: bn_momentum + + PPNetxyz: + class: ppnet.PPNet + conv_type: 'PARTIAL_DENSE' + use_category: ${data.use_category} + define_constants: + in_grid_size: ${data.first_subsampling} + in_feat: 72 + bn_momentum: 0.01 + position_embedding: 'xyz' + reduction: 'avg' + output_conv: False + bottleneck_ratio: 2 + down_conv: + down_conv_nn: + [ + [[FEAT, in_feat, in_feat], [in_feat, 2*in_feat]], + [[2*in_feat, 4*in_feat], [4*in_feat, 4*in_feat]], + [[4*in_feat, 8*in_feat], [8*in_feat, 8*in_feat]], + [[8*in_feat, 16*in_feat], [16*in_feat, 16*in_feat]], + [[16*in_feat, 32*in_feat], [32*in_feat, 32*in_feat]], + ] + grid_size: + [ + [in_grid_size, in_grid_size], + [2*in_grid_size, 2*in_grid_size], + [4*in_grid_size, 4*in_grid_size], + [8*in_grid_size, 8*in_grid_size], + [16*in_grid_size, 16*in_grid_size], + ] + prev_grid_size: + [ + [in_grid_size, in_grid_size], + [in_grid_size, 2*in_grid_size], + [2*in_grid_size, 4*in_grid_size], + [4*in_grid_size, 8*in_grid_size], + [8*in_grid_size, 16*in_grid_size], + ] + block_names: + [ + ['SimpleInputBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ['ResnetBBlock', 'ResnetBBlock'], + ] + has_bottleneck: + [[False, True], [True, True], [True, True], [True, True], [True, True]] + max_num_neighbors: [[26, 26], [26, 31], [31, 38], [38, 41], [41, 39]] + position_embedding: + [position_embedding, position_embedding, position_embedding, position_embedding, position_embedding] + reduction: + [reduction, reduction, reduction, reduction, reduction] + output_conv: + [output_conv, output_conv, output_conv, output_conv, output_conv] + bottleneck_ratio: + [bottleneck_ratio, bottleneck_ratio, bottleneck_ratio, bottleneck_ratio, bottleneck_ratio, bottleneck_ratio] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + module_name: PPStageBlock + up_conv: + module_name: FPModule_PD + up_conv_nn: + [ + [32*in_feat + 16*in_feat, 
8*in_feat], + [8*in_feat + 8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 2*in_feat], + [2*in_feat + 2*in_feat, in_feat], + ] + skip: True + up_k: [1, 1, 1, 1] + bn_momentum: + [bn_momentum, bn_momentum, bn_momentum, bn_momentum, bn_momentum] + mlp_cls: + nn: [in_feat, in_feat] + dropout: 0 + bn_momentum: bn_momentum \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pvcnn.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pvcnn.yaml new file mode 100644 index 00000000..7deefc7e --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/pvcnn.yaml @@ -0,0 +1,6 @@ +models: + PVCNN: + class: pvcnn.PVCNN + conv_type: "SPARSE" + cr: 1 + vres: ${data.grid_size} diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/randlanet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/randlanet.yaml new file mode 100644 index 00000000..24ad4957 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/randlanet.yaml @@ -0,0 +1,64 @@ +models: + # RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds (https://arxiv.org/pdf/1911.11236.pdf) + Randlanet_Res: + class: randlanet.RandLANetSeg + conv_type: "MESSAGE_PASSING" + input_nc: FEAT + 3 + down_conv: + module_name: RandLANetRes + ratio: [[1, 1], [0.5, 0.5]] + indim: [3, 32] + outdim: [32, 128] + point_pos_nn: + [[[10, 8, FEAT], [10, 16, 16]], [[10, 16, 32], [10, 32, 64]]] + attention_nn: + [ + [[2 * FEAT, 8, 2 * FEAT], [32, 64, 32]], + [[64, 128, 64], [128, 256, 128]], + ] + down_conv_nn: + [ + [[2 * FEAT, 8, 16], [32, 64, 32]], + [[64, 64, 64], [128, 128, 128]], + ] + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [131, 128] + up_conv: + module_name: FPModule + up_conv_nn: [[128 + 128, 128], [128 + 
32, 64], [64 + FEAT, 64]] + up_k: [1, 1, 1] + skip: True + mlp_cls: + nn: [64, 64, 64, 64, 64] + dropout: 0.5 + + Randlanet_Conv: + class: randlanet.RandLANetSeg + conv_type: "MESSAGE_PASSING" + down_conv: + module_name: RandlaConv + ratio: [0.25, 0.25, 0.25] + k: [16, 16, 16] + point_pos_nn: [[10, 8, FEAT], [10, 8, 16], [10, 16, 32]] + attention_nn: [[2 * FEAT, 8, 2 * FEAT], [32, 64, 32], [64, 128, 64]] + down_conv_nn: [[2 * FEAT, 8, 16], [32, 64, 32], [64, 128, 128]] + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [131, 128] + up_conv: + module_name: FPModule + up_conv_nn: + [ + [128 + 128, 128], + [128 + 32, 64], + [64 + 16, 64], + [64 + FEAT, 64], + ] + up_k: [1, 1, 1, 1] + skip: True + mlp_cls: + nn: [64, 64, 64, 64, 64] + dropout: 0.5 diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/rsconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/rsconv.yaml new file mode 100644 index 00000000..af7eea5a --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/rsconv.yaml @@ -0,0 +1,213 @@ +models: + # Relation-Shape Convolutional Neural Network for Point Cloud Analysis (https://arxiv.org/abs/1904.07601) + RSConv_2LD: + class: rsconv.RSConv_MP + conv_type: "MESSAGE_PASSING" + down_conv: + module_name: RSConvDown + ratios: [0.2, 0.25] + radius: [0.1, 0.2] + local_nn: [[10, 8, FEAT], [10, 32, 64, 64]] + down_conv_nn: [[FEAT, 16, 32, 64], [64, 64, 128]] + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [128 + FEAT, 128] + up_conv: + module_name: FPModule + ratios: [1, 0.25, 0.2] + radius: [0.2, 0.2, 0.1] + up_conv_nn: [[128 + 128, 64], [64 + 64, 64], [64, 64]] + up_k: [1, 3, 3] + skip: True + mlp_cls: + nn: [64, 64, 64, 64, 64] + dropout: 0.5 + + RSConv_4LD: + class: rsconv.RSConv_MP + conv_type: "MESSAGE_PASSING" + down_conv: + module_name: RSConvDown + ratios: [0.5, 0.5, 0.5, 0.5] + radius: [0.1, 
0.2, 0.3, 0.4] + local_nn: [[10, 8, FEAT], [10, 16, 16], [10, 32, 32], [10, 64, 64]] + down_conv_nn: + [[FEAT, 16, 16], [16, 32, 32], [32, 64, 64], [64, 128, 128]] + innermost: + module_name: GlobalBaseModule + aggr: max + nn: [131, 128] #[3 + 128] + up_conv: + module_name: FPModule + up_conv_nn: + [ + [128 + 128, 128], + [128 + 64, 64], + [64 + 32, 32], + [32 + 16, 32], + [32, 64], + ] + up_k: [1, 3, 3, 3, 3] + skip: True + mlp_cls: + nn: [64, 64, 64, 64, 64] + dropout: 0.1 + + RSConv_MSN: + class: rsconv.RSConvLogicModel + conv_type: "DENSE" + use_category: ${data.use_category} + down_conv: + module_name: RSConvOriginalMSGDown + npoint: [1024, 256, 64, 16] + radii: + [ + [0.075, 0.1, 0.125], + [0.1, 0.15, 0.2], + [0.2, 0.3, 0.4], + [0.4, 0.6, 0.8], + ] + nsamples: [[16, 32, 48], [16, 48, 64], [16, 32, 48], [16, 24, 32]] + down_conv_nn: + [ + [[10, 64//2, 16], [FEAT + 3, 16]], + [10, 128//4, 64 * 3 + 3], + [10, 256//4, 128 * 3 + 3], + [10, 512//4, 256 * 3 + 3], + ] + channel_raising_nn: + [ + [16, 64], + [64 * 3 + 3, 128], + [128 * 3 + 3, 256], + [256 * 3 + 3, 512], + ] + innermost: + - module_name: GlobalDenseBaseModule + nn: [512 * 3 + 3, 128] + aggr: "mean" + - module_name: GlobalDenseBaseModule + nn: [256 * 3 + 3, 128] + aggr: "mean" + up_conv: + bn: True + bias: False + module_name: DenseFPModule + up_conv_nn: + [ + [512 * 3 + 256 * 3, 512, 512], + [128 * 3 + 512, 512, 512], + [64 * 3 + 512, 256, 256], + [256 + FEAT, 128, 128], + [], + ] + skip: True + mlp_cls: + nn: [128 * 2 + 2 * 64, 128] + dropout: 0 + + RSConv_MSN_S3DIS: + class: rsconv.RSConvLogicModel + conv_type: "DENSE" + use_category: ${data.use_category} + down_conv: + module_name: RSConvOriginalMSGDown + npoint: [2048, 1024, 512, 64] + radii: + [ + [0.075, 0.1, 0.125], + [0.1, 0.15, 0.2], + [0.2, 0.3, 0.4], + [0.4, 0.6, 0.8], + ] + nsamples: [[16, 32, 48], [16, 48, 64], [16, 32, 48], [16, 24, 32]] + down_conv_nn: + [ + [[10, 64//2, 16], [FEAT + 3, 16]], + [10, 128//4, 64 * 3 + 3], + [10, 256//4, 
128 * 3 + 3], + [10, 512//4, 256 * 3 + 3], + ] + channel_raising_nn: + [ + [16, 64], + [64 * 3 + 3, 128], + [128 * 3 + 3, 256], + [256 * 3 + 3, 512], + ] + innermost: + - module_name: GlobalDenseBaseModule + nn: [512 * 3 + 3, 128] + aggr: "mean" + - module_name: GlobalDenseBaseModule + nn: [256 * 3 + 3, 128] + aggr: "mean" + up_conv: + bn: True + bias: False + module_name: DenseFPModule + up_conv_nn: + [ + [512 * 3 + 256 * 3, 512, 512], + [128 * 3 + 512, 512, 512], + [64 * 3 + 512, 256, 256], + [256 + FEAT, 128, 128], + [], + ] + skip: True + mlp_cls: + nn: [128 * 2 + 2 * 64, 128] + dropout: 0. + + RSConv_Indoor: + class: rsconv.RSConvLogicModel + conv_type: "DENSE" + down_conv: + module_name: RSConvOriginalMSGDown + npoint: [2048, 1024, 512, 256] + radii: + [ + [0.125, 0.2, 0.25], + [0.2, 0.3, 0.4], + [0.4, 0.6, 0.8], + [0.8, 1.2, 1.6], + ] + nsamples: [[16, 32, 48], [16, 48, 64], [16, 32, 48], [16, 24, 32]] + down_conv_nn: + [ + [[10, 64//2, 16], [FEAT + 3, 16]], + [10, 128//4, 64 * 3 + 3], + [10, 256//4, 128 * 3 + 3], + [10, 512//4, 256 * 3 + 3], + ] + channel_raising_nn: + [ + [16, 64], + [64 * 3 + 3, 128], + [128 * 3 + 3, 256], + [256 * 3 + 3, 512], + ] + innermost: + - module_name: GlobalDenseBaseModule + nn: [512 * 3 + 3, 128] + aggr: "mean" + - module_name: GlobalDenseBaseModule + nn: [256 * 3 + 3, 128] + aggr: "mean" + up_conv: + bn: True + bias: False + module_name: DenseFPModule + up_conv_nn: + [ + [512 * 3 + 256 * 3, 512, 512], + [128 * 3 + 512, 512, 512], + [64 * 3 + 512, 256, 256], + [256 + FEAT, 128, 128], + [], + ] + skip: True + mlp_cls: + nn: [128 * 2 + 2 * 64, 128] + dropout: 0. 
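A note on the `${...}` entries seen throughout these configs (e.g. `${training.optim.base_lr}`, `${data.use_category}`): they are OmegaConf/Hydra-style interpolations, resolved against the config root at load time. The sketch below is purely illustrative and not part of the patch — `resolve` is a hypothetical helper that mimics, in a much simplified way, how such a reference is looked up; the real projects rely on OmegaConf for this.

```python
import re

def resolve(node, root=None):
    """Recursively resolve ${dotted.path} references in a nested dict.

    Simplified illustration of OmegaConf-style interpolation: a string of the
    exact form "${a.b.c}" is replaced by the value found at root["a"]["b"]["c"].
    """
    if root is None:
        root = node
    if isinstance(node, dict):
        return {k: resolve(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(v, root) for v in node]
    if isinstance(node, str):
        m = re.fullmatch(r"\$\{([\w.]+)\}", node)
        if m:
            target = root
            for part in m.group(1).split("."):
                target = target[part]  # walk the dotted path from the root
            return resolve(target, root)
    return node

# Mirrors the shape of conf/training/default.yaml
cfg = {
    "training": {
        "optim": {
            "base_lr": 0.001,
            "optimizer": {
                "class": "Adam",
                "params": {"lr": "${training.optim.base_lr}"},
            },
        }
    }
}

resolved = resolve(cfg)
print(resolved["training"]["optim"]["optimizer"]["params"]["lr"])  # 0.001
```

Because the reference points at `training.optim.base_lr`, changing that single key before resolution propagates to every interpolated field — which is why these training configs set the learning rate once under `optim` and interpolate it into `optimizer.params`.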
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/sparseconv3d.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/sparseconv3d.yaml new file mode 100644 index 00000000..91fd9ea0 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/models/segmentation/sparseconv3d.yaml @@ -0,0 +1,74 @@ +models: + ResUNet32: + class: sparseconv3d.APIModel + conv_type: "SPARSE" + backend: "torchsparse" + backbone: + define_constants: + in_feat: 32 + block: ResBlock # Can be any of the blocks in modules/MinkowskiEngine/api_modules.py + down_conv: + module_name: ResNetDown + block: block + N: [0, 1, 2, 2, 3] + down_conv_nn: + [ + [FEAT, in_feat], + [in_feat, in_feat], + [in_feat, 2*in_feat], + [2*in_feat, 4*in_feat], + [4*in_feat, 8*in_feat], + ] + kernel_size: 3 + stride: [1, 2, 2, 2, 2] + up_conv: + block: block + module_name: ResNetUp + N: [1, 1, 1, 1, 0] + up_conv_nn: + [ + [8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 4*in_feat], + [4*in_feat + 2*in_feat, 3*in_feat], + [3*in_feat + in_feat, 3*in_feat], + [3*in_feat + in_feat, 3*in_feat], + ] + kernel_size: 3 + stride: [2, 2, 2, 2, 1] + + Res16UNet34: + class: sparseconv3d.APIModel + conv_type: "SPARSE" + backend: "torchsparse" + backbone: + define_constants: + in_feat: 32 + block: ResBlock # Can be any of the blocks in modules/MinkowskiEngine/api_modules.py + down_conv: + module_name: ResNetDown + block: block + N: [0, 2, 3, 4, 6] + down_conv_nn: + [ + [FEAT, in_feat], + [in_feat, in_feat], + [in_feat, 2*in_feat], + [2*in_feat, 4*in_feat], + [4*in_feat, 8*in_feat], + ] + kernel_size: 3 + stride: [1, 2, 2, 2, 2] + up_conv: + block: block + module_name: ResNetUp + N: [1, 1, 1, 1, 1] + up_conv_nn: + [ + [8*in_feat, 4*in_feat], + [4*in_feat + 4*in_feat, 4*in_feat], + [4*in_feat + 2*in_feat, 3*in_feat], + [3*in_feat + in_feat, 3*in_feat], + [3*in_feat + in_feat, 3*in_feat], + ] + kernel_size: 3 + stride: [2, 
2, 2, 2, 1] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/sota.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/sota.yaml new file mode 100644 index 00000000..d66b6fbe --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/sota.yaml @@ -0,0 +1,26 @@ +sota: + s3dis5: + miou: 67.1 + mrec: 72.8 + + s3dis: + acc: 88.2 + macc: 81.5 + miou: 70.6 + + scannet: + miou: 72.5 + + semantic3d: + miou: 76.0 + acc: 94.4 + + semantickitti: + miou: 50.3 + + modelnet40: + acc: 92.9 + + shapenet: + mciou: 85.1 + miou: 86.4 \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/default.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/default.yaml new file mode 100644 index 00000000..2a2fc706 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/default.yaml @@ -0,0 +1,44 @@ +# These arguments define the training hyper-parameters +training: + epochs: 100 #100 + num_workers: 1 #6 + batch_size: 1 #16 + shuffle: True + # PointNet: -1; others: 0 + cuda: 0 # -1 -> no CUDA; otherwise use the specified GPU index + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.001 + # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size + grad_clip: -1 #-1 + optimizer: + class: Adam + params: + lr: ${training.optim.base_lr} # The path is cut from training + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.1 + bn_decay: 0.9 + decay_step : 10 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for
Weights and Biases +wandb: + entity: "" + project: default + log: True + notes: + name: + public: True # If True, the model is displayed in the wandb log; otherwise it is not. + config: + model_name: ${model_name} + + # parameters for TensorBoard Visualization +tensorboard: + log: True diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/default_reg.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/default_reg.yaml new file mode 100644 index 00000000..05f739e9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/default_reg.yaml @@ -0,0 +1,42 @@ +# These arguments define the training hyper-parameters +training: + + epochs: 6000 + num_workers: 6 + batch_size: 64 + shuffle: True + cuda: 0 + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.001 + # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size + grad_clip: -1 + optimizer: + class: Adam + params: + lr: ${training.optim.base_lr} # The path is cut from training + + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.1 + bn_decay: 0.9 + decay_step : 3000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + project: default + log: False + notes: + name: + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ + + # parameters for TensorBoard Visualization +tensorboard: + log: True diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/kpconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/kpconv.yaml new file mode 100644 index 00000000..92984f8d --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/kpconv.yaml @@ -0,0 +1,41 @@ +# These arguments define the training hyper-parameters +training: + epochs: 550 #550 + num_workers: 10 # 7 + batch_size: 2 # 8 + cuda: -1 #0 + precompute_multi_scale: True # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.01 + grad_clip: -1 #100 + optimizer: + class: SGD + params: + momentum: 0.98 + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-3 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.98 + bn_decay: 0.9 + decay_step : 1000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + project: urbanmesh + log: True + notes: "Fixed batch norm to 0.02" + name: "kpconv-bn02" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ + +# parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/kpconv_reg.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/kpconv_reg.yaml new file mode 100644 index 00000000..48781865 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/kpconv_reg.yaml @@ -0,0 +1,41 @@ +# These arguments define the training hyper-parameters +training: + epochs: 550 + num_workers: 7 + batch_size: 8 + cuda: 0 + precompute_multi_scale: True # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.01 + grad_clip: 100 + optimizer: + class: SGD + params: + momentum: 0.98 + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-3 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.02 + bn_decay: 0.9 + decay_step : 1000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + project: registration + log: True + notes: "Fixed batch norm to 0.98" + name: "kpconv-bn98" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ + +# parameters for TensorBoard Visualization +tensorboard: + log: True diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/minkowski_scannet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/minkowski_scannet.yaml new file mode 100644 index 00000000..9c414c21 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/minkowski_scannet.yaml @@ -0,0 +1,47 @@ +# Ref: https://github.com/chrischoy/SpatioTemporalSegmentation/blob/master/config.py +training: + epochs: 500 + num_workers: 4 + batch_size: 12 + shuffle: True + cuda: 0 + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.1 + # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size + grad_clip: -1 + optimizer: + class: SGD + params: + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-4 + momentum: 0.9 + dampening: 0.1 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.1 + bn_decay: 0.2 + decay_step: 200 + bn_clip: 2e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + entity: "" + project: scannet + log: True + notes: "Minkowski baseline" + name: "Res16UNet34C" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ config: + grid_size: ${data.grid_size} + model_name: ${model_name} + + # parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/pointgroup.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/pointgroup.yaml new file mode 100644 index 00000000..8beca768 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/pointgroup.yaml @@ -0,0 +1,45 @@ +# Ref: https://github.com/chrischoy/SpatioTemporalSegmentation/blob/master/config.py +training: + epochs: 500 + num_workers: 4 + batch_size: 8 + shuffle: True + cuda: 1 + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.001 + grad_clip: -1 + # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size + optimizer: + class: Adam + params: + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 0 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.1 + bn_decay: 0.5 + decay_step: 20 + bn_clip: 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + entity: "" + project: panoptic + log: True + notes: "Minkowski baseline" + name: "PointGroup" + id: + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ config: + grid_size: ${data.grid_size} + + # parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/ppnet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/ppnet.yaml new file mode 100644 index 00000000..dc941ec1 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/ppnet.yaml @@ -0,0 +1,41 @@ +# These arguments define the training hyper-parameters +training: + epochs: 600 + num_workers: 8 + batch_size: 8 + cuda: 0 + precompute_multi_scale: True # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.01 + grad_clip: 100 + optimizer: + class: SGD + params: + momentum: 0.98 + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-3 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.99 + bn_decay: 1.0 + decay_step : 1000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking
# parameters for Weights and Biases +wandb: + project: s3dis + log: True + notes: "PPNet baseline" + name: "PPNet" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ + +# parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/pvcnn.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/pvcnn.yaml new file mode 100644 index 00000000..64be929c --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/pvcnn.yaml @@ -0,0 +1,47 @@ +training: + epochs: 300 + num_workers: 4 + batch_size: 4 + shuffle: True + cuda: 0 + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 2.4e-1 + # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size + grad_clip: -1 + optimizer: + class: SGD + params: + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-4 + momentum: 0.9 + dampening: 0.1 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.1 + bn_decay: 0.2 + decay_step: 20 + bn_clip: 2e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + entity: "" + project: scannet-benchmark + log: True + notes: + name: ${model_name} + id: + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ config: + grid_size: ${data.grid_size} + model_name: ${model_name} + + # parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/kpconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/kpconv.yaml new file mode 100644 index 00000000..c03b83ff --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/kpconv.yaml @@ -0,0 +1,42 @@ +# These arguments define the training hyper-parameters +training: + epochs: 300 + num_workers: 4 + batch_size: 8 + cuda: 0 + precompute_multi_scale: True # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.01 + grad_clip: 100 + optimizer: + class: SGD + params: + momentum: 0.98 + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-3 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.98 + bn_decay: 0.9 + decay_step : 1000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + entity: nicolas + project: s3dis-benchmark + log: True + notes: "KPConv benchmark training" + name: "kpconv" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ + +# parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/minkowski.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/minkowski.yaml new file mode 100644 index 00000000..cae5c4b9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/minkowski.yaml @@ -0,0 +1,46 @@ +# Ref: https://github.com/chrischoy/SpatioTemporalSegmentation/blob/master/config.py +training: + epochs: 300 + num_workers: 4 + batch_size: 8 + shuffle: True + cuda: 0 + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.1 + # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size + grad_clip: 100 + optimizer: + class: SGD + params: + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-4 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.02 + bn_decay: 1 + decay_step : 1000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + entity: "nicolas" + project: s3dis-benchmark + log: True + notes: "Minkowski baseline" + name: "Res16UNet34C" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+ config: + fold: ${data.fold} + model_name: ${model_name} + + + # parameters for TensorBoard Visualization +tensorboard: + log: False diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/pointnet2.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/pointnet2.yaml new file mode 100644 index 00000000..48f12bd6 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/pointnet2.yaml @@ -0,0 +1,42 @@ +# These arguments define the training hyper-parameters +training: + epochs: 300 + num_workers: 2 + batch_size: 8 + cuda: 0 + precompute_multi_scale: False # Compute multiscale features on CPU for faster training / inference + optim: + base_lr: 0.01 + grad_clip: 100 + optimizer: + class: SGD + params: + momentum: 0.02 + lr: ${training.optim.base_lr} # The path is cut from training + weight_decay: 1e-3 + lr_scheduler: ${lr_scheduler} + bn_scheduler: + bn_policy: "step_decay" + params: + bn_momentum: 0.98 + bn_decay: 0.9 + decay_step : 1000 + bn_clip : 1e-2 + weight_name: "latest" # Used during resume, select which model to load from [miou, macc, acc..., latest] + enable_cudnn: True + checkpoint_dir: "" + +# These arguments define which model, dataset and task are created for benchmarking +# parameters for Weights and Biases +wandb: + entity: nicolas + project: s3dis-benchmark + log: True + notes: "Pointnet++ baseline" + name: "pointnet2" + public: True # If True, the model is displayed in the wandb log; otherwise it is not.
+
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: False
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/rsconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/rsconv.yaml
new file mode 100644
index 00000000..ab7179ba
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/s3dis_benchmark/rsconv.yaml
@@ -0,0 +1,42 @@
+# These arguments define the training hyper-parameters
+training:
+  epochs: 300
+  num_workers: 2
+  batch_size: 8
+  cuda: 0
+  precompute_multi_scale: False # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 0.01
+    grad_clip: 100
+    optimizer:
+      class: SGD
+      params:
+        momentum: 0.02
+        lr: ${training.optim.base_lr} # The path is cut from training
+        weight_decay: 1e-3
+    lr_scheduler: ${lr_scheduler}
+    bn_scheduler:
+      bn_policy: "step_decay"
+      params:
+        bn_momentum: 0.98
+        bn_decay: 0.9
+        decay_step: 1000
+        bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  entity: nicolas
+  project: s3dis-benchmark
+  log: True
+  notes: "rsconv baseline"
+  name: "rsconv"
+  public: True # If True, the model is displayed in the wandb log.
+
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: False
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/kpconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/kpconv.yaml
new file mode 100644
index 00000000..1283b7a9
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/kpconv.yaml
@@ -0,0 +1,42 @@
+# These arguments define the training hyper-parameters
+training:
+  epochs: 300
+  num_workers: 4
+  batch_size: 8
+  cuda: 0
+  precompute_multi_scale: True # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 0.01
+    grad_clip: 100
+    optimizer:
+      class: SGD
+      params:
+        momentum: 0.9
+        lr: ${training.optim.base_lr} # The path is cut from training
+        weight_decay: 1e-4
+        dampening: 0.1
+    lr_scheduler: ${lr_scheduler}
+    bn_scheduler:
+      bn_policy: "step_decay"
+      params:
+        bn_momentum: 0.1
+        bn_decay: 0.9
+        decay_step: 20
+        bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  entity: nicolas
+  project: scannet-benchmark
+  log: True
+  notes: "KPConv benchmark training"
+  name: "kpconv"
+  public: True # If True, the model is displayed in the wandb log.
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: False
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/minkowski.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/minkowski.yaml
new file mode 100644
index 00000000..da2a62d4
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/minkowski.yaml
@@ -0,0 +1,47 @@
+# Ref: https://github.com/chrischoy/SpatioTemporalSegmentation/blob/master/config.py
+training:
+  epochs: 300
+  num_workers: 4
+  batch_size: 8
+  shuffle: True
+  cuda: 0
+  precompute_multi_scale: False # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 0.1
+    # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size
+    grad_clip: -1
+    optimizer:
+      class: SGD
+      params:
+        momentum: 0.9
+        lr: ${training.optim.base_lr} # The path is cut from training
+        weight_decay: 1e-4
+        dampening: 0.1
+    lr_scheduler: ${lr_scheduler}
+    bn_scheduler:
+      bn_policy: "step_decay"
+      params:
+        bn_momentum: 0.02
+        bn_decay: 1
+        decay_step: 2000
+        bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  entity: ""
+  project: scannet-benchmark
+  log: True
+  notes: "Minkowski baseline"
+  name: "Res16UNet34"
+  id:
+  public: True # If True, the model is displayed in the wandb log.
+  config:
+    model_name: ${model_name}
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: False
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/pointnet2.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/pointnet2.yaml
new file mode 100644
index 00000000..2a6738d0
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/pointnet2.yaml
@@ -0,0 +1,42 @@
+# These arguments define the training hyper-parameters
+training:
+  epochs: 300
+  num_workers: 2
+  batch_size: 8
+  cuda: 0
+  precompute_multi_scale: False # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 0.01
+    grad_clip: 100
+    optimizer:
+      class: SGD
+      params:
+        momentum: 0.9
+        lr: ${training.optim.base_lr} # The path is cut from training
+        weight_decay: 1e-3
+        dampening: 0.1
+    lr_scheduler: ${lr_scheduler}
+    bn_scheduler:
+      bn_policy: "step_decay"
+      params:
+        bn_momentum: 0.1
+        bn_decay: 0.9
+        decay_step: 20
+        bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  entity: nicolas
+  project: scannet-benchmark
+  log: True
+  notes: "Pointnet++ baseline"
+  name: "pointnet2"
+  public: True # If True, the model is displayed in the wandb log.
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: False
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/rsconv.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/rsconv.yaml
new file mode 100644
index 00000000..c899989c
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/scannet_benchmark/rsconv.yaml
@@ -0,0 +1,42 @@
+# These arguments define the training hyper-parameters
+training:
+  epochs: 300
+  num_workers: 2
+  batch_size: 8
+  cuda: 0
+  precompute_multi_scale: False # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 0.01
+    grad_clip: 100
+    optimizer:
+      class: SGD
+      params:
+        momentum: 0.9
+        lr: ${training.optim.base_lr} # The path is cut from training
+        weight_decay: 1e-4
+        dampening: 0.1
+    lr_scheduler: ${lr_scheduler}
+    bn_scheduler:
+      bn_policy: "step_decay"
+      params:
+        bn_momentum: 0.1
+        bn_decay: 0.9
+        decay_step: 20
+        bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  entity: nicolas
+  project: scannet-benchmark
+  log: True
+  notes: "rsconv baseline"
+  name: "rsconv"
+  public: True # If True, the model is displayed in the wandb log.
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: False
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/sparse_fragment_reg.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/sparse_fragment_reg.yaml
new file mode 100644
index 00000000..80acea53
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/sparse_fragment_reg.yaml
@@ -0,0 +1,47 @@
+# These arguments define the training hyper-parameters
+training:
+
+  epochs: 200
+  num_workers: 6
+  batch_size: 4
+  shuffle: True
+  cuda: 0
+  precompute_multi_scale: False # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 1e-1
+    # accumulated_gradient: 1
+    # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size
+    grad_clip: -1
+    optimizer:
+      class: SGD
+      params:
+        lr: ${training.optim.base_lr} # The path is cut from training
+        momentum: 0.8
+        weight_decay: 1e-4
+    lr_scheduler:
+      class: ExponentialLR
+      params:
+        gamma: 0.99
+    # bn_scheduler:
+    #   bn_policy: "step_decay"
+    #   params:
+    #     bn_momentum: 0.1
+    #     bn_decay: 0.9
+    #     decay_step: 3000
+    #     bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  project: registration
+  log: True
+  notes:
+  name: humanpose1
+  public: True # If True, the model is displayed in the wandb log.
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: True
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/votenet.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/votenet.yaml
new file mode 100644
index 00000000..ad71647b
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/training/votenet.yaml
@@ -0,0 +1,40 @@
+training:
+  epochs: 200
+  num_workers: 4
+  batch_size: 4
+  shuffle: True
+  cuda: 0
+  precompute_multi_scale: False # Compute multiscale features on cpu for faster training / inference
+  optim:
+    base_lr: 1e-3
+    # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size
+    grad_clip: -1
+    optimizer:
+      class: Adam
+      params:
+        lr: ${training.optim.base_lr} # The path is cut from training
+    lr_scheduler: ${lr_scheduler}
+    bn_scheduler:
+      bn_policy: "step_decay"
+      params:
+        bn_momentum: 0.5
+        bn_decay: 0.5
+        decay_step: 20
+        bn_clip: 1e-2
+  weight_name: "latest" # Used during resume; selects which model to load from [miou, macc, acc..., latest]
+  enable_cudnn: True
+  checkpoint_dir: ""
+
+# These arguments define which model, dataset and task are created for benchmarking
+# parameters for Weights and Biases
+wandb:
+  entity:
+  project: scannet_object_detection
+  log: False
+  notes: "Corrected box prediction"
+  name: VoteNet
+  public: True # If True, the model is displayed in the wandb log.
+
+# parameters for TensorBoard Visualization
+tensorboard:
+  log: True
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/visualization/default.yaml b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/visualization/default.yaml
new file mode 100644
index 00000000..9fdc5822
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/conf/visualization/default.yaml
@@ -0,0 +1,10 @@
+visualization:
+  activate: True
+  format: "pointcloud" # image will come later
+  num_samples_per_epoch: -1 # -1: visualize all
+  deterministic: True # False -> Randomly sample elements from epoch to epoch
+  vis_data: [] # ["test"] -> visualize test when running eval.py
+  saved_keys:
+    pos: [['x', 'float'], ['y', 'float'], ['z', 'float']]
+    y: [['l', 'float']]
+    pred: [['p', 'float']]
\ No newline at end of file
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile
new file mode 100644
index 00000000..77089584
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile
@@ -0,0 +1,27 @@
+FROM ubuntu:bionic
+
+# Avoid warnings by switching to noninteractive
+ENV DEBIAN_FRONTEND=noninteractive
+
+COPY install_system.sh install_system.sh
+RUN bash install_system.sh
+
+COPY install_python.sh install_python.sh
+RUN bash install_python.sh cpu
+
+ARG MODEL=""
+ENV WORKDIR=/dpb
+ENV MODEL_PATH=$WORKDIR/$MODEL
+
+WORKDIR $WORKDIR
+
+COPY pyproject.toml pyproject.toml
+COPY torch_points3d/__init__.py torch_points3d/__init__.py
+COPY README.md README.md
+RUN pip3 install . && rm -rf /root/.cache
+
+COPY . .
+ +# Setup location of model for forward inference +RUN sed -i "/checkpoint_dir:/c\checkpoint_dir: $WORKDIR" forward_scripts/conf/config.yaml +RUN model_name=$(echo "$MODEL" | cut -f 1 -d '.') && sed -i "/model_name:/c\model_name: $model_name" forward_scripts/conf/config.yaml diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile.cpu b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile.cpu new file mode 100644 index 00000000..e5dec1de --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile.cpu @@ -0,0 +1,19 @@ +FROM ubuntu:bionic + +# Avoid warnings by switching to noninteractive +ENV DEBIAN_FRONTEND=noninteractive + +COPY docker/install_system.sh install_system.sh +RUN bash install_system.sh + +COPY docker/install_python.sh install_python.sh +RUN bash install_python.sh cpu && rm -rf /root/.cache + +ENV WORKDIR=/tp3d +WORKDIR $WORKDIR + +COPY pyproject.toml pyproject.toml +COPY torch_points3d/__init__.py torch_points3d/__init__.py +COPY README.md README.md + +RUN pip3 install . && rm -rf /root/.cache diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile.gpu b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile.gpu new file mode 100644 index 00000000..2304f696 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/Dockerfile.gpu @@ -0,0 +1,21 @@ +FROM nvidia/cuda:10.2-devel-ubuntu18.04 + +ENV FORCE_CUDA 1 +ENV TORCH_CUDA_ARCH_LIST "3.5 5.2 6.0 6.1 7.0+PTX" + +COPY docker/install_system.sh install_system.sh +RUN bash install_system.sh + +COPY docker/install_python.sh install_python.sh +RUN bash install_python.sh gpu && rm -rf /root/.cache + +ENV WORKDIR=/tp3d +WORKDIR $WORKDIR + +COPY pyproject.toml pyproject.toml +COPY torch_points3d/__init__.py torch_points3d/__init__.py +COPY README.md README.md + +RUN pip3 install . && rm -rf /root/.cache + +COPY . . 
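The Dockerfile above patches `forward_scripts/conf/config.yaml` at build time with two `sed -i` calls: `checkpoint_dir` is pointed at the workdir and `model_name` becomes the weights file name with its extension stripped (the `cut -f 1 -d '.'` step). A hedged, illustrative re-implementation of that transformation in Python (the function name and sample config text are hypothetical; the actual image only uses `sed` and `cut`):

```python
import re

def inject_model(config_text, model_file, workdir="/dpb"):
    """Illustrative equivalent of the Dockerfile's sed calls: rewrite the
    checkpoint_dir and model_name lines of a YAML config string."""
    # Same as the shell step: echo "$MODEL" | cut -f 1 -d '.'
    model_name = model_file.split(".")[0]
    # sed -i "/checkpoint_dir:/c\checkpoint_dir: $WORKDIR" config.yaml
    config_text = re.sub(r"(?m)^checkpoint_dir:.*$",
                         f"checkpoint_dir: {workdir}", config_text)
    # sed -i "/model_name:/c\model_name: $model_name" config.yaml
    config_text = re.sub(r"(?m)^model_name:.*$",
                         f"model_name: {model_name}", config_text)
    return config_text

# Hypothetical config fragment, patched for a kpconv.pt weights file
patched = inject_model('checkpoint_dir: ""\nmodel_name: placeholder\n',
                       "kpconv.pt")
```

After this step, forward inference inside the container resolves the checkpoint from `$WORKDIR` without any runtime configuration.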
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/build.sh b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/build.sh
new file mode 100644
index 00000000..ce483669
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/build.sh
@@ -0,0 +1,33 @@
+#!/bin/bash
+#
+# Script that builds a docker image containing the code base
+# and a specific set of pretrained weights
+#
+set -eu
+
+if [[ "$#" -ne 1 ]]
+then
+    echo "Usage: ./build.sh path/to/kpconv.pt"
+    exit 1
+fi
+MODEL=$1
+
+# Check that the file exists
+cd ..
+if [ ! -f "$MODEL" ]; then
+    echo "$MODEL does not exist"
+    exit 1
+fi
+
+# Derive the model file name
+MODEL_NAME="$(basename $MODEL)"
+
+# Add model to docker context (outputs is ignored)
+cp $MODEL $MODEL_NAME
+
+# Build image
+IMAGE_NAME=$(echo "$MODEL_NAME" | cut -f 1 -d '.'):latest
+sudo docker build -f docker/Dockerfile --build-arg MODEL=$MODEL_NAME -t $IMAGE_NAME .
+
+# Cleanup
+rm $MODEL_NAME
diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/install_python.sh b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/install_python.sh
new file mode 100644
index 00000000..91a9c3a6
--- /dev/null
+++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/install_python.sh
@@ -0,0 +1,22 @@
+set -eu
+
+if [[ "$#" -ne 1 ]]
+then
+    echo "Usage: ./install_python.sh gpu"
+    exit 1
+fi
+
+python3 -m pip install -U pip
+pip3 install "setuptools>=41.0.0"
+if [ $1 == "gpu" ]; then
+    echo "Install GPU"
+    pip3 install torch==1.7.0 torchvision==0.8.1
+    pip3 install MinkowskiEngine==v0.4.3 --install-option="--force_cuda" --install-option="--cuda_home=/usr/local/cuda"
+    pip3 install git+https://github.com/mit-han-lab/torchsparse.git -v
+    pip3 install pycuda
+else
+    echo "Install CPU"
+    pip3 install torch==1.7.0+cpu torchvision==0.8.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
+    pip3 install MinkowskiEngine==v0.4.3
+
pip3 install git+https://github.com/mit-han-lab/torchsparse.git +fi diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/install_system.sh b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/install_system.sh new file mode 100644 index 00000000..0cf87008 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docker/install_system.sh @@ -0,0 +1,10 @@ +set -eu + +apt-get update +apt-get install -y --fix-missing --no-install-recommends\ + libffi-dev libssl-dev build-essential libopenblas-dev libsparsehash-dev\ + python3-pip python3-dev python3-venv python3-setuptools\ + git iproute2 procps lsb-release \ + libsm6 libxext6 libxrender-dev ninja-build +apt-get clean +rm -rf /var/lib/apt/lists/* diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/Makefile b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/Makefile new file mode 100644 index 00000000..c36250ab --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/Makefile @@ -0,0 +1,23 @@ +# Minimal makefile for Sphinx documentation +# + +# You can set these variables from the command line, and also +# from the environment for the first two. +SPHINXOPTS ?= +SPHINXBUILD ?= sphinx-build +SOURCEDIR = . +BUILDDIR = _build + +# Put it first so that "make" without argument is like "make help". +help: + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +.PHONY: help Makefile + +livehtml: + sphinx-autobuild -b html $(ALLSPHINXOPTS) "$(SOURCEDIR)" $(BUILDDIR)/html + +# Catch-all target: route all unknown targets to Sphinx using the new +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
+%: Makefile + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/conf.py b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/conf.py new file mode 100644 index 00000000..f4a2e9e8 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/conf.py @@ -0,0 +1,93 @@ +# Configuration file for the Sphinx documentation builder. +# +# This file only contains a selection of the most common options. For a full +# list see the documentation: +# https://www.sphinx-doc.org/en/master/usage/configuration.html + +# -- Path setup -------------------------------------------------------------- + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +# +import os +import sys + +sys.path.insert(0, os.path.abspath("./..")) +import sphinx_rtd_theme + +# -- Project information ----------------------------------------------------- + +project = "Torch Points 3D" +copyright = "2020, Thomas Chaton and Nicolas Chaulet" +author = "Thomas Chaton and Nicolas Chaulet" + + +# -- General configuration --------------------------------------------------- + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. 
+extensions = [ + "sphinx_rtd_theme", + "sphinx.ext.autosectionlabel", + "sphinx.ext.autodoc", + "sphinx.ext.mathjax", + "sphinx.ext.githubpages", + "sphinx.ext.viewcode", + "sphinx.ext.napoleon", +] +autosectionlabel_prefix_document = True +autodoc_mock_imports = [ + "torch_scatter", + "torch_sparse", + "torch_cluster", + "torch_points_kernels", + "torch", + "torch_geometric", + "sklearn", + "omegaconf", + "tqdm", + "hydra", + "matplotlib", + "pytorch_metric_learning", + "scipy", + "MinkowskiEngine", + "pandas", + "numpy", + "torchnet", + "h5py", + "plyfile", + "wandb", + "numba", + "gdown", + "torchsparse", +] + +# Add any paths that contain templates here, relative to this directory. +templates_path = ["_templates"] + +# List of patterns, relative to source directory, that match files and +# directories to ignore when looking for source files. +# This pattern also affects html_static_path and html_extra_path. +exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] + + +# -- Options for HTML output ------------------------------------------------- + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +# +html_theme = "sphinx_rtd_theme" +html_theme_options = { + "display_version": True, + "prev_next_buttons_location": "bottom", + # Toc options + "collapse_navigation": False, + "navigation_depth": 3, +} + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". 
+html_static_path = ["_static"] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/Dashboard_demo.gif b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/Dashboard_demo.gif new file mode 100644 index 00000000..ffe3eba2 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/Dashboard_demo.gif differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/classification.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/classification.png new file mode 100644 index 00000000..809c188c Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/classification.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/find_runs.PNG b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/find_runs.PNG new file mode 100644 index 00000000..2e3db8d4 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/find_runs.PNG differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/inference_demo.gif b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/inference_demo.gif new file mode 100644 index 00000000..f01b3247 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/inference_demo.gif differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/logging.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/logging.png new file mode 100644 index 00000000..69dc53ed Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/logging.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/objects.png 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/objects.png new file mode 100644 index 00000000..3654e400 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/objects.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/panoptic.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/panoptic.png new file mode 100644 index 00000000..4b2ccd56 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/panoptic.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/part_segmentation.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/part_segmentation.png new file mode 100644 index 00000000..8fbab7b3 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/part_segmentation.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/pentagram.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/pentagram.png new file mode 100644 index 00000000..092b421b Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/pentagram.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/pyg_batch.PNG b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/pyg_batch.PNG new file mode 100644 index 00000000..1cb5332d Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/pyg_batch.PNG differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/registration.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/registration.png new file mode 100644 index 00000000..1582cb94 Binary files /dev/null and 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/registration.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/rs_conv_archi.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/rs_conv_archi.png new file mode 100644 index 00000000..d1c74123 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/rs_conv_archi.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/rs_conv_conv.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/rs_conv_conv.png new file mode 100644 index 00000000..f79ae2ed Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/rs_conv_conv.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/segmentation.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/segmentation.png new file mode 100644 index 00000000..22023268 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/segmentation.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/semantic.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/semantic.png new file mode 100644 index 00000000..c3dd44bb Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/semantic.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/shapenet.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/shapenet.png new file mode 100644 index 00000000..c354f5a8 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/imgs/shapenet.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/index.rst 
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/index.rst new file mode 100644 index 00000000..d72fc781 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/index.rst @@ -0,0 +1,125 @@ +.. Torch Points 3D documentation master file, created by + sphinx-quickstart on Wed Mar 18 08:19:48 2020. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +:github_url: https://github.com/nicolas-chaulet/torch-points3d + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + +.. raw:: html + + +

+
+
+**Torch Points 3D** is a framework for developing and testing common
+deep learning models to solve tasks related to unstructured 3D spatial data,
+i.e. point clouds. The framework currently integrates some of the best published
+architectures as well as the most common public datasets for ease of
+reproducibility. It relies heavily on `Pytorch Geometric `_ and the `Facebook Hydra library `_; thanks for the great work!
+
+We aim to build a tool which can be used for benchmarking SOTA models, while also allowing practitioners to efficiently pursue research into point cloud analysis, with the end-goal of building models which can be applied to real-life applications.
+
+
+.. image:: imgs/Dashboard_demo.gif
+   :target: imgs/Dashboard_demo.gif
+   :alt: dashboard
+
+Install with pip
+-----------------
+You can easily install Torch Points3D with ``pip``
+
+.. code-block:: bash
+
+   pip install torch
+   pip install torch-points3d
+
+but first make sure that the following dependencies are met:
+
+- CUDA 10 or higher (if you want the GPU version)
+- Python 3.6 or higher + headers (python-dev)
+- PyTorch 1.5 or higher (1.4 and 1.3.1 should also work but are not actively supported moving forward)
+- MinkowskiEngine (optional) see `here `_ for installation instructions
+
+
+
+
+
+Core features
+---------------
+
+
+* **Task** driven implementation with dynamic model and dataset resolution from arguments.
+* **Core** implementation of common components for point cloud deep learning - greatly simplifying the creation of new models:
+
+  * **Core Architectures** - Unet
+  * **Core Modules** - Residual Block, Down-sampling and Up-sampling convolutions
+  * **Core Transforms** - Rotation, Scaling, Jitter
+  * **Core Sampling** - FPS, Random Sampling, Grid Sampling
+  * **Core Neighbour Finder** - Radius Search, KNN
+
+*
+  4 **Base Convolution** base classes to simplify the implementation of new convolutions.
Each base class supports a different data format (B = batch size, C = number of features):
+
+
+  * **DENSE** (B, num_points, C)
+  * **PARTIAL DENSE** (B * num_points, C)
+  * **MESSAGE PASSING** (B * num_points, C)
+  * **SPARSE** (B * num_points, C)
+
+*
+  Models can be completely specified using a YAML file, greatly easing reproducibility.
+
+* Several visualization tools **(tensorboard, wandb)** and **dynamic metric-based model checkpointing**, which is easily customizable.
+* **Dynamic customized placeholder resolution** for smart model definition.
+
+Supported models
+------------------
+
+The following models have been tested and validated:
+
+
+* `Relation-Shape Convolutional (RSConv) Neural Network for Point Cloud Analysis `_
+* `KPConv: Flexible and Deformable Convolution for Point Clouds `_
+* `PointCNN: Convolution On X-Transformed Points `_
+* `PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space `_
+* `4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks `_
+* `Deep Hough Voting for 3D Object Detection in Point Clouds `_
+
+We are actively working on adding the following ones to the framework:
+
+* `RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds `_ - implemented but not completely tested
+
+and much more to come ...
+
+Supported tasks
+---------------
+
+* Segmentation
+* Registration
+* Classification
+* Object detection
+
+.. toctree::
+   :glob:
+   :maxdepth: 1
+   :caption: Developer guide
+   :hidden:
+
+   src/gettingstarted
+   src/tutorials
+   src/advanced
+
+..
toctree:: + :glob: + :maxdepth: 2 + :caption: API + :hidden: + + src/api/models + src/api/datasets + src/api/transforms + src/api/filters diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo.png new file mode 100644 index 00000000..541d35d1 Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo.svg b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo.svg new file mode 100644 index 00000000..99750290 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo.svg @@ -0,0 +1,2978 @@ + [SVG vector data omitted; recoverable text: image/svg+xml, logo title "PyTorchPoints 3D"] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/Pytorch-points.svg b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/Pytorch-points.svg new file mode 100644 index 00000000..b61017d7 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/Pytorch-points.svg @@ -0,0 +1,3583 @@ + [SVG vector data omitted; recoverable text: image/svg+xml] diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/Pytorch.3dm b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/Pytorch.3dm new file mode 100644 index 00000000..13cdb86e Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/Pytorch.3dm differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/pytorch-points-gh.png b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/pytorch-points-gh.png new file mode 100644 index 00000000..8e36ffeb Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/pytorch-points-gh.png differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/pytorch_logo.gh b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/pytorch_logo.gh new file mode 100644 index 00000000..8e259ecf Binary files /dev/null and b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/logo_gen/pytorch_logo.gh differ diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/requirements.txt b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/requirements.txt new file mode 100644 index 00000000..2bb8e806 --- /dev/null +++
b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/requirements.txt @@ -0,0 +1,2 @@ +sphinx==2.4.4 +numpy \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/advanced.rst b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/advanced.rst new file mode 100644 index 00000000..ab3e6b34 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/advanced.rst @@ -0,0 +1,451 @@ +:github_url: https://github.com/nicolas-chaulet/torch-points3d + +Advanced +========== + +Configuration +---------------- + +Overview +^^^^^^^^^^ + +We have chosen `Facebook Hydra library `_ as our core tool for managing the configuration of our experiments. It provides a nice and scalable interface for defining models and datasets. We encourage our users to take a look at their documentation and get a basic understanding of its core functionalities. +As per their website: + +.. + + "Hydra is a framework for elegantly configuring complex applications" + + +Configuration architecture +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +All configurations live in the `conf `_ folder, which is organised as follows: + +.. code-block:: bash + + . + ├── config.yaml # main config file for training + ├── data # contains all configurations related to datasets + ├── debugging # configs that can be used for debugging purposes + ├── eval.yaml # Main config for running a full evaluation on a given dataset + ├── hydra # hydra specific configs + ├── lr_scheduler # learning rate schedulers + ├── models # Architectures of the models + ├── sota.yaml # SOTA scores + ├── training # Training specific parameters + └── visualization # Parameters for saving visualisation artefacts + +Understanding config.yaml +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +``config.yaml`` is the config file that governs the behaviour of your training runs. It gathers multiple configurations into one, and it is organised as follows: + +.. 
code-block:: yaml + + defaults: + - task: ??? # Task performed (segmentation, classification etc...) + optional: True + - model_type: ??? # Type of model to use, e.g. pointnet2, rsconv etc... + optional: True + - dataset: ??? + optional: True + + - visualization: default + - lr_scheduler: multi_step + - training: default + - eval + + - debugging: default.yaml + - models: ${defaults.0.task}/${defaults.1.model_type} + - data: ${defaults.0.task}/${defaults.2.dataset} + - sota # Contains current SOTA results on different datasets (extracted from papers !). + - hydra/job_logging: custom + + model_name: ??? # Name of the specific model to load + + selection_stage: "" + pretty_print: False + +Hydra expects the following arguments from the command line: + + +* task +* model_type +* dataset +* model_name + +The provided ``task`` and ``dataset`` will be used to load the configuration for the dataset at ``conf/data/{task}/{dataset}.yaml``, while the ``model_type`` argument will be used to load the model config at ``conf/models/{task}/{model_type}.yaml``. +Finally, ``model_name`` is used to pull the appropriate model from the model configuration file. + +Training arguments +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. literalinclude:: ../../conf/training/default.yaml + :language: yaml + + +* ``precompute_multi_scale``: Computes spatial queries such as grid sampling and neighbour search on CPU for faster data loading. Currently this is only supported for KPConv. + +Eval arguments +^^^^^^^^^^^^^^^^^^^^ + +.. literalinclude:: ../../conf/eval.yaml + :language: yaml + + + +Data formats for point clouds +------------------------------ + +While developing this project, we discovered there are several ways to implement a convolution. 
+ +* "DENSE" +* "PARTIAL_DENSE" +* "MESSAGE_PASSING" +* "SPARSE" + +Dense +^^^^^^ + +This format is very similar to what you would be used to with images, during the assembling of a batch the B tensors of shape (num_points, feat_dim) will be concatenated on a new dimension +[(num_points, feat_dim), ..., (num_points, feat_dim)] -> (B, num_points, feat_dim). + +This format forces each sample to have exactly the same number of points. + +Advantages + + +* The format is dense and therefore aggregation operation are fast + +Drawbacks + + +* Handling variability in the number of neighbours happens through padding which is not very efficient +* Each sample needs to have the same number of points, as a consequence points are duplicated or removed from a sample during the data loading phase using a FixedPoints transform + + +Sparse formats +^^^^^^^^^^^^^^^^ + +The second family of convolution format is based on a sparse data format meaning that each sample can have a variable number of points and the collate function handles the complexity behind the scene. +For those intersted in learning more about it `Batch.from_data_list `_ + +.. image:: ../imgs/pyg_batch.PNG + :target: ../imgs/pyg_batch.PNG + :alt: Screenshot + +Given ``N`` tensors with their own ``num_points_{i}``\ , the collate function does: + +.. code-block:: + + [(num_points_1, feat_dim), ..., (num_points_n, feat_dim)] + -> (num_points_1 + ... + num_points_n, feat_dim) + +It also creates an associated ``batch tensor`` of size ``(num_points_1 + ... + num_points_n)`` with indices of the corresponding batch. + +.. note:: **Example** + + * A with shape (2, 2) + * B with shape (3, 2) + + ``C = Batch.from_data_list([A, B])`` + + C is a tensor of shape ``(5, 2)`` and its associated batch will contain ``[0, 0, 1, 1, 1]`` + + +PARTIAL_DENSE ConvType format +"""""""""""""""""""""""""""""" + + +This format is used by KPConv original implementation. 
+ +Like the dense format, it forces each point to have the same number of neighbours, +which is why we call it partially dense. + + +MESSAGE_PASSING ConvType Format +"""""""""""""""""""""""""""""""" + +This ConvType is PyTorch Geometric's base format. +Using the `Message Passing `_ API class, it deploys the graph created by the ``neighbour finder``, internally using the ``torch.index_select`` operator. + +Therefore, the ``[PointNet++]`` internal convolution looks like this: + +.. code-block:: python + + import torch + from torch_geometric.nn.conv import MessagePassing + from torch_geometric.utils import remove_self_loops, add_self_loops + + from ..inits import reset + + class PointConv(MessagePassing): + r"""The PointNet set layer from the `"PointNet: Deep Learning on Point Sets + for 3D Classification and Segmentation" + `_ and `"PointNet++: Deep Hierarchical + Feature Learning on Point Sets in a Metric Space" + `_ papers + """ + + def __init__(self, local_nn=None, global_nn=None, **kwargs): + super(PointConv, self).__init__(aggr='max', **kwargs) + + self.local_nn = local_nn + self.global_nn = global_nn + + self.reset_parameters() + + def reset_parameters(self): + reset(self.local_nn) + reset(self.global_nn) + + + def forward(self, x, pos, edge_index): + r""" + Args: + x (Tensor): The node feature matrix. Allowed to be :obj:`None`. + pos (Tensor or tuple): The node position matrix. Either given as + tensor for use in general message passing or as tuple for use + in message passing in bipartite graphs. + edge_index (LongTensor): The edge indices. + """ + if torch.is_tensor(pos): # Add self-loops for symmetric adjacencies. 
+ edge_index, _ = remove_self_loops(edge_index) + edge_index, _ = add_self_loops(edge_index, num_nodes=pos.size(0)) + + return self.propagate(edge_index, x=x, pos=pos) + + + def message(self, x_j, pos_i, pos_j): + msg = pos_j - pos_i + if x_j is not None: + msg = torch.cat([x_j, msg], dim=1) + if self.local_nn is not None: + msg = self.local_nn(msg) + return msg + + def update(self, aggr_out): + if self.global_nn is not None: + aggr_out = self.global_nn(aggr_out) + return aggr_out + + def __repr__(self): + return '{}(local_nn={}, global_nn={})'.format( + self.__class__.__name__, self.local_nn, self.global_nn) + + +SPARSE ConvType Format +""""""""""""""""""""""" + +The sparse conv type is used by projects like `SparseConv `_ or `Minkowski Engine `_; the points therefore have to be converted into indices living within a grid. + + + +Backbone Architectures +------------------------ + +Several UNets can be built using different convolutions or blocks. +However, the final model will still be a UNet. + +In the ``base_architectures`` folder, we intend to provide base architecture builders that can be used across tasks and datasets. + +We provide two UNet implementations: + + +* UnetBasedModel +* UnwrappedUnetBasedModel + +The main difference between them is that ``UnetBasedModel`` implements the forward function and ``UnwrappedUnetBasedModel`` doesn't. + +UnetBasedModel +^^^^^^^^^^^^^^^ + +.. code-block:: python + + def forward(self, data): + if self.innermost: + data_out = self.inner(data) + data = (data_out, data) + return self.up(data) + else: + data_out = self.down(data) + data_out2 = self.submodule(data_out) + data = (data_out2, data) + return self.up(data) + +The UNet will be built recursively from the middle using the ``UnetSkipConnectionBlock`` class. + +**UnetSkipConnectionBlock** + +.. code-block:: + + Defines the Unet submodule with skip connection. 
+ X -------------------identity---------------------- + -- downsampling -- |submodule| -- upsampling --| + +UnwrappedUnetBasedModel +^^^^^^^^^^^^^^^^^^^^^^^^ + +The ``UnwrappedUnetBasedModel`` will create the model based on the configuration and add the created layers within the following ``ModuleList`` containers: + +.. code-block:: python + + self.down_modules = nn.ModuleList() + self.inner_modules = nn.ModuleList() + self.up_modules = nn.ModuleList() + + +Datasets +--------- + + +Segmentation +^^^^^^^^^^^^^ + +Preprocessed S3DIS +""""""""""""""""""" + +We support a couple of flavours of `S3DIS `_. The dataset used for ``S3DIS1x1`` comes from +https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/datasets/s3dis.html. + +It is a preprocessed version of the original data where each sample is a 1m x 1m extraction of the original data. It was initially used in PointNet. + + +Raw S3DIS +""""""""""""""""""" + +The dataset used for `S3DIS `_ is the original dataset without any pre-processing applied. +Here is the `area_1 `_ if you want to visualize it. +We provide some data transforms for combining each area back together and splitting the dataset into digestible chunks. Please refer to the `code base `_ and the associated configuration file for more details: + +.. literalinclude:: ../../conf/data/segmentation/s3disfused.yaml + :language: yaml + + +Shapenet +""""""""""""""""""" + +`Shapenet `_ is a simple dataset that allows quick prototyping for segmentation models. +When used in single-class mode, for part segmentation on airplanes for example, it is a good way to figure out if your implementation is correct. + +.. image:: ../imgs/shapenet.png + :target: ../imgs/shapenet.png + :alt: Screenshot + + + +Classification +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +ModelNet +""""""""" + +The dataset used for ``ModelNet`` comes in two formats: + + +* ModelNet10 +* ModelNet40 + Their website is here https://modelnet.cs.princeton.edu/. 
+ + +Registration +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +3D Match +""""""""" + +http://3dmatch.cs.princeton.edu/ + + +IRALab Benchmark +""""""""""""""""""" + +https://arxiv.org/abs/2003.12841 composed of data from: + +* the ETH datasets (https://projects.asl.ethz.ch/datasets/doku.php?id=laserregistration:laserregistration) +* the Canadian Planetary Emulation Terrain 3D Mapping datasets (http://asrl.utias.utoronto.ca/datasets/3dmap/index.html) +* the TUM Vision Group RGBD datasets (https://vision.in.tum.de/data/datasets/rgbd-dataset) +* the KAIST Urban datasets (https://irap.kaist.ac.kr/dataset) + + +Model checkpoint +------------------ + +Model Saving +^^^^^^^^^^^^^^^^^^^^ + +Our custom ``Checkpoint`` class keeps track of the models for ``every metric``\ , the stats for ``"train", "test", "val"``\ , the ``optimizer`` and ``its learning params``. + +.. code-block:: python + + self._objects = {} + self._objects["models"] = {} + self._objects["stats"] = {"train": [], "test": [], "val": []} + self._objects["optimizer"] = None + self._objects["lr_params"] = None + +Model Loading +^^^^^^^^^^^^^^^^^^^^ + +In ``training.yaml`` and ``eval.yaml``, you can find the following parameters: + + +* weight_name +* checkpoint_dir +* resume + +The model is saved for every metric as well as for the latest epoch, and any of these weights can be loaded using ``weight_name``. + +Example: ``weight_name: "miou"`` + +If the checkpoint contains weights with the key "miou", the model state will be set to them. If not, it will try the latest epoch if it exists. If none are found, the model will be randomly initialized. + + +Adding a new metric +^^^^^^^^^^^^^^^^^^^^ + +The file ``torch_points3d/metrics/model_checkpoint.py`` contains a mapping dictionary between a sub ``metric_name`` and an ``optimization function``. + +Currently, we support the following metrics. + +.. 
code-block:: python + + DEFAULT_METRICS_FUNC = { + "iou": max, + "acc": max, + "loss": min, + "mer": min, + } # Those map metric sub-names to their optimization functions + + + +Visualization +---------------- + +[Embedded HTML demo removed; caption: "The associated visualization"]
+ + +The framework currently supports both `wandb `_ and `tensorboard `_. + +.. code-block:: yaml + + # parameters for Weights and Biases + wandb: + project: benchmarking + log: False + + # parameters for TensorBoard Visualization + tensorboard: + log: True + +Custom logging +--------------- +We use a custom Hydra logging configuration, which you can find in ``conf/hydra/job_logging/custom.yaml`` + +.. literalinclude:: ../../conf/hydra/job_logging/custom.yaml + :language: yaml \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/datasets.rst b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/datasets.rst new file mode 100644 index 00000000..371f89b9 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/datasets.rst @@ -0,0 +1,49 @@ +:github_url: https://github.com/nicolas-chaulet/torch-points3d + +Datasets +======== + +Below is a list of the datasets we support as part of the framework. They all inherit from +`Pytorch Geometric dataset `_ +and they can be accessed either as raw datasets or wrapped into a +`base class `_ that builds test, train and validation data loaders for you. +This base class also provides helper functions for pre-computing neighbours and point cloud sampling at data loading time. + +ShapeNet +--------- + +Raw dataset +^^^^^^^^^^^ +.. autoclass:: torch_points3d.datasets.segmentation.ShapeNet + +Wrapped dataset +^^^^^^^^^^^^^^^ +.. autoclass:: torch_points3d.datasets.segmentation.ShapeNetDataset + + +S3DIS +----- + +Raw dataset +^^^^^^^^^^^ +.. autoclass:: torch_points3d.datasets.segmentation.S3DISOriginalFused + +.. autoclass:: torch_points3d.datasets.segmentation.S3DISSphere + +Wrapped dataset +^^^^^^^^^^^^^^^ +.. autoclass:: torch_points3d.datasets.segmentation.S3DIS1x1Dataset + +.. autoclass:: torch_points3d.datasets.segmentation.S3DISFusedDataset + + +Scannet +------- + +Raw dataset +^^^^^^^^^^^ +.. 
autoclass:: torch_points3d.datasets.segmentation.Scannet + +Wrapped dataset +^^^^^^^^^^^^^^^ +.. autoclass:: torch_points3d.datasets.segmentation.ScannetDataset \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/filters.rst b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/filters.rst new file mode 100644 index 00000000..1a1670bc --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/filters.rst @@ -0,0 +1,11 @@ +:github_url: https://github.com/nicolas-chaulet/torch-points3d + + +Filters +========== + +.. autoclass:: torch_points3d.core.data_transform.PlanarityFilter + +.. autoclass:: torch_points3d.core.data_transform.RandomFilter + +.. autoclass:: torch_points3d.core.data_transform.FCompose diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/models.rst b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/models.rst new file mode 100644 index 00000000..f554e40f --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/models.rst @@ -0,0 +1,14 @@ +:github_url: https://github.com/nicolas-chaulet/torch-points3d + +Models +====== + +.. autofunction:: torch_points3d.applications.sparseconv3d.SparseConv3d + +.. autofunction:: torch_points3d.applications.kpconv.KPConv + +.. autofunction:: torch_points3d.applications.pointnet2.PointNet2 + +.. autofunction:: torch_points3d.applications.rsconv.RSConv + +.. 
autofunction:: torch_points3d.applications.votenet.VoteNet \ No newline at end of file diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/transforms.rst b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/transforms.rst new file mode 100644 index 00000000..09acb1af --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/api/transforms.rst @@ -0,0 +1,79 @@ +:github_url: https://github.com/nicolas-chaulet/torch-points3d + + +Transforms +========== + +.. autoclass:: torch_points3d.core.data_transform.PointCloudFusion + +.. autoclass:: torch_points3d.core.data_transform.GridSphereSampling + +.. autoclass:: torch_points3d.core.data_transform.RandomSphere + +.. autoclass:: torch_points3d.core.data_transform.GridSampling3D + +.. autoclass:: torch_points3d.core.data_transform.RandomSymmetry + +.. autoclass:: torch_points3d.core.data_transform.RandomNoise + +.. autoclass:: torch_points3d.core.data_transform.RandomScaleAnisotropic + +.. autoclass:: torch_points3d.core.data_transform.MultiScaleTransform + +.. autoclass:: torch_points3d.core.data_transform.ModelInference + +.. autoclass:: torch_points3d.core.data_transform.PointNetForward + +.. autoclass:: torch_points3d.core.data_transform.AddFeatsByKeys + +.. autoclass:: torch_points3d.core.data_transform.AddFeatByKey + +.. autoclass:: torch_points3d.core.data_transform.RemoveAttributes + +.. autoclass:: torch_points3d.core.data_transform.ShuffleData + +.. autoclass:: torch_points3d.core.data_transform.ShiftVoxels + +.. autoclass:: torch_points3d.core.data_transform.ChromaticTranslation + +.. autoclass:: torch_points3d.core.data_transform.ChromaticAutoContrast + +.. autoclass:: torch_points3d.core.data_transform.ChromaticJitter + +.. autoclass:: torch_points3d.core.data_transform.Jitter + +.. autoclass:: torch_points3d.core.data_transform.RandomDropout + +.. autoclass:: torch_points3d.core.data_transform.DropFeature + +.. 
autoclass:: torch_points3d.core.data_transform.NormalizeFeature + +.. autoclass:: torch_points3d.core.data_transform.PCACompute + +.. autoclass:: torch_points3d.core.data_transform.ClampBatchSize + +.. autoclass:: torch_points3d.core.data_transform.LotteryTransform + +.. autoclass:: torch_points3d.core.data_transform.RandomParamTransform + +.. autoclass:: torch_points3d.core.data_transform.Select + +.. autofunction:: torch_points3d.core.data_transform.NormalizeRGB + +.. autofunction:: torch_points3d.core.data_transform.ElasticDistortion + +.. autofunction:: torch_points3d.core.data_transform.Random3AxisRotation + +.. autofunction:: torch_points3d.core.data_transform.RandomCoordsFlip + +.. autofunction:: torch_points3d.core.data_transform.ScalePos + +.. autofunction:: torch_points3d.core.data_transform.RandomWalkDropout + +.. autofunction:: torch_points3d.core.data_transform.RandomSphereDropout + +.. autofunction:: torch_points3d.core.data_transform.SphereCrop + +.. autofunction:: torch_points3d.core.data_transform.CubeCrop + +.. autofunction:: torch_points3d.core.data_transform.compute_planarity diff --git a/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/gettingstarted.rst b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/gettingstarted.rst new file mode 100644 index 00000000..55d7ce65 --- /dev/null +++ b/competing_methods/my_pointnet_and_pointnet++/my_torch_point3d/docs/src/gettingstarted.rst @@ -0,0 +1,198 @@ +:github_url: https://github.com/nicolas-chaulet/torch-points3d + +Getting Started +================ + +You're reading this because the API wasn't cracking it and you would like to extend the framework for your own task or use +some of the deeper layers of our codebase. This set of pages will take you from setting up the code for local development +all the way to adding a new task or a new dataset to the framework. +For using Torch Points3D as a library please refer to :ref:`this section`. 
+ + +Installation +---------------------------- + + +Install Python 3.6 or higher +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Start by installing Python > 3.6. You can use pyenv by doing the following: + +.. code-block:: bash + + curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash + +Add these three lines to your ``.bashrc`` + +.. code-block:: bash + + export PATH="$HOME/.pyenv/bin:$PATH" + eval "$(pyenv init -)" + eval "$(pyenv virtualenv-init -)" + +Finally you can install ``python 3.6.10`` by running the following command + +.. code-block:: bash + + pyenv install 3.6.10 + +Install dependencies using poetry +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Start by installing poetry: + +.. code-block:: bash + + pip install poetry + +You can clone the repository and install all the required dependencies as follows: + +.. code-block:: bash + + git clone https://github.com/nicolas-chaulet/torch-points3d.git + cd torch-points3d + pyenv local 3.6.10 + poetry install --no-root + +You can check that the install has been successful by running + +.. code-block:: bash + + poetry shell + python -m unittest -v + +Minkowski engine support +^^^^^^^^^^^^^^^^^^^^^^^^^ + +The repository supports `Minkowski Engine `_, which requires `openblas-dev` and `nvcc` if you have a CUDA device on your machine. First install `openblas`: + +.. code-block:: bash + + sudo apt install libopenblas-dev + +then make sure that `nvcc` is in your path: + +.. code-block:: bash + + nvcc -V + +If it's not, then locate it (`locate nvcc`) and add its location to your `PATH` variable. On my machine: + +.. code-block:: bash + + export PATH="/usr/local/cuda-10.2/bin:$PATH" + +You are now in a position to install MinkowskiEngine with GPU support: + +.. 
code-block:: bash + + poetry install -E MinkowskiEngine --no-root + + +Installation within a virtual environment +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +We try to maintain a ``requirements.txt`` file for those who want to use plain old ``pip``. Start by cloning the repo: + +.. code-block:: bash + + git clone https://github.com/nicolas-chaulet/torch-points3d.git + cd torch-points3d + +We still recommend that you first create a virtual environment and activate it before installing the dependencies: + +.. code-block:: bash + + python3 -m virtualenv pcb + source pcb/bin/activate + +Install all dependencies: + +.. code-block:: bash + + pip install -r requirements.txt + +You should now be able to run the tests successfully: + +.. code-block:: bash + + python -m unittest -v + + +Train! +----------------------- + +You should now be in a position to train your first model. For example, to train pointnet++ on the part segmentation task for the ShapeNet dataset, simply run the following: + +.. code-block:: bash + + python train.py \ + task=segmentation model_type=pointnet2 model_name=pointnet2_charlesssg dataset=shapenet-fixed + +And you should see something like this + + +.. image:: ../imgs/logging.png + :target: ../imgs/logging.png + :alt: logging + + +The `config `_ for pointnet++ is a good starting point to understand how models are defined: + +.. literalinclude:: ../../conf/models/segmentation/pointnet2.yaml + :language: yaml + :lines: 87-122 + +Once the training is complete, you can access the model checkpoint as well as any visualisation and graphs that you may have generated in the ``outputs//