# Abstract
<!-- titles may differ since the AVT tool has a predefined list, ca. 3500 characters -->
<!-- the links have been removed in the AVT version but are still included here -->
--------------------------------------------------
## Initial Situation
Schutz & Rettung Zurich takes many different resources into account when preparing a rescue operation. One of these resources is [OpenStreetMap](https://osm.org), which is open source and a serious contender to the well-known Google Maps.
To plan a rescue operation, Schutz & Rettung Zurich has to rely on the correctness of the underlying OpenStreetMap data. To track and monitor data changes, a tool is needed that lists and filters changesets in Switzerland. A changeset acts like a time stamp: one is created whenever OpenStreetMap data is edited.
Many tools exist that analyse and present OpenStreetMap data in different ways, but none of them can filter for specific features, called tags, directly on the underlying OpenStreetMap data. This is a core requirement of Schutz & Rettung Zurich.
![Control room Schutz & Rettung Zurich](images/b_abstract_1.jpeg)
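To make the changeset notion concrete, the sketch below queries recently closed changesets in a bounding box from the public OSM API 0.6; the Switzerland bounding box is approximate and the printed fields are only a selection.

```python
# Minimal sketch: list recently closed changesets inside a bounding box via
# the public OSM API 0.6 changeset query endpoint.
import xml.etree.ElementTree as ET

import requests

SWITZERLAND_BBOX = "5.96,45.82,10.49,47.81"  # min_lon,min_lat,max_lon,max_lat

response = requests.get(
    "https://api.openstreetmap.org/api/0.6/changesets",
    params={"bbox": SWITZERLAND_BBOX, "closed": "true"},
    timeout=30,
)
response.raise_for_status()

# The API returns XML with one <changeset> element per changeset.
for changeset in ET.fromstring(response.content).iter("changeset"):
    print(changeset.get("id"), changeset.get("user"), changeset.get("closed_at"))
```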
--------------------------------------------------
### Vision - What goals were set for the tool?
- improve the existing tool
- remove external dependencies
- get a working solution (can be used for SRZ use cases)
- specifically:
  - show changesets
  - filter changesets to get only the relevant ones
  - set a status to track progress
### State of the Art - What do others do / what similar work exists on the topic? Describe rather generally
Even though the project was carried out as a greenfield project, there is existing work targeting the same application.
- existing term thesis (SA):
  - very similar, done for the same client
  - but has some problems -> currently not suitable for the work at SRZ
- OSMCha:
  - the current state of the art
  - an external dependency which cannot be customized and ?????
  - not everything can be done equally well
  - some things are very hard to understand (rating system), a black box
...
This leads back to our initial reason for the project: providing a targeted solution for a real issue.
--------------------------------------------------
## Approach / Technology
First, we analysed existing tools such as the ["Targeted Monitoring Tool"](https://srzedi.srz.borsnet.ch/), which was created as a term thesis at OST and is currently used by Schutz & Rettung Zurich, as well as well-known tools like [OSMCha](https://osmcha.org/). These tools inspired us, and we were able to adopt some of their ideas. After taking all requirements, the existing tools and the conditions of a bachelor thesis at OST into consideration, we started a new greenfield project. The main focus was on creating a tool that fulfils the requirements, has a classic and extensible architecture, and builds on established, state-of-the-art technology.
"OSM Monitoring Tool" consists of a full-stack web application and is split into three distinct parts.
A frontend enables the user to interact with the whole application. A database stores all the required data: application-specific data as well as the complete OpenStreetMap data of Switzerland. The database is updated with the newest data on a regular basis. The third part of the application is the business layer, which acts as a middleman between the frontend and the database.
For easier deployment, every part of the application runs in a separate Docker container.
![architecture overview](images/architecture/architecture_rough2.png)
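As an illustration of this deployment, a minimal Docker Compose sketch of the three-container layout could look as follows; service names, images, ports and credentials are placeholders, not the project's actual configuration.

```yaml
# Hypothetical three-container layout matching the architecture above.
services:
  frontend:                 # Quasar app the user interacts with
    build: ./frontend
    ports:
      - "8080:80"
  backend:                  # Django business layer between frontend and database
    build: ./backend
    ports:
      - "8000:8000"
    depends_on:
      - database
  database:                 # PostgreSQL holding app data and the OSM data of Switzerland
    image: postgis/postgis:14-3.2
    environment:
      POSTGRES_DB: osm
      POSTGRES_PASSWORD: example
    volumes:
      - osm-data:/var/lib/postgresql/data

volumes:
  osm-data:
```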
--------------------------------------------------
## Approach / Technology
- full-stack web application
- apply knowledge gathered during our studies
- work with an agile approach
- solution-oriented -> clear vision
Procedure:
1. look at existing projects and identify their problems
2. create a set of requirements that solve the problems from step 1
3. develop a rough architecture that can support all requirements
4. implement with an agile approach and in small steps
## Design & Technologies
"OSM Monitoring Tool" consists of a full-stack web application and is split into three distinct parts: -> DONE
...
- 3-stack architecture with backend, frontend and database, all containerized with Docker -> DONE
- Django, Quasar, Postgres, Docker
- the interfaces are done with ....
- the database container updates itself (cron job; see the sketch after this list) -> DONE
Our implementation?
- rough structure / layout of the software -> DONE
- some requirements
- architecture -> DONE
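A minimal sketch of the self-update mechanism mentioned above, assuming an update script exists inside the database container (the script path is a placeholder):

```
# Hypothetical crontab entry: re-import the latest OSM data of Switzerland
# every night at 03:00; the update script path is a placeholder.
0 3 * * * /usr/local/bin/update_osm_data.sh >> /var/log/osm_update.log 2>&1
```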
--------------------------------------------------
## Result
For our bachelor thesis "OSM Monitoring Tool", we created an application that allows monitoring changes in OpenStreetMap. It consists of a full-stack web application with a variety of features that permit targeted processing of the desired modifications.
The changeset list can be filtered and sorted according to different criteria to provide a view of the most important changes in OpenStreetMap.
One of the main distinctions between "OSM Monitoring Tool" and existing tools is its tag filter, which acts directly on the underlying data and not only on the changesets themselves.
Finally, Schutz & Rettung Zurich has accepted "OSM Monitoring Tool" and will integrate it into their daily work.
![screenshot from "OSM Monitoring Tool"]()
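The idea behind the tag filter can be sketched as a join between changesets and the tags of the edited elements in the local database; the table and column names below are assumptions for illustration, not the actual schema.

```python
# Hypothetical sketch of a tag filter acting on the underlying OSM data:
# select changesets that touched at least one node carrying a given tag key.
# Table/column names (changesets, nodes, hstore/jsonb "tags") are assumptions.
import psycopg2

QUERY = """
SELECT DISTINCT c.id, c.user_name, c.created_at
FROM changesets AS c
JOIN nodes AS n ON n.changeset_id = c.id
WHERE n.tags ? %s              -- element (not changeset) tags are matched
ORDER BY c.created_at DESC;
"""

with psycopg2.connect(dbname="osm", user="osm", host="database") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY, ("emergency",))
        for changeset_id, user_name, created_at in cur.fetchall():
            print(changeset_id, user_name, created_at)
```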
--------------------------------------------------
- What is the result?
- an application that satisfies the main requirements -> DONE
- display changesets as a list and with details on different map options -> DONE
- filter the list according to date, user, tags, location, ... (see the sketch after this list) -> DONE
- sort results -> DONE
- Evaluation of the results: what is novel about the work? -> DONE
- tag filters ??? -> DONE
- what else could be done; possibly explain 1-2 use cases
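As referenced in the list above, the list filters map naturally onto queryset filtering in the Django backend; the model and field names here are hypothetical.

```python
# Hypothetical Django queryset sketch: filter the changeset list by date and
# user, then sort it. The Changeset model and its fields are assumptions.
from datetime import date

from monitoring.models import Changeset  # hypothetical app/model

changesets = (
    Changeset.objects
    .filter(created_at__date=date(2022, 6, 1), user_name="some_mapper")
    .order_by("-created_at")
)

for changeset in changesets:
    print(changeset.id, changeset.user_name, changeset.created_at)
```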
<!-- no idea whether this title and these subsections make sense here; otherwise I'll check the AVT tomorrow -->
## Outlook -> conclusion ?? reflection ?? outcome ??
TODO: What did we learn from carrying out the project?
- details take a lot of time
- large amounts of data are slow
- integrating other projects and tools is tedious, especially when they are not well maintained
- an agile approach with regular meetings works well -> covers changing needs
- few specifications at the beginning mean constantly changing requirements
TODO: Remaining problems, (future) countermeasures regarding risks
- changing requirements -> allow fewer changes when the time frame is fixed
TODO: What would we do differently, what remains to be done
- further optimizations (frontend) -> no priority since the use cases are desktop-based
- optimize the database -> experienced people for feedback, more research
- improve and add features -> a question of time