Library

  • 1
    Publication Date: 2020-12-11
    Description: This diploma thesis investigates how a citation graph can be built automatically from literature references. Probabilistic relational models are used to solve the problem. A problem-specific extension of the model makes it possible to resolve existing uncertainties in the citation graph by means of an inference procedure. The method is evaluated in experiments on the Cora data set.
    Keywords: ddc:004
    Language: German
    Type: masterthesis , doc-type:masterThesis
    Format: application/pdf
  • 2
    Publication Date: 2020-08-05
    Description: This article is about the optimal track allocation problem (OPTRA): finding, in a given railway network, a conflict-free set of train routes of maximum value. We study two types of integer programming formulations: a standard formulation that models block conflicts in terms of packing constraints, and a new extended formulation that is based on additional 'configuration' variables. We show that the packing constraints in the standard formulation stem from an interval graph, and that they can be separated in polynomial time. It follows that the LP relaxation of a strong version of this model, including all clique inequalities from block conflicts, can be solved in polynomial time. We prove that the extended formulation produces the same LP bound, and that it can also be computed with this model in polynomial time. Although the two formulations are equivalent in this sense, the extended formulation has advantages from a computational point of view, because it features a constant number of rows and is therefore amenable to standard column generation techniques. Results of an empirical model comparison on mesoscopic data for the Hannover-Fulda-Kassel region of the German long distance railway network are reported.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 3
    Publication Date: 2016-06-09
    Description: We study barrier methods for state-constrained optimal control problems with PDEs. The focus of our analysis is the path of minimizers of the barrier subproblems, with the aim of providing a solid theoretical basis for function-space-oriented path-following algorithms. We establish results on existence, continuity and convergence of this path. Moreover, we consider the structure of barrier subdifferentials, which play the role of dual variables.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 4
    Publication Date: 2016-06-09
    Description: For the treatment of equilibrated molecular systems in a heat bath we propose a transition state theory that is based on conformation dynamics. In general, a set-based discretization of a Markov operator ${\cal P}^\tau$ does not preserve the Markov property. In this article, we propose a discretization method which is based on a Galerkin approach. This discretization method preserves the Markov property of the operator and can be interpreted as a decomposition of the state space into (fuzzy) sets. The conformation-based transition state theory presented here can be seen as a first step in conformation dynamics towards the computation of essential dynamical properties of molecular systems without time-consuming molecular dynamics simulations.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 5
    Publication Date: 2021-01-22
    Description: We present a middleware to store multidimensional data sets on Internet-scale distributed systems and to efficiently perform range queries on them. Our structured overlay network \emph{SONAR (Structured Overlay Network with Arbitrary Range queries)} puts keys which are adjacent in the key space on logically adjacent nodes in the overlay and is thereby able to process multidimensional range queries with a single logarithmic data lookup and local forwarding. The specified ranges may have arbitrary shapes like rectangles, circles, spheres or polygons. Empirical results demonstrate the routing performance of SONAR on several data sets, ranging from real-world data to artificially constructed worst-case distributions. We study the quality of SONAR's routing information, which is based on local knowledge only, and measure the indegree of the overlay nodes to find potential hot spots in the routing process. We show that SONAR's routing table is self-adjusting, even under extreme situations, always keeping at most $\lceil \log N \rceil$ routing entries.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
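    Illustration (not from the abstract): assuming the logarithm is base 2, an overlay of $N = 10^6$ nodes needs at most $\lceil \log_2 10^6 \rceil = 20$ routing entries per node.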
  • 6
    Publication Date: 2016-06-09
    Description: In systems biology, the stochastic description of biochemical reaction kinetics is increasingly being employed to model gene regulatory networks and signalling pathways. Mathematically speaking, such models require the numerical solution of the underlying evolution equation, also known as the chemical master equation (CME). Up to now, the CME has almost exclusively been treated by Monte-Carlo techniques, the most prominent of which is the simulation algorithm suggested by Gillespie in 1976. Since this algorithm requires an update for each single reaction event, realizations can be computationally very costly. As an alternative, we here propose a novel approach, which focuses on the discrete partial differential equation (PDE) structure of the CME and thus allows us to adopt ideas from adaptive discrete Galerkin methods (as designed by two of the present authors in 1989), which have proven to be highly efficient in the mathematical modelling of polyreaction kinetics. Among the two different options of discretizing the CME as a discrete PDE, the method of lines approach (first space, then time) and the Rothe method (first time, then space), we select the latter one for clear theoretical and algorithmic reasons. First numerical experiments on a challenging model problem illustrate the promising features of the proposed method and, at the same time, indicate lines of necessary further research.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
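    Illustration: a minimal Python sketch of Gillespie's direct-method stochastic simulation algorithm mentioned above, written to show why one update per reaction event becomes costly; the birth-death system, rate constants and end time are hypothetical placeholders, not taken from the report.

      import math
      import random

      def gillespie_ssa(x, propensities, stoichiometry, t_end):
          """Direct-method SSA: one random waiting time and one state update
          per reaction event (the per-event cost noted in the abstract)."""
          t, trajectory = 0.0, [(0.0, tuple(x))]
          while t < t_end:
              a = [f(x) for f in propensities]             # propensities in current state
              a0 = sum(a)
              if a0 == 0.0:                                # no reaction can fire any more
                  break
              t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
              u, j, acc = random.random() * a0, 0, a[0]
              while acc < u:                               # pick reaction j with prob. a_j / a0
                  j += 1
                  acc += a[j]
              x = [xi + s for xi, s in zip(x, stoichiometry[j])]
              trajectory.append((t, tuple(x)))
          return trajectory

      # Hypothetical birth-death example: X -> X+1 at rate k1, X -> X-1 at rate k2*X.
      k1, k2 = 1.0, 0.1
      print(gillespie_ssa([10], [lambda x: k1, lambda x: k2 * x[0]],
                          [(1,), (-1,)], t_end=50.0)[-1])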
  • 7
    Publication Date: 2014-02-26
    Description: Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). In case the constraint multipliers are either 0 or $\frac{1}{2}$, such cuts are known as $\{0,\frac{1}{2}\}$-cuts. It has been proven by Caprara and Fischetti (1996) that separation of $\{0,\frac{1}{2}\}$-cuts is NP-hard. In this paper, we study ways to separate $\{0,\frac{1}{2}\}$-cuts effectively in practice. We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated $\{0,\frac{1}{2}\}$-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework. Computational experiments on benchmark instances show that combining preprocessing with exact and/or heuristic separation is a very effective way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
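    Illustration: a textbook toy example (not taken from the paper) of a $\{0,\frac{1}{2}\}$-Chvátal-Gomory cut. Adding the three edge inequalities of a triangle, $x_1 + x_2 \le 1$, $x_2 + x_3 \le 1$, $x_1 + x_3 \le 1$ with $x$ integer, each weighted by $\frac{1}{2}$, gives $x_1 + x_2 + x_3 \le \frac{3}{2}$; rounding down the right-hand side yields the valid cut $x_1 + x_2 + x_3 \le 1$, which cuts off the fractional point $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$.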
  • 8
    Publication Date: 2014-02-26
    Description: In this paper a Godunov-type projection method for computing approximate solutions of the zero Froude number (incompressible) shallow water equations is presented. It is second-order accurate and locally conserves height (mass) and momentum. To enforce the underlying divergence constraint on the velocity field, the predicted numerical fluxes, computed with a standard second order method for hyperbolic conservation laws, are corrected in two steps. First, a MAC-type projection adjusts the advective velocity divergence. In a second projection step, additional momentum flux corrections are computed to obtain new time level cell-centered velocities, which satisfy another discrete version of the divergence constraint. The scheme features an exact and stable second projection. It is obtained by a Petrov-Galerkin finite element ansatz with piecewise bilinear trial functions for the unknown incompressible height and piecewise constant test functions. The stability of the projection is proved using the theory of generalized mixed finite elements, which goes back to Nicolaïdes (1982). In order to do so, the validity of three different inf-sup conditions has to be shown. Since the zero Froude number shallow water equations have the same mathematical structure as the incompressible Euler equations of isentropic gas dynamics, the method can be easily transferred to the computation of incompressible variable density flow problems.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 9
    Publication Date: 2019-10-24
    Keywords: ddc:000
    Language: German
    Type: annualzib , doc-type:report
    Format: application/pdf
  • 10
    Publication Date: 2016-06-30
    Description: Die Diplomarbeit präsentiert ein Transaktionsverfahren für strukturierte Overlay-Netzwerke, das an die Erfordernisse verteilter Informationssysteme mit relationalem Datenmodell angepasst ist. Insbesondere wird der Einsatz von Transaktionen für verteilte Wikis betrachtet, die moderne Funktionalitäten, wie Metadaten und zusätzliche Indexe für die Navigation, unterstützen. Konsistenz und Dauerhaftigkeit der gespeicherten Daten erfordert die Behandlung von Knotenausfällen. Die Arbeit schlägt dafür das Zellenmodell vor: Das Overlay wird aus replizierten Zustandsmaschinen gebildet, um Verfügbarkeit zu gewährleisten. Das Transaktionsverfahren baut darauf auf und verwendet Two-Phase-Commit mit Fehlererkennung und Widerherstellung von ausgefallenen Transaktionsmanagern. Anwendungen wird eine Auswahl an pessimistischen und hybrid-optimistischen Nebenläufigkeitskontrollverfahren geboten, die die Minimierung von Latenzeffekten und die schnelle Ausführung von Nur-Lese-Transaktionen ermöglichen. Für die Beispielanwendung Wiki wird der erforderliche Pseudocode angegeben und die verschiedenen Nebenläufigkeitskontrollverfahren hinsichtlich ihrer Nachrichtenkomplexität verglichen.
    Description: The diploma thesis presents a transaction processing scheme for structured overlay networks and uses it to develop a distributed Wiki application based on a relational data model. The Wiki supports rich metadata and additional indexes for navigation purposes. Ensuring consistency and durability requires handling of node failures. Such failures are masked by providing high availability of nodes. This in turn is achieved by constructing the overlay from replicated state machines (cell model). Atomicity is realized using two phase commit with additional support for failure detection and restoration of the transaction manager. The developed transaction processing scheme provides the application with a mixture of pessimistic, hybrid optimistic and multiversioning concurrency control techniques to minimize the impact of replication on latency and optimize for read operations. The pseudocode of the relevant Wiki functions is presented and the different concurrency control techniques are evaluated in terms of message complexity.
    Keywords: ddc:004
    Language: German
    Type: masterthesis , doc-type:masterThesis
    Format: application/pdf
  • 11
    Publication Date: 2020-12-15
    Description: We provide information on the Survivable Network Design Library (SNDlib), a data library for fixed telecommunication network design that can be accessed at http://sndlib.zib.de. In version 1.0, the library contains data related to 22 networks which, combined with a set of selected planning parameters, leads to 830 network planning problem instances. In this paper, we provide a mathematical model for each planning problem considered in the library and describe the data concepts of the SNDlib. Furthermore, we provide statistical information and details about the origin of the data sets.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 12
    Publication Date: 2020-08-05
    Description: The \emph{optimal track allocation problem} (\textsc{OPTRA}), also known as the train routing problem or the train timetabling problem, is to find, in a given railway network, a conflict-free set of train routes of maximum value. We propose a novel integer programming formulation for this problem that is based on additional 'configuration' variables. Its LP relaxation can be solved in polynomial time. These results are the theoretical basis for a column generation algorithm to solve large-scale track allocation problems. Computational results for the Hanover-Kassel-Fulda area of the German long distance railway network involving up to 570 trains are reported.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 13
    Publication Date: 2020-12-15
    Description: We consider a multicommodity routing problem, where demands are released \emph{online} and have to be routed in a network during specified time windows. The objective is to minimize a time and load dependent convex cost function of the aggregate arc flow. First, we study the fractional routing variant. We present two online algorithms, called Seq and Seq$^2$. Our first main result states that, for cost functions defined by polynomial price functions with nonnegative coefficients and maximum degree~$d$, the competitive ratio of Seq and Seq$^2$ is at most $(d+1)^{d+1}$, which is tight. We also present lower bounds of $(0.265\,(d+1))^{d+1}$ for any online algorithm. In the case of a network with two nodes and parallel arcs, we prove a lower bound of $(2-\frac{1}{2} \sqrt{3})$ on the competitive ratio for Seq and Seq$^2$, even for affine linear price functions. Furthermore, we study resource augmentation, where the online algorithm has to route less demand than the offline adversary. Second, we consider unsplittable routings. For this setting, we present two online algorithms, called U-Seq and U-Seq$^2$. We prove that for polynomial price functions with nonnegative coefficients and maximum degree~$d$, the competitive ratio of U-Seq and U-Seq$^2$ is bounded by $O(1.77^d\,d^{d+1})$. We present lower bounds of $(0.5307\,(d+1))^{d+1}$ for any online algorithm and $(d+1)^{d+1}$ for our algorithms. Third, we consider a special case of our framework: online load balancing in the $\ell_p$-norm. For the fractional and unsplittable variants of this problem, we show that our online algorithms are $p$-competitive and $O(p)$-competitive, respectively. Such results were previously known only for scheduling jobs on restricted (un)related parallel machines.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
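    Illustration: the upper bound $(d+1)^{d+1}$ stated above evaluates to $2^2 = 4$ for affine price functions ($d = 1$), $3^3 = 27$ for quadratic ($d = 2$) and $4^4 = 256$ for cubic ($d = 3$) price functions, so the guarantee deteriorates quickly with the polynomial degree.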
  • 14
    Publication Date: 2014-02-26
    Description: The head office of the Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV) has been operating the KOBV portal since January 2004, which integrates, among other things, a variety of open-linking services. This article explains open linking in general and presents the KOBV-specific services in detail, also evaluating how usage of the KOBV open-linking services has developed. One result is that significant increases in usage are only achieved when measures are taken that, first, raise users' awareness of the open-linking services and, second, shorten the path to them within the KOBV portal. Above all, a fast route to the open-linking services must be ensured in order to increase usage substantially. In order to additionally raise nationwide awareness of open-linking services, the KOBV head office encourages other libraries and library consortia to set up analogous services, making the use of open linking more a matter of course.
    Keywords: ddc:000
    Language: German
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 15
    Publication Date: 2014-02-26
    Description: This article reports on a successful activity for school students that has been tried out and refined for years at the Zuse Institute Berlin (ZIB) during visits of school groups. The material compiled here is intended as the basis for a teaching unit in advanced mathematics courses at secondary schools. The starting point is a topic that is new to students (and teachers alike) but easy to grasp: the three-term recursion for Bessel functions. Its structure is explained and turned into a small program. The students organize themselves into groups that compete with different pocket calculators. They experience at first hand the catastrophic effect of seemingly 'small' rounding errors and end up, just like ZIB's supercomputer, in the 'Bessel maze'. The phenomena that occur are explained with elementary mathematics, using only the concept of linear independence. The deeper understanding gained in this way feeds into the construction of a classical algorithm as well as a substantially improved Horner-like algorithm.
    Keywords: ddc:000
    Language: German
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
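    Illustration: a minimal Python sketch (not the article's classroom program) contrasting the forward three-term recursion $J_{n+1}(x) = \frac{2n}{x} J_n(x) - J_{n-1}(x)$, which amplifies rounding errors, with a stable Miller-type backward recursion; the argument x = 1, the order n_max = 20 and the series-based seeding are assumptions chosen for this demo.

      import math

      def j_series(n, x, terms=30):
          """Power series for J_n(x); accurate for small x, used only to seed the demo."""
          return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
                     * (x / 2) ** (2 * k + n) for k in range(terms))

      def forward_recursion(x, n_max):
          """Forward recursion J_{n+1} = (2n/x) J_n - J_{n-1}: unstable, errors grow."""
          vals = [j_series(0, x), j_series(1, x)]
          for n in range(1, n_max):
              vals.append(2 * n / x * vals[n] - vals[n - 1])
          return vals

      def miller_backward(x, n_max, extra=15):
          """Miller-type trick: recur downwards from arbitrary start values (the stable
          direction), then normalize with J_0(x) + 2 * sum_k J_{2k}(x) = 1."""
          m = n_max + extra
          vals = [0.0] * (m + 2)
          vals[m], vals[m + 1] = 1e-30, 0.0
          for n in range(m, 0, -1):
              vals[n - 1] = 2 * n / x * vals[n] - vals[n + 1]
          norm = vals[0] + 2 * sum(vals[2 * k] for k in range(1, m // 2 + 1))
          return [v / norm for v in vals[:n_max + 1]]

      x, n_max = 1.0, 20
      fwd, bwd = forward_recursion(x, n_max), miller_backward(x, n_max)
      for n in (5, 10, 15, 20):
          print(n, fwd[n], bwd[n])   # the forward values soon carry enormous errors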
  • 16
    Publication Date: 2017-08-01
    Description: In this paper we study capacitated network design problems, differentiating directed, bidirected and undirected link capacity models. We complement existing polyhedral results for the three variants by new classes of facet-defining valid inequalities and unified lifting results. For this, we study the restriction of the problems to a cut of the network. First, we show that facets of the resulting cutset polyhedra translate into facets of the original network design polyhedra if the two subgraphs defined by the network cut are (strongly) connected. Second, we provide an analysis of the facial structure of cutset polyhedra, elaborating the differences caused by the three different types of capacity constraints. We present flow-cutset inequalities for all three models and show under which conditions these are facet-defining. We also state a new class of facets for the bidirected and undirected cases and show how to handle multiple capacity modules by mixed integer rounding (MIR).
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 17
    Publication Date: 2020-12-15
    Description: In this paper we study online multicommodity routing problems in networks, in which commodities have to be routed sequentially. The flow of each commodity can be split on several paths. Arcs are equipped with load dependent price functions defining routing costs, which have to be minimized. We discuss a greedy online algorithm that routes each commodity by minimizing a convex cost function that only depends on the demands previously routed. We present a competitive analysis of this algorithm showing that for affine linear price functions this algorithm is $\frac{4K^2}{(1+K)^2}$-competitive, where $K$ is the number of commodities. For the single-source single-destination case, this algorithm is optimal. Without restrictions on the price functions and network, no algorithm is competitive. Finally, we investigate a variant in which the demands have to be routed unsplittably.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
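    Illustration: a quick check of the stated guarantee: for a single commodity ($K = 1$) the ratio is $\frac{4 \cdot 1^2}{(1+1)^2} = 1$, consistent with the optimality statement for the single-source single-destination case, and for $K \to \infty$ the bound approaches $4$, the value of the $(d+1)^{d+1}$ guarantee for affine ($d = 1$) price functions reported in the related entry above.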
  • 18
    Publication Date: 2020-12-15
    Description: In this paper, we empirically investigate the NP-hard problem of finding sparse solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has recently received considerable interest in the sparse approximation and signal processing literature. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss five (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the basis pursuit heuristic performs worse than the other methods. Among the best heuristics are a method due to Mangasarian and a bilinear approach.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
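    Illustration: a minimal Python sketch of the basis pursuit heuristic mentioned above, i.e. the standard $\ell_1$ relaxation $\min \|x\|_1$ s.t. $Ax = b$ written as a linear program via $x = x^+ - x^-$; this is not the paper's branch-and-cut code, and the random instance with planted support is made up for the demo (NumPy and SciPy assumed available).

      import numpy as np
      from scipy.optimize import linprog

      def basis_pursuit(A, b):
          """min ||x||_1  s.t.  A x = b, as an LP in the split variables (x+, x-) >= 0."""
          m, n = A.shape
          c = np.ones(2 * n)                    # objective: sum(x+) + sum(x-)
          A_eq = np.hstack([A, -A])             # A x+ - A x- = b
          res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
          if not res.success:
              raise RuntimeError(res.message)
          return res.x[:n] - res.x[n:]

      rng = np.random.default_rng(0)
      A = rng.standard_normal((10, 30))         # 10 equations, 30 unknowns
      x_true = np.zeros(30)
      x_true[[3, 17]] = [1.5, -2.0]             # planted 2-sparse solution
      x_hat = basis_pursuit(A, A @ x_true)
      print(np.nonzero(np.abs(x_hat) > 1e-8)[0])   # often recovers the support {3, 17}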
  • 19
    Publication Date: 2014-02-26
    Description: The performance evaluation of W-CDMA networks is intricate as cells are strongly coupled through interference. Pole equations have been developed as a simple tool to analyze cell capacity. Numerous scientific contributions have been made on their basis. In the established forms, the pole equations rely on strong assumptions such as homogeneous traffic, uniform users, and a constant downlink orthogonality factor. These assumptions are not met in realistic scenarios. Hence, the pole equations are typically used during initial network dimensioning only. Actual network (fine-)planning requires a more faithful analysis of each individual cell's capacity. Complex analytical methods or Monte-Carlo simulations are used for this purpose. In this paper, we generalize the pole equations to include inhomogeneous data. We show how the equations can be parametrized in a cell-specific way provided the transmit powers are known. This allows prior results to be carried over to realistic settings. This is illustrated with an example: based on the pole equation, we investigate the accuracy of 'average snapshot' approximations for downlink transmit powers used in state-of-the-art network optimization schemes. We confirm that the analytical insights apply to practice-relevant settings on the basis of results from detailed Monte-Carlo simulations on realistic datasets.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 20
    Publication Date: 2017-08-01
    Description: This paper deals with directed, bidirected, and undirected capacitated network design problems. Using mixed integer rounding (MIR), we generalize flow-cutset inequalities to these three link types and to an arbitrary modular link capacity structure, and propose a generic separation algorithm. In an extensive computational study on 54 instances from the Survivable Network Design Library (SNDlib), we show that the performance of CPLEX can be significantly enhanced by this class of cutting planes. The computations reveal the particular importance of the subclass of cutset inequalities.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
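    Illustration: a toy special case (not taken from the paper) of a cutset inequality obtained by rounding: if a total demand of $d$ units must cross a cut $\delta(S)$ and every module installed on a cut edge provides $c$ capacity units, then $c \sum_{e \in \delta(S)} x_e \ge d$ implies the valid inequality $\sum_{e \in \delta(S)} x_e \ge \lceil d / c \rceil$; for example, $d = 7$ and $c = 3$ force at least $\lceil 7/3 \rceil = 3$ modules across the cut.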
  • 21
    Publication Date: 2020-02-04
    Description: Wigner transformation provides a one-to-one correspondence between functions on position space (wave functions) and functions on phase space (Wigner functions). Weighted integrals of Wigner functions yield quadratic quantities of wave functions like position and momentum densities or expectation values. For molecular quantum systems, suitably modified classical transport of Wigner functions provides an asymptotic approximation of the dynamics in the high energy regime. The article addresses the computation of Wigner functions by Monte Carlo quadrature. An adaptation of the Metropolis algorithm for the approximation of signed measures with disconnected support is systematically tested in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The numerical experiments give expectation values and level populations with an error of two to three percent, which agrees with the theoretically expected accuracy.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 22
    Publication Date: 2020-11-13
    Description: We study a planning problem arising in SDH/WDM multi-layer telecommunication network design. The goal is to find a minimum cost installation of link and node hardware of both network layers such that traffic demands can be realized via grooming and a survivable routing. We present a mixed-integer programming formulation that takes many practical side constraints into account, including node hardware, several bitrates, and survivability against single physical node or link failures. This model is solved using a branch-and-cut approach with problem-specific preprocessing and cutting planes based on either of the two layers. On several realistic two-layer planning scenarios, we show that these cutting planes are still useful in the multi-layer context, helping to increase the dual bound and to reduce the optimality gaps.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 23
    Publication Date: 2020-08-05
    Description: In \emph{classical optimization} it is assumed that full information about the problem to be solved is given. This, in particular, includes that all data are at hand. The real world may not be so 'nice' to optimizers. Some problem constraints may not be known, the data may be corrupted, or some data may not be available at the moments when decisions have to be made. The last issue is the subject of \emph{online optimization} which will be addressed here. We explain some theory that has been developed to cope with such situations and provide examples from practice where unavailable information is not the result of bad data handling but an inevitable phenomenon.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 24
    Publication Date: 2016-06-30
    Description: Die Arbeit präsentiert ein Transaktionsverfahren für strukturierte Overlay-Netzwerke, das an die Erfordernisse verteilter Informationssysteme mit relationalem Datenmodell angepasst ist. Insbesondere wird der Einsatz von Transaktionen für verteilte Wikis betrachtet, die moderne Funktionalitäten, wie Metadaten und zusätzliche Indexe für die Navigation, unterstützen. Konsistenz und Dauerhaftigkeit der gespeicherten Daten erfordert die Behandlung von Knotenausfällen. Die Arbeit schlägt dafür das Zellenmodell vor: Das Overlay wird aus replizierten Zustandsmaschinen gebildet, um Verfügbarkeit zu gewährleisten. Das Transaktionsverfahren baut darauf auf und verwendet Two-Phase-Commit mit Fehlererkennung und Widerherstellung von ausgefallenen Transaktionsmanagern. Anwendungen wird eine Auswahl an pessimistischen und hybrid-optimistischen Nebenläufigkeitskontrollverfahren geboten, die die Minimierung von Latenzeffekten und die schnelle Ausführung von Nur-Lese-Transaktionen ermöglichen. Für die Beispielanwendung Wiki wird der erforderliche Pseudocode angegeben und die verschiedenen Nebenläufigkeitskontrollverfahren hinsichtlich ihrer Nachrichtenkomplexität verglichen.
    Description: The report presents a transaction processing scheme for structured overlay networks and uses it to develop a distributed Wiki application based on a relational data model. The Wiki supports rich metadata and additional indexes for navigation purposes. Ensuring consistency and durability requires handling of node failures. Such failures are masked by providing high availability of nodes. This in turn is achieved by constructing the overlay from replicated state machines (cell model). Atomicity is realized using two phase commit with additional support for failure detection and restoration of the transaction manager. The developed transaction processing scheme provides the application with a mixture of pessimistic, hybrid optimistic and multiversioning concurrency control techniques to minimize the impact of replication on latency and optimize for read operations. The pseudocode of the relevant Wiki functions is presented and the different concurrency control techniques are evaluated in terms of message complexity.
    Keywords: ddc:004
    Language: German
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 25
    Publication Date: 2016-06-30
    Description: Berlin als Stadtstaat ist Kommune und Land der Bundesrepublik zugleich und Standort vieler renommierter Wissenschafts- und Kultureinrichtungen. In enger Zusammenarbeit der Wissenschaftseinrichtungen mit dem IT-Dienstleistungszentrum Berlin (ITDZ, ehemals Landesbetrieb für Informationstechnik), der für die Behörden Berlins zuständigen Einrichtung, wurde seit 1993 ein landeseigenes Glasfasernetz mit einer derzeitigen Länge von 856 km Glasfaserkabel (je Kabel bis zu 144 Einzelfasern) zur gemeinsamen Nutzung von Wissenschaft und Verwaltung errichtet und weiter ausgebaut. 1994 erfolgte der offizielle Start des Berliner Wissenschaftsnetzes BRAIN (Berlin Research Area Information Network), als durch einen Beschluss des Senats von Berlin die Nutzung des landeseigenen Glasfasernetzes durch die Wissenschaftseinrichtungen festgeschrieben wurde. Bereits 1995 wurden durch die Wissenschaftseinrichtungen auf diesem Glasfasernetz die ersten sieben Anschlüsse in ATM-Technik (Classical BRAIN-ATM) in Betrieb genommen, 1999 wurden anschließend auch erste Strecken in Ethernet-Technik (Classical BRAIN-GE) betrieben. Diese heterogenen Netze mit unterschiedlichen Netzgeräten wurden dezentral von den Netzadministratoren der beteiligten Einrichtungen nach globalen Absprachen betreut. Die dezentrale Administration erschwerte das Management und die Erweiterungen der Gesamtnetze. Basierend auf den vorliegenden Erfahrungen vereinbarten die Berliner Wissenschaftseinrichtungen, ein technisch neues Verbundnetz in Gigabit-Ethernet-Technik mit einheitlichen Geräten und einem zentralen Netzwerkmanagement aufzubauen und zu betreiben. Seit November 2003 betreibt BRAIN auf dem landeseigenen Glasfasernetz ein auf MPLS-Technik basierendes Gigabit-Ethernet-Netz, das „BRAIN-Verbundnetz“, mit den Diensten LAN-to-LAN-Kopplung der Einrichtungen, regionaler IP-Verkehr, Übergang zum Verwaltungsnetz und WiN-Backup. Das BRAIN-Verbundnetz löste die dezentral betreuten Vorläufernetze komplett ab. Von den derzeit 27 BRAIN-Teilnehmern nutzen 24 Einrichtungen an 53 in der Stadt verteilten Standorten die Dienste des BRAIN-Verbundnetzes, 18 Standorte sind mit 1000 Mbit/s und 35 Standorte mit 100 Mbit/s angeschlossen. Für verteilte Standorte einer Einrichtung besteht zudem die Möglichkeit, diese über dedizierte Fasern oder Bandbreiten miteinander zu vernetzen. Seit dem 2. Quartal 2007 wird im Rahmen eines Pilotprojekts der Nutzen eines zentral gemanagten Fibre Channel-Netzwerks "BRAIN-SAN" ermittelt, um Möglichkeiten einer verteilten Datenhaltung der Berliner Hochschulen und wissenschaftlichen Einrichtungen zu schaffen. Zusätzlich zu den vorgenannten Diensten nutzt der DFN-Verein die BRAIN-Struktur für die Verbindungen der X-WiN-Kernnetzknoten in Berlin und Potsdam untereinander und für Zugangsleitungen zu den Anwendern. Mit Stand 2007 nutzt das Berliner Wissenschaftsnetz BRAIN vom landeseigenen Glasfasernetz 2100 km Einzelfasern und verbindet insgesamt 43 Einrichtungen (BRAIN-Teilnehmer und DFN-Anwender) aus Wissenschaft, Bildung und Kultur mit 129 Standorten. Der Betrieb von BRAIN wird im Wesentlichen durch seine Nutzer finanziert. Das Land Berlin trägt allerdings pauschal die überwiegenden Kosten für die Wartung des Glasfasernetzes, soweit es vom ITDZ bereitgestellt wird. Zentrales Planungs- und Steuerungsorgan für BRAIN ist die von der Senatsverwaltung für Bildung, Wissenschaft und Forschung eingerichtete BRAIN-Planungsgruppe. Sie besteht aus Mitarbeitern der Rechenzentren der drei Berliner Universitäten und des ZIB. Nach außen wird BRAIN in rechtlicher und wirtschaftlicher Hinsicht treuhänderisch vom ZIB vertreten, die BRAIN-Geschäftsstelle befindet sich ebenfalls im ZIB.
    Description: Berlin as a city state is both a local authority and a federal state of the Federal Republic, and home to many renowned institutions of research and culture. In close cooperation between the research institutions and the IT service centre Berlin (ITDZ, the former Landesbetrieb für Informationstechnik) - the facility responsible for Berlin's public authorities - a state-owned glass fibre network with a current extent of 856 kilometres of fibre optic cable (up to 144 individual fibres per cable) has been built and extended since 1993 for the joint use of research and administration. The official start of the Berlin Research Area Information Network (BRAIN) came in 1994, when a resolution of the Senate of Berlin laid down the use of the state-owned fibre network by the research institutions. The first seven connections on this fibre network in ATM technology (Classical BRAIN-ATM) were put into operation by the research institutions as early as 1995; in 1999 the first links in Ethernet technology (Classical BRAIN-GE) followed. These heterogeneous networks with different equipment were administered locally by the network administrators of the participating institutions on the basis of global agreements. The local administration made management and expansion of the overall networks difficult. Based on this experience, Berlin's research institutions agreed to build and operate a technically new integrated network in Gigabit Ethernet technology with uniform equipment and central network management. Since November 2003 BRAIN has operated a Gigabit Ethernet network based on MPLS technology on the state-owned fibre network - the “BRAIN Integrated Network” - providing LAN-to-LAN coupling of the institutions, regional IP traffic, a gateway to the administration's network and WiN backup. The BRAIN Integrated Network has completely replaced the locally administered predecessor networks. Of the currently 27 BRAIN participants, 24 institutions at 53 locations spread across the city use the services of the BRAIN Integrated Network; 18 locations are connected with 1000 Mbit/s and 35 locations with 100 Mbit/s. Distributed locations of a single institution can additionally be interconnected via dedicated fibres or bandwidths. Since the 2nd quarter of 2007, a pilot project has been evaluating the benefit of a centrally managed fibre channel network "BRAIN-SAN" in order to enable distributed data management for Berlin's universities and research institutions. In addition to the aforementioned services, the DFN association uses the BRAIN infrastructure for the connections between the X-WiN core network nodes in Berlin and Potsdam and for access lines to its users. As of 2007, the Berlin research network BRAIN uses 2100 km of individual fibres of the state-owned fibre network and connects a total of 43 institutions (BRAIN participants and DFN users) from research, education and culture at 129 locations. The operation of BRAIN is essentially funded by its users; however, the state of Berlin bears most of the costs for maintaining the glass fibre network, as far as it is provided by the ITDZ. The central planning and steering body for BRAIN is the BRAIN planning group, set up by the Senatsverwaltung für Bildung, Wissenschaft und Forschung; it consists of staff from the computing centres of Berlin's three universities and of ZIB. Legally and economically, BRAIN is represented externally on a trust basis by ZIB, where the BRAIN office is also located.
    Keywords: ddc:004
    Language: German
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 26
    Publication Date: 2020-12-11
    Description: The intention is for the library consortia to cooperatively build an infrastructure that makes full-text offerings permanently and conveniently available to users: journal articles and electronic documents are indexed using search engine technology and made accessible subject to access rights. This has already been realised in the KOBV full-text server, which has been in routine operation since the end of 2005. A supra-regional network of full-text servers run by the consortia is conceivable, indexed with search engine technology and integrated seamlessly into the regional and local literature offerings. For licensed materials, the rights of the publishers in particular must be respected and appropriate rights-management procedures put in place. Transparent procedures need to be designed and implemented to create the necessary basis of trust for the publishers while at the same time securing the institutions' legitimate access to the full texts. This text is the written version of a talk given at the 3rd Leipzig Congress for Information and Libraries, "Information und Ethik", held at the Congress Center Leipzig on 19-22 March 2007.
    Keywords: ddc:000
    Language: German
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 27
    Publication Date: 2020-12-11
    Description: To support libraries in their open-access activities, the KOBV head office has been running the service "Opus- und Archivierungsdienste" (OPUS and archiving services) since the beginning of 2005. The KOBV head office acts as an application service provider (ASP) for all technical components of the publication process by providing and operating the entire technical infrastructure, from the local publication servers up to local repositories for archiving the electronic documents. This text is the written version of a talk of the same title given at the 31st ASpB conference "Kooperation versus Eigenprofil?" of the Arbeitsgemeinschaft der Spezialbibliotheken, held at the Technische Universität Berlin on 25-28 September 2007.
    Keywords: ddc:000
    Language: German
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 28
    Publication Date: 2022-07-07
    Description: A new approach to derive transparent boundary conditions (TBCs) for wave, Schrödinger, heat and drift-diffusion equations is presented. It relies on the pole condition and distinguishes between physically reasonable and unreasonable solutions by the location of the singularities of the spatial Laplace transform of the exterior solution. To obtain a numerical algorithm, a Möbius transform is applied to map the Laplace transform onto the unit disc. In the transformed coordinate the solution is expanded into a power series. Finally, equations for the coefficients of the power series are derived. These are coupled to the equation in the interior, and yield transparent boundary conditions. Numerical results are presented in the last section, showing that the error introduced by the new approximate TBCs decays exponentially in the number of coefficients.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
    Format: application/postscript
  • 29
    Publication Date: 2022-07-19
    Description: We present a unified approach for consistent remeshing of arbitrary non-manifold triangle meshes with additional user-defined feature lines, which together form a feature skeleton. Our method is based on local operations only and produces meshes of high regularity and triangle quality while preserving the geometry as well as topology of the feature skeleton and the input mesh.
    Keywords: ddc:000
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 30
    Publication Date: 2022-07-19
    Description: For medical diagnosis, visualization, and model-based therapy planning three-dimensional geometric reconstructions of individual anatomical structures are often indispensable. Computer-assisted, model-based planning procedures typically cover specific modifications of “virtual anatomy” as well as numeric simulations of associated phenomena, such as mechanical loads, fluid dynamics, or diffusion processes, in order to evaluate a potential therapeutic outcome. Since internal anatomical structures cannot be measured optically or mechanically in vivo, three-dimensional reconstruction of tomographic image data remains the method of choice. In this work the process chain of individual anatomy reconstruction is described, which consists of segmentation of medical image data, geometrical reconstruction of all relevant tissue interfaces, up to the generation of geometric approximations (boundary surfaces and volumetric meshes) of three-dimensional anatomy suited for finite element analysis. All results presented herein are generated with amira® – a highly interactive software system for 3D data analysis, visualization and geometry reconstruction.
    Keywords: ddc:004
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
  • 31
    Publication Date: 2022-07-19
    Description: This work introduces novel internal and external memory algorithms for computing voxel skeletons of massive voxel objects with complex network-like architecture and for converting these voxel skeletons to piecewise linear geometry, that is triangle meshes and piecewise straight lines. The presented techniques help to tackle the challenge of visualizing and analyzing 3d images of increasing size and complexity, which are becoming more and more important in, for example, biological and medical research. Section 2.3.1 contributes to the theoretical foundations of thinning algorithms with a discussion of homotopic thinning in the grid cell model. The grid cell model explicitly represents a cell complex built of faces, edges, and vertices shared between voxels. A characterization of pairs of cells to be deleted is much simpler than characterizations of simple voxels were before. The grid cell model resolves topologically unclear voxel configurations at junctions and locked voxel configurations causing, for example, interior voxels in sets of non-simple voxels. A general conclusion is that the grid cell model is superior to indecomposable voxels for algorithms that need detailed control of topology. Section 2.3.2 introduces a noise-insensitive measure based on the geodesic distance along the boundary to compute two-dimensional skeletons. The measure is able to retain thin object structures if they are geometrically important while ignoring noise on the object's boundary. This combination of properties is not known of other measures. The measure is also used to guide erosion in a thinning process from the boundary towards lines centered within plate-like structures. Geodesic distance based quantities seem to be well suited to robustly identify one- and two-dimensional skeletons. Chapter 6 applies the method to visualization of bone micro-architecture. Chapter 3 describes a novel geometry generation scheme for representing voxel skeletons, which retracts voxel skeletons to piecewise linear geometry per dual cube. The generated triangle meshes and graphs provide a link to geometry processing and efficient rendering of voxel skeletons. The scheme creates non-closed surfaces with boundaries, which contain fewer triangles than a representation of voxel skeletons using closed surfaces like small cubes or iso-surfaces. A conclusion is that thinking specifically about voxel skeleton configurations instead of generic voxel configurations helps to deal with the topological implications. The geometry generation is one foundation of the applications presented in Chapter 6. Chapter 5 presents a novel external memory algorithm for distance ordered homotopic thinning. The presented method extends known algorithms for computing chamfer distance transformations and thinning to execute I/O-efficiently when input is larger than the available main memory. The applied block-wise decomposition schemes are quite simple. Yet it was necessary to carefully analyze effects of block boundaries to devise globally correct external memory variants of known algorithms. In general, doing so is superior to naive block-wise processing ignoring boundary effects. Chapter 6 applies the algorithms in a novel method based on confocal microscopy for quantitative study of micro-vascular networks in the field of microcirculation.
    Description: Die vorliegende Arbeit führt I/O-effiziente Algorithmen und Standard-Algorithmen zur Berechnung von Voxel-Skeletten aus großen Voxel-Objekten mit komplexer, netzwerkartiger Struktur und zur Umwandlung solcher Voxel-Skelette in stückweise-lineare Geometrie ein. Die vorgestellten Techniken werden zur Visualisierung und Analyse komplexer drei-dimensionaler Bilddaten, beispielsweise aus Biologie und Medizin, eingesetzt. Abschnitt 2.3.1 leistet mit der Diskussion von topologischem Thinning im Grid-Cell-Modell einen Beitrag zu den theoretischen Grundlagen von Thinning-Algorithmen. Im Grid-Cell-Modell wird ein Voxel-Objekt als Zellkomplex dargestellt, der aus den Ecken, Kanten, Flächen und den eingeschlossenen Volumina der Voxel gebildet wird. Topologisch unklare Situationen an Verzweigungen und blockierte Voxel-Kombinationen werden aufgelöst. Die Charakterisierung von Zellpaaren, die im Thinning-Prozess entfernt werden dürfen, ist einfacher als bekannte Charakterisierungen von so genannten "Simple Voxels". Eine wesentliche Schlussfolgerung ist, dass das Grid-Cell-Modell atomaren Voxeln überlegen ist, wenn Algorithmen detaillierte Kontrolle über Topologie benötigen. Abschnitt 2.3.2 präsentiert ein rauschunempfindliches Maß, das den geodätischen Abstand entlang der Oberfläche verwendet, um zweidimensionale Skelette zu berechnen, welche dünne, aber geometrisch bedeutsame, Strukturen des Objekts rauschunempfindlich abbilden. Das Maß wird im weiteren mit Thinning kombiniert, um die Erosion von Voxeln auf Linien zuzusteuern, die zentriert in plattenförmigen Strukturen liegen. Maße, die auf dem geodätischen Abstand aufbauen, scheinen sehr geeignet zu sein, um ein- und zwei-dimensionale Skelette bei vorhandenem Rauschen zu identifizieren. Eine theoretische Begründung für diese Beobachtung steht noch aus. In Abschnitt 6 werden die diskutierten Methoden zur Visualisierung von Knochenfeinstruktur eingesetzt. Abschnitt 3 beschreibt eine Methode, um Voxel-Skelette durch kontrollierte Retraktion in eine stückweise-lineare geometrische Darstellung umzuwandeln, die als Eingabe für Geometrieverarbeitung und effizientes Rendering von Voxel-Skeletten dient. Es zeigt sich, dass eine detaillierte Betrachtung der topologischen Eigenschaften eines Voxel-Skeletts einer Betrachtung von allgemeinen Voxel-Konfigurationen für die Umwandlung zu einer geometrischen Darstellung überlegen ist. Die diskutierte Methode bildet die Grundlage für die Anwendungen, die in Abschnitt 6 diskutiert werden. Abschnitt 5 führt einen I/O-effizienten Algorithmus für Thinning ein. Die vorgestellte Methode erweitert bekannte Algorithmen zur Berechung von Chamfer-Distanztransformationen und Thinning so, dass diese effizient ausführbar sind, wenn die Eingabedaten den verfügbaren Hauptspeicher übersteigen. Der Einfluss der Blockgrenzen auf die Algorithmen wurde analysiert, um global korrekte Ergebnisse sicherzustellen. Eine detaillierte Analyse ist einer naiven Zerlegung, die die Einflüsse von Blockgrenzen vernachlässigt, überlegen. In Abschnitt 6 wird, aufbauend auf den I/O-effizienten Algorithmen, ein Verfahren zur quantitativen Analyse von Mikrogefäßnetzwerken diskutiert.
    Keywords: ddc:004
    Language: English
    Type: doctoralthesis , doc-type:doctoralThesis
    Format: application/pdf
  • 32
    Publication Date: 2022-07-19
    Description: One crucial step in virtual drug design is the identification of new lead structures with respect to a pharmacological target molecule. The search for new lead structures is often done with the help of a pharmacophore, which carries the essential structural as well as physico-chemical properties that a molecule needs to have in order to bind to the target molecule. In the absence of the target molecule, such a pharmacophore can be established by comparison of a set of active compounds. In order to identify their common features, a multiple alignment of all or most of the active compounds is necessary. Moreover, since the “outer shape” of the molecules plays a major role in the interaction between drug and target, an alignment algorithm aiming at the identification of common binding properties needs to consider the molecule’s “outer shape”, which can be approximated by the solvent excluded surface. In this thesis, we present a new approach to molecular surface alignment based on a discrete representation of shape as well as physico-chemical properties by points distributed on the solvent excluded surface. We propose a new method to distribute points regularly on a surface w.r.t. a smoothly varying point density given on that surface. Since the point distribution algorithm is not restricted to molecular surfaces, it might also be of interest for other applications. For the computation of pairwise surface alignments, we extend an existing point matching scheme to surface points, and we develop an efficient data structure speeding up the computation by a factor of three. Moreover, we present an approach to compute multiple alignments from pairwise alignments, which is able to handle a large number of surface points. All algorithms are evaluated on two sets of molecules: eight thermolysin inhibitors and seven HIV-1 protease inhibitors. Finally, we compare the results obtained from surface alignment with the results obtained by applying an atom alignment approach.
    Description: Die Identifizierung neuer Leitstrukturen (lead structures) zur Entwicklung optimierter Wirkstoffe ist ein äußerst wichtiger Schritt in der virtuellen Wirkstoffentwicklung (virtual drug design). Die Suche nach neuen Leitstrukturen wird oft mit Hilfe eines Pharmakophor-Modells durchgeführt, welches die wichtigsten strukturellen wie auch physiko-chemischen Eigenschaften eines bindenden Moleküls in sich vereint. Ist das Zielmolekül (target) nicht bekannt, kann das Pharmakophor-Modell mit Hilfe des Vergleiches aktiver Moleküle erstellt werden. Hier ist insbesondere die gleichzeitige Überlagerung (multiple alignment) aller oder nahezu aller Moleküle notwendig. Da bei der Interaktion zweier Moleküle die "äußere Form" der Moleküle eine besondere Rolle spielt, sollte diese von jedem Überlagerungsalgorithmus, der sich mit der Identifizierung von Bindungseigenschaften befasst, berücksichtigt werden. Dabei kann die "äußere Form" durch eine bestimmte Art von molekularer Oberfläche approximiert werden, die man als solvent excluded surface bezeichnet. In dieser Arbeit stellen wir einen neuen Ansatz zur Überlagerung molekularer Oberflächen dar, der auf einer diskreten Repräsentation sowohl der Form als auch der molekularen Eigenschaften mittels Punkten beruht. Um die Punkte auf der molekularen Oberfläche möglichst regulär entsprechend einer gegebenen Punktdichte zu verteilen, entwickeln wir eine neue Methode. Diese Methode ist nicht auf Moleküloberflächen beschränkt und könnte daher auch für andere Anwendungen von Interesse sein. Basierend auf einem bekannten Point-Matching Verfahren entwickeln wir einen Point-Matching Algorithmus für Oberflächenpunkte. Dazu erarbeiten wir u.a. eine effiziente Datenstruktur, die den Algorithmus um einen Faktor von drei beschleunigt. Darüberhinaus stellen wir einen Ansatz vor, der Mehrfachüberlagerungen (multiple alignments) aus paarweisen Überlagerungen berechnet. Die Herausforderung besteht hierbei vor allem in der großen Anzahl von Punkten, die berücksichtigt werden muss. Die vorgestellten Algorithmen werden an zwei Gruppen von Molekülen evaluiert, wobei die erste Gruppe aus acht Thermolysin Inhibitoren besteht, die zweite aus sieben HIV-1 Protease Inhibitoren. Darüberhinaus vergleichen wir die Ergebnisse der Oberflächenüberlagerung mit denen einer Atommittelpunktüberlagerung.
    Keywords: ddc:004
    Language: English
    Type: doctoralthesis , doc-type:doctoralThesis
    Format: application/pdf