VYPR
Low severity · NVD Advisory · Published Oct 27, 2015 · Updated May 6, 2026

CVE-2015-5240

Description

Race condition in OpenStack Neutron before 2014.2.4 and 2015.1 before 2015.1.2, when using the ML2 plugin or the security groups AMQP API, allows remote authenticated users to bypass IP anti-spoofing controls by changing the device owner of a port to start with network: before the security group rules are applied.
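The fix (visible in the etc/policy.json hunk under Patches) closes this hole by reserving `network:`-prefixed device owners for privileged callers. The gist of that policy rule can be sketched as follows; `may_set_device_owner` is a hypothetical helper for illustration, not Neutron's actual policy engine:

```python
import re

# Sketch of the rule the patch adds to etc/policy.json:
#   "network_device": "field:port:device_owner=~^network:"
#   "update_port:device_owner": "not rule:network_device or
#       rule:admin_or_network_owner or rule:context_is_advsvc"
NETWORK_DEVICE = re.compile(r"^network:")

def may_set_device_owner(device_owner, is_privileged):
    """Allow any device_owner, except network:* which requires privilege."""
    if not NETWORK_DEVICE.match(device_owner or ""):
        return True        # "not rule:network_device" branch: unrestricted
    return is_privileged   # admin / network owner / advsvc branch

# A regular tenant can no longer flip a port to a trusted internal owner:
assert may_set_device_owner("compute:nova", False)
assert not may_set_device_owner("network:router_interface", False)
assert may_set_device_owner("network:dhcp", True)
```

With this check in place, the race no longer matters: even if the update lands before security group rules are applied, an unprivileged user cannot set a `network:` owner at all.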

Affected packages

Versions sourced from the GitHub Security Advisory.

Package           Affected versions   Patched versions
neutron (PyPI)    < 7.0.0             7.0.0
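As a rough triage aid, the semver-style versions in the table can be compared numerically. This sketch deliberately covers only the 7.x (PyPI) scheme; per the description, the date-based stable branches (2014.2.x and 2015.1.x) need their own point releases, 2014.2.4 and 2015.1.2, and are not handled here:

```python
# Hedged triage sketch: compare a semver-style neutron version against the
# patched 7.0.0 release from the table above. Date-based versions such as
# "2015.1.1" would compare as greater than 7.0.0, so they must be checked
# against their branch's fixed point release instead.
def is_patched(version, patched=(7, 0, 0)):
    parts = [int(p) for p in version.split(".")[:3]]
    parts += [0] * (3 - len(parts))       # pad "7.0" -> (7, 0, 0)
    return tuple(parts) >= patched

assert is_patched("7.0.0")
assert not is_patched("6.1.0")
```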

Affected products

  • OpenStack/Neutron (3 versions)
    • cpe:2.3:a:openstack:neutron:2014.2.3:*:*:*:*:*:*:*
    • cpe:2.3:a:openstack:neutron:2015.1.0:*:*:*:*:*:*:*
    • cpe:2.3:a:openstack:neutron:2015.1.1:*:*:*:*:*:*:*

Patches (3)
fdc3431ccd21

Merge remote-tracking branch 'origin/master' into walnut

https://github.com/openstack/neutron · armando-migliaccio · Sep 17, 2015 · via GHSA
265 files changed · +7082 −10736
  • devstack/lib/l2_agent_sriovnicswitch +23 −0 added
    @@ -0,0 +1,23 @@
    +SRIOV_AGENT_CONF="${Q_PLUGIN_CONF_PATH}/sriov_agent.ini"
    +SRIOV_AGENT_BINARY="${NEUTRON_BIN_DIR}/neutron-sriov-nic-agent"
    +
    +function configure_l2_agent_sriovnicswitch {
    +    if [[ -n "$PHYSICAL_NETWORK" ]] && [[ -n "$PHYSICAL_INTERFACE" ]]; then
    +        PHYSICAL_DEVICE_MAPPINGS=$PHYSICAL_NETWORK:$PHYSICAL_INTERFACE
    +    fi
    +    if [[ -n "$PHYSICAL_DEVICE_MAPPINGS" ]]; then
    +        iniset /$SRIOV_AGENT_CONF sriov_nic physical_device_mappings $PHYSICAL_DEVICE_MAPPINGS
    +    fi
    +
    +    iniset /$SRIOV_AGENT_CONF securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
    +
    +    iniset /$SRIOV_AGENT_CONF agent extensions "$L2_AGENT_EXTENSIONS"
    +}
    +
    +function start_l2_agent_sriov {
    +    run_process q-sriov-agt "$SRIOV_AGENT_BINARY --config-file $NEUTRON_CONF --config-file /$SRIOV_AGENT_CONF"
    +}
    +
    +function stop_l2_agent_sriov {
    +    stop_process q-sriov-agt
    +}
    
  • devstack/lib/ml2 +16 −0 modified
    @@ -1,3 +1,6 @@
    +source $LIBDIR/ml2_drivers/sriovnicswitch
    +
    +
     function enable_ml2_extension_driver {
         local extension_driver=$1
         if [[ -z "$Q_ML2_PLUGIN_EXT_DRIVERS" ]]; then
    @@ -11,3 +14,16 @@ function enable_ml2_extension_driver {
     function configure_qos_ml2 {
         enable_ml2_extension_driver "qos"
     }
    +
    +
    +function configure_ml2 {
    +    OIFS=$IFS;
    +    IFS=",";
    +    mechanism_drivers_array=($Q_ML2_PLUGIN_MECHANISM_DRIVERS);
    +    IFS=$OIFS;
    +    for mechanism_driver in "${mechanism_drivers_array[@]}"; do
    +        if [ "$(type -t configure_ml2_$mechanism_driver)" = function ]; then
    +            configure_ml2_$mechanism_driver
    +        fi
    +    done
    +}
    \ No newline at end of file
    
  • devstack/lib/ml2_drivers/sriovnicswitch +3 −0 added
    @@ -0,0 +1,3 @@
    +function configure_ml2_sriovnicswitch {
    +    iniset /$Q_PLUGIN_CONF_FILE ml2_sriov agent_required True
    +}
    \ No newline at end of file
    
  • devstack/plugin.sh +23 −0 modified
    @@ -1,6 +1,7 @@
     LIBDIR=$DEST/neutron/devstack/lib
     
     source $LIBDIR/l2_agent
    +source $LIBDIR/l2_agent_sriovnicswitch
     source $LIBDIR/ml2
     source $LIBDIR/qos
     
    @@ -15,4 +16,26 @@ if [[ "$1" == "stack" && "$2" == "post-config" ]]; then
         if is_service_enabled q-agt; then
             configure_l2_agent
         fi
    +    #Note: sriov agent should run with OVS or linux bridge agent
    +    #because they are the mechanisms that bind the DHCP and router ports.
    +    #Currently devstack lacks the option to run two agents on the same node.
    +    #Therefore we create new service, q-sriov-agt, and the q-agt should be OVS
    +    #or linux bridge.
    +    if is_service_enabled q-sriov-agt; then
    +        configure_$Q_PLUGIN
    +        configure_l2_agent
    +        configure_l2_agent_sriovnicswitch
    +    fi
     fi
    +
    +if [[ "$1" == "stack" && "$2" == "extra" ]]; then
    +    if is_service_enabled q-sriov-agt; then
    +        start_l2_agent_sriov
    +    fi
    +fi
    +
    +if [[ "$1" == "unstack" ]]; then
    +    if is_service_enabled q-sriov-agt; then
    +        stop_l2_agent_sriov
    +    fi
    +fi
    \ No newline at end of file
    
  • doc/dashboards/graphite.dashboard.html +2 −2 modified
    @@ -25,8 +25,8 @@ <h2>
     </a>
     </td>
     <td align="center">
    -Failure Percentage - Last 10 Days - Large Opts<br>
    -<a href="http://graphite.openstack.org/render/?title=Failure Percentage - Last 10 Days - Large Opts&from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-large-ops%27%29,%27orange%27%29">
    +Failure Percentage - Last 10 Days - Large Ops<br>
    +<a href="http://graphite.openstack.org/render/?title=Failure Percentage - Last 10 Days - Large Ops&from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-large-ops%27%29,%27orange%27%29">
     <img src="http://graphite.openstack.org/render/?from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-large-ops%27%29,%27orange%27%29" width="400">
     </a>
     </td>
    
  • doc/source/devref/callbacks.rst +19 −19 modified
    @@ -94,18 +94,18 @@ In practical terms this scenario would be translated in the code below:
     
     
       def callback1(resource, event, trigger, **kwargs):
    -      print 'Callback1 called by trigger: ', trigger
    -      print 'kwargs: ', kwargs
    +      print('Callback1 called by trigger: ', trigger)
    +      print('kwargs: ', kwargs)
     
       def callback2(resource, event, trigger, **kwargs):
    -      print 'Callback2 called by trigger: ', trigger
    -      print 'kwargs: ', kwargs
    +      print('Callback2 called by trigger: ', trigger)
    +      print('kwargs: ', kwargs)
     
     
       # B and C express interest with I
       registry.subscribe(callback1, resources.ROUTER, events.BEFORE_CREATE)
       registry.subscribe(callback2, resources.ROUTER, events.BEFORE_CREATE)
    -  print 'Subscribed'
    +  print('Subscribed')
     
     
       # A notifies
    @@ -114,7 +114,7 @@ In practical terms this scenario would be translated in the code below:
           registry.notify(resources.ROUTER, events.BEFORE_CREATE, do_notify, **kwargs)
     
     
    -  print 'Notifying...'
    +  print('Notifying...')
       do_notify()
     
     
    @@ -171,25 +171,25 @@ to abort events are ignored. The snippet below shows this in action:
           raise Exception('I am failing!')
     
       def callback2(resource, event, trigger, **kwargs):
    -      print 'Callback2 called by %s on event  %s' % (trigger, event)
    +      print('Callback2 called by %s on event  %s' % (trigger, event))
     
     
       registry.subscribe(callback1, resources.ROUTER, events.BEFORE_CREATE)
       registry.subscribe(callback2, resources.ROUTER, events.BEFORE_CREATE)
       registry.subscribe(callback2, resources.ROUTER, events.ABORT_CREATE)
    -  print 'Subscribed'
    +  print('Subscribed')
     
     
       def do_notify():
           kwargs = {'foo': 'bar'}
           registry.notify(resources.ROUTER, events.BEFORE_CREATE, do_notify, **kwargs)
     
     
    -  print 'Notifying...'
    +  print('Notifying...')
       try:
           do_notify()
       except exceptions.CallbackFailure as e:
    -      print 'Error: ', e
    +      print('Error: ', e)
     
     The output is:
     
    @@ -237,23 +237,23 @@ The snippet below shows these concepts in action:
     
     
       def callback1(resource, event, trigger, **kwargs):
    -      print 'Callback1 called by %s on event %s for resource %s' % (trigger, event, resource)
    +      print('Callback1 called by %s on event %s for resource %s' % (trigger, event, resource))
     
     
       def callback2(resource, event, trigger, **kwargs):
    -      print 'Callback2 called by %s on event %s for resource %s' % (trigger, event, resource)
    +      print('Callback2 called by %s on event %s for resource %s' % (trigger, event, resource))
     
     
       registry.subscribe(callback1, resources.ROUTER, events.BEFORE_READ)
       registry.subscribe(callback1, resources.ROUTER, events.BEFORE_CREATE)
       registry.subscribe(callback1, resources.ROUTER, events.AFTER_DELETE)
       registry.subscribe(callback1, resources.PORT, events.BEFORE_UPDATE)
       registry.subscribe(callback2, resources.ROUTER_GATEWAY, events.BEFORE_UPDATE)
    -  print 'Subscribed'
    +  print('Subscribed')
     
     
       def do_notify():
    -      print 'Notifying...'
    +      print('Notifying...')
           kwargs = {'foo': 'bar'}
           registry.notify(resources.ROUTER, events.BEFORE_READ, do_notify, **kwargs)
           registry.notify(resources.ROUTER, events.BEFORE_CREATE, do_notify, **kwargs)
    @@ -356,17 +356,17 @@ What kind of function can be a callback?
     
     
       def callback1(resource, event, trigger, **kwargs):
    -      print 'module callback'
    +      print('module callback')
     
     
       class MyCallback(object):
     
           def callback2(self, resource, event, trigger, **kwargs):
    -          print 'object callback'
    +          print('object callback')
     
           @classmethod
           def callback3(cls, resource, event, trigger, **kwargs):
    -          print 'class callback'
    +          print('class callback')
     
     
       c = MyCallback()
    @@ -376,15 +376,15 @@ What kind of function can be a callback?
     
       def do_notify():
           def nested_subscribe(resource, event, trigger, **kwargs):
    -          print 'nested callback'
    +          print('nested callback')
     
           registry.subscribe(nested_subscribe, resources.ROUTER, events.BEFORE_CREATE)
     
           kwargs = {'foo': 'bar'}
           registry.notify(resources.ROUTER, events.BEFORE_CREATE, do_notify, **kwargs)
     
     
    -  print 'Notifying...'
    +  print('Notifying...')
       do_notify()
     
     And the output is going to be:
    
  • doc/source/devref/contribute.rst +22 −0 modified
    @@ -506,6 +506,28 @@ Extensions can be loaded in two ways:
        variable is commented.
     
     
    +Service Providers
    +~~~~~~~~~~~~~~~~~
    +
    +If your project uses service provider(s) the same way VPNAAS and LBAAS do, you
    +specify your service provider in your ``project_name.conf`` file like so::
    +
    +    [service_providers]
    +    # Must be in form:
    +    # service_provider=<service_type>:<name>:<driver>[:default][,...]
    +
    +In order for Neutron to load this correctly, make sure you do the following in
    +your code::
    +
    +    from neutron.db import servicetype_db
    +    service_type_manager = servicetype_db.ServiceTypeManager.get_instance()
    +    service_type_manager.add_provider_configuration(
    +        YOUR_SERVICE_TYPE,
    +        pconf.ProviderConfiguration(YOUR_SERVICE_MODULE))
    +
    +This is typically required when you instantiate your service plugin class.
    +
    +
     Interface Drivers
     ~~~~~~~~~~~~~~~~~
     
    
  • doc/source/devref/fullstack_testing.rst +7 −3 modified
    @@ -83,6 +83,12 @@ When?
        stack testing can help here as the full stack infrastructure can restart an
        agent during the test.
     
    +Prerequisites
    +-------------
    +
    +Fullstack test suite assumes 240.0.0.0/3 range in root namespace of the test
    +machine is available for its usage.
    +
     Short Term Goals
     ----------------
     
    @@ -103,9 +109,6 @@ the fact as there will probably be something to copy/paste from.
     Long Term Goals
     ---------------
     
    -* Currently we configure the OVS agent with VLANs segmentation (Only because
    -  it's easier). This allows us to validate most functionality, but we might
    -  need to support tunneling somehow.
     * How will advanced services use the full stack testing infrastructure? Full
       stack tests infrastructure classes are expected to change quite a bit over
       the next coming months. This means that other repositories may import these
    @@ -116,3 +119,4 @@ Long Term Goals
       mechanism driver. We may modularize the topology configuration further to
       allow to rerun full stack tests against different Neutron plugins or ML2
       mechanism drivers.
    +* Add OVS ARP responder coverage when the gate supports OVS 2.1+
    
  • doc/source/devref/quality_of_service.rst +8 −2 modified
    @@ -84,8 +84,14 @@ for a port or a network:
     
     Each QoS policy contains zero or more QoS rules. A policy is then applied to a
     network or a port, making all rules of the policy applied to the corresponding
    -Neutron resource (for a network, applying a policy means that the policy will
    -be applied to all ports that belong to it).
    +Neutron resource.
    +
    +When applied through a network association, policy rules could apply or not
    +to neutron internal ports (like router, dhcp, load balancer, etc..). The QosRule
    +base object provides a default should_apply_to_port method which could be
    +overridden. In the future we may want to have a flag in QoSNetworkPolicyBinding
    +or QosRule to enforce such type of application (for example when limiting all
    +the ingress of routers devices on an external network automatically).
     
     From database point of view, following objects are defined in schema:
     
    
  • doc/source/devref/quota.rst +0 −6 modified
    @@ -164,12 +164,6 @@ difference between CountableResource and TrackedResource.
     Quota Enforcement
     -----------------
     
    -**NOTE: The reservation engine is currently not wired into the API controller
    -as issues have been discovered with multiple workers. For more information
    -see _bug1468134**
    -
    -.. _bug1468134: https://bugs.launchpad.net/neutron/+bug/1486134
    -
     Before dispatching a request to the plugin, the Neutron 'base' controller [#]_
     attempts to make a reservation for requested resource(s).
     Reservations are made by calling the make_reservation method in
    
  • doc/source/devref/sub_project_guidelines.rst +25 −11 modified
    @@ -130,19 +130,33 @@ needed.
     Sub-Project Release Process
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~
     
    +Only members of the `neutron-release
    +<https://review.openstack.org/#/admin/groups/150,members>`_ gerrit group can do
    +releases. Make sure you talk to a member of neutron-release to perform your
    +release.
    +
     To release a sub-project, follow the following steps:
     
    -* Only members of the `neutron-release
    -  <https://review.openstack.org/#/admin/groups/150,members>`_ gerrit group can
    -  do releases. Make sure you talk to a member of neutron-release to perform
    -  your release.
     * For projects which have not moved to post-versioning, we need to push an
    -  alpha tag to avoid pbr complaining. The neutron-release group will handle
    -  this.
    -* Modify setup.cfg to remove the version (if you have one), which moves your
    -  project to post-versioning, similar to all the other Neutron projects. You
    -  can skip this step if you don't have a version in setup.cfg.
    -* Have neutron-release push the tag to gerrit.
    -* Have neutron-release `tag the release
    +  alpha tag to avoid pbr complaining. A member of the neutron-release group
    +  will handle this.
    +* A sub-project owner should modify setup.cfg to remove the version (if you
    +  have one), which moves your project to post-versioning, similar to all the
    +  other Neutron projects. You can skip this step if you don't have a version in
    +  setup.cfg.
    +* A member of neutron-release will then `tag the release
       <http://docs.openstack.org/infra/manual/drivers.html#tagging-a-release>`_,
       which will release the code to PyPi.
    +* The releases will now be on PyPi. A sub-project owner should verify this by
    +  going to an URL similar to
    +  `this <https://pypi.python.org/pypi/networking-odl>`_.
    +* A sub-project owner should next go to Launchpad and release this version
    +  using the "Release Now" button for the release itself.
    +* A sub-project owner should update any bugs that were fixed with this
    +  release to "Fix Released" in Launchpad.
    +* A sub-project owner should add the tarball to the Launchpad page for the
    +  release using the "Add download file" link.
    +* A sub-project owner should add the next milestone to the Launchpad series, or
    +  if a new series is required, create the new series and a new milestone.
    +* Finally a sub-project owner should send an email to the openstack-announce
    +  mailing list announcing the new release.
    
  • etc/l3_agent.ini +1 −0 modified
    @@ -64,6 +64,7 @@
     # Name of bridge used for external network traffic. This should be set to
     # empty value for the linux bridge. when this parameter is set, each L3 agent
     # can be associated with no more than one external network.
    +# This option is deprecated and will be removed in the M release.
     # external_network_bridge = br-ex
     
     # TCP Port used by Neutron metadata server
    
  • etc/neutron.conf +7 −13 modified
    @@ -190,9 +190,9 @@
     
     # =========== items for agent scheduler extension =============
     # Driver to use for scheduling network to DHCP agent
    -# network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
    +# network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
     # Driver to use for scheduling router to a default L3 agent
    -# router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
    +# router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
     # Driver to use for scheduling a loadbalancer pool to an lbaas agent
     # loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
     
    @@ -306,19 +306,13 @@
     # ========== end of items for VLAN trunking networks ==========
     
     # =========== WSGI parameters related to the API server ==============
    -# Number of separate worker processes to spawn.  A value of 0 runs the
    -# worker thread in the current process.  Greater than 0 launches that number of
    -# child processes as workers.  The parent process manages them.  If not
    -# specified, the default value is equal to the number of CPUs available to
    -# achieve best performance.
    +# Number of separate API worker processes to spawn. If not specified or < 1,
    +# the default value is equal to the number of CPUs available.
     # api_workers = <number of CPUs>
     
    -# Number of separate RPC worker processes to spawn.  The default, 0, runs the
    -# worker thread in the current process.  Greater than 0 launches that number of
    -# child processes as RPC workers.  The parent process manages them.
    -# This feature is experimental until issues are addressed and testing has been
    -# enabled for various plugins for compatibility.
    -# rpc_workers = 0
    +# Number of separate RPC worker processes to spawn. If not specified or < 1,
    +# a single RPC worker process is spawned by the parent process.
    +# rpc_workers = 1
     
     # Timeout for client connections socket operations. If an
     # incoming connection is idle for this number of seconds it
    
  • etc/neutron/plugins/cisco/cisco_cfg_agent.ini +0 −15 removed
    @@ -1,15 +0,0 @@
    -[cfg_agent]
    -# (IntOpt) Interval in seconds for processing of service updates.
    -# That is when the config agent's process_services() loop executes
    -# and it lets each service helper to process its service resources.
    -# rpc_loop_interval = 10
    -
    -# (StrOpt) Period-separated module path to the routing service helper class.
    -# routing_svc_helper_class = neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper.RoutingServiceHelper
    -
    -# (IntOpt) Timeout value in seconds for connecting to a hosting device.
    -# device_connection_timeout = 30
    -
    -# (IntOpt) The time in seconds until a backlogged hosting device is
    -# presumed dead or booted to an error state.
    -# hosting_device_dead_timeout = 300
    
  • etc/neutron/plugins/cisco/cisco_plugins.ini +0 −107 removed
    @@ -1,107 +0,0 @@
    -[cisco]
    -
    -# (StrOpt) A short prefix to prepend to the VLAN number when creating a
    -# VLAN interface. For example, if an interface is being created for
    -# VLAN 2001 it will be named 'q-2001' using the default prefix.
    -#
    -# vlan_name_prefix = q-
    -# Example: vlan_name_prefix = vnet-
    -
    -# (StrOpt) A short prefix to prepend to the VLAN number when creating a
    -# provider VLAN interface. For example, if an interface is being created
    -# for provider VLAN 3003 it will be named 'p-3003' using the default prefix.
    -#
    -# provider_vlan_name_prefix = p-
    -# Example: provider_vlan_name_prefix = PV-
    -
    -# (BoolOpt) A flag indicating whether Openstack networking should manage the
    -# creation and removal of VLAN interfaces for provider networks on the Nexus
    -# switches. If the flag is set to False then Openstack will not create or
    -# remove VLAN interfaces for provider networks, and the administrator needs
    -# to manage these interfaces manually or by external orchestration.
    -#
    -# provider_vlan_auto_create = True
    -
    -# (BoolOpt) A flag indicating whether Openstack networking should manage
    -# the adding and removing of provider VLANs from trunk ports on the Nexus
    -# switches. If the flag is set to False then Openstack will not add or
    -# remove provider VLANs from trunk ports, and the administrator needs to
    -# manage these operations manually or by external orchestration.
    -#
    -# provider_vlan_auto_trunk = True
    -
    -# (StrOpt) Period-separated module path to the model class to use for
    -# the Cisco neutron plugin.
    -#
    -# model_class = neutron.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2
    -
    -# (BoolOpt) A flag to enable Layer 3 support on the Nexus switches.
    -# Note: This feature is not supported on all models/versions of Cisco
    -# Nexus switches. To use this feature, all of the Nexus switches in the
    -# deployment must support it.
    -# nexus_l3_enable = False
    -
    -# (BoolOpt) A flag to enable round robin scheduling of routers for SVI.
    -# svi_round_robin = False
    -
    -# Cisco Nexus Switch configurations.
    -# Each switch to be managed by Openstack Neutron must be configured here.
    -#
    -# N1KV Format.
    -# [N1KV:<IP address of VSM>]
    -# username=<credential username>
    -# password=<credential password>
    -#
    -# Example:
    -# [N1KV:2.2.2.2]
    -# username=admin
    -# password=mySecretPassword
    -
    -[cisco_n1k]
    -
    -# (StrOpt) Specify the name of the integration bridge to which the VIFs are
    -# attached.
    -# Default value: br-int
    -# integration_bridge = br-int
    -
    -# (StrOpt) Name of the policy profile to be associated with a port when no
    -# policy profile is specified during port creates.
    -# Default value: service_profile
    -# default_policy_profile = service_profile
    -
    -# (StrOpt) Name of the policy profile to be associated with a port owned by
    -# network node (dhcp, router).
    -# Default value: dhcp_pp
    -# network_node_policy_profile = dhcp_pp
    -
    -# (StrOpt) Name of the network profile to be associated with a network when no
    -# network profile is specified during network creates. Admin should pre-create
    -# a network profile with this name.
    -# Default value: default_network_profile
    -# default_network_profile = network_pool
    -
    -# (IntOpt) Time in seconds for which the plugin polls the VSM for updates in
    -# policy profiles.
    -# Default value: 60
    -# poll_duration = 60
    -
    -# (BoolOpt) Specify whether tenants are restricted from accessing all the
    -# policy profiles.
    -# Default value: False, indicating all tenants can access all policy profiles.
    -#
    -# restrict_policy_profiles = False
    -
    -# (IntOpt) Number of threads to use to make HTTP requests to the VSM.
    -# Default value: 4
    -# http_pool_size = 4
    -
    -# (IntOpt) Timeout duration in seconds for the http request
    -# Default value: 15
    -# http_timeout = 15
    -
    -# (BoolOpt) Specify whether tenants are restricted from accessing network
    -# profiles belonging to other tenants.
    -# Default value: True, indicating other tenants cannot access network
    -# profiles belonging to a tenant.
    -#
    -# restrict_network_profiles = True
    
  • etc/neutron/plugins/cisco/cisco_router_plugin.ini +0 −76 removed
    @@ -1,76 +0,0 @@
    -[general]
    -#(IntOpt) Time in seconds between renewed scheduling attempts of non-scheduled routers
    -# backlog_processing_interval = 10
    -
    -#(StrOpt) Name of the L3 admin tenant
    -# l3_admin_tenant = L3AdminTenant
    -
    -#(StrOpt) Name of management network for hosting device configuration
    -# management_network = osn_mgmt_nw
    -
    -#(StrOpt) Default security group applied on management port
    -# default_security_group = mgmt_sec_grp
    -
    -#(IntOpt) Seconds of no status update until a cfg agent is considered down
    -# cfg_agent_down_time = 60
    -
    -#(StrOpt) Path to templates for hosting devices
    -# templates_path = /opt/stack/data/neutron/cisco/templates
    -
    -#(StrOpt) Path to config drive files for service VM instances
    -# service_vm_config_path = /opt/stack/data/neutron/cisco/config_drive
    -
    -#(BoolOpt) Ensure that Nova is running before attempting to create any VM
    -# ensure_nova_running = True
    -
    -[hosting_devices]
    -# Settings coupled to CSR1kv VM devices
    -# -------------------------------------
    -#(StrOpt) Name of Glance image for CSR1kv
    -# csr1kv_image = csr1kv_openstack_img
    -
    -#(StrOpt) UUID of Nova flavor for CSR1kv
    -# csr1kv_flavor = 621
    -
    -#(StrOpt) Plugging driver for CSR1kv
    -# csr1kv_plugging_driver = neutron.plugins.cisco.l3.plugging_drivers.n1kv_trunking_driver.N1kvTrunkingPlugDriver
    -
    -#(StrOpt) Hosting device driver for CSR1kv
    -# csr1kv_device_driver = neutron.plugins.cisco.l3.hosting_device_drivers.csr1kv_hd_driver.CSR1kvHostingDeviceDriver
    -
    -#(StrOpt) Config agent router service driver for CSR1kv
    -# csr1kv_cfgagent_router_driver = neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver.CSR1kvRoutingDriver
    -
    -#(StrOpt) Configdrive template file for CSR1kv
    -# csr1kv_configdrive_template = csr1kv_cfg_template
    -
    -#(IntOpt) Booting time in seconds before a CSR1kv becomes operational
    -# csr1kv_booting_time = 420
    -
    -#(StrOpt) Username to use for CSR1kv configurations
    -# csr1kv_username = stack
    -
    -#(StrOpt) Password to use for CSR1kv configurations
    -# csr1kv_password = cisco
    -
    -[n1kv]
    -# Settings coupled to inter-working with N1kv plugin
    -# --------------------------------------------------
    -#(StrOpt) Name of N1kv port profile for management ports
    -# management_port_profile = osn_mgmt_pp
    -
    -#(StrOpt) Name of N1kv port profile for T1 ports (i.e., ports carrying traffic
    -# from VXLAN segmented networks).
    -# t1_port_profile = osn_t1_pp
    -
    -#(StrOpt) Name of N1kv port profile for T2 ports (i.e., ports carrying traffic
    -# from VLAN segmented networks).
    -# t2_port_profile = osn_t2_pp
    -
    -#(StrOpt) Name of N1kv network profile for T1 networks (i.e., trunk networks
    -# for VXLAN segmented traffic).
    -# t1_network_profile = osn_t1_np
    -
    -#(StrOpt) Name of N1kv network profile for T2 networks (i.e., trunk networks
    -# for VLAN segmented traffic).
    -# t2_network_profile = osn_t2_np
    
  • etc/neutron/plugins/ml2/openvswitch_agent.ini +27 −1 modified
    @@ -54,8 +54,28 @@
     # ovsdb_connection = tcp:127.0.0.1:6640
     
     # (StrOpt) OpenFlow interface to use.
    -# 'ovs-ofctl' is currently the only available choice.
    +# 'ovs-ofctl' or 'native'.
     # of_interface = ovs-ofctl
    +#
    +# (IPOpt)
    +# Address to listen on for OpenFlow connections.
    +# Used only for 'native' driver.
    +# of_listen_address = 127.0.0.1
    +#
    +# (IntOpt)
    +# Port to listen on for OpenFlow connections.
    +# Used only for 'native' driver.
    +# of_listen_port = 6633
    +#
    +# (IntOpt)
    +# Timeout in seconds to wait for the local switch connecting the controller.
    +# Used only for 'native' driver.
    +# of_connect_timeout=30
    +#
    +# (IntOpt)
    +# Timeout in seconds to wait for a single OpenFlow request.
    +# Used only for 'native' driver.
    +# of_request_timeout=10
     
     # (StrOpt) ovs datapath to use.
     # 'system' is the default value and corresponds to the kernel datapath.
    @@ -143,6 +163,12 @@
     #
     # extensions =
     
    +# (BoolOpt) Set or un-set the checksum on outgoing IP packet
    +# carrying GRE/VXLAN tunnel. The default value is False.
    +#
    +# tunnel_csum = False
    +
    +
     [securitygroup]
     # Firewall driver for realizing neutron security group function.
     # firewall_driver = neutron.agent.firewall.NoopFirewallDriver
    
  • etc/neutron/rootwrap.d/ebtables.filters +0 −2 modified
    @@ -8,6 +8,4 @@
     
     [Filters]
     
    -# neutron/agent/linux/ebtables_driver.py
     ebtables: CommandFilter, ebtables, root
    -ebtablesEnv: EnvFilter, ebtables, root, EBTABLES_ATOMIC_FILE=
    
  • etc/neutron/rootwrap.d/openvswitch-plugin.filters +1 −0 modified
    @@ -12,6 +12,7 @@
     # unclear whether both variants are necessary, but I'm transliterating
     # from the old mechanism
     ovs-vsctl: CommandFilter, ovs-vsctl, root
    +# NOTE(yamamoto): of_interface=native doesn't use ovs-ofctl
     ovs-ofctl: CommandFilter, ovs-ofctl, root
     kill_ovsdb_client: KillFilter, root, /usr/bin/ovsdb-client, -9
     ovsdb-client: CommandFilter, ovsdb-client, root
    
  • etc/policy.json +3 −0 modified
    @@ -56,7 +56,9 @@
         "update_network:router:external": "rule:admin_only",
         "delete_network": "rule:admin_or_owner",
     
    +    "network_device": "field:port:device_owner=~^network:",
         "create_port": "",
    +    "create_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    @@ -71,6 +73,7 @@
         "get_port:binding:host_id": "rule:admin_only",
         "get_port:binding:profile": "rule:admin_only",
         "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    +    "update_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
         "update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    
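The two `policy.json` lines above are the heart of the CVE-2015-5240 fix: `network_device` is a regex field match on a port's `device_owner`, and the new `create_port:device_owner` / `update_port:device_owner` rules deny that match to ordinary tenants, so a user can no longer flip a port to `network:*` to dodge the anti-spoofing rules. A minimal sketch of the rule's boolean logic (the helper names here are hypothetical; real evaluation is performed by the oslo.policy engine):

```python
import re

# Stand-in for the "field:port:device_owner=~^network:" field match.
NETWORK_DEVICE_RE = re.compile(r"^network:")

def is_network_device(port):
    """True when the port claims a reserved network: device_owner."""
    return bool(NETWORK_DEVICE_RE.match(port.get("device_owner", "")))

def may_set_device_owner(port, is_admin_or_network_owner):
    """Mirror of 'not rule:network_device or rule:admin_or_network_owner'."""
    return (not is_network_device(port)) or is_admin_or_network_owner

# A regular tenant may not masquerade as infrastructure...
assert may_set_device_owner({"device_owner": "network:dhcp"}, False) is False
# ...but may set ordinary owners, and admins may set anything:
assert may_set_device_owner({"device_owner": "compute:nova"}, False) is True
assert may_set_device_owner({"device_owner": "network:dhcp"}, True) is True
```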
  • neutron/agent/common/ovs_lib.py+5 11 modified
    @@ -152,7 +152,7 @@ def __init__(self, br_name, datapath_type=constants.OVS_DATAPATH_SYSTEM):
             super(OVSBridge, self).__init__()
             self.br_name = br_name
             self.datapath_type = datapath_type
    -        self.agent_uuid_stamp = '0x0'
    +        self.agent_uuid_stamp = 0
     
         def set_agent_uuid_stamp(self, val):
             self.agent_uuid_stamp = val
    @@ -195,15 +195,6 @@ def create(self, secure_mode=False):
         def destroy(self):
             self.delete_bridge(self.br_name)
     
    -    def reset_bridge(self, secure_mode=False):
    -        with self.ovsdb.transaction() as txn:
    -            txn.add(self.ovsdb.del_br(self.br_name))
    -            txn.add(self.ovsdb.add_br(self.br_name,
    -                                      datapath_type=self.datapath_type))
    -            if secure_mode:
    -                txn.add(self.ovsdb.set_fail_mode(self.br_name,
    -                                                 FAILMODE_SECURE))
    -
         def add_port(self, port_name, *interface_attr_tuples):
             with self.ovsdb.transaction() as txn:
                 txn.add(self.ovsdb.add_port(self.br_name, port_name))
    @@ -299,7 +290,8 @@ def deferred(self, **kwargs):
         def add_tunnel_port(self, port_name, remote_ip, local_ip,
                             tunnel_type=p_const.TYPE_GRE,
                             vxlan_udp_port=p_const.VXLAN_UDP_PORT,
    -                        dont_fragment=True):
    +                        dont_fragment=True,
    +                        tunnel_csum=False):
             attrs = [('type', tunnel_type)]
             # TODO(twilson) This is an OrderedDict solely to make a test happy
             options = collections.OrderedDict()
    @@ -314,6 +306,8 @@ def add_tunnel_port(self, port_name, remote_ip, local_ip,
             options['local_ip'] = local_ip
             options['in_key'] = 'flow'
             options['out_key'] = 'flow'
    +        if tunnel_csum:
    +            options['csum'] = str(tunnel_csum).lower()
             attrs.append(('options', options))
     
             return self.add_port(port_name, *attrs)
    
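The `add_tunnel_port` change threads a new `tunnel_csum` flag into the OVSDB interface options: when enabled, the options map gains a `csum` key with the lowercase string `"true"` that OVSDB expects. A self-contained sketch of that option-building step (function name is illustrative, not Neutron API):

```python
import collections

def build_tunnel_options(local_ip, remote_ip, tunnel_csum=False):
    # Simplified mirror of OVSBridge.add_tunnel_port's option handling.
    options = collections.OrderedDict()
    options['remote_ip'] = remote_ip
    options['local_ip'] = local_ip
    options['in_key'] = 'flow'
    options['out_key'] = 'flow'
    if tunnel_csum:
        # OVSDB expects the string "true", not a Python bool.
        options['csum'] = str(tunnel_csum).lower()
    return options

opts = build_tunnel_options('10.0.0.1', '10.0.0.2', tunnel_csum=True)
assert opts['csum'] == 'true'
assert 'csum' not in build_tunnel_options('10.0.0.1', '10.0.0.2')
```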
  • neutron/agent/dhcp/agent.py+21 16 modified
    @@ -51,15 +51,16 @@ class DhcpAgent(manager.Manager):
         """
         target = oslo_messaging.Target(version='1.0')
     
    -    def __init__(self, host=None):
    +    def __init__(self, host=None, conf=None):
             super(DhcpAgent, self).__init__(host=host)
             self.needs_resync_reasons = collections.defaultdict(list)
    -        self.conf = cfg.CONF
    +        self.conf = conf or cfg.CONF
             self.cache = NetworkCache()
             self.dhcp_driver_cls = importutils.import_class(self.conf.dhcp_driver)
             ctx = context.get_admin_context_without_session()
             self.plugin_rpc = DhcpPluginApi(topics.PLUGIN,
    -                                        ctx, self.conf.use_namespaces)
    +                                        ctx, self.conf.use_namespaces,
    +                                        self.conf.host)
             # create dhcp dir to store dhcp info
             dhcp_dir = os.path.dirname("/%s/dhcp/" % self.conf.state_path)
             utils.ensure_dir(dhcp_dir)
    @@ -136,11 +137,11 @@ def call_driver(self, action, network, **action_kwargs):
                     LOG.exception(_LE('Unable to %(action)s dhcp for %(net_id)s.'),
                                   {'net_id': network.id, 'action': action})
     
    -    def schedule_resync(self, reason, network=None):
    +    def schedule_resync(self, reason, network_id=None):
             """Schedule a resync for a given network and reason. If no network is
             specified, resync all networks.
             """
    -        self.needs_resync_reasons[network].append(reason)
    +        self.needs_resync_reasons[network_id].append(reason)
     
         @utils.synchronized('dhcp-agent')
         def sync_state(self, networks=None):
    @@ -149,7 +150,7 @@ def sync_state(self, networks=None):
             """
             only_nets = set([] if (not networks or None in networks) else networks)
             LOG.info(_LI('Synchronizing state'))
    -        pool = eventlet.GreenPool(cfg.CONF.num_sync_threads)
    +        pool = eventlet.GreenPool(self.conf.num_sync_threads)
             known_network_ids = set(self.cache.get_network_ids())
     
             try:
    @@ -172,7 +173,11 @@ def sync_state(self, networks=None):
                 LOG.info(_LI('Synchronizing state complete'))
     
             except Exception as e:
    -            self.schedule_resync(e)
    +            if only_nets:
    +                for network_id in only_nets:
    +                    self.schedule_resync(e, network_id)
    +            else:
    +                self.schedule_resync(e)
                 LOG.exception(_LE('Unable to sync network state.'))
     
         @utils.exception_logger()
    @@ -399,9 +404,9 @@ class DhcpPluginApi(object):
     
         """
     
    -    def __init__(self, topic, context, use_namespaces):
    +    def __init__(self, topic, context, use_namespaces, host):
             self.context = context
    -        self.host = cfg.CONF.host
    +        self.host = host
             self.use_namespaces = use_namespaces
             target = oslo_messaging.Target(
                     topic=topic,
    @@ -537,21 +542,21 @@ def get_state(self):
     
     
     class DhcpAgentWithStateReport(DhcpAgent):
    -    def __init__(self, host=None):
    -        super(DhcpAgentWithStateReport, self).__init__(host=host)
    +    def __init__(self, host=None, conf=None):
    +        super(DhcpAgentWithStateReport, self).__init__(host=host, conf=conf)
             self.state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)
             self.agent_state = {
                 'binary': 'neutron-dhcp-agent',
                 'host': host,
                 'topic': topics.DHCP_AGENT,
                 'configurations': {
    -                'dhcp_driver': cfg.CONF.dhcp_driver,
    -                'use_namespaces': cfg.CONF.use_namespaces,
    -                'dhcp_lease_duration': cfg.CONF.dhcp_lease_duration,
    -                'log_agent_heartbeats': cfg.CONF.AGENT.log_agent_heartbeats},
    +                'dhcp_driver': self.conf.dhcp_driver,
    +                'use_namespaces': self.conf.use_namespaces,
    +                'dhcp_lease_duration': self.conf.dhcp_lease_duration,
    +                'log_agent_heartbeats': self.conf.AGENT.log_agent_heartbeats},
                 'start_flag': True,
                 'agent_type': constants.AGENT_TYPE_DHCP}
    -        report_interval = cfg.CONF.AGENT.report_interval
    +        report_interval = self.conf.AGENT.report_interval
             self.use_call = True
             if report_interval:
                 self.heartbeat = loopingcall.FixedIntervalLoopingCall(
    
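The `schedule_resync` change switches the accumulator from arbitrary objects to explicit network IDs, with the key `None` meaning "resync everything". The behavior of that `defaultdict(list)` bucket can be sketched in isolation:

```python
import collections

# Per-network resync reasons; the None key means "all networks",
# matching the DhcpAgent semantics after this patch.
needs_resync_reasons = collections.defaultdict(list)

def schedule_resync(reason, network_id=None):
    needs_resync_reasons[network_id].append(reason)

schedule_resync("port update failed", "net-1")
schedule_resync("timeout", "net-1")
schedule_resync("rpc error")  # no network_id: global resync bucket

assert needs_resync_reasons["net-1"] == ["port update failed", "timeout"]
assert needs_resync_reasons[None] == ["rpc error"]
```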
  • neutron/agent/dhcp_agent.py+11 11 modified
    @@ -28,20 +28,20 @@
     from neutron import service as neutron_service
     
     
    -def register_options():
    -    config.register_interface_driver_opts_helper(cfg.CONF)
    -    config.register_use_namespaces_opts_helper(cfg.CONF)
    -    config.register_agent_state_opts_helper(cfg.CONF)
    -    cfg.CONF.register_opts(dhcp_config.DHCP_AGENT_OPTS)
    -    cfg.CONF.register_opts(dhcp_config.DHCP_OPTS)
    -    cfg.CONF.register_opts(dhcp_config.DNSMASQ_OPTS)
    -    cfg.CONF.register_opts(metadata_config.DRIVER_OPTS)
    -    cfg.CONF.register_opts(metadata_config.SHARED_OPTS)
    -    cfg.CONF.register_opts(interface.OPTS)
    +def register_options(conf):
    +    config.register_interface_driver_opts_helper(conf)
    +    config.register_use_namespaces_opts_helper(conf)
    +    config.register_agent_state_opts_helper(conf)
    +    conf.register_opts(dhcp_config.DHCP_AGENT_OPTS)
    +    conf.register_opts(dhcp_config.DHCP_OPTS)
    +    conf.register_opts(dhcp_config.DNSMASQ_OPTS)
    +    conf.register_opts(metadata_config.DRIVER_OPTS)
    +    conf.register_opts(metadata_config.SHARED_OPTS)
    +    conf.register_opts(interface.OPTS)
     
     
     def main():
    -    register_options()
    +    register_options(cfg.CONF)
         common_config.init(sys.argv[1:])
         config.setup_logging()
         server = neutron_service.Service.create(
    
  • neutron/agent/l2/extensions/qos.py+137 29 modified
    @@ -17,15 +17,20 @@
     import collections
     
     from oslo_concurrency import lockutils
    +from oslo_log import log as logging
     import six
     
     from neutron.agent.l2 import agent_extension
     from neutron.api.rpc.callbacks.consumer import registry
     from neutron.api.rpc.callbacks import events
     from neutron.api.rpc.callbacks import resources
     from neutron.api.rpc.handlers import resources_rpc
    +from neutron.common import exceptions
    +from neutron.i18n import _LW, _LI
     from neutron import manager
     
    +LOG = logging.getLogger(__name__)
    +
     
     @six.add_metaclass(abc.ABCMeta)
     class QosAgentDriver(object):
    @@ -35,36 +40,130 @@ class QosAgentDriver(object):
         for applying QoS Rules on a port.
         """
     
    +    # Each QoS driver should define the set of rule types that it supports, and
    +    # corresponding handlers with the following names:
    +    #
    +    # create_<type>
    +    # update_<type>
    +    # delete_<type>
    +    #
    +    # where <type> is one of VALID_RULE_TYPES
    +    SUPPORTED_RULES = set()
    +
         @abc.abstractmethod
         def initialize(self):
             """Perform QoS agent driver initialization.
             """
     
    -    @abc.abstractmethod
         def create(self, port, qos_policy):
             """Apply QoS rules on port for the first time.
     
             :param port: port object.
             :param qos_policy: the QoS policy to be applied on port.
             """
    -        #TODO(QoS) we may want to provide default implementations of calling
    -        #delete and then update
    +        self._handle_update_create_rules('create', port, qos_policy)
     
    -    @abc.abstractmethod
         def update(self, port, qos_policy):
             """Apply QoS rules on port.
     
             :param port: port object.
             :param qos_policy: the QoS policy to be applied on port.
             """
    +        self._handle_update_create_rules('update', port, qos_policy)
     
    -    @abc.abstractmethod
    -    def delete(self, port, qos_policy):
    +    def delete(self, port, qos_policy=None):
             """Remove QoS rules from port.
     
             :param port: port object.
             :param qos_policy: the QoS policy to be removed from port.
             """
    +        if qos_policy is None:
    +            rule_types = self.SUPPORTED_RULES
    +        else:
    +            rule_types = set(
    +                [rule.rule_type
    +                 for rule in self._iterate_rules(qos_policy.rules)])
    +
    +        for rule_type in rule_types:
    +            self._handle_rule_delete(port, rule_type)
    +
    +    def _iterate_rules(self, rules):
    +        for rule in rules:
    +            rule_type = rule.rule_type
    +            if rule_type in self.SUPPORTED_RULES:
    +                yield rule
    +            else:
    +                LOG.warning(_LW('Unsupported QoS rule type for %(rule_id)s: '
    +                                '%(rule_type)s; skipping'),
    +                            {'rule_id': rule.id, 'rule_type': rule_type})
    +
    +    def _handle_rule_delete(self, port, rule_type):
    +        handler_name = "".join(("delete_", rule_type))
    +        handler = getattr(self, handler_name)
    +        handler(port)
    +
    +    def _handle_update_create_rules(self, action, port, qos_policy):
    +        for rule in self._iterate_rules(qos_policy.rules):
    +            if rule.should_apply_to_port(port):
    +                handler_name = "".join((action, "_", rule.rule_type))
    +                handler = getattr(self, handler_name)
    +                handler(port, rule)
    +            else:
    +                LOG.debug("Port %(port)s excluded from QoS rule %(rule)s",
    +                          {'port': port, 'rule': rule.id})
    +
    +
    +class PortPolicyMap(object):
    +    def __init__(self):
    +        # we cannot use a dict of sets here because port dicts are not hashable
    +        self.qos_policy_ports = collections.defaultdict(dict)
    +        self.known_policies = {}
    +        self.port_policies = {}
    +
    +    def get_ports(self, policy):
    +        return self.qos_policy_ports[policy.id].values()
    +
    +    def get_policy(self, policy_id):
    +        return self.known_policies.get(policy_id)
    +
    +    def update_policy(self, policy):
    +        self.known_policies[policy.id] = policy
    +
    +    def has_policy_changed(self, port, policy_id):
    +        return self.port_policies.get(port['port_id']) != policy_id
    +
    +    def get_port_policy(self, port):
    +        policy_id = self.port_policies.get(port['port_id'])
    +        if policy_id:
    +            return self.get_policy(policy_id)
    +
    +    def set_port_policy(self, port, policy):
    +        """Attach a port to policy and return any previous policy on port."""
    +        port_id = port['port_id']
    +        old_policy = self.get_port_policy(port)
    +        self.known_policies[policy.id] = policy
    +        self.port_policies[port_id] = policy.id
    +        self.qos_policy_ports[policy.id][port_id] = port
    +        if old_policy and old_policy.id != policy.id:
    +            del self.qos_policy_ports[old_policy.id][port_id]
    +        return old_policy
    +
    +    def clean_by_port(self, port):
    +        """Detach port from policy and cleanup data we don't need anymore."""
    +        port_id = port['port_id']
    +        if port_id in self.port_policies:
    +            del self.port_policies[port_id]
    +            for qos_policy_id, port_dict in self.qos_policy_ports.items():
    +                if port_id in port_dict:
    +                    del port_dict[port_id]
    +                    if not port_dict:
    +                        self._clean_policy_info(qos_policy_id)
    +                    return
    +        raise exceptions.PortNotFound(port_id=port['port_id'])
    +
    +    def _clean_policy_info(self, qos_policy_id):
    +        del self.qos_policy_ports[qos_policy_id]
    +        del self.known_policies[qos_policy_id]
     
     
     class QosAgentExtension(agent_extension.AgentCoreResourceExtension):
    @@ -79,9 +178,7 @@ def initialize(self, connection, driver_type):
                 'neutron.qos.agent_drivers', driver_type)()
             self.qos_driver.initialize()
     
    -        # we cannot use a dict of sets here because port dicts are not hashable
    -        self.qos_policy_ports = collections.defaultdict(dict)
    -        self.known_ports = set()
    +        self.policy_map = PortPolicyMap()
     
             registry.subscribe(self._handle_notification, resources.QOS_POLICY)
             self._register_rpc_consumers(connection)
    @@ -111,39 +208,50 @@ def handle_port(self, context, port):
             Update events are handled in _handle_notification.
             """
             port_id = port['port_id']
    -        qos_policy_id = port.get('qos_policy_id')
    +        port_qos_policy_id = port.get('qos_policy_id')
    +        network_qos_policy_id = port.get('network_qos_policy_id')
    +        qos_policy_id = port_qos_policy_id or network_qos_policy_id
             if qos_policy_id is None:
                 self._process_reset_port(port)
                 return
     
    -        #Note(moshele) check if we have seen this port
    -        #and it has the same policy we do nothing.
    -        if (port_id in self.known_ports and
    -                port_id in self.qos_policy_ports[qos_policy_id]):
    +        if not self.policy_map.has_policy_changed(port, qos_policy_id):
                 return
     
    -        self.qos_policy_ports[qos_policy_id][port_id] = port
    -        self.known_ports.add(port_id)
             qos_policy = self.resource_rpc.pull(
                 context, resources.QOS_POLICY, qos_policy_id)
    -        self.qos_driver.create(port, qos_policy)
    +        if qos_policy is None:
    +            LOG.info(_LI("QoS policy %(qos_policy_id)s applied to port "
    +                         "%(port_id)s is not available on server, "
    +                         "it has been deleted. Skipping."),
    +                     {'qos_policy_id': qos_policy_id, 'port_id': port_id})
    +            self._process_reset_port(port)
    +        else:
    +            old_qos_policy = self.policy_map.set_port_policy(port, qos_policy)
    +            if old_qos_policy:
    +                self.qos_driver.delete(port, old_qos_policy)
    +                self.qos_driver.update(port, qos_policy)
    +            else:
    +                self.qos_driver.create(port, qos_policy)
     
         def delete_port(self, context, port):
             self._process_reset_port(port)
     
         def _process_update_policy(self, qos_policy):
    -        for port_id, port in self.qos_policy_ports[qos_policy.id].items():
    -            # TODO(QoS): for now, just reflush the rules on the port. Later, we
    -            # may want to apply the difference between the rules lists only.
    -            self.qos_driver.delete(port, None)
    +        old_qos_policy = self.policy_map.get_policy(qos_policy.id)
    +        for port in self.policy_map.get_ports(qos_policy):
    +            #NOTE(QoS): for now, just reflush the rules on the port. Later, we
    +            #           may want to apply the difference between the old and
    +            #           new rule lists.
    +            self.qos_driver.delete(port, old_qos_policy)
                 self.qos_driver.update(port, qos_policy)
    +            self.policy_map.update_policy(qos_policy)
     
         def _process_reset_port(self, port):
    -        port_id = port['port_id']
    -        if port_id in self.known_ports:
    -            self.known_ports.remove(port_id)
    -            for qos_policy_id, port_dict in self.qos_policy_ports.items():
    -                if port_id in port_dict:
    -                    del port_dict[port_id]
    -                    self.qos_driver.delete(port, None)
    -                    return
    +        try:
    +            self.policy_map.clean_by_port(port)
    +            self.qos_driver.delete(port)
    +        except exceptions.PortNotFound:
    +            LOG.info(_LI("QoS extension had no information about the "
    +                         "port %s that we were trying to reset"),
    +                     port['port_id'])
    
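The refactored QoS driver replaces the abstract `create`/`update`/`delete` methods with name-based dispatch: each supported rule type gets handlers named `<action>_<rule_type>`, looked up with `getattr`, while unsupported types are logged and skipped. A standalone sketch of that convention (the driver class and rule names here are illustrative):

```python
import logging

logging.basicConfig()
LOG = logging.getLogger(__name__)

class SketchQosDriver:
    # Illustrative rule set; real drivers declare SUPPORTED_RULES similarly.
    SUPPORTED_RULES = {'bandwidth_limit'}

    def __init__(self):
        self.calls = []

    def create_bandwidth_limit(self, port, rule):
        self.calls.append(('create', port, rule))

    def apply(self, action, port, rule_type, rule):
        if rule_type not in self.SUPPORTED_RULES:
            LOG.warning('Unsupported QoS rule type %s; skipping', rule_type)
            return
        # Dispatch to create_<type>, update_<type>, or delete_<type> by name.
        handler = getattr(self, '%s_%s' % (action, rule_type))
        handler(port, rule)

driver = SketchQosDriver()
driver.apply('create', 'port-1', 'bandwidth_limit', 'rule-1')
driver.apply('create', 'port-1', 'dscp_marking', 'rule-2')  # skipped
assert driver.calls == [('create', 'port-1', 'rule-1')]
```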
  • neutron/agent/l3/agent.py+9 3 modified
    @@ -80,7 +80,8 @@ class L3PluginApi(object):
                   to update_ha_routers_states
             1.5 - Added update_ha_routers_states
             1.6 - Added process_prefix_update
    -
    +        1.7 - DVR support: new L3 plugin methods added.
    +              - delete_agent_gateway_port
         """
     
         def __init__(self, topic, host):
    @@ -139,6 +140,12 @@ def process_prefix_update(self, context, prefix_update):
             return cctxt.call(context, 'process_prefix_update',
                               subnets=prefix_update)
     
    +    def delete_agent_gateway_port(self, context, fip_net):
    +        """Delete Floatingip_agent_gateway_port."""
    +        cctxt = self.client.prepare(version='1.7')
    +        return cctxt.call(context, 'delete_agent_gateway_port',
    +                          host=self.host, network_id=fip_net)
    +
     
     class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
                      ha.AgentMixin,
    @@ -517,10 +524,9 @@ def _process_routers_loop(self):
         @periodic_task.periodic_task(spacing=1)
         def periodic_sync_routers_task(self, context):
             self.process_services_sync(context)
    -        LOG.debug("Starting periodic_sync_routers_task - fullsync:%s",
    -                  self.fullsync)
             if not self.fullsync:
                 return
    +        LOG.debug("Starting fullsync periodic_sync_routers_task")
     
             # self.fullsync is True at this point. If an exception -- caught or
             # uncaught -- prevents setting it to False below then the next call
    
  • neutron/agent/l3/config.py+1 0 modified
    @@ -37,6 +37,7 @@
                           "running on a centralized node (or in single-host "
                           "deployments, e.g. devstack)")),
         cfg.StrOpt('external_network_bridge', default='br-ex',
    +               deprecated_for_removal=True,
                    help=_("Name of bridge used for external network "
                           "traffic.")),
         cfg.IntOpt('metadata_port',
    
  • neutron/agent/l3/dvr_edge_router.py+2 4 modified
    @@ -101,13 +101,11 @@ def _dvr_internal_network_removed(self, port):
             if not self.ex_gw_port:
                 return
     
    -        sn_port = self.get_snat_port_for_internal_port(port)
    +        sn_port = self.get_snat_port_for_internal_port(port, self.snat_ports)
             if not sn_port:
                 return
     
    -        is_this_snat_host = ('binding:host_id' in self.ex_gw_port) and (
    -            self.ex_gw_port['binding:host_id'] == self.host)
    -        if not is_this_snat_host:
    +        if not self._is_this_snat_host():
                 return
     
             snat_interface = self._get_snat_int_device_name(sn_port['id'])
    
  • neutron/agent/l3/dvr_local_router.py+12 1 modified
    @@ -137,6 +137,17 @@ def floating_ip_removed_dist(self, fip_cidr):
                     # destroying it.  The two could end up conflicting on
                     # creating/destroying interfaces and such.  I think I'd like a
                     # semaphore to sync creation/deletion of this namespace.
    +
    +                # NOTE (Swami): Since we are deleting the namespace here we
    +                # should be able to delete the floatingip agent gateway port
    +                # for the provided external net since we don't need it anymore.
    +                if self.fip_ns.agent_gateway_port:
    +                    LOG.debug('Removed last floatingip, so requesting the '
    +                              'server to delete Floatingip Agent Gateway port:'
    +                              '%s', self.fip_ns.agent_gateway_port)
    +                    self.agent.plugin_rpc.delete_agent_gateway_port(
    +                        self.agent.context,
    +                        self.fip_ns.agent_gateway_port['network_id'])
                     self.fip_ns.delete()
                     self.fip_ns = None
     
    @@ -303,7 +314,7 @@ def _dvr_internal_network_removed(self, port):
             if not self.ex_gw_port:
                 return
     
    -        sn_port = self.get_snat_port_for_internal_port(port)
    +        sn_port = self.get_snat_port_for_internal_port(port, self.snat_ports)
             if not sn_port:
                 return
     
    
  • neutron/agent/l3/dvr_router_base.py+8 2 modified
    @@ -26,12 +26,18 @@ def __init__(self, agent, host, *args, **kwargs):
             self.agent = agent
             self.host = host
     
    +    def process(self, agent):
    +        super(DvrRouterBase, self).process(agent)
    +        # NOTE:  Keep a copy of the interfaces around for when they are removed
    +        self.snat_ports = self.get_snat_interfaces()
    +
         def get_snat_interfaces(self):
             return self.router.get(l3_constants.SNAT_ROUTER_INTF_KEY, [])
     
    -    def get_snat_port_for_internal_port(self, int_port):
    +    def get_snat_port_for_internal_port(self, int_port, snat_ports=None):
             """Return the SNAT port for the given internal interface port."""
    -        snat_ports = self.get_snat_interfaces()
    +        if snat_ports is None:
    +            snat_ports = self.get_snat_interfaces()
             fixed_ip = int_port['fixed_ips'][0]
             subnet_id = fixed_ip['subnet_id']
             match_port = [p for p in snat_ports
    
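`get_snat_port_for_internal_port` now accepts an optional cached `snat_ports` list (the copy saved in `process()`), so the lookup still works when the interfaces have already been removed from the router dict. The matching itself keys on the subnet of the internal port's first fixed IP; a standalone sketch:

```python
def get_snat_port_for_internal_port(int_port, snat_ports):
    # Match on the subnet of the internal port's first fixed IP,
    # as the agent-side helper does.
    subnet_id = int_port['fixed_ips'][0]['subnet_id']
    match = [p for p in snat_ports
             if p['fixed_ips'][0]['subnet_id'] == subnet_id]
    if match:
        return match[0]

snat_ports = [
    {'id': 'snat-a', 'fixed_ips': [{'subnet_id': 'sub-1'}]},
    {'id': 'snat-b', 'fixed_ips': [{'subnet_id': 'sub-2'}]},
]
int_port = {'fixed_ips': [{'subnet_id': 'sub-2', 'ip_address': '10.0.2.4'}]}
assert get_snat_port_for_internal_port(int_port, snat_ports)['id'] == 'snat-b'
```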
  • neutron/agent/l3/ha.py+13 0 modified
    @@ -122,10 +122,23 @@ def enqueue_state_change(self, router_id, state):
                              'possibly deleted concurrently.'), router_id)
                 return
     
    +        self._configure_ipv6_ra_on_ext_gw_port_if_necessary(ri, state)
             self._update_metadata_proxy(ri, router_id, state)
             self._update_radvd_daemon(ri, state)
             self.state_change_notifier.queue_event((router_id, state))
     
    +    def _configure_ipv6_ra_on_ext_gw_port_if_necessary(self, ri, state):
    +        # If ipv6 is enabled on the platform, ipv6_gateway config flag is
    +        # not set and external_network associated to the router does not
    +        # include any IPv6 subnet, enable the gateway interface to accept
    +        # Router Advts from upstream router for default route.
    +        ex_gw_port_id = ri.ex_gw_port and ri.ex_gw_port['id']
    +        if state == 'master' and ex_gw_port_id and ri.use_ipv6:
    +            gateway_ips = ri._get_external_gw_ips(ri.ex_gw_port)
    +            if not ri.is_v6_gateway_set(gateway_ips):
    +                interface_name = ri.get_external_device_name(ex_gw_port_id)
    +                ri.driver.configure_ipv6_ra(ri.ns_name, interface_name)
    +
         def _update_metadata_proxy(self, ri, router_id, state):
             if state == 'master':
                 LOG.debug('Spawning metadata proxy for router %s', router_id)
    
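The new `_configure_ipv6_ra_on_ext_gw_port_if_necessary` gate only fires on the `master` HA transition, and only when the router has an external gateway, IPv6 is in use, and no IPv6 gateway address is already set. The condition can be sketched as a pure function (names and the simplified v6 test are assumptions, not Neutron API):

```python
import ipaddress

def is_v6_gateway_set(gateway_ips):
    # Simplified stand-in for RouterInfo.is_v6_gateway_set.
    return any(ipaddress.ip_address(ip).version == 6 for ip in gateway_ips)

def should_accept_ra(state, ex_gw_port_id, use_ipv6, gateway_ips):
    # Mirrors the gate: only the 'master' instance with an external
    # gateway, IPv6 enabled, and no configured v6 default gateway
    # turns on Router Advertisement acceptance on the gw interface.
    return (state == 'master' and bool(ex_gw_port_id) and use_ipv6
            and not is_v6_gateway_set(gateway_ips))

assert should_accept_ra('master', 'port-1', True, ['192.0.2.1']) is True
assert should_accept_ra('backup', 'port-1', True, ['192.0.2.1']) is False
assert should_accept_ra('master', 'port-1', True,
                        ['192.0.2.1', '2001:db8::1']) is False
```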
  • neutron/agent/l3/ha_router.py+2 5 modified
    @@ -187,7 +187,7 @@ def routes_updated(self):
     
         def _add_default_gw_virtual_route(self, ex_gw_port, interface_name):
             default_gw_rts = []
    -        gateway_ips, enable_ra_on_gw = self._get_external_gw_ips(ex_gw_port)
    +        gateway_ips = self._get_external_gw_ips(ex_gw_port)
             for gw_ip in gateway_ips:
                     # TODO(Carl) This is repeated everywhere.  A method would
                     # be nice.
    @@ -197,9 +197,6 @@ def _add_default_gw_virtual_route(self, ex_gw_port, interface_name):
                         default_gw, gw_ip, interface_name))
             instance.virtual_routes.gateway_routes = default_gw_rts
     
    -        if enable_ra_on_gw:
    -            self.driver.configure_ipv6_ra(self.ns_name, interface_name)
    -
         def _add_extra_subnet_onlink_routes(self, ex_gw_port, interface_name):
             extra_subnets = ex_gw_port.get('extra_subnets', [])
             instance = self._get_keepalived_instance()
    @@ -362,10 +359,10 @@ def external_gateway_removed(self, ex_gw_port, interface_name):
                                                            interface_name)
     
         def delete(self, agent):
    +        super(HaRouter, self).delete(agent)
             self.destroy_state_change_monitor(self.process_monitor)
             self.ha_network_removed()
             self.disable_keepalived()
    -        super(HaRouter, self).delete(agent)
     
         def process(self, agent):
             super(HaRouter, self).process(agent)
    
  • neutron/agent/l3/router_info.py+16 7 modified
    @@ -202,6 +202,9 @@ def add_floating_ip(self, fip, interface_name, device):
         def remove_floating_ip(self, device, ip_cidr):
             device.delete_addr_and_conntrack_state(ip_cidr)
     
    +    def remove_external_gateway_ip(self, device, ip_cidr):
    +        device.delete_addr_and_conntrack_state(ip_cidr)
    +
         def get_router_cidrs(self, device):
             return set([addr['cidr'] for addr in device.addr.list()])
     
    @@ -475,7 +478,6 @@ def _plug_external_gateway(self, ex_gw_port, interface_name, ns_name):
     
         def _get_external_gw_ips(self, ex_gw_port):
             gateway_ips = []
    -        enable_ra_on_gw = False
             if 'subnets' in ex_gw_port:
                 gateway_ips = [subnet['gateway_ip']
                                for subnet in ex_gw_port['subnets']
    @@ -485,11 +487,7 @@ def _get_external_gw_ips(self, ex_gw_port):
                 if self.agent_conf.ipv6_gateway:
                     # ipv6_gateway configured, use address for default route.
                     gateway_ips.append(self.agent_conf.ipv6_gateway)
    -            else:
    -                # ipv6_gateway is also not configured.
    -                # Use RA for default route.
    -                enable_ra_on_gw = True
    -        return gateway_ips, enable_ra_on_gw
    +        return gateway_ips
     
         def _external_gateway_added(self, ex_gw_port, interface_name,
                                     ns_name, preserve_ips):
    @@ -501,7 +499,12 @@ def _external_gateway_added(self, ex_gw_port, interface_name,
             # will be added to the interface.
             ip_cidrs = common_utils.fixed_ip_cidrs(ex_gw_port['fixed_ips'])
     
    -        gateway_ips, enable_ra_on_gw = self._get_external_gw_ips(ex_gw_port)
    +        gateway_ips = self._get_external_gw_ips(ex_gw_port)
    +        enable_ra_on_gw = False
    +        if self.use_ipv6 and not self.is_v6_gateway_set(gateway_ips):
    +            # There is no IPv6 gw_ip, use RouterAdvt for default route.
    +            enable_ra_on_gw = True
    +
             self.driver.init_router_port(
                 interface_name,
                 ip_cidrs,
    @@ -538,6 +541,12 @@ def external_gateway_updated(self, ex_gw_port, interface_name):
         def external_gateway_removed(self, ex_gw_port, interface_name):
             LOG.debug("External gateway removed: port(%s), interface(%s)",
                       ex_gw_port, interface_name)
    +        device = ip_lib.IPDevice(interface_name, namespace=self.ns_name)
    +        for ip_addr in ex_gw_port['fixed_ips']:
    +            self.remove_external_gateway_ip(device,
    +                                            common_utils.ip_to_cidr(
    +                                                ip_addr['ip_address'],
    +                                                ip_addr['prefixlen']))
             self.driver.unplug(interface_name,
                                bridge=self.agent_conf.external_network_bridge,
                                namespace=self.ns_name,
    
  • neutron/agent/linux/async_process.py+1 1 modified
    @@ -50,7 +50,7 @@ class AsyncProcess(object):
         >>> time.sleep(5)
         >>> proc.stop()
         >>> for line in proc.iter_stdout():
    -    ...     print line
    +    ...     print(line)
         """
     
         def __init__(self, cmd, run_as_root=False, respawn_interval=None,
    
  • neutron/agent/linux/dhcp.py+29 2 modified
    @@ -1030,10 +1030,18 @@ def _setup_existing_dhcp_port(self, network, device_id, dhcp_subnets):
             # the following loop...
             port = None
     
    -        # Look for an existing DHCP for this network.
    +        # Look for an existing DHCP port for this network.
             for port in network.ports:
                 port_device_id = getattr(port, 'device_id', None)
                 if port_device_id == device_id:
    +                # If using gateway IPs on this port, we can skip the
    +                # following code, whose purpose is just to review and
    +                # update the Neutron-allocated IP addresses for the
    +                # port.
    +                if self.driver.use_gateway_ips:
    +                    return port
    +                # Otherwise break out, as we now have the DHCP port
    +                # whose subnets and addresses we need to review.
                     break
             else:
                 return None
    @@ -1090,13 +1098,21 @@ def _setup_new_dhcp_port(self, network, device_id, dhcp_subnets):
             LOG.debug('DHCP port %(device_id)s on network %(network_id)s'
                       ' does not yet exist. Creating new one.',
                       {'device_id': device_id, 'network_id': network.id})
    +
    +        # Make a list of the subnets that need a unique IP address for
    +        # this DHCP port.
    +        if self.driver.use_gateway_ips:
    +            unique_ip_subnets = []
    +        else:
    +            unique_ip_subnets = [dict(subnet_id=s) for s in dhcp_subnets]
    +
             port_dict = dict(
                 name='',
                 admin_state_up=True,
                 device_id=device_id,
                 network_id=network.id,
                 tenant_id=network.tenant_id,
    -            fixed_ips=[dict(subnet_id=s) for s in dhcp_subnets])
    +            fixed_ips=unique_ip_subnets)
             return self.plugin.create_dhcp_port({'port': port_dict})
     
         def setup_dhcp_port(self, network):
    @@ -1168,6 +1184,17 @@ def setup(self, network):
                     ip_cidr = '%s/%s' % (fixed_ip.ip_address, net.prefixlen)
                     ip_cidrs.append(ip_cidr)
     
    +        if self.driver.use_gateway_ips:
    +            # For each DHCP-enabled subnet, add that subnet's gateway
    +            # IP address to the Linux device for the DHCP port.
    +            for subnet in network.subnets:
    +                if not subnet.enable_dhcp:
    +                    continue
    +                gateway = subnet.gateway_ip
    +                if gateway:
    +                    net = netaddr.IPNetwork(subnet.cidr)
    +                    ip_cidrs.append('%s/%s' % (gateway, net.prefixlen))
    +
             if (self.conf.enable_isolated_metadata and
                 self.conf.use_namespaces):
                 ip_cidrs.append(METADATA_DEFAULT_CIDR)
    
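The `setup()` hunk above appends each DHCP-enabled subnet's gateway IP, with that subnet's prefix length, to the list of CIDRs configured on the DHCP port's Linux device. A standalone sketch of that selection (plain dicts stand in for Neutron subnet objects, and stdlib `ipaddress` replaces netaddr):

```python
import ipaddress

def gateway_ip_cidrs(subnets):
    """Collect '<gateway>/<prefixlen>' for every DHCP-enabled subnet.

    Sketch only: subnets are dicts with 'cidr', 'gateway_ip' and
    'enable_dhcp' keys, mirroring the attributes used in dhcp.py.
    """
    cidrs = []
    for subnet in subnets:
        if not subnet['enable_dhcp']:
            continue
        gateway = subnet['gateway_ip']
        if gateway:
            prefixlen = ipaddress.ip_network(subnet['cidr']).prefixlen
            cidrs.append('%s/%s' % (gateway, prefixlen))
    return cidrs
```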
  • neutron/agent/linux/ebtables_driver.py+0 290 removed
    @@ -1,290 +0,0 @@
    -# Copyright (c) 2015 OpenStack Foundation.
    -# All Rights Reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -#
    -
    -"""Implement ebtables rules using linux utilities."""
    -
    -import re
    -
    -from retrying import retry
    -
    -from oslo_config import cfg
    -from oslo_log import log as logging
    -
    -from neutron.common import utils
    -
    -ebtables_opts = [
    -    cfg.StrOpt('ebtables_path',
    -               default='$state_path/ebtables-',
    -               help=_('Location of temporary ebtables table files.')),
    -]
    -
    -CONF = cfg.CONF
    -CONF.register_opts(ebtables_opts)
    -
    -LOG = logging.getLogger(__name__)
    -
    -# Collection of regexes to parse ebtables output
    -_RE_FIND_BRIDGE_TABLE_NAME = re.compile(r'^Bridge table:[\s]*([a-z]+)$')
    -# get chain name, number of entries and policy name.
    -_RE_FIND_BRIDGE_CHAIN_INFO = re.compile(
    -    r'^Bridge chain:[\s]*(.*),[\s]*entries:[\s]*[0-9]+,[\s]*'
    -    r'policy:[\s]*([A-Z]+)$')
    -_RE_FIND_BRIDGE_RULE_COUNTERS = re.compile(
    -    r',[\s]*pcnt[\s]*=[\s]*([0-9]+)[\s]*--[\s]*bcnt[\s]*=[\s]*([0-9]+)$')
    -_RE_FIND_COMMIT_STATEMENT = re.compile(r'^COMMIT$')
    -_RE_FIND_COMMENTS_AND_BLANKS = re.compile(r'^#|^$')
    -_RE_FIND_APPEND_RULE = re.compile(r'-A (\S+) ')
    -
    -# Regexes to parse ebtables rule file input
    -_RE_RULES_FIND_TABLE_NAME = re.compile(r'^\*([a-z]+)$')
    -_RE_RULES_FIND_CHAIN_NAME = re.compile(r'^:(.*)[\s]+([A-Z]+)$')
    -_RE_RULES_FIND_RULE_LINE = re.compile(r'^\[([0-9]+):([0-9]+)\]')
    -
    -
    -def _process_ebtables_output(lines):
    -    """Process raw output of ebtables rule listing file.
    -
    -    Empty lines and comments are removed, and the ebtables listing
    -    output is converted into ebtables rules.
    -
    -    For example, if the raw ebtables list lines (input to this function) are:
    -
    -        Bridge table: filter
    -        Bridge chain: INPUT, entries: 0, policy: ACCEPT
    -        Bridge chain: FORWARD, entries: 0, policy: ACCEPT
    -        Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
    -
    -    The output then will be:
    -
    -        *filter
    -        :INPUT ACCEPT
    -        :FORWARD ACCEPT
    -        :OUTPUT ACCEPT
    -        COMMIT
    -
    -    Key point: ebtables rules listing output is not the same as the rules
    -               format for setting new rules.
    -
    -    """
    -    table = None
    -    chain = ''
    -    chains = []
    -    rules = []
    -
    -    for line in lines:
    -        if _RE_FIND_COMMENTS_AND_BLANKS.search(line):
    -            continue
    -        match = _RE_FIND_BRIDGE_RULE_COUNTERS.search(line)
    -        if table and match:
    -            rules.append('[%s:%s] -A %s %s' % (match.group(1),
    -                                               match.group(2),
    -                                               chain,
    -                                               line[:match.start()].strip()))
    -        match = _RE_FIND_BRIDGE_CHAIN_INFO.search(line)
    -        if match:
    -            chains.append(':%s %s' % (match.group(1), match.group(2)))
    -            chain = match.group(1)
    -            continue
    -        match = _RE_FIND_BRIDGE_TABLE_NAME.search(line)
    -        if match:
    -            table = '*%s' % match.group(1)
    -            continue
    -    return [table] + chains + rules + ['COMMIT']
    -
    -
    -def _match_rule_line(table, line):
    -    match = _RE_RULES_FIND_RULE_LINE.search(line)
    -    if table and match:
    -        args = line[match.end():].split()
    -        res = [(table, args)]
    -        if int(match.group(1)) > 0 and int(match.group(2)) > 0:
    -            p = _RE_FIND_APPEND_RULE
    -            rule = p.sub(r'-C \1 %s %s ', line[match.end() + 1:])
    -            args = (rule % (match.group(1), match.group(2))).split()
    -            res.append((table, args))
    -        return table, res
    -    else:
    -        return table, None
    -
    -
    -def _match_chain_name(table, tables, line):
    -    match = _RE_RULES_FIND_CHAIN_NAME.search(line)
    -    if table and match:
    -        if match.group(1) not in tables[table]:
    -            args = ['-N', match.group(1), '-P', match.group(2)]
    -        else:
    -            args = ['-P', match.group(1), match.group(2)]
    -        return table, (table, args)
    -    else:
    -        return table, None
    -
    -
    -def _match_table_name(table, line):
    -    match = _RE_RULES_FIND_TABLE_NAME.search(line)
    -    if match:
    -        # Initialize with current kernel table if we just start out
    -        table = match.group(1)
    -        return table, (table, ['--atomic-init'])
    -    else:
    -        return table, None
    -
    -
    -def _match_commit_statement(table, line):
    -    match = _RE_FIND_COMMIT_STATEMENT.search(line)
    -    if table and match:
    -        # Conclude by issuing the commit command
    -        return (table, ['--atomic-commit'])
    -    else:
    -        return None
    -
    -
    -def _process_ebtables_input(lines):
    -    """Import text ebtables rules. Similar to iptables-restore.
    -
    -    Was based on:
    -    http://sourceforge.net/p/ebtables/code/ci/
    -    3730ceb7c0a81781679321bfbf9eaa39cfcfb04e/tree/userspace/ebtables2/
    -    ebtables-save?format=raw
    -
    -    The function prepares and returns a list of tuples, each tuple consisting
    -    of a table name and ebtables arguments. The caller can then repeatedly call
    -    ebtables on that table with those arguments to get the rules applied.
    -
    -    For example, this input:
    -
    -        *filter
    -        :INPUT ACCEPT
    -        :FORWARD ACCEPT
    -        :OUTPUT ACCEPT
    -        :neutron-nwfilter-spoofing-fallb ACCEPT
    -        :neutron-nwfilter-OUTPUT ACCEPT
    -        :neutron-nwfilter-INPUT ACCEPT
    -        :neutron-nwfilter-FORWARD ACCEPT
    -        [0:0] -A INPUT -j neutron-nwfilter-INPUT
    -        [0:0] -A OUTPUT -j neutron-nwfilter-OUTPUT
    -        [0:0] -A FORWARD -j neutron-nwfilter-FORWARD
    -        [0:0] -A neutron-nwfilter-spoofing-fallb -j DROP
    -        COMMIT
    -
    -    ... produces this output:
    -
    -        ('filter', ['--atomic-init'])
    -        ('filter', ['-P', 'INPUT', 'ACCEPT'])
    -        ('filter', ['-P', 'FORWARD', 'ACCEPT'])
    -        ('filter', ['-P', 'OUTPUT', 'ACCEPT'])
    -        ('filter', ['-N', 'neutron-nwfilter-spoofing-fallb', '-P', 'ACCEPT'])
    -        ('filter', ['-N', 'neutron-nwfilter-OUTPUT', '-P', 'ACCEPT'])
    -        ('filter', ['-N', 'neutron-nwfilter-INPUT', '-P', 'ACCEPT'])
    -        ('filter', ['-N', 'neutron-nwfilter-FORWARD', '-P', 'ACCEPT'])
    -        ('filter', ['-A', 'INPUT', '-j', 'neutron-nwfilter-INPUT'])
    -        ('filter', ['-A', 'OUTPUT', '-j', 'neutron-nwfilter-OUTPUT'])
    -        ('filter', ['-A', 'FORWARD', '-j', 'neutron-nwfilter-FORWARD'])
    -        ('filter', ['-A', 'neutron-nwfilter-spoofing-fallb', '-j', 'DROP'])
    -        ('filter', ['--atomic-commit'])
    -
    -    """
    -    tables = {'filter': ['INPUT', 'FORWARD', 'OUTPUT'],
    -              'nat': ['PREROUTING', 'OUTPUT', 'POSTROUTING'],
    -              'broute': ['BROUTING']}
    -    table = None
    -
    -    ebtables_args = list()
    -    for line in lines.splitlines():
    -        if _RE_FIND_COMMENTS_AND_BLANKS.search(line):
    -            continue
    -        table, res = _match_rule_line(table, line)
    -        if res:
    -            ebtables_args.extend(res)
    -            continue
    -        table, res = _match_chain_name(table, tables, line)
    -        if res:
    -            ebtables_args.append(res)
    -            continue
    -        table, res = _match_table_name(table, line)
    -        if res:
    -            ebtables_args.append(res)
    -            continue
    -        res = _match_commit_statement(table, line)
    -        if res:
    -            ebtables_args.append(res)
    -            continue
    -
    -    return ebtables_args
    -
    -
    -@retry(wait_exponential_multiplier=1000, wait_exponential_max=10000,
    -       stop_max_delay=10000)
    -def _cmd_retry(func, *args, **kwargs):
    -    return func(*args, **kwargs)
    -
    -
    -def run_ebtables(namespace, execute, table, args):
    -    """Run ebtables utility, with retry if necessary.
    -
    -    Provide table name and list of additional arguments to ebtables.
    -
    -    """
    -    cmd = ['ebtables', '-t', table]
    -    if CONF.ebtables_path:
    -        f = '%s%s' % (CONF.ebtables_path, table)
    -        cmd += ['--atomic-file', f]
    -    cmd += args
    -    if namespace:
    -        cmd = ['ip', 'netns', 'exec', namespace] + cmd
    -    # TODO(jbrendel): The root helper is used for every ebtables command,
    -    #                 but as we use an atomic file we only need root for
    -    #                 init and commit commands.
    -    #                 But the generated file by init ebtables command is
    -    #                 only readable and writable by root.
    -    #
    -    # We retry the execution of ebtables in case of failure. Known issue:
    -    # See bug:    https://bugs.launchpad.net/nova/+bug/1316621
    -    # See patch:  https://review.openstack.org/#/c/140514/3
    -    return _cmd_retry(execute, cmd, **{"run_as_root": True})
    -
    -
    -def run_ebtables_multiple(namespace, execute, arg_list):
    -    """Run ebtables utility multiple times.
    -
    -    Similar to run(), but runs ebtables for every element in arg_list.
    -    Each arg_list element is a tuple containing the table name and a list
    -    of ebtables arguments.
    -
    -    """
    -    for table, args in arg_list:
    -        run_ebtables(namespace, execute, table, args)
    -
    -
    -@utils.synchronized('ebtables', external=True)
    -def ebtables_save(execute, tables_names, namespace=None):
    -    """Generate text output of the ebtables rules.
    -
    -    Based on:
    -    http://sourceforge.net/p/ebtables/code/ci/master/tree/userspace/ebtables2/
    -    ebtables-save?format=raw
    -
    -    """
    -    raw_outputs = (run_ebtables(namespace, execute,
    -                   t, ['-L', '--Lc']).splitlines() for t in tables_names)
    -    parsed_outputs = (_process_ebtables_output(lines) for lines in raw_outputs)
    -    return '\n'.join(l for lines in parsed_outputs for l in lines)
    -
    -
    -@utils.synchronized('ebtables', external=True)
    -def ebtables_restore(lines, execute, namespace=None):
    -    """Import text ebtables rules and apply."""
    -    ebtables_args = _process_ebtables_input(lines)
    -    run_ebtables_multiple(namespace, execute, ebtables_args)
    
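The removed driver's central point is that `ebtables -L --Lc` listing output is not valid input for setting rules, so it must be re-parsed. A compressed sketch of the table/chain portion of that conversion, using regexes equivalent to the ones deleted above (rule counters and individual rules are omitted for brevity):

```python
import re

_TABLE = re.compile(r'^Bridge table:\s*([a-z]+)$')
_CHAIN = re.compile(r'^Bridge chain:\s*(.*),\s*entries:\s*[0-9]+,\s*'
                    r'policy:\s*([A-Z]+)$')

def listing_to_rules(lines):
    """Convert ebtables listing output into restore-style rules.

    Sketch of the deleted _process_ebtables_output, covering only the
    table header and chain policies.
    """
    table, chains = None, []
    for line in lines:
        match = _TABLE.search(line)
        if match:
            table = '*%s' % match.group(1)
            continue
        match = _CHAIN.search(line)
        if match:
            chains.append(':%s %s' % (match.group(1), match.group(2)))
    return [table] + chains + ['COMMIT']
```

Run against the listing shown in the deleted docstring, this reproduces the `*filter` / `:CHAIN POLICY` / `COMMIT` form expected by the restore path.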
  • neutron/agent/linux/ebtables_manager.py+0 253 removed
    @@ -1,253 +0,0 @@
    -# Copyright (c) 2015 OpenStack Foundation.
    -# All Rights Reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -#
    -
    -"""
    -Implement a manager for ebtables rules.
    -
    -NOTE: The ebtables manager contains a lot of duplicated or very similar code
    -      from the iptables manager. An option would have been to refactor the
    -      iptables manager so that ebtables and iptables manager can share common
    -      code. However, the iptables manager was considered too brittle and
    -      in need of a larger re-work or full replacement in the future.
    -      Therefore, it was decided not to do any refactoring for now and to accept
    -      the code duplication.
    -
    -"""
    -
    -import inspect
    -import os
    -
    -from oslo_log import log as logging
    -
    -from neutron.i18n import _LW
    -
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -MAX_CHAIN_LEN_EBTABLES = 31
    -# NOTE(jbrendel): ebtables supports chain names of up to 31 characters, and
    -#                 we may append up to 12 characters ('-POSTROUTING') to
    -#                 prefix_chain, so we limit the prefix to 19 characters.
    -POSTROUTING_STR = '-POSTROUTING'
    -MAX_LEN_PREFIX_CHAIN = MAX_CHAIN_LEN_EBTABLES - len(POSTROUTING_STR)
    -
    -# When stripping or calculating string lengths, sometimes a '-' which separates
    -# name components needs to be considered.
    -DASH_STR_LEN = 1
    -
    -
    -def binary_name():
    -    """Grab the name of the binary we're running in."""
    -    return os.path.basename(inspect.stack()[-1][1])
    -
    -
    -def _get_prefix_chain(prefix_chain=None):
    -    """Determine the prefix chain."""
    -    if prefix_chain:
    -        return prefix_chain[:MAX_LEN_PREFIX_CHAIN]
    -    else:
    -        return binary_name()[:MAX_LEN_PREFIX_CHAIN]
    -
    -
    -def get_chain_name(chain_name, wrap=True, prefix_chain=None):
    -    """Determine the chain name."""
    -    if wrap:
    -        # Get the possible chain name length as a function of the
    -        # prefix name length.
    -        chain_len = (MAX_CHAIN_LEN_EBTABLES -
    -                     (len(_get_prefix_chain(prefix_chain)) + DASH_STR_LEN))
    -        return chain_name[:chain_len]
    -    else:
    -        return chain_name[:MAX_CHAIN_LEN_EBTABLES]
    -
    -
    -class EbtablesRule(object):
    -    """An ebtables rule.
    -
    -    You shouldn't need to use this class directly, it's only used by
    -    EbtablesManager.
    -
    -    """
    -
    -    def __init__(self, chain, rule, wrap=True, top=False,
    -                 prefix_chain=None):
    -        self.prefix_chain = _get_prefix_chain(prefix_chain)
    -        self.chain = get_chain_name(chain, wrap, prefix_chain)
    -        self.rule = rule
    -        self.wrap = wrap
    -        self.top = top
    -
    -    def __eq__(self, other):
    -        return ((self.chain == other.chain) and
    -                (self.rule == other.rule) and
    -                (self.top == other.top) and
    -                (self.wrap == other.wrap))
    -
    -    def __ne__(self, other):
    -        return not self == other
    -
    -    def __str__(self):
    -        if self.wrap:
    -            chain = '%s-%s' % (self.prefix_chain, self.chain)
    -        else:
    -            chain = self.chain
    -        return '-A %s %s' % (chain, self.rule)
    -
    -
    -class EbtablesTable(object):
    -    """An ebtables table."""
    -
    -    def __init__(self, prefix_chain=None):
    -        self.rules = []
    -        self.rules_to_remove = []
    -        self.chains = set()
    -        self.unwrapped_chains = set()
    -        self.chains_to_remove = set()
    -        self.prefix_chain = _get_prefix_chain(prefix_chain)
    -
    -    def add_chain(self, name, wrap=True):
    -        """Adds a named chain to the table.
    -
    -        The chain name is wrapped to be unique for the component creating
    -        it, so different components of Neutron can safely create identically
    -        named chains without interfering with one another.
    -
    -        At the moment, its wrapped name is <prefix chain>-<chain name>,
    -        so if neutron-server creates a chain named 'OUTPUT', it'll actually
    -        end up named 'neutron-server-OUTPUT'.
    -
    -        """
    -        name = get_chain_name(name, wrap, self.prefix_chain)
    -        if wrap:
    -            self.chains.add(name)
    -        else:
    -            self.unwrapped_chains.add(name)
    -
    -    def _select_chain_set(self, wrap):
    -        if wrap:
    -            return self.chains
    -        else:
    -            return self.unwrapped_chains
    -
    -    def ensure_remove_chain(self, name, wrap=True):
    -        """Ensure the chain is removed.
    -
    -        This removal "cascades". All rules in the chain are removed, as are
    -        all rules in other chains that jump to it.
    -        """
    -        self.remove_chain(name, wrap, log_not_found=False)
    -
    -    def remove_chain(self, name, wrap=True, log_not_found=True):
    -        """Remove named chain.
    -
    -        This removal "cascades". All rules in the chain are removed, as are
    -        all rules in other chains that jump to it.
    -
    -        If the chain is not found then this is merely logged.
    -
    -        """
    -        name = get_chain_name(name, wrap, self.prefix_chain)
    -        chain_set = self._select_chain_set(wrap)
    -
    -        if name not in chain_set:
    -            if log_not_found:
    -                LOG.warn(_LW('Attempted to remove chain %s '
    -                             'which does not exist'), name)
    -            return
    -
    -        chain_set.remove(name)
    -
    -        if not wrap:
    -            # non-wrapped chains and rules need to be dealt with specially,
    -            # so we keep a list of them to be iterated over in apply()
    -            self.chains_to_remove.add(name)
    -
    -            # first, add rules to remove that have a matching chain name
    -            self.rules_to_remove += [r for r in self.rules if r.chain == name]
    -
    -        # next, remove rules from list that have a matching chain name
    -        self.rules = [r for r in self.rules if r.chain != name]
    -
    -        if not wrap:
    -            jump_snippet = '-j %s' % name
    -            # next, add rules to remove that have a matching jump chain
    -            self.rules_to_remove += [r for r in self.rules
    -                                     if jump_snippet in r.rule]
    -        else:
    -            jump_snippet = '-j %s-%s' % (self.prefix_chain, name)
    -
    -        # finally, remove rules from list that have a matching jump chain
    -        self.rules = [r for r in self.rules
    -                      if jump_snippet not in r.rule]
    -
    -    def add_rule(self, chain, rule, wrap=True, top=False):
    -        """Add a rule to the table.
    -
    -        This is just like what you'd feed to ebtables, just without
    -        the '-A <chain name>' bit at the start.
    -
    -        However, if you need to jump to one of your wrapped chains,
    -        prepend its name with a '$' which will ensure the wrapping
    -        is applied correctly.
    -
    -        """
    -        chain = get_chain_name(chain, wrap, self.prefix_chain)
    -        if wrap and chain not in self.chains:
    -            raise LookupError(_('Unknown chain: %r') % chain)
    -
    -        if '$' in rule:
    -            rule = ' '.join(map(self._wrap_target_chain, rule.split(' ')))
    -
    -        self.rules.append(EbtablesRule(chain, rule, wrap, top,
    -                                       self.prefix_chain))
    -
    -    def remove_rule(self, chain, rule, wrap=True, top=False):
    -        """Remove a rule from a chain.
    -
    -        However, if the rule jumps to one of your wrapped chains,
    -        prepend its name with a '$' which will ensure the wrapping
    -        is applied correctly.
    -        """
    -        chain = get_chain_name(chain, wrap, self.prefix_chain)
    -        if '$' in rule:
    -            rule = ' '.join(map(self._wrap_target_chain, rule.split(' ')))
    -
    -        try:
    -            self.rules.remove(EbtablesRule(chain, rule, wrap, top,
    -                                           self.prefix_chain))
    -            if not wrap:
    -                self.rules_to_remove.append(
    -                    EbtablesRule(chain, rule, wrap, top,
    -                                 self.prefix_chain))
    -        except ValueError:
    -            LOG.warn(_LW('Tried to remove rule that was not there:'
    -                     ' %(chain)r %(rule)r %(wrap)r %(top)r'),
    -                     {'chain': chain, 'rule': rule,
    -                      'top': top, 'wrap': wrap})
    -
    -    def _wrap_target_chain(self, s):
    -        if s.startswith('$'):
    -            return ('%s-%s' % (self.prefix_chain, s[1:]))
    -        return s
    -
    -    def empty_chain(self, chain, wrap=True):
    -        """Remove all rules from a chain."""
    -        chain = get_chain_name(chain, wrap, self.prefix_chain)
    -        chained_rules = [rule for rule in self.rules
    -                         if rule.chain == chain and rule.wrap == wrap]
    -        for rule in chained_rules:
    -            self.rules.remove(rule)
    
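The name-wrapping arithmetic in the removed manager is easy to get wrong: ebtables caps chain names at 31 characters, and up to 12 characters (`-POSTROUTING`) may be appended to the prefix, so the prefix itself is capped at 19. A self-contained sketch of that truncation logic:

```python
# Constants mirror the deleted ebtables_manager values.
MAX_CHAIN_LEN_EBTABLES = 31
MAX_LEN_PREFIX_CHAIN = MAX_CHAIN_LEN_EBTABLES - len('-POSTROUTING')  # 19
DASH_STR_LEN = 1  # the '-' joining prefix and chain name

def wrapped_chain_name(chain, prefix):
    """Return '<prefix>-<chain>', truncated to fit in 31 characters."""
    prefix = prefix[:MAX_LEN_PREFIX_CHAIN]
    max_chain = MAX_CHAIN_LEN_EBTABLES - (len(prefix) + DASH_STR_LEN)
    return '%s-%s' % (prefix, chain[:max_chain])
```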
  • neutron/agent/linux/interface.py+48 6 modified
    @@ -64,6 +64,46 @@ def _validate_network_device_mtu(self):
                               'current_mtu': self.conf.network_device_mtu})
                 raise SystemExit(1)
     
    +    @property
    +    def use_gateway_ips(self):
    +        """Whether to use gateway IPs instead of unique IP allocations.
    +
    +        In each place where the DHCP agent runs, and for each subnet for
    +        which DHCP is handing out IP addresses, the DHCP port needs -
    +        at the Linux level - to have an IP address within that subnet.
    +        Generally this needs to be a unique Neutron-allocated IP
    +        address, because the subnet's underlying L2 domain is bridged
    +        across multiple compute hosts and network nodes, and for HA
    +        there may be multiple DHCP agents running on that same bridged
    +        L2 domain.
    +
    +        However, if the DHCP ports - on multiple compute/network nodes
    +        but for the same network - are _not_ bridged to each other,
    +        they do not each need a unique IP address.  Instead
    +        they can all share the same address from the relevant subnet.
    +        This works, without creating any ambiguity, because those
    +        ports are not all present on the same L2 domain, and because
    +        no data within the network is ever sent to that address.
    +        (DHCP requests are broadcast, and it is the network's job to
    +        ensure that such a broadcast will reach at least one of the
    +        available DHCP servers.  DHCP responses will be sent _from_
    +        the DHCP port address.)
    +
    +        Specifically, for networking backends where it makes sense,
    +        the DHCP agent allows all DHCP ports to use the subnet's
    +        gateway IP address, and thereby to completely avoid any unique
    +        IP address allocation.  This behaviour is selected by running
    +        the DHCP agent with a configured interface driver whose
    +        'use_gateway_ips' property is True.
    +
    +        When an operator deploys Neutron with an interface driver that
    +        makes use_gateway_ips True, they should also ensure that a
    +        gateway IP address is defined for each DHCP-enabled subnet,
    +        and that the gateway IP address doesn't change during the
    +        subnet's lifetime.
    +        """
    +        return False
    +
         def init_l3(self, device_name, ip_cidrs, namespace=None,
                     preserve_ips=[], gateway_ips=None,
                     clean_connections=False):
    @@ -143,14 +183,16 @@ def init_router_port(self,
             device = ip_lib.IPDevice(device_name, namespace=namespace)
     
             # Manage on-link routes (routes without an associated address)
    -        new_onlink_routes = set(s['cidr'] for s in extra_subnets or [])
    -        existing_onlink_routes = set(
    -            device.route.list_onlink_routes(n_const.IP_VERSION_4) +
    -            device.route.list_onlink_routes(n_const.IP_VERSION_6))
    -        for route in new_onlink_routes - existing_onlink_routes:
    +        new_onlink_cidrs = set(s['cidr'] for s in extra_subnets or [])
    +
    +        v4_onlink = device.route.list_onlink_routes(n_const.IP_VERSION_4)
    +        v6_onlink = device.route.list_onlink_routes(n_const.IP_VERSION_6)
    +        existing_onlink_cidrs = set(r['cidr'] for r in v4_onlink + v6_onlink)
    +
    +        for route in new_onlink_cidrs - existing_onlink_cidrs:
                 LOG.debug("adding onlink route(%s)", route)
                 device.route.add_onlink_route(route)
    -        for route in existing_onlink_routes - new_onlink_routes:
    +        for route in existing_onlink_cidrs - new_onlink_cidrs:
                 LOG.debug("deleting onlink route(%s)", route)
                 device.route.delete_onlink_route(route)
     
    
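The `init_router_port` change above switches the set difference from comparing whole route dicts to comparing their `cidr` values. The reconciliation itself is a plain two-way set difference, sketched here with dicts standing in for ip_lib's route listing:

```python
def reconcile_onlink_routes(desired_cidrs, existing_routes):
    """Return (cidrs_to_add, cidrs_to_delete) for on-link routes.

    Sketch: existing_routes are dicts carrying at least a 'cidr' key,
    as returned by a route listing.
    """
    desired = set(desired_cidrs)
    existing = set(route['cidr'] for route in existing_routes)
    return sorted(desired - existing), sorted(existing - desired)
```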
  • neutron/agent/linux/ip_lib.py+129 45 modified
    @@ -20,6 +20,7 @@
     from oslo_log import log as logging
     from oslo_utils import excutils
     import re
    +import six
     
     from neutron.agent.common import utils
     from neutron.common import constants
    @@ -231,9 +232,10 @@ def delete_addr_and_conntrack_state(self, cidr):
     
             This terminates any active connections through an IP.
     
    -        cidr: the IP address for which state should be removed.  This can be
    -            passed as a string with or without /NN.  A netaddr.IPAddress or
    -            netaddr.Network representing the IP address can also be passed.
    +        :param cidr: the IP address for which state should be removed.
    +            This can be passed as a string with or without /NN.
    +            A netaddr.IPAddress or netaddr.Network representing the IP address
    +            can also be passed.
             """
             self.addr.delete(cidr)
     
    @@ -287,6 +289,67 @@ def __init__(self, namespace=None):
     class IpRuleCommand(IpCommandBase):
         COMMAND = 'rule'
     
    +    @staticmethod
    +    def _make_canonical(ip_version, settings):
    +        """Converts settings to a canonical represention to compare easily"""
    +        def canonicalize_fwmark_string(fwmark_mask):
    +            """Reformats fwmark/mask in to a canonical form
    +
    +            Examples, these are all equivalent:
    +                "0x1"
    +                0x1
    +                "0x1/0xfffffffff"
    +                (0x1, 0xfffffffff)
    +
    +            :param fwmark_mask: The firewall mark and mask (default 0xffffffff)
    +            :type fwmark_mask: A string with / as delimiter, an iterable, or a
    +                single value.
    +            """
    +            # Turn the value we were passed in to an iterable: fwmark[, mask]
    +            if isinstance(fwmark_mask, six.string_types):
    +                # A / separates the optional mask in a string
    +                iterable = fwmark_mask.split('/')
    +            else:
    +                try:
    +                    iterable = iter(fwmark_mask)
    +                except TypeError:
    +                    # At this point, it must be a single integer
    +                    iterable = [fwmark_mask]
    +
    +            def to_i(s):
    +                if isinstance(s, six.string_types):
    +                    # Passing 0 as "base" arg to "int" causes it to determine
    +                    # the base automatically.
    +                    return int(s, 0)
    +                # s isn't a string, can't specify base argument
    +                return int(s)
    +
    +            integers = [to_i(x) for x in iterable]
    +
    +            # The default mask is all ones (32 bits).
    +            if len(integers) == 1:
    +                integers.append(0xffffffff)
    +
    +            # We now have two integers in a list.  Convert to canonical string.
    +            return '/'.join(map(hex, integers))
    +
    +        def canonicalize(item):
    +            k, v = item
    +            # ip rule shows these as 'any'
    +            if k == 'from' and v == 'all':
    +                return k, constants.IP_ANY[ip_version]
    +            # lookup and table are interchangeable.  Use table every time.
    +            if k == 'lookup':
    +                return 'table', v
    +            if k == 'fwmark':
    +                return k, canonicalize_fwmark_string(v)
    +            return k, v
    +
    +        if 'type' not in settings:
    +            settings['type'] = 'unicast'
    +
    +        return {k: str(v) for k, v in map(canonicalize, settings.items())}
    +
         def _parse_line(self, ip_version, line):
             # Typical rules from 'ip rule show':
             # 4030201:  from 1.2.3.4/24 lookup 10203040
    @@ -296,23 +359,21 @@ def _parse_line(self, ip_version, line):
             if not parts:
                 return {}
     
    -        # Format of line is: "priority: <key> <value> ..."
    +        # Format of line is: "priority: <key> <value> ... [<type>]"
             settings = {k: v for k, v in zip(parts[1::2], parts[2::2])}
             settings['priority'] = parts[0][:-1]
    +        if len(parts) % 2 == 0:
    +            # When line has an even number of columns, last one is the type.
    +            settings['type'] = parts[-1]
     
    -        # Canonicalize some arguments
    -        if settings.get('from') == "all":
    -            settings['from'] = constants.IP_ANY[ip_version]
    -        if 'lookup' in settings:
    -            settings['table'] = settings.pop('lookup')
    +        return self._make_canonical(ip_version, settings)
     
    -        return settings
    +    def list_rules(self, ip_version):
    +        lines = self._as_root([ip_version], ['show']).splitlines()
    +        return [self._parse_line(ip_version, line) for line in lines]
     
         def _exists(self, ip_version, **kwargs):
    -        kwargs_strings = {k: str(v) for k, v in kwargs.items()}
    -        lines = self._as_root([ip_version], ['show']).splitlines()
    -        return kwargs_strings in (self._parse_line(ip_version, line)
    -                                  for line in lines)
    +        return kwargs in self.list_rules(ip_version)
     
         def _make__flat_args_tuple(self, *args, **kwargs):
             for kwargs_item in sorted(kwargs.items(), key=lambda i: i[0]):
    @@ -323,17 +384,20 @@ def add(self, ip, **kwargs):
             ip_version = get_ip_version(ip)
     
             kwargs.update({'from': ip})
    +        canonical_kwargs = self._make_canonical(ip_version, kwargs)
     
    -        if not self._exists(ip_version, **kwargs):
    -            args_tuple = self._make__flat_args_tuple('add', **kwargs)
    +        if not self._exists(ip_version, **canonical_kwargs):
    +            args_tuple = self._make__flat_args_tuple('add', **canonical_kwargs)
                 self._as_root([ip_version], args_tuple)
     
         def delete(self, ip, **kwargs):
             ip_version = get_ip_version(ip)
     
             # TODO(Carl) ip ignored in delete, okay in general?
     
    -        args_tuple = self._make__flat_args_tuple('del', **kwargs)
    +        canonical_kwargs = self._make_canonical(ip_version, kwargs)
    +
    +        args_tuple = self._make__flat_args_tuple('del', **canonical_kwargs)
             self._as_root([ip_version], args_tuple)
     
     
    @@ -534,35 +598,47 @@ def delete_gateway(self, gateway, table=None):
                         raise exceptions.DeviceNotFoundError(
                             device_name=self.name)
     
    -    def list_onlink_routes(self, ip_version):
    -        def iterate_routes():
    -            args = ['list']
    -            args += self._dev_args()
    -            args += ['scope', 'link']
    -            args += self._table_args()
    -            output = self._run([ip_version], tuple(args))
    -            for line in output.split('\n'):
    -                line = line.strip()
    -                if line and not line.count('src'):
    -                    yield line
    -
    -        return [x for x in iterate_routes()]
    +    def _parse_routes(self, ip_version, output, **kwargs):
    +        for line in output.splitlines():
    +            parts = line.split()
     
    -    def add_onlink_route(self, cidr):
    -        ip_version = get_ip_version(cidr)
    -        args = ['replace', cidr]
    +            # Format of line is: "<cidr>|default [<key> <value>] ..."
    +            route = {k: v for k, v in zip(parts[1::2], parts[2::2])}
    +            route['cidr'] = parts[0]
    +            # Avoids having to explicitly pass around the IP version
    +            if route['cidr'] == 'default':
    +                route['cidr'] = constants.IP_ANY[ip_version]
    +
    +            # ip route drops things like scope and dev from the output if it
    +            # was specified as a filter.  This allows us to add them back.
    +            if self.name:
    +                route['dev'] = self.name
    +            if self._table:
    +                route['table'] = self._table
    +            # Callers add any filters they use as kwargs
    +            route.update(kwargs)
    +
    +            yield route
    +
    +    def list_routes(self, ip_version, **kwargs):
    +        args = ['list']
             args += self._dev_args()
    -        args += ['scope', 'link']
             args += self._table_args()
    -        self._as_root([ip_version], tuple(args))
    +        for k, v in kwargs.items():
    +            args += [k, v]
    +
    +        output = self._run([ip_version], tuple(args))
    +        return [r for r in self._parse_routes(ip_version, output, **kwargs)]
    +
    +    def list_onlink_routes(self, ip_version):
    +        routes = self.list_routes(ip_version, scope='link')
    +        return [r for r in routes if 'src' not in r]
    +
    +    def add_onlink_route(self, cidr):
    +        self.add_route(cidr, scope='link')
     
         def delete_onlink_route(self, cidr):
    -        ip_version = get_ip_version(cidr)
    -        args = ['del', cidr]
    -        args += self._dev_args()
    -        args += ['scope', 'link']
    -        args += self._table_args()
    -        self._as_root([ip_version], tuple(args))
    +        self.delete_route(cidr, scope='link')
     
         def get_gateway(self, scope=None, filters=None, ip_version=None):
             options = [ip_version] if ip_version else []
    @@ -644,18 +720,26 @@ def pullup_route(self, interface_name, ip_version):
                                        'proto', 'kernel',
                                        'dev', device))
     
    -    def add_route(self, cidr, ip, table=None):
    +    def add_route(self, cidr, via=None, table=None, **kwargs):
             ip_version = get_ip_version(cidr)
    -        args = ['replace', cidr, 'via', ip]
    +        args = ['replace', cidr]
    +        if via:
    +            args += ['via', via]
             args += self._dev_args()
             args += self._table_args(table)
    +        for k, v in kwargs.items():
    +            args += [k, v]
             self._as_root([ip_version], tuple(args))
     
    -    def delete_route(self, cidr, ip, table=None):
    +    def delete_route(self, cidr, via=None, table=None, **kwargs):
             ip_version = get_ip_version(cidr)
    -        args = ['del', cidr, 'via', ip]
    +        args = ['del', cidr]
    +        if via:
    +            args += ['via', via]
             args += self._dev_args()
             args += self._table_args(table)
    +        for k, v in kwargs.items():
    +            args += [k, v]
             self._as_root([ip_version], tuple(args))
     
     
    
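The ip_lib hunks above replace ad-hoc canonicalization with `_make_canonical`, which normalizes `from all`, `lookup` vs. `table`, and fwmark strings so that `_exists` can compare dicts directly. A standalone sketch of that parsing and normalization logic (the helper names mirror the diff, but this is illustrative, not the neutron module itself):

```python
# Illustrative sketch of how the patched _parse_line/_make_canonical pair
# normalizes 'ip rule show' output into canonical dicts.
IP_ANY = {4: '0.0.0.0/0', 6: '::/0'}  # mirrors neutron constants.IP_ANY


def canonicalize_fwmark_string(fwmark_mask):
    """Turn '0x400', 1024, or '0x400/0xffff' into a canonical 'mark/mask'."""
    if isinstance(fwmark_mask, str):
        iterable = fwmark_mask.split('/')
    else:
        try:
            iterable = iter(fwmark_mask)
        except TypeError:
            iterable = [fwmark_mask]
    # base 0 lets int() accept '0x..', '0o..' or plain decimal strings
    integers = [int(s, 0) if isinstance(s, str) else int(s) for s in iterable]
    if len(integers) == 1:
        integers.append(0xffffffff)  # default mask: all 32 bits set
    return '/'.join(map(hex, integers))


def parse_rule_line(ip_version, line):
    """Parse one 'ip rule show' line into a canonical settings dict."""
    parts = line.split()
    if not parts:
        return {}
    # "priority: <key> <value> ... [<type>]"
    settings = dict(zip(parts[1::2], parts[2::2]))
    settings['priority'] = parts[0][:-1]  # strip the trailing ':'
    if len(parts) % 2 == 0:
        settings['type'] = parts[-1]      # even column count => trailing type
    if settings.get('from') == 'all':
        settings['from'] = IP_ANY[ip_version]
    if 'lookup' in settings:
        settings['table'] = settings.pop('lookup')
    if 'fwmark' in settings:
        settings['fwmark'] = canonicalize_fwmark_string(settings['fwmark'])
    return settings
```

With this normalization, `add()` can canonicalize its kwargs the same way and test membership against `list_rules()` output without string-format mismatches.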
  • neutron/agent/linux/ip_monitor.py+1 1 modified
    @@ -64,7 +64,7 @@ class IPMonitor(async_process.AsyncProcess):
             m.start()
             for iterable in m:
                 event = IPMonitorEvent.from_text(iterable)
    -            print event, event.added, event.interface, event.cidr
    +            print(event, event.added, event.interface, event.cidr)
         """
     
         def __init__(self,
    
  • neutron/agent/linux/iptables_firewall.py+1 1 modified
    @@ -559,13 +559,13 @@ def _convert_sg_rule_to_iptables_args(self, sg_rule):
     
         def _convert_sgr_to_iptables_rules(self, security_group_rules):
             iptables_rules = []
    -        self._drop_invalid_packets(iptables_rules)
             self._allow_established(iptables_rules)
             for rule in security_group_rules:
                 args = self._convert_sg_rule_to_iptables_args(rule)
                 if args:
                     iptables_rules += [' '.join(args)]
     
    +        self._drop_invalid_packets(iptables_rules)
             iptables_rules += [comment_rule('-j $sg-fallback',
                                             comment=ic.UNMATCHED)]
             return iptables_rules
    
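The iptables_firewall hunk moves the INVALID-state drop from before the established/related rule to after the per-security-group rules. A toy assembly function showing the resulting chain order (the specific iptables argument strings here are illustrative, not taken from the driver):

```python
# Sketch of the rule ordering produced by the patched
# _convert_sgr_to_iptables_rules: established traffic first, then the
# security-group rules, then the INVALID drop, then the fallback jump.
def build_chain(sg_rule_args):
    rules = []
    rules.append('-m state --state RELATED,ESTABLISHED -j RETURN')
    rules.extend(' '.join(args) for args in sg_rule_args)
    rules.append('-m state --state INVALID -j DROP')
    rules.append('-j $sg-fallback')
    return rules
```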
  • neutron/agent/linux/keepalived.py+25 1 modified
    @@ -79,6 +79,17 @@ def __init__(self, **kwargs):
             super(InvalidAuthenticationTypeException, self).__init__(**kwargs)
     
     
    +class VIPDuplicateAddressException(exceptions.NeutronException):
    +    message = _('Attempted to add duplicate VIP address, '
    +                'existing vips are: %(existing_vips)s, '
    +                'duplicate vip is: %(duplicate_vip)s')
    +
    +    def __init__(self, **kwargs):
    +        kwargs['existing_vips'] = ', '.join(str(vip) for vip in
    +                                            kwargs['existing_vips'])
    +        super(VIPDuplicateAddressException, self).__init__(**kwargs)
    +
    +
     class KeepalivedVipAddress(object):
         """A virtual address entry of a keepalived configuration."""
     
    @@ -87,6 +98,15 @@ def __init__(self, ip_address, interface_name, scope=None):
             self.interface_name = interface_name
             self.scope = scope
     
    +    def __eq__(self, other):
    +        return (isinstance(other, KeepalivedVipAddress) and
    +                self.ip_address == other.ip_address)
    +
    +    def __str__(self):
    +        return '[%s, %s, %s]' % (self.ip_address,
    +                                 self.interface_name,
    +                                 self.scope)
    +
         def build_config(self):
             result = '%s dev %s' % (self.ip_address, self.interface_name)
             if self.scope:
    @@ -183,7 +203,11 @@ def set_authentication(self, auth_type, password):
             self.authentication = (auth_type, password)
     
         def add_vip(self, ip_cidr, interface_name, scope):
    -        self.vips.append(KeepalivedVipAddress(ip_cidr, interface_name, scope))
    +        vip = KeepalivedVipAddress(ip_cidr, interface_name, scope)
    +        if vip in self.vips:
    +            raise VIPDuplicateAddressException(existing_vips=self.vips,
    +                                               duplicate_vip=vip)
    +        self.vips.append(vip)
     
         def remove_vips_vroutes_by_interface(self, interface_name):
             self.vips = [vip for vip in self.vips
    
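The keepalived hunk defines VIP equality by IP address alone, so `add_vip` now rejects the same address even when the interface or scope differs. A minimal standalone sketch of that guard (class names simplified; `ValueError` stands in for the Neutron exception):

```python
# Sketch of the duplicate-VIP guard: equality is IP-address-only, so the
# same address on a different interface or scope still counts as duplicate.
class VipAddress:
    def __init__(self, ip_address, interface_name, scope=None):
        self.ip_address = ip_address
        self.interface_name = interface_name
        self.scope = scope

    def __eq__(self, other):
        return (isinstance(other, VipAddress) and
                self.ip_address == other.ip_address)


class Instance:
    def __init__(self):
        self.vips = []

    def add_vip(self, ip_cidr, interface_name, scope=None):
        vip = VipAddress(ip_cidr, interface_name, scope)
        if vip in self.vips:
            raise ValueError('duplicate VIP %s' % ip_cidr)
        self.vips.append(vip)
```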
  • neutron/agent/linux/utils.py+14 8 modified
    @@ -83,6 +83,7 @@ def create_process(cmd, run_as_root=False, addl_env=None):
         cmd = list(map(str, addl_env_args(addl_env) + cmd))
         if run_as_root:
             cmd = shlex.split(config.get_root_helper(cfg.CONF)) + cmd
    +    LOG.debug("Running command: %s", cmd)
         obj = utils.subprocess_popen(cmd, shell=False,
                                      stdin=subprocess.PIPE,
                                      stdout=subprocess.PIPE,
    @@ -98,6 +99,7 @@ def execute_rootwrap_daemon(cmd, process_input, addl_env):
         # In practice, no neutron code should be trying to execute something that
         # would throw those errors, and if it does it should be fixed as opposed to
         # just logging the execution error.
    +    LOG.debug("Running command (rootwrap daemon): %s", cmd)
         client = RootwrapDaemonHelper.get_client()
         return client.execute(cmd, process_input)
     
    @@ -132,20 +134,24 @@ def execute(cmd, process_input=None, addl_env=None,
                     except UnicodeError:
                         pass
     
    -        m = _("\nCommand: {cmd}\nExit code: {code}\n").format(
    -                  cmd=cmd,
    -                  code=returncode)
    +        command_str = {
    +            'cmd': cmd,
    +            'code': returncode
    +        }
    +        m = _("\nCommand: %(cmd)s"
    +              "\nExit code: %(code)d\n") % command_str
     
             extra_ok_codes = extra_ok_codes or []
             if returncode and returncode in extra_ok_codes:
                 returncode = None
     
             if returncode and log_fail_as_error:
    -            m += ("Stdin: {stdin}\n"
    -                  "Stdout: {stdout}\nStderr: {stderr}").format(
    -                stdin=process_input or '',
    -                stdout=_stdout,
    -                stderr=_stderr)
    +            command_str['stdin'] = process_input or ''
    +            command_str['stdout'] = _stdout
    +            command_str['stderr'] = _stderr
    +            m += _("Stdin: %(stdin)s\n"
    +                  "Stdout: %(stdout)s\n"
    +                  "Stderr: %(stderr)s") % command_str
                 LOG.error(m)
             else:
                 LOG.debug(m)
    
  • neutron/agent/ovsdb/api.py+3 3 modified
    @@ -239,7 +239,7 @@ def db_list(self, table, records=None, columns=None, if_exists=False):
             :type records:    list of record ids (names/uuids)
             :param columns:   Limit results to only columns, None means all columns
             :type columns:    list of column names or None
    -        :param if_exists: Do not fail if the bridge does not exist
    +        :param if_exists: Do not fail if the record does not exist
             :type if_exists:  bool
             :returns:         :class:`Command` with [{'column', value}, ...] result
             """
    @@ -313,7 +313,7 @@ def add_port(self, bridge, port, may_exist=True):
             :type bridge:     string
             :param port:      The name of the port
             :type port:       string
    -        :param may_exist: Do not fail if bridge already exists
    +        :param may_exist: Do not fail if the port already exists
             :type may_exist:  bool
             :returns:         :class:`Command` with no result
             """
    @@ -326,7 +326,7 @@ def del_port(self, port, bridge=None, if_exists=True):
             :type port:       string
             :param bridge:    Only delete port if it is attached to this bridge
             :type bridge:     string
    -        :param if_exists: Do not fail if the bridge does not exist
    +        :param if_exists: Do not fail if the port does not exist
             :type if_exists:  bool
             :returns:         :class:`Command` with no result
             """
    
  • neutron/api/rpc/handlers/l3_rpc.py+10 1 modified
    @@ -45,7 +45,8 @@ class L3RpcCallback(object):
         #     since it was unused. The RPC version was not changed
         # 1.5 Added update_ha_routers_states
         # 1.6 Added process_prefix_update to support IPv6 Prefix Delegation
    -    target = oslo_messaging.Target(version='1.6')
    +    # 1.7 Added method delete_agent_gateway_port for DVR Routers
    +    target = oslo_messaging.Target(version='1.7')
     
         @property
         def plugin(self):
    @@ -281,3 +282,11 @@ def process_prefix_update(self, context, **kwargs):
                                             subnet_id,
                                             {'subnet': {'cidr': prefix}}))
             return updated_subnets
    +
    +    def delete_agent_gateway_port(self, context, **kwargs):
    +        """Delete Floatingip agent gateway port."""
    +        network_id = kwargs.get('network_id')
    +        host = kwargs.get('host')
    +        admin_ctx = neutron_context.get_admin_context()
    +        self.l3plugin.delete_floatingip_agent_gateway_port(
    +            admin_ctx, host, network_id)
    
  • neutron/api/rpc/handlers/securitygroups_rpc.py+2 1 modified
    @@ -19,6 +19,7 @@
     from neutron.common import constants
     from neutron.common import rpc as n_rpc
     from neutron.common import topics
    +from neutron.common import utils
     from neutron.i18n import _LW
     from neutron import manager
     
    @@ -80,7 +81,7 @@ def _get_devices_info(self, context, devices):
             return dict(
                 (port['id'], port)
                 for port in self.plugin.get_ports_from_devices(context, devices)
    -            if port and not port['device_owner'].startswith('network:')
    +            if port and not utils.is_port_trusted(port)
             )
     
         def security_group_rules_for_devices(self, context, **kwargs):
    
  • neutron/api/v2/attributes.py+1 1 modified
    @@ -731,7 +731,7 @@ def convert_to_list(data):
                           'is_visible': True},
             'device_owner': {'allow_post': True, 'allow_put': True,
                              'validate': {'type:string': DEVICE_OWNER_MAX_LEN},
    -                         'default': '',
    +                         'default': '', 'enforce_policy': True,
                              'is_visible': True},
             'tenant_id': {'allow_post': True, 'allow_put': False,
                           'validate': {'type:string': TENANT_ID_MAX_LEN},
    
  • neutron/api/v2/base.py+54 29 modified
    @@ -13,6 +13,7 @@
     #    License for the specific language governing permissions and limitations
     #    under the License.
     
    +import collections
     import copy
     
     import netaddr
    @@ -416,13 +417,15 @@ def create(self, request, body=None, **kwargs):
             if self._collection in body:
                 # Have to account for bulk create
                 items = body[self._collection]
    -            deltas = {}
    -            bulk = True
             else:
                 items = [body]
    -            bulk = False
             # Ensure policy engine is initialized
             policy.init()
    +        # Store requested resource amounts, grouping them by tenant.
    +        # This won't work with multiple resources. However, because of the
    +        # current structure of this controller, there will hardly be more
    +        # than one resource for which reservations are being made.
    +        request_deltas = collections.defaultdict(int)
             for item in items:
                 self._validate_network_tenant_ownership(request,
                                                         item[self._resource])
    @@ -433,30 +436,31 @@ def create(self, request, body=None, **kwargs):
                 if 'tenant_id' not in item[self._resource]:
                     # no tenant_id - no quota check
                     continue
    -            try:
    -                tenant_id = item[self._resource]['tenant_id']
    -                count = quota.QUOTAS.count(request.context, self._resource,
    -                                           self._plugin, tenant_id)
    -                if bulk:
    -                    delta = deltas.get(tenant_id, 0) + 1
    -                    deltas[tenant_id] = delta
    -                else:
    -                    delta = 1
    -                kwargs = {self._resource: count + delta}
    -            except exceptions.QuotaResourceUnknown as e:
    +            tenant_id = item[self._resource]['tenant_id']
    +            request_deltas[tenant_id] += 1
    +        # Quota enforcement
    +        reservations = []
    +        try:
    +            for (tenant, delta) in request_deltas.items():
    +                reservation = quota.QUOTAS.make_reservation(
    +                    request.context,
    +                    tenant,
    +                    {self._resource: delta},
    +                    self._plugin)
    +                reservations.append(reservation)
    +        except exceptions.QuotaResourceUnknown as e:
                     # We don't want to quota this resource
                     LOG.debug(e)
    -            else:
    -                quota.QUOTAS.limit_check(request.context,
    -                                         item[self._resource]['tenant_id'],
    -                                         **kwargs)
     
             def notify(create_result):
                 # Ensure usage trackers for all resources affected by this API
                 # operation are marked as dirty
    -            # TODO(salv-orlando): This operation will happen in a single
    -            # transaction with reservation commit once that is implemented
    -            resource_registry.set_resources_dirty(request.context)
    +            with request.context.session.begin():
    +                # Commit the reservation(s)
    +                for reservation in reservations:
    +                    quota.QUOTAS.commit_reservation(
    +                        request.context, reservation.reservation_id)
    +                resource_registry.set_resources_dirty(request.context)
     
                 notifier_method = self._resource + '.create.end'
                 self._notifier.info(request.context,
    @@ -467,11 +471,35 @@ def notify(create_result):
                                              notifier_method)
                 return create_result
     
    -        kwargs = {self._parent_id_name: parent_id} if parent_id else {}
    +        def do_create(body, bulk=False, emulated=False):
    +            kwargs = {self._parent_id_name: parent_id} if parent_id else {}
    +            if bulk and not emulated:
    +                obj_creator = getattr(self._plugin, "%s_bulk" % action)
    +            else:
    +                obj_creator = getattr(self._plugin, action)
    +            try:
    +                if emulated:
    +                    return self._emulate_bulk_create(obj_creator, request,
    +                                                     body, parent_id)
    +                else:
    +                    if self._collection in body:
    +                        # This is weird but fixing it requires changes to the
    +                        # plugin interface
    +                        kwargs.update({self._collection: body})
    +                    else:
    +                        kwargs.update({self._resource: body})
    +                    return obj_creator(request.context, **kwargs)
    +            except Exception:
    +                # In case of failure the plugin will always raise an
    +                # exception. Cancel the reservation
    +                with excutils.save_and_reraise_exception():
    +                    for reservation in reservations:
    +                        quota.QUOTAS.cancel_reservation(
    +                            request.context, reservation.reservation_id)
    +
             if self._collection in body and self._native_bulk:
                 # plugin does atomic bulk create operations
    -            obj_creator = getattr(self._plugin, "%s_bulk" % action)
    -            objs = obj_creator(request.context, body, **kwargs)
    +            objs = do_create(body, bulk=True)
                 # Use first element of list to discriminate attributes which
                 # should be removed because of authZ policies
                 fields_to_strip = self._exclude_attributes_by_policy(
    @@ -480,15 +508,12 @@ def notify(create_result):
                     request.context, obj, fields_to_strip=fields_to_strip)
                     for obj in objs]})
             else:
    -            obj_creator = getattr(self._plugin, action)
                 if self._collection in body:
                     # Emulate atomic bulk behavior
    -                objs = self._emulate_bulk_create(obj_creator, request,
    -                                                 body, parent_id)
    +                objs = do_create(body, bulk=True, emulated=True)
                     return notify({self._collection: objs})
                 else:
    -                kwargs.update({self._resource: body})
    -                obj = obj_creator(request.context, **kwargs)
    +                obj = do_create(body)
                     self._send_nova_notification(action, {},
                                                  {self._resource: obj})
                     return notify({self._resource: self._view(request.context,
    
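The api/v2/base rewrite replaces count-then-limit-check with a reserve/commit/cancel flow: per-tenant deltas are reserved up front, committed after the plugin create succeeds, and cancelled if the plugin raises. A hedged sketch of that pattern with a stand-in quota engine (this `Quota` class is illustrative, not neutron's):

```python
# Sketch of the reserve/commit/cancel quota pattern adopted above.
import collections
import itertools


class Quota:
    _ids = itertools.count()

    def __init__(self, limits):
        self.limits = limits
        self.used = collections.defaultdict(int)
        self.reservations = {}

    def make_reservation(self, tenant, deltas):
        # Fail fast if any requested delta would exceed the limit.
        for res, delta in deltas.items():
            if self.used[(tenant, res)] + delta > self.limits.get(res, 0):
                raise RuntimeError('quota exceeded for %s' % res)
        rid = next(self._ids)
        self.reservations[rid] = (tenant, deltas)
        return rid

    def commit_reservation(self, rid):
        # Called once the plugin create succeeds.
        tenant, deltas = self.reservations.pop(rid)
        for res, delta in deltas.items():
            self.used[(tenant, res)] += delta

    def cancel_reservation(self, rid):
        # Called when the plugin raises; the reservation is discarded.
        self.reservations.pop(rid, None)
```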
  • neutron/callbacks/resources.py+2 0 modified
    @@ -12,9 +12,11 @@
     
     # String literals representing core resources.
     PORT = 'port'
    +PROCESS = 'process'
     ROUTER = 'router'
     ROUTER_GATEWAY = 'router_gateway'
     ROUTER_INTERFACE = 'router_interface'
     SECURITY_GROUP = 'security_group'
     SECURITY_GROUP_RULE = 'security_group_rule'
     SUBNET = 'subnet'
    +SUBNET_GATEWAY = 'subnet_gateway'
    
  • neutron/cmd/sanity_check.py+14 1 modified
    @@ -39,7 +39,7 @@ def setup_conf():
         cfg.CONF.import_group('ml2_sriov',
                               'neutron.plugins.ml2.drivers.mech_sriov.mech_driver.'
                               'mech_driver')
    -    dhcp_agent.register_options()
    +    dhcp_agent.register_options(cfg.CONF)
         cfg.CONF.register_opts(l3_hamode_db.L3_HA_OPTS)
     
     
    @@ -165,6 +165,16 @@ def check_arp_header_match():
         return result
     
     
    +def check_icmpv6_header_match():
    +    result = checks.icmpv6_header_match_supported()
    +    if not result:
    +        LOG.error(_LE('Check for Open vSwitch support of ICMPv6 header '
    +                      'matching failed. ICMPv6 Neighbor Advt spoofing (part '
    +                      'of arp spoofing) suppression will not work. A newer '
    +                      'version of OVS is required.'))
    +    return result
    +
    +
     def check_vf_management():
         result = checks.vf_management_supported()
         if not result:
    @@ -206,6 +216,8 @@ def check_ebtables():
                         help=_('Check for ARP responder support')),
         BoolOptCallback('arp_header_match', check_arp_header_match,
                         help=_('Check for ARP header match support')),
    +    BoolOptCallback('icmpv6_header_match', check_icmpv6_header_match,
    +                    help=_('Check for ICMPv6 header match support')),
         BoolOptCallback('vf_management', check_vf_management,
                         help=_('Check for VF management support')),
         BoolOptCallback('read_netns', check_read_netns,
    @@ -247,6 +259,7 @@ def enable_tests_from_config():
             cfg.CONF.set_override('arp_responder', True)
         if cfg.CONF.AGENT.prevent_arp_spoofing:
             cfg.CONF.set_override('arp_header_match', True)
    +        cfg.CONF.set_override('icmpv6_header_match', True)
         if cfg.CONF.ml2_sriov.agent_required:
             cfg.CONF.set_override('vf_management', True)
         if not cfg.CONF.AGENT.use_helper_for_ns_read:
    
  • neutron/cmd/sanity/checks.py+11 0 modified
    @@ -134,6 +134,17 @@ def arp_header_match_supported():
                                    actions="NORMAL")
     
     
    +def icmpv6_header_match_supported():
    +    return ofctl_arg_supported(cmd='add-flow',
    +                               table=ovs_const.ARP_SPOOF_TABLE,
    +                               priority=1,
    +                               dl_type=n_consts.ETHERTYPE_IPV6,
    +                               nw_proto=n_consts.PROTO_NUM_ICMP_V6,
    +                               icmp_type=n_consts.ICMPV6_TYPE_NA,
    +                               nd_target='fdf8:f53b:82e4::10',
    +                               actions="NORMAL")
    +
    +
     def vf_management_supported():
         is_supported = True
         required_caps = (
    
  • neutron/common/constants.py+5 1 modified
    @@ -41,6 +41,8 @@
     DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
     DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"
     
    +DEVICE_OWNER_PREFIXES = ["network:", "neutron:"]
    +
     # Collection used to identify devices owned by router interfaces.
     # DEVICE_OWNER_ROUTER_HA_INTF is a special case and so is not included.
     ROUTER_INTERFACE_OWNERS = (DEVICE_OWNER_ROUTER_INTF,
    @@ -90,7 +92,6 @@
     AGENT_TYPE_DHCP = 'DHCP agent'
     AGENT_TYPE_OVS = 'Open vSwitch agent'
     AGENT_TYPE_LINUXBRIDGE = 'Linux bridge agent'
    -AGENT_TYPE_NEC = 'NEC plugin agent'
     AGENT_TYPE_OFA = 'OFA driver agent'
     AGENT_TYPE_L3 = 'L3 agent'
     AGENT_TYPE_LOADBALANCER = 'Loadbalancer agent'
    @@ -112,6 +113,8 @@
     L3_HA_MODE_EXT_ALIAS = 'l3-ha'
     SUBNET_ALLOCATION_EXT_ALIAS = 'subnet_allocation'
     
    +ETHERTYPE_IPV6 = 0x86DD
    +
     # Protocol names and numbers for Security Groups/Firewalls
     PROTO_NAME_TCP = 'tcp'
     PROTO_NAME_ICMP = 'icmp'
    @@ -130,6 +133,7 @@
     # Neighbor Advertisement (136)
     ICMPV6_ALLOWED_TYPES = [130, 131, 132, 135, 136]
     ICMPV6_TYPE_RA = 134
    +ICMPV6_TYPE_NA = 136
     
     DHCPV6_STATEFUL = 'dhcpv6-stateful'
     DHCPV6_STATELESS = 'dhcpv6-stateless'
    
  • neutron/common/exceptions.py+3 0 modified
    @@ -45,6 +45,9 @@ def __init__(self, **kwargs):
             def __unicode__(self):
                 return unicode(self.msg)
     
    +    def __str__(self):
    +        return self.msg
    +
         def use_fatal_exceptions(self):
             return False
     
    
  • neutron/common/utils.py+10 0 modified
    @@ -368,6 +368,7 @@ def is_dvr_serviced(device_owner):
         indirectly associated with DVR.
         """
         dvr_serviced_device_owners = (n_const.DEVICE_OWNER_LOADBALANCER,
    +                                  n_const.DEVICE_OWNER_LOADBALANCERV2,
                                       n_const.DEVICE_OWNER_DHCP)
         return (device_owner.startswith('compute:') or
                 device_owner in dvr_serviced_device_owners)
    @@ -432,6 +433,15 @@ def ip_version_from_int(ip_version_int):
         raise ValueError(_('Illegal IP version number'))
     
     
    +def is_port_trusted(port):
    +    """Used to determine if a port can be trusted not to attack the network.
    +
    +    Trust is currently based on the device_owner field starting with 'network:'
    +    since we restrict who can use that in the default policy.json file.
    +    """
    +    return port['device_owner'].startswith('network:')
    +
    +
     class DelayedStringRenderer(object):
         """Takes a callable and its args and calls when __str__ is called
     
    
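The `is_port_trusted` helper added above centralizes the trust check that the security-group RPC handler now uses, which is the core of this CVE's fix: ports whose `device_owner` starts with `network:` are exempt from anti-spoofing rules, so the ability to set that prefix must be policy-restricted. A quick illustration of the filtering behavior (`ports_needing_sg_rules` is a simplified stand-in for `_get_devices_info`):

```python
# Sketch of the centralized trust check and how the security-group RPC
# handler uses it to exclude trusted ports from rule generation.
def is_port_trusted(port):
    # Trust is based on device_owner starting with 'network:', which the
    # default policy now restricts (see the attributes.py enforce_policy hunk).
    return port['device_owner'].startswith('network:')


def ports_needing_sg_rules(ports):
    return [p for p in ports if not is_port_trusted(p)]
```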
  • neutron/db/agentschedulers_db.py+1 1 modified
    @@ -40,7 +40,7 @@
     AGENTS_SCHEDULER_OPTS = [
         cfg.StrOpt('network_scheduler_driver',
                    default='neutron.scheduler.'
    -                       'dhcp_agent_scheduler.ChanceScheduler',
    +                       'dhcp_agent_scheduler.WeightScheduler',
                    help=_('Driver to use for scheduling network to DHCP agent')),
         cfg.BoolOpt('network_auto_schedule', default=True,
                     help=_('Allow auto scheduling networks to DHCP agent.')),
    
  • neutron/db/api.py+6 2 modified
    @@ -17,6 +17,7 @@
     
     from oslo_config import cfg
     from oslo_db import api as oslo_db_api
    +from oslo_db import exception as db_exc
     from oslo_db.sqlalchemy import session
     from oslo_utils import uuidutils
     from sqlalchemy import exc
    @@ -28,8 +29,11 @@
     _FACADE = None
     
     MAX_RETRIES = 10
    -retry_db_errors = oslo_db_api.wrap_db_retry(max_retries=MAX_RETRIES,
    -                                            retry_on_deadlock=True)
    +is_deadlock = lambda e: isinstance(e, db_exc.DBDeadlock)
    +retry_db_errors = oslo_db_api.wrap_db_retry(
    +    max_retries=MAX_RETRIES,
    +    exception_checker=is_deadlock
    +)
     
     
     def _create_facade_lazily():
    
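The db/api hunk narrows retry behavior: instead of `retry_on_deadlock=True`, an `exception_checker` callable decides which exceptions are retried, so only `DBDeadlock` qualifies. A minimal sketch of that decorator pattern (this is illustrative, not oslo.db's implementation; `DBDeadlock` here is a local stand-in):

```python
# Sketch of checker-based retry: only exceptions the checker accepts are
# retried, up to max_retries additional attempts.
import functools


class DBDeadlock(Exception):
    pass


def wrap_retry(max_retries, exception_checker):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    # Re-raise immediately for non-retryable errors or when
                    # the retry budget is exhausted.
                    if attempt == max_retries or not exception_checker(e):
                        raise
        return wrapper
    return decorator
```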
  • neutron/db/common_db_mixin.py+32 11 modified
    @@ -129,7 +129,8 @@ def _single_model_query(self, context, model):
             query_filter = None
             if self.model_query_scope(context, model):
                 if hasattr(model, 'rbac_entries'):
    -                rbac_model, join_params = self._get_rbac_query_params(model)
    +                rbac_model, join_params = self._get_rbac_query_params(
    +                    model)[:2]
                     query = query.outerjoin(*join_params)
                     query_filter = (
                         (model.tenant_id == context.tenant_id) |
    @@ -185,16 +186,24 @@ def _get_by_id(self, context, model, id):
     
         @staticmethod
         def _get_rbac_query_params(model):
    -        """Return the class and join params for the rbac relationship."""
    +        """Return the parameters required to query an model's RBAC entries.
    +
    +        Returns a tuple of 3 containing:
    +        1. the relevant RBAC model for a given model
    +        2. the join parameters required to query the RBAC entries for the model
    +        3. the ID column of the passed in model that matches the object_id
    +           in the rbac entries.
    +        """
             try:
                 cls = model.rbac_entries.property.mapper.class_
    -            return (cls, (cls, ))
    +            return (cls, (cls, ), model.id)
             except AttributeError:
                 # an association proxy is being used (e.g. subnets
                 # depends on network's rbac entries)
                 rbac_model = (model.rbac_entries.target_class.
                               rbac_entries.property.mapper.class_)
    -            return (rbac_model, model.rbac_entries.attr)
    +            return (rbac_model, model.rbac_entries.attr,
    +                    model.rbac_entries.remote_attr.class_.id)
     
         def _apply_filters_to_query(self, query, model, filters, context=None):
             if filters:
    @@ -213,17 +222,29 @@ def _apply_filters_to_query(self, query, model, filters, context=None):
                     elif key == 'shared' and hasattr(model, 'rbac_entries'):
                         # translate a filter on shared into a query against the
                         # object's rbac entries
    -                    rbac, join_params = self._get_rbac_query_params(model)
    +                    rbac, join_params, oid_col = self._get_rbac_query_params(
    +                        model)
                         query = query.outerjoin(*join_params, aliased=True)
                         matches = [rbac.target_tenant == '*']
                         if context:
                             matches.append(rbac.target_tenant == context.tenant_id)
    -                    is_shared = and_(
    -                        ~rbac.object_id.is_(None),
    -                        rbac.action == 'access_as_shared',
    -                        or_(*matches)
    -                    )
    -                    query = query.filter(is_shared if value[0] else ~is_shared)
    +                    # any 'access_as_shared' records that match the
    +                    # wildcard or requesting tenant
    +                    is_shared = and_(rbac.action == 'access_as_shared',
    +                                     or_(*matches))
    +                    if not value[0]:
    +                        # NOTE(kevinbenton): we need to find objects that don't
    +                        # have an entry that matches the criteria above so
    +                        # we use a subquery to exclude them.
    +                        # We can't just filter the inverse of the query above
    +                        # because that will still give us a network shared to
    +                        # our tenant (or wildcard) if it's shared to another
    +                        # tenant.
    +                        is_shared = ~oid_col.in_(
    +                            query.session.query(rbac.object_id).
    +                            filter(is_shared)
    +                        )
    +                    query = query.filter(is_shared)
                 for _nam, hooks in six.iteritems(self._model_query_hooks.get(model,
                                                                              {})):
                     result_filter = hooks.get('result_filters', None)
    
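The NOTE in the hunk above explains why `shared=False` needs a subquery rather than the inverse of the join filter. A plain-Python analogue (sets instead of SQLAlchemy) makes the point concrete: "not shared" must exclude any object that has *some* matching RBAC entry, because a network shared to another tenant still carries a non-matching entry that would survive a naive inverted filter.

```python
# Illustrative data: net1 is shared via a wildcard entry plus an entry
# for a different tenant; net2 has no RBAC entries at all.
rbac_entries = [
    {'object_id': 'net1', 'action': 'access_as_shared',
     'target_tenant': '*'},
    {'object_id': 'net1', 'action': 'access_as_shared',
     'target_tenant': 'other'},
]
networks = ['net1', 'net2']


def shared_ids(tenant):
    """IDs with an access_as_shared entry for the wildcard or tenant."""
    return {e['object_id'] for e in rbac_entries
            if e['action'] == 'access_as_shared'
            and e['target_tenant'] in ('*', tenant)}


# Correct: exclude anything that appears in the shared set (the subquery).
not_shared = [n for n in networks if n not in shared_ids('mytenant')]
```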
  • neutron/db/db_base_plugin_v2.py+19 4 modified
    @@ -715,13 +715,22 @@ def update_subnet(self, context, id, subnet):
                 s['allocation_pools'] = range_pools
     
             # If either gateway_ip or allocation_pools were specified
    -        gateway_ip = s.get('gateway_ip')
    -        if gateway_ip is not None or s.get('allocation_pools') is not None:
    -            if gateway_ip is None:
    -                gateway_ip = db_subnet.gateway_ip
    +        new_gateway_ip = s.get('gateway_ip')
    +        gateway_ip_changed = (new_gateway_ip and
    +                              new_gateway_ip != db_subnet.gateway_ip)
    +        if gateway_ip_changed or s.get('allocation_pools') is not None:
    +            gateway_ip = new_gateway_ip or db_subnet.gateway_ip
                 pools = range_pools if range_pools is not None else db_pools
                 self.ipam.validate_gw_out_of_pools(gateway_ip, pools)
     
    +        if gateway_ip_changed:
    +            # Provide pre-update notification not to break plugins that don't
    +            # support gateway ip change
    +            kwargs = {'context': context, 'subnet_id': id,
    +                      'network_id': db_subnet.network_id}
    +            registry.notify(resources.SUBNET_GATEWAY, events.BEFORE_UPDATE,
    +                            self, **kwargs)
    +
             with context.session.begin(subtransactions=True):
                 subnet, changes = self.ipam.update_db_subnet(context, id, s,
                                                              db_pools)
    @@ -753,6 +762,12 @@ def update_subnet(self, context, id, subnet):
                     l3_rpc_notifier = l3_rpc_agent_api.L3AgentNotifyAPI()
                     l3_rpc_notifier.routers_updated(context, routers)
     
    +        if gateway_ip_changed:
    +            kwargs = {'context': context, 'subnet_id': id,
    +                      'network_id': db_subnet.network_id}
    +            registry.notify(resources.SUBNET_GATEWAY, events.AFTER_UPDATE,
    +                            self, **kwargs)
    +
             return result
     
         def _subnet_check_ip_allocations(self, context, subnet_id):
    
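The `update_subnet` hunk brackets a real gateway change with `BEFORE_UPDATE` and `AFTER_UPDATE` events. A toy sketch of that flow (the registry below is a simplified stand-in for `neutron.callbacks.registry`, not its API):

```python
events = []


def notify(resource, event, **kwargs):
    """Stand-in for registry.notify: record who was told what."""
    events.append((resource, event, kwargs.get('subnet_id')))


def update_subnet(subnet, new_gateway_ip):
    changed = bool(new_gateway_ip) and new_gateway_ip != subnet['gateway_ip']
    if changed:
        # pre-update notification so plugins can veto or prepare
        notify('subnet_gateway', 'before_update', subnet_id=subnet['id'])
    subnet['gateway_ip'] = new_gateway_ip or subnet['gateway_ip']
    if changed:
        notify('subnet_gateway', 'after_update', subnet_id=subnet['id'])


subnet = {'id': 's1', 'gateway_ip': '10.0.0.1'}
update_subnet(subnet, '10.0.0.254')   # fires both events
update_subnet(subnet, None)           # no change, no events
```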
  • neutron/db/flavors_db.py+2 0 modified
    @@ -151,6 +151,7 @@ def _make_flavor_dict(self, flavor_db, fields=None):
             res = {'id': flavor_db['id'],
                    'name': flavor_db['name'],
                    'description': flavor_db['description'],
    +               'service_type': flavor_db['service_type'],
                    'enabled': flavor_db['enabled'],
                    'service_profiles': []}
             if flavor_db.service_profiles:
    @@ -190,6 +191,7 @@ def create_flavor(self, context, flavor):
                 fl_db = Flavor(id=uuidutils.generate_uuid(),
                                name=fl['name'],
                                description=fl['description'],
    +                           service_type=fl['service_type'],
                                enabled=fl['enabled'])
                 context.session.add(fl_db)
             return self._make_flavor_dict(fl_db)
    
  • neutron/db/l3_agentschedulers_db.py+3 1 modified
    @@ -42,7 +42,8 @@
     
     L3_AGENTS_SCHEDULER_OPTS = [
         cfg.StrOpt('router_scheduler_driver',
    -               default='neutron.scheduler.l3_agent_scheduler.ChanceScheduler',
    +               default='neutron.scheduler.l3_agent_scheduler.'
    +                       'LeastRoutersScheduler',
                    help=_('Driver to use for scheduling '
                           'router to a default L3 agent')),
         cfg.BoolOpt('router_auto_schedule', default=True,
    @@ -501,6 +502,7 @@ def get_l3_agent_with_min_routers(self, context, agent_ids):
                 func.count(
                     RouterL3AgentBinding.router_id
                 ).label('count')).outerjoin(RouterL3AgentBinding).group_by(
    +                agents_db.Agent.id,
                     RouterL3AgentBinding.l3_agent_id).order_by('count')
             res = query.filter(agents_db.Agent.id.in_(agent_ids)).first()
             return res[0]
    
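The added `agents_db.Agent.id` in the `group_by` keeps the query valid under strict SQL GROUP BY modes. The selection logic itself, sketched in plain Python (illustrative, not the SQLAlchemy query): count router bindings per candidate agent and pick the least-loaded one.

```python
from collections import Counter


def agent_with_min_routers(agent_ids, bindings):
    """Return the agent in agent_ids bound to the fewest routers."""
    counts = Counter({agent: 0 for agent in agent_ids})
    for b in bindings:
        if b['l3_agent_id'] in counts:
            counts[b['l3_agent_id']] += 1
    # ties are broken arbitrarily, as with ORDER BY count ... LIMIT 1
    return min(counts, key=counts.get)


bindings = [{'l3_agent_id': 'agent-1', 'router_id': 'r1'}]
print(agent_with_min_routers(['agent-1', 'agent-2'], bindings))
```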
  • neutron/db/l3_db.py+24 4 modified
    @@ -181,8 +181,7 @@ def create_router(self, context, router):
                                                 gw_info, router=router_db)
             except Exception:
                 with excutils.save_and_reraise_exception():
    -                LOG.exception(_LE("An exception occurred while creating "
    -                                  "the router: %s"), router)
    +                LOG.debug("Could not update gateway info, deleting router.")
                     self.delete_router(context, router_db.id)
     
             return self._make_router_dict(router_db)
    @@ -851,7 +850,7 @@ def _internal_fip_assoc_data(self, context, fip):
                     if 'id' in fip:
                         data = {'floatingip_id': fip['id'],
                                 'internal_ip': internal_ip_address}
    -                    msg = (_('Floating IP %(floatingip_id) is associated '
    +                    msg = (_('Floating IP %(floatingip_id)s is associated '
                                  'with non-IPv4 address %s(internal_ip)s and '
                                  'therefore cannot be bound.') % data)
                     else:
    @@ -1179,7 +1178,7 @@ def _get_sync_interfaces(self, context, router_ids, device_owners=None):
                 return []
             qry = context.session.query(RouterPort)
             qry = qry.filter(
    -            Router.id.in_(router_ids),
    +            RouterPort.router_id.in_(router_ids),
                 RouterPort.port_type.in_(device_owners)
             )
     
    @@ -1417,11 +1416,32 @@ def _notify_routers_callback(resource, event, trigger, **kwargs):
         l3plugin.notify_routers_updated(context, router_ids)
     
     
    +def _notify_subnet_gateway_ip_update(resource, event, trigger, **kwargs):
    +    l3plugin = manager.NeutronManager.get_service_plugins().get(
    +            constants.L3_ROUTER_NAT)
    +    if not l3plugin:
    +        return
    +    context = kwargs['context']
    +    network_id = kwargs['network_id']
    +    subnet_id = kwargs['subnet_id']
    +    query = context.session.query(models_v2.Port).filter_by(
    +                network_id=network_id,
    +                device_owner=l3_constants.DEVICE_OWNER_ROUTER_GW)
    +    query = query.join(models_v2.Port.fixed_ips).filter(
    +                models_v2.IPAllocation.subnet_id == subnet_id)
    +    router_ids = set(port['device_id'] for port in query)
    +    for router_id in router_ids:
    +        l3plugin.notify_router_updated(context, router_id)
    +
    +
     def subscribe():
         registry.subscribe(
             _prevent_l3_port_delete_callback, resources.PORT, events.BEFORE_DELETE)
         registry.subscribe(
             _notify_routers_callback, resources.PORT, events.AFTER_DELETE)
    +    registry.subscribe(
    +        _notify_subnet_gateway_ip_update, resources.SUBNET_GATEWAY,
    +        events.AFTER_UPDATE)
     
     # NOTE(armax): multiple l3 service plugins (potentially out of tree) inherit
     # from l3_db and may need the callbacks to be processed. Having an implicit
    
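The new `_notify_subnet_gateway_ip_update` callback selects routers whose gateway port has a fixed IP on the updated subnet, deduplicating before notifying. A plain-Python sketch of that selection (list of dicts instead of the SQLAlchemy query; data is illustrative):

```python
ports = [
    {'device_id': 'r1', 'device_owner': 'network:router_gateway',
     'network_id': 'ext-net', 'fixed_ips': [{'subnet_id': 's1'}]},
    # duplicate gateway port for the same router: the set() collapses it
    {'device_id': 'r1', 'device_owner': 'network:router_gateway',
     'network_id': 'ext-net', 'fixed_ips': [{'subnet_id': 's1'}]},
    # wrong device_owner: filtered out
    {'device_id': 'r2', 'device_owner': 'compute:nova',
     'network_id': 'ext-net', 'fixed_ips': [{'subnet_id': 's1'}]},
]


def routers_to_notify(network_id, subnet_id):
    """Router IDs owning a gateway port with an IP on subnet_id."""
    return {p['device_id'] for p in ports
            if p['network_id'] == network_id
            and p['device_owner'] == 'network:router_gateway'
            and any(ip['subnet_id'] == subnet_id for ip in p['fixed_ips'])}
```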
  • neutron/db/l3_dvr_db.py+42 85 modified
    @@ -144,12 +144,34 @@ def _update_router_db(self, context, router_id, data, gw_info):
                 return router_db
     
         def _delete_current_gw_port(self, context, router_id, router, new_network):
    +        """
     +        Overridden here to handle deletion of DVR internal ports.
    +
    +        If there is a valid router update with gateway port to be deleted,
    +        then go ahead and delete the csnat ports and the floatingip
    +        agent gateway port associated with the dvr router.
    +        """
    +
    +        gw_ext_net_id = (
    +            router.gw_port['network_id'] if router.gw_port else None)
    +
             super(L3_NAT_with_dvr_db_mixin,
                   self)._delete_current_gw_port(context, router_id,
                                                 router, new_network)
    -        if router.extra_attributes.distributed:
    +        if (is_distributed_router(router) and
    +            gw_ext_net_id != new_network):
                 self.delete_csnat_router_interface_ports(
                     context.elevated(), router)
    +            # NOTE(Swami): Delete the Floatingip agent gateway port
    +            # on all hosts when it is the last gateway port in the
    +            # given external network.
    +            filters = {'network_id': [gw_ext_net_id],
    +                       'device_owner': [l3_const.DEVICE_OWNER_ROUTER_GW]}
    +            ext_net_gw_ports = self._core_plugin.get_ports(
    +                context.elevated(), filters)
    +            if not ext_net_gw_ports:
    +                self.delete_floatingip_agent_gateway_port(
    +                    context.elevated(), None, gw_ext_net_id)
     
         def _create_gw_port(self, context, router_id, router, new_network,
                             ext_ips):
    @@ -184,25 +206,12 @@ def _get_interface_ports_for_network(self, context, network_id):
             )
     
         def _update_fip_assoc(self, context, fip, floatingip_db, external_port):
    -        """Override to create and delete floating agent gw port for DVR.
    +        """Override to create floating agent gw port for DVR.
     
             Floating IP Agent gateway port will be created when a
             floatingIP association happens.
    -        Floating IP Agent gateway port will be deleted when a
    -        floatingIP disassociation happens.
             """
             fip_port = fip.get('port_id')
    -        unused_fip_agent_gw_port = (
    -            fip_port is None and floatingip_db['fixed_port_id'])
    -        if unused_fip_agent_gw_port and floatingip_db.get('router_id'):
    -            admin_ctx = context.elevated()
    -            router_dict = self.get_router(
    -                admin_ctx, floatingip_db['router_id'])
    -            # Check if distributed router and then delete the
    -            # FloatingIP agent gateway port
    -            if router_dict.get('distributed'):
    -                self._clear_unused_fip_agent_gw_port(
    -                    admin_ctx, floatingip_db)
             super(L3_NAT_with_dvr_db_mixin, self)._update_fip_assoc(
                 context, fip, floatingip_db, external_port)
             associate_fip = fip_port and floatingip_db['id']
    @@ -227,54 +236,12 @@ def _update_fip_assoc(self, context, fip, floatingip_db, external_port):
                                 vm_hostid))
                         LOG.debug("FIP Agent gateway port: %s", fip_agent_port)
     
    -    def _clear_unused_fip_agent_gw_port(
    -            self, context, floatingip_db):
    -        """Helper function to check for fip agent gw port and delete.
    -
    -        This function checks on compute nodes to make sure if there
    -        are any VMs using the FIP agent gateway port. If no VMs are
    -        using the FIP agent gateway port, it will go ahead and delete
    -        the FIP agent gateway port. If even a single VM is using the
    -        port it will not delete.
    -        """
    -        fip_hostid = self._get_vm_port_hostid(
    -            context, floatingip_db['fixed_port_id'])
    -        if fip_hostid and self._check_fips_availability_on_host_ext_net(
    -            context, fip_hostid, floatingip_db['floating_network_id']):
    -            LOG.debug('Deleting the Agent GW Port for ext-net: '
    -                      '%s', floatingip_db['floating_network_id'])
    -            self._delete_floatingip_agent_gateway_port(
    -                context, fip_hostid, floatingip_db['floating_network_id'])
    -
    -    def delete_floatingip(self, context, id):
    -        floatingip = self._get_floatingip(context, id)
    -        if floatingip['fixed_port_id']:
    -            admin_ctx = context.elevated()
    -            self._clear_unused_fip_agent_gw_port(
    -                admin_ctx, floatingip)
    -        super(L3_NAT_with_dvr_db_mixin,
    -              self).delete_floatingip(context, id)
    -
         def _get_floatingip_on_port(self, context, port_id=None):
             """Helper function to retrieve the fip associated with port."""
             fip_qry = context.session.query(l3_db.FloatingIP)
             floating_ip = fip_qry.filter_by(fixed_port_id=port_id)
             return floating_ip.first()
     
    -    def disassociate_floatingips(self, context, port_id, do_notify=True):
    -        """Override disassociate floatingips to delete fip agent gw port."""
    -        with context.session.begin(subtransactions=True):
    -            fip = self._get_floatingip_on_port(
    -                context, port_id=port_id)
    -            if fip:
    -                admin_ctx = context.elevated()
    -                self._clear_unused_fip_agent_gw_port(
    -                    admin_ctx, fip)
    -        return super(L3_NAT_with_dvr_db_mixin,
    -                     self).disassociate_floatingips(context,
    -                                                    port_id,
    -                                                    do_notify=do_notify)
    -
         def add_router_interface(self, context, router_id, interface_info):
             add_by_port, add_by_sub = self._validate_interface_info(interface_info)
             router = self._get_router(context, router_id)
    @@ -291,6 +258,20 @@ def add_router_interface(self, context, router_id, interface_info):
                         context, router, interface_info['subnet_id'], device_owner)
     
             if new_port:
    +            if router.extra_attributes.distributed and router.gw_port:
    +                try:
    +                    admin_context = context.elevated()
    +                    self._add_csnat_router_interface_port(
    +                        admin_context, router, port['network_id'],
    +                        port['fixed_ips'][-1]['subnet_id'])
    +                except Exception:
    +                    with excutils.save_and_reraise_exception():
    +                        # we need to preserve the original state prior
    +                        # the request by rolling back the port creation
    +                        # that led to new_port=True
    +                        self._core_plugin.delete_port(
    +                            admin_context, port['id'])
    +
                 with context.session.begin(subtransactions=True):
                     router_port = l3_db.RouterPort(
                         port_id=port['id'],
    @@ -299,11 +280,6 @@ def add_router_interface(self, context, router_id, interface_info):
                     )
                     context.session.add(router_port)
     
    -            if router.extra_attributes.distributed and router.gw_port:
    -                self._add_csnat_router_interface_port(
    -                    context.elevated(), router, port['network_id'],
    -                    port['fixed_ips'][-1]['subnet_id'])
    -
             router_interface_info = self._make_router_interface_info(
                 router_id, port['tenant_id'], port['id'], subnets[-1]['id'],
                 [subnet['id'] for subnet in subnets])
    @@ -511,27 +487,7 @@ def _get_router_ids(self, context):
             query = self._model_query(context, l3_db.Router.id)
             return [row[0] for row in query]
     
    -    def _check_fips_availability_on_host_ext_net(
    -        self, context, host_id, fip_ext_net_id):
    -        """Query all floating_ips and filter on host and external net."""
    -        fip_count_on_host = 0
    -        with context.session.begin(subtransactions=True):
    -            router_ids = self._get_router_ids(context)
    -            floating_ips = self._get_sync_floating_ips(context, router_ids)
    -            # Check for the active floatingip in the host
    -            for fip in floating_ips:
    -                f_host = self._get_vm_port_hostid(context, fip['port_id'])
    -                if (f_host == host_id and
    -                    (fip['floating_network_id'] == fip_ext_net_id)):
    -                    fip_count_on_host += 1
    -            # If fip_count greater than 1 or equal to zero no action taken
    -            # if the fip_count is equal to 1, then this would be last active
    -            # fip in the host, so the agent gateway port can be deleted.
    -            if fip_count_on_host == 1:
    -                return True
    -            return False
    -
    -    def _delete_floatingip_agent_gateway_port(
    +    def delete_floatingip_agent_gateway_port(
             self, context, host_id, ext_net_id):
             """Function to delete FIP gateway port with given ext_net_id."""
             # delete any fip agent gw port
    @@ -540,9 +496,10 @@ def _delete_floatingip_agent_gateway_port(
             ports = self._core_plugin.get_ports(context,
                                                 filters=device_filter)
             for p in ports:
    -            if self._get_vm_port_hostid(context, p['id'], p) == host_id:
    +            if not host_id or p[portbindings.HOST_ID] == host_id:
                     self._core_plugin.ipam.delete_port(context, p['id'])
    -                return
    +                if host_id:
    +                    return
     
         def create_fip_agent_gw_port_if_not_exists(
             self, context, network_id, host):
    
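The renamed `delete_floatingip_agent_gateway_port` gains a dual mode: with a `host_id` it deletes the first matching agent gateway port and stops; with `host_id=None` it sweeps every agent gateway port on the external network. A minimal sketch of that loop (simplified port dicts, not the plugin API):

```python
def ports_to_delete(ports, host_id):
    """IDs that the revised deletion loop would remove."""
    deleted = []
    for p in ports:
        if not host_id or p['host'] == host_id:
            deleted.append(p['id'])
            if host_id:
                # one agent gateway port per host: stop after the match
                break
    return deleted


agent_gw_ports = [{'id': 'p1', 'host': 'host-a'},
                  {'id': 'p2', 'host': 'host-b'}]
```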
  • neutron/db/l3_dvrscheduler_db.py+43 8 modified
    @@ -106,7 +106,6 @@ def dvr_update_router_addvm(self, context, port):
                 filter_sub = {'fixed_ips': {'subnet_id': [subnet]},
                               'device_owner':
                               [n_const.DEVICE_OWNER_DVR_INTERFACE]}
    -            router_id = None
                 ports = self._core_plugin.get_ports(context, filters=filter_sub)
                 for port in ports:
                     router_id = port['device_id']
    @@ -115,8 +114,7 @@ def dvr_update_router_addvm(self, context, port):
                         payload = {'subnet_id': subnet}
                         self.l3_rpc_notifier.routers_updated(
                             context, [router_id], None, payload)
    -                    break
    -            LOG.debug('DVR: dvr_update_router_addvm %s ', router_id)
    +                    LOG.debug('DVR: dvr_update_router_addvm %s ', router_id)
     
         def get_dvr_routers_by_portid(self, context, port_id):
             """Gets the dvr routers on vmport subnets."""
    @@ -161,12 +159,17 @@ def check_ports_on_host_and_subnet(self, context, host,
                     return True
             return False
     
    -    def dvr_deletens_if_no_port(self, context, port_id):
    +    def dvr_deletens_if_no_port(self, context, port_id, port_host=None):
             """Delete the DVR namespace if no dvr serviced port exists."""
             admin_context = context.elevated()
             router_ids = self.get_dvr_routers_by_portid(admin_context, port_id)
    -        port_host = ml2_db.get_port_binding_host(admin_context.session,
    -                                                 port_id)
    +        if not port_host:
    +            port_host = ml2_db.get_port_binding_host(admin_context.session,
    +                                                     port_id)
    +            if not port_host:
    +                LOG.debug('Host name not found for port %s', port_id)
    +                return []
    +
             if not router_ids:
                 LOG.debug('No namespaces available for this DVR port %(port)s '
                           'on host %(host)s', {'port': port_id,
    @@ -458,16 +461,20 @@ def create_router_to_agent_binding(self, context, agent, router):
                       context, agent, router)
     
         def remove_router_from_l3_agent(self, context, agent_id, router_id):
    +        binding = None
             router = self.get_router(context, router_id)
             if router['external_gateway_info'] and router.get('distributed'):
                 binding = self.unbind_snat(context, router_id, agent_id=agent_id)
    +            # binding only exists when agent mode is dvr_snat
                 if binding:
                     notification_not_sent = self.unbind_router_servicenode(context,
                                                  router_id, binding)
                     if notification_not_sent:
                         self.l3_rpc_notifier.routers_updated(
                             context, [router_id], schedule_routers=False)
    -        else:
    +
    +        # Below Needs to be done when agent mode is legacy or dvr.
    +        if not binding:
                 super(L3_DVRsch_db_mixin,
                       self).remove_router_from_l3_agent(
                         context, agent_id, router_id)
    @@ -504,9 +511,37 @@ def _notify_port_delete(event, resource, trigger, **kwargs):
                 context, router['agent_id'], router['router_id'])
     
     
    +def _notify_l3_agent_port_update(resource, event, trigger, **kwargs):
    +    new_port = kwargs.get('port')
    +    original_port = kwargs.get('original_port')
    +
    +    if new_port and original_port:
    +        original_device_owner = original_port.get('device_owner', '')
    +        if (original_device_owner.startswith('compute') and
    +            not new_port.get('device_owner')):
    +            l3plugin = manager.NeutronManager.get_service_plugins().get(
    +                service_constants.L3_ROUTER_NAT)
    +            context = kwargs['context']
    +            removed_routers = l3plugin.dvr_deletens_if_no_port(
    +                context,
    +                original_port['id'],
    +                port_host=original_port['binding:host_id'])
    +            if removed_routers:
    +                removed_router_args = {
    +                    'context': context,
    +                    'port': original_port,
    +                    'removed_routers': removed_routers,
    +                }
    +                _notify_port_delete(
    +                    event, resource, trigger, **removed_router_args)
    +            return
    +
    +    _notify_l3_agent_new_port(resource, event, trigger, **kwargs)
    +
    +
     def subscribe():
         registry.subscribe(
    -        _notify_l3_agent_new_port, resources.PORT, events.AFTER_UPDATE)
    +        _notify_l3_agent_port_update, resources.PORT, events.AFTER_UPDATE)
         registry.subscribe(
             _notify_l3_agent_new_port, resources.PORT, events.AFTER_CREATE)
         registry.subscribe(
    
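The new `_notify_l3_agent_port_update` dispatcher treats an update that clears a `compute`-owned `device_owner` as a VM removal (triggering DVR namespace cleanup) and falls through to the new-port handler otherwise. The branch condition, sketched standalone:

```python
def classify_port_update(original_port, new_port):
    """Mirror the dispatch in _notify_l3_agent_port_update (sketch)."""
    if new_port and original_port:
        if (original_port.get('device_owner', '').startswith('compute') and
                not new_port.get('device_owner')):
            # a VM port lost its owner: candidate for namespace cleanup
            return 'vm_removed'
    # everything else is handled as a (possibly) new port
    return 'new_port'
```

Note the check reads `device_owner` transitions, the same field CVE-2015-5240 abuses; here it only widens cleanup, while the security fix constrains who may set the `network:` prefix.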
  • neutron/db/migration/alembic_migrations/cisco_init_ops.py+1 3 modified
    @@ -18,8 +18,6 @@
     from alembic import op
     import sqlalchemy as sa
     
    -from neutron.plugins.cisco.common import cisco_constants
    -
     segment_type = sa.Enum('vlan', 'overlay', 'trunk', 'multi-segment',
                            name='segment_type')
     profile_type = sa.Enum('network', 'policy', name='profile_type')
    @@ -93,7 +91,7 @@ def upgrade():
             'cisco_n1kv_profile_bindings',
             sa.Column('profile_type', profile_type, nullable=True),
             sa.Column('tenant_id', sa.String(length=36), nullable=False,
    -                  server_default=cisco_constants.TENANT_ID_NOT_SET),
    +                  server_default='TENANT_ID_NOT_SET'),
             sa.Column('profile_id', sa.String(length=36), nullable=False),
             sa.PrimaryKeyConstraint('tenant_id', 'profile_id'))
     
    
  • neutron/db/migration/alembic_migrations/env.py+16 7 modified
    @@ -21,6 +21,7 @@
     from sqlalchemy import event
     
     from neutron.db.migration.alembic_migrations import external
    +from neutron.db.migration import autogen
     from neutron.db.migration.models import head  # noqa
     from neutron.db import model_base
     
    @@ -58,9 +59,13 @@ def set_mysql_engine():
                         model_base.BASEV2.__table_args__['mysql_engine'])
     
     
    -def include_object(object, name, type_, reflected, compare_to):
    +def include_object(object_, name, type_, reflected, compare_to):
         if type_ == 'table' and name in external.TABLES:
             return False
    +    elif type_ == 'index' and reflected and name.startswith("idx_autoinc_"):
    +        # skip indexes created by SQLAlchemy autoincrement=True
    +        # on composite PK integer columns
    +        return False
         else:
             return True
     
    @@ -103,21 +108,25 @@ def run_migrations_online():
     
         """
         set_mysql_engine()
    -    engine = session.create_engine(neutron_config.database.connection)
    -
    -    connection = engine.connect()
    +    connection = config.attributes.get('connection')
    +    new_engine = connection is None
    +    if new_engine:
    +        engine = session.create_engine(neutron_config.database.connection)
    +        connection = engine.connect()
         context.configure(
             connection=connection,
             target_metadata=target_metadata,
    -        include_object=include_object
    +        include_object=include_object,
    +        process_revision_directives=autogen.process_revision_directives
         )
     
         try:
             with context.begin_transaction():
                 context.run_migrations()
         finally:
    -        connection.close()
    -        engine.dispose()
    +        if new_engine:
    +            connection.close()
    +            engine.dispose()
     
     
     if context.is_offline_mode():
    
  • neutron/db/migration/alembic_migrations/external.py+3 0 modified
    @@ -48,6 +48,9 @@
         'ml2_nexus_vxlan_allocations',
         'ml2_nexus_vxlan_mcast_groups',
         'ml2_ucsm_port_profiles',
    +    'cisco_hosting_devices',
    +    'cisco_port_mappings',
    +    'cisco_router_mappings',
     ]
     
     # VMware-NSX models moved to openstack/vmware-nsx
    
  • neutron/db/migration/alembic_migrations/versions/HEADS+1 1 modified
    @@ -1,2 +1,2 @@
    -11926bcfe72d
     34af2b5c5a59
    +4af11ca47297
    
  • neutron/db/migration/alembic_migrations/versions/liberty/contract/4af11ca47297_drop_cisco_monolithic_tables.py+44 0 added
    @@ -0,0 +1,44 @@
    +# Copyright 2015 Cisco Systems, Inc.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +#
    +
    +"""Drop cisco monolithic tables
    +
    +Revision ID: 4af11ca47297
    +Revises: 11926bcfe72d
    +Create Date: 2015-08-13 08:01:19.709839
    +
    +"""
    +
    +# revision identifiers, used by Alembic.
    +revision = '4af11ca47297'
    +down_revision = '11926bcfe72d'
    +
    +from alembic import op
    +
    +
    +def upgrade():
    +    op.drop_table('cisco_n1kv_port_bindings')
    +    op.drop_table('cisco_n1kv_network_bindings')
    +    op.drop_table('cisco_n1kv_multi_segments')
    +    op.drop_table('cisco_provider_networks')
    +    op.drop_table('cisco_n1kv_trunk_segments')
    +    op.drop_table('cisco_n1kv_vmnetworks')
    +    op.drop_table('cisco_n1kv_profile_bindings')
    +    op.drop_table('cisco_qos_policies')
    +    op.drop_table('cisco_credentials')
    +    op.drop_table('cisco_n1kv_vlan_allocations')
    +    op.drop_table('cisco_n1kv_vxlan_allocations')
    +    op.drop_table('cisco_network_profiles')
    +    op.drop_table('cisco_policy_profiles')
    
  • neutron/db/migration/autogen.py+123 0 added
    @@ -0,0 +1,123 @@
    +# Copyright (c) 2015 Red Hat
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from alembic.operations import ops
    +from alembic.util import Dispatcher
    +from alembic.util import rev_id as new_rev_id
    +
    +from neutron.db.migration import cli
    +
    +_ec_dispatcher = Dispatcher()
    +
    +
    +def process_revision_directives(context, revision, directives):
    +    if cli._use_separate_migration_branches(context.config):
    +        directives[:] = [
    +            directive for directive in _assign_directives(context, directives)
    +        ]
    +
    +
    +def _assign_directives(context, directives, phase=None):
    +    for directive in directives:
    +        decider = _ec_dispatcher.dispatch(directive)
    +        if phase is None:
    +            phases = cli.MIGRATION_BRANCHES
    +        else:
    +            phases = (phase,)
    +        for phase in phases:
    +            decided = decider(context, directive, phase)
    +            if decided:
    +                yield decided
    +
    +
    +@_ec_dispatcher.dispatch_for(ops.MigrationScript)
    +def _migration_script_ops(context, directive, phase):
    +    """Generate a new ops.MigrationScript() for a given phase.
    +
    +    E.g. given an ops.MigrationScript() directive from a vanilla autogenerate
    +    and an expand/contract phase name, produce a new ops.MigrationScript()
    +    which contains only those sub-directives appropriate to "expand" or
    +    "contract".  Also ensure that the branch directory exists and that
    +    the correct branch labels/depends_on/head revision are set up.
    +
    +    """
    +    version_path = cli._get_version_branch_path(context.config, phase)
    +    autogen_kwargs = {}
    +    cli._check_bootstrap_new_branch(phase, version_path, autogen_kwargs)
    +
    +    op = ops.MigrationScript(
    +        new_rev_id(),
    +        ops.UpgradeOps(ops=[
    +            d for d in _assign_directives(
    +                context, directive.upgrade_ops.ops, phase)
    +        ]),
    +        ops.DowngradeOps(ops=[]),
    +        message=directive.message,
    +        **autogen_kwargs
    +    )
    +
    +    if not op.upgrade_ops.is_empty():
    +        return op
    +
    +
    +@_ec_dispatcher.dispatch_for(ops.AddConstraintOp)
    +@_ec_dispatcher.dispatch_for(ops.CreateIndexOp)
    +@_ec_dispatcher.dispatch_for(ops.CreateTableOp)
    +@_ec_dispatcher.dispatch_for(ops.AddColumnOp)
    +def _expands(context, directive, phase):
    +    if phase == 'expand':
    +        return directive
    +    else:
    +        return None
    +
    +
    +@_ec_dispatcher.dispatch_for(ops.DropConstraintOp)
    +@_ec_dispatcher.dispatch_for(ops.DropIndexOp)
    +@_ec_dispatcher.dispatch_for(ops.DropTableOp)
    +@_ec_dispatcher.dispatch_for(ops.DropColumnOp)
    +def _contracts(context, directive, phase):
    +    if phase == 'contract':
    +        return directive
    +    else:
    +        return None
    +
    +
    +@_ec_dispatcher.dispatch_for(ops.AlterColumnOp)
    +def _alter_column(context, directive, phase):
    +    is_expand = phase == 'expand'
    +
    +    if is_expand and (
    +        directive.modify_nullable is True
    +    ):
    +        return directive
    +    elif not is_expand and (
    +        directive.modify_nullable is False
    +    ):
    +        return directive
    +    else:
    +        raise NotImplementedError(
    +            "Don't know if operation is an expand or "
    +            "contract at the moment: %s" % directive)
    +
    +
    +@_ec_dispatcher.dispatch_for(ops.ModifyTableOps)
    +def _modify_table_ops(context, directive, phase):
    +    op = ops.ModifyTableOps(
    +        directive.table_name,
    +        ops=[
    +            d for d in _assign_directives(context, directive.ops, phase)
    +        ],
    +        schema=directive.schema)
    +    if not op.is_empty():
    +        return op
    
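The dispatcher in `autogen.py` above sorts autogenerated Alembic directives into an "expand" branch (additive DDL) and a "contract" branch (destructive DDL). A minimal standalone sketch of that classification, using simplified operation names rather than the real `alembic.operations.ops` classes:

```python
# Simplified stand-in for the expand/contract dispatcher above: additive
# DDL belongs to the "expand" branch, destructive DDL to "contract".
# The operation names here are illustrative, not the real Alembic classes.
EXPAND_OPS = {'add_column', 'create_table', 'create_index', 'add_constraint'}
CONTRACT_OPS = {'drop_column', 'drop_table', 'drop_index', 'drop_constraint'}


def assign_phase(op_name):
    """Return the migration branch a DDL operation belongs to."""
    if op_name in EXPAND_OPS:
        return 'expand'
    if op_name in CONTRACT_OPS:
        return 'contract'
    # Mirrors the NotImplementedError raised above for ambiguous ALTERs.
    raise NotImplementedError(
        "Don't know if operation is an expand or contract: %s" % op_name)
```

Ambiguous operations (like a generic column alteration) fail loudly rather than being guessed into the wrong branch, matching the `_alter_column` behaviour in the diff.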
  • neutron/db/migration/cli.py+21 36 modified
    @@ -47,29 +47,22 @@
     VALID_SERVICES = ['fwaas', 'lbaas', 'vpnaas']
     INSTALLED_SERVICES = [service_ for service_ in VALID_SERVICES
                           if 'neutron-%s' % service_ in migration_entrypoints]
    -INSTALLED_SERVICE_PROJECTS = ['neutron-%s' % service_
    -                              for service_ in INSTALLED_SERVICES]
    -INSTALLED_SUBPROJECTS = [project_ for project_ in migration_entrypoints
    -                         if project_ not in INSTALLED_SERVICE_PROJECTS]
    -
    -service_help = (
    -    _("Can be one of '%s'.") % "', '".join(INSTALLED_SERVICES)
    -    if INSTALLED_SERVICES else
    -    _("(No services are currently installed).")
    -)
    +INSTALLED_SUBPROJECTS = [project_ for project_ in migration_entrypoints]
     
     _core_opts = [
         cfg.StrOpt('core_plugin',
                    default='',
                    help=_('Neutron plugin provider module')),
         cfg.StrOpt('service',
                    choices=INSTALLED_SERVICES,
    -               help=(_("The advanced service to execute the command against. ")
    -                     + service_help)),
    +               help=(_("(Deprecated. Use '--subproject neutron-SERVICE' "
    +                       "instead.) The advanced service to execute the "
    +                       "command against."))),
         cfg.StrOpt('subproject',
                    choices=INSTALLED_SUBPROJECTS,
                    help=(_("The subproject to execute the command against. "
    -                       "Can be one of %s.") % INSTALLED_SUBPROJECTS)),
    +                       "Can be one of: '%s'.")
    +                     % "', '".join(INSTALLED_SUBPROJECTS))),
         cfg.BoolOpt('split_branches',
                     default=False,
                     help=_("Enforce using split branches file structure."))
    @@ -193,29 +186,21 @@ def _get_branch_head(branch):
         return '%s@head' % branch
     
     
    -def do_revision(config, cmd):
    -    '''Generate new revision files, one per branch.'''
    -    addn_kwargs = {
    -        'message': CONF.command.message,
    -        'autogenerate': CONF.command.autogenerate,
    -        'sql': CONF.command.sql,
    -    }
    -
    -    if _use_separate_migration_branches(config):
    -        for branch in MIGRATION_BRANCHES:
    -            version_path = _get_version_branch_path(config, branch)
    -            addn_kwargs['version_path'] = version_path
    -            addn_kwargs['head'] = _get_branch_head(branch)
    +def _check_bootstrap_new_branch(branch, version_path, addn_kwargs):
    +    addn_kwargs['version_path'] = version_path
    +    addn_kwargs['head'] = _get_branch_head(branch)
    +    if not os.path.exists(version_path):
    +        # Bootstrap initial directory structure
    +        utils.ensure_dir(version_path)
    +        addn_kwargs['branch_label'] = branch
     
    -            if not os.path.exists(version_path):
    -                # Bootstrap initial directory structure
    -                utils.ensure_dir(version_path)
    -                # Mark the very first revision in the new branch with its label
    -                addn_kwargs['branch_label'] = branch
     
    -            do_alembic_command(config, cmd, **addn_kwargs)
    -    else:
    -        do_alembic_command(config, cmd, **addn_kwargs)
    +def do_revision(config, cmd):
    +    '''Generate new revision files, one per branch.'''
    +    do_alembic_command(config, cmd,
    +                       message=CONF.command.message,
    +                       autogenerate=CONF.command.autogenerate,
    +                       sql=CONF.command.sql)
         update_heads_file(config)
     
     
    @@ -478,8 +463,8 @@ def get_alembic_configs():
         # Get the script locations for the specified or installed projects.
         # Which projects to get script locations for is determined by the CLI
         # options as follows:
    -    #     --service X       # only subproject neutron-X
    -    #     --subproject Y    # only subproject Y
    +    #     --service X       # only subproject neutron-X (deprecated)
    +    #     --subproject Y    # only subproject Y (where Y can be neutron)
         #     (none specified)  # neutron and all installed subprojects
         script_locations = {}
         if CONF.service:
    
  • neutron/db/migration/__init__.py+12 0 modified
    @@ -15,12 +15,24 @@
     import contextlib
     import functools
     
    +import alembic
     from alembic import context
     from alembic import op
     import sqlalchemy as sa
     from sqlalchemy.engine import reflection
     
     
    +CREATION_OPERATIONS = (sa.sql.ddl.AddConstraint,
    +                       sa.sql.ddl.CreateIndex,
    +                       sa.sql.ddl.CreateTable,
    +                       sa.sql.ddl.CreateColumn,
    +                       )
    +DROP_OPERATIONS = (sa.sql.ddl.DropConstraint,
    +                   sa.sql.ddl.DropIndex,
    +                   sa.sql.ddl.DropTable,
    +                   alembic.ddl.base.DropColumn)
    +
    +
     def skip_if_offline(func):
         """Decorator for skipping migrations in offline mode."""
         @functools.wraps(func)
    
  • neutron/db/migration/models/head.py+0 3 modified
    @@ -50,9 +50,6 @@
     from neutron.plugins.bigswitch.db import consistency_db  # noqa
     from neutron.plugins.bigswitch import routerrule_db  # noqa
     from neutron.plugins.brocade.db import models as brocade_models  # noqa
    -from neutron.plugins.cisco.db.l3 import l3_models  # noqa
    -from neutron.plugins.cisco.db import n1kv_models_v2  # noqa
    -from neutron.plugins.cisco.db import network_models_v2  # noqa
     from neutron.plugins.ml2.drivers.brocade.db import (  # noqa
         models as ml2_brocade_models)
     from neutron.plugins.ml2.drivers import type_flat  # noqa
    
  • neutron/db/portsecurity_db.py+2 2 modified
    @@ -13,6 +13,7 @@
     #    under the License.
     
     from neutron.api.v2 import attributes as attrs
    +from neutron.common import utils
     from neutron.db import db_base_plugin_v2
     from neutron.db import portsecurity_db_common
     from neutron.extensions import portsecurity as psec
    @@ -40,8 +41,7 @@ def _determine_port_security_and_has_ip(self, context, port):
             """
             has_ip = self._ip_on_port(port)
             # we don't apply security groups for dhcp, router
    -        if (port.get('device_owner') and
    -                port['device_owner'].startswith('network:')):
    +        if port.get('device_owner') and utils.is_port_trusted(port):
                 return (False, has_ip)
     
             if attrs.is_attr_set(port.get(psec.PORTSECURITY)):
    
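This is the heart of the CVE-2015-5240 fix: the inline `device_owner.startswith('network:')` test is replaced by the shared helper `neutron.common.utils.is_port_trusted`, so every caller applies the same trust check at the same point. A hedged sketch of what such a helper does (the constant name and exact prefix set are assumptions, not the real Neutron definitions):

```python
# Sketch of a trusted-port check in the spirit of
# neutron.common.utils.is_port_trusted; Neutron's actual prefix list
# may differ (this is an assumption for illustration).
DEVICE_OWNER_NETWORK_PREFIX = 'network:'


def is_port_trusted(port):
    """Return True for ports owned by network infrastructure.

    Trusted ports (DHCP agents, routers) skip security-group and
    anti-spoofing processing. CVE-2015-5240 arose because a tenant
    could race to set such an owner on an ordinary port before the
    security-group rules were applied.
    """
    return port['device_owner'].startswith(DEVICE_OWNER_NETWORK_PREFIX)
```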
  • neutron/db/quota/api.py+12 28 modified
    @@ -19,6 +19,7 @@
     from sqlalchemy.orm import exc as orm_exc
     from sqlalchemy import sql
     
    +from neutron.db import api as db_api
     from neutron.db import common_db_mixin as common_db_api
     from neutron.db.quota import models as quota_models
     
    @@ -29,12 +30,8 @@ def utcnow():
     
     
     class QuotaUsageInfo(collections.namedtuple(
    -    'QuotaUsageInfo', ['resource', 'tenant_id', 'used', 'reserved', 'dirty'])):
    -
    -    @property
    -    def total(self):
    -        """Total resource usage (reserved and used)."""
    -        return self.reserved + self.used
    +    'QuotaUsageInfo', ['resource', 'tenant_id', 'used', 'dirty'])):
    +    """Information about resource quota usage."""
     
     
     class ReservationInfo(collections.namedtuple(
    @@ -66,7 +63,6 @@ def get_quota_usage_by_resource_and_tenant(context, resource, tenant_id,
         return QuotaUsageInfo(result.resource,
                               result.tenant_id,
                               result.in_use,
    -                          result.reserved,
                               result.dirty)
     
     
    @@ -76,7 +72,6 @@ def get_quota_usage_by_resource(context, resource):
         return [QuotaUsageInfo(item.resource,
                                item.tenant_id,
                                item.in_use,
    -                           item.reserved,
                                item.dirty) for item in query]
     
     
    @@ -86,12 +81,11 @@ def get_quota_usage_by_tenant_id(context, tenant_id):
         return [QuotaUsageInfo(item.resource,
                                item.tenant_id,
                                item.in_use,
    -                           item.reserved,
                                item.dirty) for item in query]
     
     
     def set_quota_usage(context, resource, tenant_id,
    -                    in_use=None, reserved=None, delta=False):
    +                    in_use=None, delta=False):
         """Set resource quota usage.
     
         :param context: instance of neutron context with db session
    @@ -100,15 +94,14 @@ def set_quota_usage(context, resource, tenant_id,
                           being set
         :param in_use: integer specifying the new quantity of used resources,
                        or a delta to apply to current used resource
    -    :param reserved: integer specifying the new quantity of reserved resources,
    -                     or a delta to apply to current reserved resources
    -    :param delta: Specififies whether in_use or reserved are absolute numbers
    -                  or deltas (default to False)
    +    :param delta: Specifies whether in_use is an absolute number
    +                  or a delta (default to False)
         """
    -    query = common_db_api.model_query(context, quota_models.QuotaUsage)
    -    query = query.filter_by(resource=resource).filter_by(tenant_id=tenant_id)
    -    usage_data = query.first()
    -    with context.session.begin(subtransactions=True):
    +    with db_api.autonested_transaction(context.session):
    +        query = common_db_api.model_query(context, quota_models.QuotaUsage)
    +        query = query.filter_by(resource=resource).filter_by(
    +            tenant_id=tenant_id)
    +        usage_data = query.first()
             if not usage_data:
                 # Must create entry
                 usage_data = quota_models.QuotaUsage(
    @@ -120,16 +113,11 @@ def set_quota_usage(context, resource, tenant_id,
                 if delta:
                     in_use = usage_data.in_use + in_use
                 usage_data.in_use = in_use
    -        if reserved is not None:
    -            if delta:
    -                reserved = usage_data.reserved + reserved
    -            usage_data.reserved = reserved
             # After an explicit update the dirty bit should always be reset
             usage_data.dirty = False
         return QuotaUsageInfo(usage_data.resource,
                               usage_data.tenant_id,
                               usage_data.in_use,
    -                          usage_data.reserved,
                               usage_data.dirty)
     
     
    @@ -188,10 +176,6 @@ def create_reservation(context, tenant_id, deltas, expiration=None):
                     quota_models.ResourceDelta(resource=resource,
                                                amount=delta,
                                                reservation=resv))
    -        # quota_usage for all resources involved in this reservation must
    -        # be marked as dirty
    -        set_resources_quota_usage_dirty(
    -            context, deltas.keys(), tenant_id)
         return ReservationInfo(resv['id'],
                                resv['tenant_id'],
                                resv['expiration'],
    @@ -263,7 +247,7 @@ def get_reservations_for_resources(context, tenant_id, resources,
             quota_models.ResourceDelta.resource,
             quota_models.Reservation.expiration)
         return dict((resource, total_reserved)
    -           for (resource, exp, total_reserved) in resv_query)
    +            for (resource, exp, total_reserved) in resv_query)
     
     
     def remove_expired_reservations(context, tenant_id=None):
    
  • neutron/db/quota/driver.py+22 32 modified
    @@ -126,29 +126,18 @@ def _get_quotas(self, context, tenant_id, resources):
     
             return dict((k, v) for k, v in quotas.items())
     
    -    def _handle_expired_reservations(self, context, tenant_id,
    -                                     resource, expired_amount):
    -        LOG.debug(("Adjusting usage for resource %(resource)s: "
    -                   "removing %(expired)d reserved items"),
    -                  {'resource': resource,
    -                   'expired': expired_amount})
    -        # TODO(salv-orlando): It should be possible to do this
    -        # operation for all resources with a single query.
    -        # Update reservation usage
    -        quota_api.set_quota_usage(
    -            context,
    -            resource,
    -            tenant_id,
    -            reserved=-expired_amount,
    -            delta=True)
    +    def _handle_expired_reservations(self, context, tenant_id):
    +        LOG.debug("Deleting expired reservations for tenant:%s" % tenant_id)
             # Delete expired reservations (we don't want them to accrue
             # in the database)
             quota_api.remove_expired_reservations(
                 context, tenant_id=tenant_id)
     
         @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES,
    +                               retry_interval=0.1,
    +                               inc_retry_interval=True,
                                    retry_on_request=True,
    -                               retry_on_deadlock=True)
    +                               exception_checker=db_api.is_deadlock)
         def make_reservation(self, context, tenant_id, resources, deltas, plugin):
             # Lock current reservation table
             # NOTE(salv-orlando): This routine uses DB write locks.
    @@ -163,7 +152,21 @@ def make_reservation(self, context, tenant_id, resources, deltas, plugin):
             # locks should be ok to use when support for sending "hotspot" writes
             # to a single node will be avaialable.
             requested_resources = deltas.keys()
    -        with context.session.begin():
    +        with db_api.autonested_transaction(context.session):
    +            # get_tenant_quotes needs in input a dictionary mapping resource
    +            # name to BaseResosurce instances so that the default quota can be
    +            # retrieved
    +            current_limits = self.get_tenant_quotas(
    +                context, resources, tenant_id)
    +            unlimited_resources = set([resource for (resource, limit) in
    +                                       current_limits.items() if limit < 0])
    +            # Do not even bother counting resources and calculating headroom
    +            # for resources with unlimited quota
    +            LOG.debug(("Resources %s have unlimited quota limit. It is not "
    +                       "required to calculated headroom "),
    +                      ",".join(unlimited_resources))
    +            requested_resources = (set(requested_resources) -
    +                                   unlimited_resources)
                 # Gather current usage information
                 # TODO(salv-orlando): calling count() for every resource triggers
                 # multiple queries on quota usage. This should be improved, however
    @@ -173,13 +176,8 @@ def make_reservation(self, context, tenant_id, resources, deltas, plugin):
                 # instances
                 current_usages = dict(
                     (resource, resources[resource].count(
    -                    context, plugin, tenant_id)) for
    +                    context, plugin, tenant_id, resync_usage=False)) for
                     resource in requested_resources)
    -            # get_tenant_quotes needs in inout a dictionary mapping resource
    -            # name to BaseResosurce instances so that the default quota can be
    -            # retrieved
    -            current_limits = self.get_tenant_quotas(
    -                context, resources, tenant_id)
                 # Adjust for expired reservations. Apparently it is cheaper than
                 # querying everytime for active reservations and counting overall
                 # quantity of resources reserved
    @@ -190,13 +188,6 @@ def make_reservation(self, context, tenant_id, resources, deltas, plugin):
                 for resource in requested_resources:
                     expired_reservations = expired_deltas.get(resource, 0)
                     total_usage = current_usages[resource] - expired_reservations
    -                # A negative quota limit means infinite
    -                if current_limits[resource] < 0:
    -                    LOG.debug(("Resource %(resource)s has unlimited quota "
    -                               "limit. It is possible to allocate %(delta)s "
    -                               "items."), {'resource': resource,
    -                                           'delta': deltas[resource]})
    -                    continue
                     res_headroom = current_limits[resource] - total_usage
                     LOG.debug(("Attempting to reserve %(delta)d items for "
                                "resource %(resource)s. Total usage: %(total)d; "
    @@ -209,8 +200,7 @@ def make_reservation(self, context, tenant_id, resources, deltas, plugin):
                     if res_headroom < deltas[resource]:
                         resources_over_limit.append(resource)
                     if expired_reservations:
    -                    self._handle_expired_reservations(
    -                        context, tenant_id, resource, expired_reservations)
    +                    self._handle_expired_reservations(context, tenant_id)
     
                 if resources_over_limit:
                     raise exceptions.OverQuota(overs=sorted(resources_over_limit))
    
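The reworked `make_reservation` above fetches the tenant's limits first and removes unlimited resources (negative limit) from the set to be counted, so no usage query is issued for them. A standalone sketch of that filtering step:

```python
# Sketch of the unlimited-quota short-circuit added to make_reservation():
# resources whose limit is negative (meaning unlimited) are dropped from
# the set that gets counted, skipping their usage queries entirely.
def resources_to_count(requested_resources, current_limits):
    unlimited = {resource for resource, limit in current_limits.items()
                 if limit < 0}
    return set(requested_resources) - unlimited
```

This replaces the per-resource `continue` inside the headroom loop that the diff removes, moving the check before any counting happens.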
  • neutron/db/securitygroups_db.py+11 12 modified
    @@ -438,7 +438,7 @@ def _validate_port_range(self, rule):
             elif ip_proto == constants.PROTO_NUM_ICMP:
                 for attr, field in [('port_range_min', 'type'),
                                     ('port_range_max', 'code')]:
    -                if rule[attr] is not None and rule[attr] > 255:
    +                if rule[attr] is not None and not (0 <= rule[attr] <= 255):
                         raise ext_sg.SecurityGroupInvalidIcmpValue(
                             field=field, attr=attr, value=rule[attr])
                 if (rule['port_range_min'] is None and
    @@ -686,15 +686,15 @@ def _get_security_groups_on_port(self, context, port):
     
             :returns: all security groups IDs on port belonging to tenant.
             """
    -        p = port['port']
    -        if not attributes.is_attr_set(p.get(ext_sg.SECURITYGROUPS)):
    +        port = port['port']
    +        if not attributes.is_attr_set(port.get(ext_sg.SECURITYGROUPS)):
                 return
    -        if p.get('device_owner') and p['device_owner'].startswith('network:'):
    +        if port.get('device_owner') and utils.is_port_trusted(port):
                 return
     
    -        port_sg = p.get(ext_sg.SECURITYGROUPS, [])
    +        port_sg = port.get(ext_sg.SECURITYGROUPS, [])
             filters = {'id': port_sg}
    -        tenant_id = p.get('tenant_id')
    +        tenant_id = port.get('tenant_id')
             if tenant_id:
                 filters['tenant_id'] = [tenant_id]
             valid_groups = set(g['id'] for g in
    @@ -710,14 +710,13 @@ def _get_security_groups_on_port(self, context, port):
     
         def _ensure_default_security_group_on_port(self, context, port):
             # we don't apply security groups for dhcp, router
    -        if (port['port'].get('device_owner') and
    -                port['port']['device_owner'].startswith('network:')):
    +        port = port['port']
    +        if port.get('device_owner') and utils.is_port_trusted(port):
                 return
    -        tenant_id = self._get_tenant_id_for_create(context,
    -                                                   port['port'])
    +        tenant_id = self._get_tenant_id_for_create(context, port)
             default_sg = self._ensure_default_security_group(context, tenant_id)
    -        if not attributes.is_attr_set(port['port'].get(ext_sg.SECURITYGROUPS)):
    -            port['port'][ext_sg.SECURITYGROUPS] = [default_sg]
    +        if not attributes.is_attr_set(port.get(ext_sg.SECURITYGROUPS)):
    +            port[ext_sg.SECURITYGROUPS] = [default_sg]
     
         def _check_update_deletes_security_groups(self, port):
             """Return True if port has as a security group and it's value
    
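The `_validate_port_range` change tightens the ICMP check from `value > 255` to a full bounds test: ICMP type and code are 8-bit fields, so negative values are just as invalid as values above 255. A minimal sketch of the corrected predicate:

```python
def is_valid_icmp_value(value):
    # The diff replaces `value > 255` with a full range check: ICMP
    # type/code are 8-bit fields, so negatives must be rejected too
    # (None means the field was simply not supplied).
    return value is None or 0 <= value <= 255
```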
  • neutron/db/servicetype_db.py+0 10 modified
    @@ -45,16 +45,6 @@ def get_instance(cls):
     
         def __init__(self):
             self.config = {}
    -        # TODO(armax): remove these as soon as *-aaS start using
    -        # the newly introduced add_provider_configuration API
    -        self.config['LOADBALANCER'] = (
    -            pconf.ProviderConfiguration('neutron_lbaas'))
    -        self.config['LOADBALANCERV2'] = (
    -            pconf.ProviderConfiguration('neutron_lbaas'))
    -        self.config['FIREWALL'] = (
    -            pconf.ProviderConfiguration('neutron_fwaas'))
    -        self.config['VPN'] = (
    -            pconf.ProviderConfiguration('neutron_vpnaas'))
     
         def add_provider_configuration(self, service_type, configuration):
             """Add or update the provider configuration for the service type."""
    
  • neutron/debug/debug_agent.py+1 0 modified
    @@ -38,6 +38,7 @@ class NeutronDebugAgent(object):
         OPTS = [
             # Needed for drivers
             cfg.StrOpt('external_network_bridge', default='br-ex',
    +                   deprecated_for_removal=True,
                        help=_("Name of bridge used for external network "
                               "traffic.")),
         ]
    
  • neutron/extensions/allowedaddresspairs.py+7 3 modified
    @@ -49,9 +49,13 @@ class AllowedAddressPairExhausted(nexception.BadRequest):
     
     def _validate_allowed_address_pairs(address_pairs, valid_values=None):
         unique_check = {}
    -    if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
    -        raise AllowedAddressPairExhausted(
    -            quota=cfg.CONF.max_allowed_address_pair)
    +    try:
    +        if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
    +            raise AllowedAddressPairExhausted(
    +                quota=cfg.CONF.max_allowed_address_pair)
    +    except TypeError:
    +        raise webob.exc.HTTPBadRequest(
    +            _("Allowed address pairs must be a list."))
     
         for address_pair in address_pairs:
             # mac_address is optional, if not set we use the mac on the port
    
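The allowed-address-pairs change wraps the length check in a `try`/`except TypeError`: if the request body carries a non-list value (for example a bare `true`), `len()` raises `TypeError`, which is now reported as a 400 Bad Request rather than surfacing as an unhandled 500. A hedged sketch of that validation flow, returning error strings instead of raising webob exceptions:

```python
def check_address_pairs(address_pairs, max_allowed=10):
    """Return an error string, or None when validation passes.

    Standalone sketch of the hardened validator above; the default
    max_allowed value and return style are illustrative assumptions.
    """
    try:
        if len(address_pairs) > max_allowed:
            return "AllowedAddressPairExhausted"
    except TypeError:
        # e.g. address_pairs=true in the request JSON: len() blows up
        return "Allowed address pairs must be a list."
    return None
```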
  • neutron/extensions/dns.py+1 1 modified
    @@ -69,7 +69,7 @@ def _validate_dns_format(data, max_len=FQDN_MAX_LEN):
                 raise TypeError(_("TLD '%s' must not be all numeric") % names[-1])
         except TypeError as e:
             msg = _("'%(data)s' not a valid PQDN or FQDN. Reason: %(reason)s") % {
    -            'data': data, 'reason': e.message}
    +            'data': data, 'reason': str(e)}
             return msg
     
     
    
  • neutron/extensions/flavors.py+6 1 modified
    @@ -84,7 +84,12 @@
                        'member_name': 'flavor'},
             'parameters': {'id': {'allow_post': True, 'allow_put': False,
                                   'validate': {'type:uuid': None},
    -                              'is_visible': True}}
    +                              'is_visible': True},
    +                       'tenant_id': {'allow_post': True, 'allow_put': False,
    +                                     'required_by_policy': True,
    +                                     'validate': {'type:string':
    +                                                  attr.TENANT_ID_MAX_LEN},
    +                                     'is_visible': True}}
         }
     }
     
    
  • neutron/manager.py+5 0 modified
    @@ -246,3 +246,8 @@ def get_service_plugins(cls):
             service_plugins = cls.get_instance().service_plugins
             return dict((x, weakref.proxy(y))
                         for x, y in six.iteritems(service_plugins))
    +
    +    @classmethod
    +    def get_unique_service_plugins(cls):
    +        service_plugins = cls.get_instance().service_plugins
    +        return tuple(weakref.proxy(x) for x in set(service_plugins.values()))
    
  • neutron/neutron_plugin_base_v2.py+9 0 modified
    @@ -389,3 +389,12 @@ def rpc_workers_supported(self):
             """
             return (self.__class__.start_rpc_listeners !=
                     NeutronPluginBaseV2.start_rpc_listeners)
    +
    +    def get_workers(self):
    +        """Returns a collection NeutronWorker instances
    +
    +        If a plugin needs to define worker processes outside of API/RPC workers
    +        then it will override this and return a collection of NeutronWorker
    +        instances
    +        """
    +        return ()
    
  • neutron/notifiers/nova.py+6 11 modified
    @@ -20,7 +20,6 @@
     from novaclient import exceptions as nova_exceptions
     from oslo_config import cfg
     from oslo_log import log as logging
    -from oslo_utils import importutils
     from oslo_utils import uuidutils
     from sqlalchemy.orm import attributes as sql_attr
     
    @@ -94,18 +93,14 @@ def __init__(self):
                                                                 'nova',
                                                                 auth=auth)
     
    -        # NOTE(andreykurilin): novaclient.v1_1 was renamed to v2 and there is
    -        # no way to import the contrib module directly without referencing v2,
    -        # which would only work for novaclient >= 2.21.0.
    -        novaclient_cls = nova_client.get_client_class(NOVA_API_VERSION)
    -        server_external_events = importutils.import_module(
    -            novaclient_cls.__module__.replace(
    -                ".client", ".contrib.server_external_events"))
    -
    -        self.nclient = novaclient_cls(
    +        extensions = [
    +            ext for ext in nova_client.discover_extensions(NOVA_API_VERSION)
    +            if ext.name == "server_external_events"]
    +        self.nclient = nova_client.Client(
    +            NOVA_API_VERSION,
                 session=session,
                 region_name=cfg.CONF.nova.region_name,
    -            extensions=[server_external_events])
    +            extensions=extensions)
             self.batch_notifier = batch_notifier.BatchNotifier(
                 cfg.CONF.send_events_interval, self.send_events)
     
    
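
The `nova.py` change above replaces a brittle module-path import of `server_external_events` with novaclient's extension discovery, then keeps only the matching extension by name. The filtering pattern itself is plain Python; a standalone sketch (the `Ext` namedtuple stands in for the extension objects discovery would return):

```python
from collections import namedtuple

# Stand-in for the extension objects novaclient discovery would return.
Ext = namedtuple("Ext", ["name"])


def pick_extension(discovered, wanted="server_external_events"):
    """Keep only the extensions whose name matches, as the diff does."""
    return [ext for ext in discovered if ext.name == wanted]


discovered = [Ext("assisted_volume_snapshots"), Ext("server_external_events")]
extensions = pick_extension(discovered)
```

Filtering by name rather than importing a hard-coded module path is what makes the code work across novaclient versions that moved the contrib module.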
  • neutron/objects/qos/rule.py+17 0 modified
    @@ -20,6 +20,7 @@
     from oslo_versionedobjects import fields as obj_fields
     import six
     
    +from neutron.common import constants
     from neutron.common import utils
     from neutron.db import api as db_api
     from neutron.db.qos import models as qos_db_model
    @@ -57,6 +58,22 @@ def to_dict(self):
             dict_['type'] = self.rule_type
             return dict_
     
    +    def should_apply_to_port(self, port):
    +        """Check whether a rule can be applied to a specific port.
    +
    +        This function has the logic to decide whether a rule should
    +        be applied to a port or not, depending on the source of the
    +        policy (network, or port). Eventually rules could override
    +        this method, or we could make it abstract to allow different
    +        rule behaviour.
    +        """
    +        is_network_rule = self.qos_policy_id != port[qos_consts.QOS_POLICY_ID]
    +        is_network_device_port = any(port['device_owner'].startswith(prefix)
    +                                     for prefix
    +                                     in constants.DEVICE_OWNER_PREFIXES)
    +
    +        return not (is_network_rule and is_network_device_port)
    +
     
     @obj_base.VersionedObjectRegistry.register
     class QosBandwidthLimitRule(QosRule):
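
The `should_apply_to_port` helper added in this hunk relies on the same `network:` device-owner prefix convention that the CVE description mentions: ports whose `device_owner` starts with a reserved prefix are treated as network infrastructure and skip network-sourced rules. A self-contained sketch of the same predicate (the prefix list is abbreviated; Neutron's real `DEVICE_OWNER_PREFIXES` covers more owners such as DHCP and router interfaces):

```python
# Abbreviated; Neutron defines several network: prefixes (dhcp, router_interface, ...).
DEVICE_OWNER_PREFIXES = ("network:",)


def should_apply_to_port(rule_policy_id, port):
    """Mirror of the diff's logic: a rule applies unless it comes from a
    network-scoped policy AND the port belongs to a network device."""
    is_network_rule = rule_policy_id != port["qos_policy_id"]
    is_network_device_port = any(
        port["device_owner"].startswith(prefix)
        for prefix in DEVICE_OWNER_PREFIXES)
    return not (is_network_rule and is_network_device_port)


vm_port = {"qos_policy_id": "pol-b", "device_owner": "compute:nova"}
dhcp_port = {"qos_policy_id": "pol-b", "device_owner": "network:dhcp"}
```

With these example ports, a rule from a different (network-scoped) policy `pol-a` applies to the VM port but not to the DHCP port, while a port-scoped rule (`pol-b`) applies to both.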
    
  • neutron/plugins/cisco/common/cisco_constants.py+0 118 removed
    @@ -1,118 +0,0 @@
    -# Copyright 2011 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -
    -# Attachment attributes
    -INSTANCE_ID = 'instance_id'
    -TENANT_ID = 'tenant_id'
    -TENANT_NAME = 'tenant_name'
    -HOST_NAME = 'host_name'
    -
    -# Network attributes
    -NET_ID = 'id'
    -NET_NAME = 'name'
    -NET_VLAN_ID = 'vlan_id'
    -NET_VLAN_NAME = 'vlan_name'
    -NET_PORTS = 'ports'
    -
    -CREDENTIAL_ID = 'credential_id'
    -CREDENTIAL_NAME = 'credential_name'
    -CREDENTIAL_USERNAME = 'user_name'
    -CREDENTIAL_PASSWORD = 'password'
    -CREDENTIAL_TYPE = 'type'
    -MASKED_PASSWORD = '********'
    -
    -USERNAME = 'username'
    -PASSWORD = 'password'
    -
    -LOGGER_COMPONENT_NAME = "cisco_plugin"
    -
    -VSWITCH_PLUGIN = 'vswitch_plugin'
    -
    -DEVICE_IP = 'device_ip'
    -
    -NETWORK_ADMIN = 'network_admin'
    -
    -NETWORK = 'network'
    -PORT = 'port'
    -BASE_PLUGIN_REF = 'base_plugin_ref'
    -CONTEXT = 'context'
    -SUBNET = 'subnet'
    -
    -#### N1Kv CONSTANTS
    -# Special vlan_id value in n1kv_vlan_allocations table indicating flat network
    -FLAT_VLAN_ID = -1
    -
    -# Topic for tunnel notifications between the plugin and agent
    -TUNNEL = 'tunnel'
    -
    -# Maximum VXLAN range configurable for one network profile.
    -MAX_VXLAN_RANGE = 1000000
    -
    -# Values for network_type
    -NETWORK_TYPE_FLAT = 'flat'
    -NETWORK_TYPE_VLAN = 'vlan'
    -NETWORK_TYPE_VXLAN = 'vxlan'
    -NETWORK_TYPE_LOCAL = 'local'
    -NETWORK_TYPE_NONE = 'none'
    -NETWORK_TYPE_TRUNK = 'trunk'
    -NETWORK_TYPE_MULTI_SEGMENT = 'multi-segment'
    -
    -# Values for network sub_type
    -NETWORK_TYPE_OVERLAY = 'overlay'
    -NETWORK_SUBTYPE_NATIVE_VXLAN = 'native_vxlan'
    -NETWORK_SUBTYPE_TRUNK_VLAN = NETWORK_TYPE_VLAN
    -NETWORK_SUBTYPE_TRUNK_VXLAN = NETWORK_TYPE_OVERLAY
    -
    -# Prefix for VM Network name
    -VM_NETWORK_NAME_PREFIX = 'vmn_'
    -
    -SET = 'set'
    -INSTANCE = 'instance'
    -PROPERTIES = 'properties'
    -NAME = 'name'
    -ID = 'id'
    -POLICY = 'policy'
    -TENANT_ID_NOT_SET = 'TENANT_ID_NOT_SET'
    -ENCAPSULATIONS = 'encapsulations'
    -STATE = 'state'
    -ONLINE = 'online'
    -MAPPINGS = 'mappings'
    -MAPPING = 'mapping'
    -SEGMENTS = 'segments'
    -SEGMENT = 'segment'
    -BRIDGE_DOMAIN_SUFFIX = '_bd'
    -LOGICAL_NETWORK_SUFFIX = '_log_net'
    -ENCAPSULATION_PROFILE_SUFFIX = '_profile'
    -
    -UUID_LENGTH = 36
    -
    -# N1KV vlan and vxlan segment range
    -N1KV_VLAN_RESERVED_MIN = 3968
    -N1KV_VLAN_RESERVED_MAX = 4047
    -N1KV_VXLAN_MIN = 4096
    -N1KV_VXLAN_MAX = 16000000
    -
    -# Type and topic for Cisco cfg agent
    -# ==================================
    -AGENT_TYPE_CFG = 'Cisco cfg agent'
    -
    -# Topic for Cisco configuration agent
    -CFG_AGENT = 'cisco_cfg_agent'
    -# Topic for routing service helper in Cisco configuration agent
    -CFG_AGENT_L3_ROUTING = 'cisco_cfg_agent_l3_routing'
    -
    -# Values for network profile fields
    -ADD_TENANTS = 'add_tenants'
    -REMOVE_TENANTS = 'remove_tenants'
    
  • neutron/plugins/cisco/common/cisco_credentials_v2.py+0 53 removed
    @@ -1,53 +0,0 @@
    -# Copyright 2012 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -
    -from neutron.plugins.cisco.common import cisco_constants as const
    -from neutron.plugins.cisco.common import cisco_exceptions as cexc
    -from neutron.plugins.cisco.common import config
    -from neutron.plugins.cisco.db import network_db_v2 as cdb
    -
    -
    -class Store(object):
    -    """Credential Store."""
    -
    -    @staticmethod
    -    def initialize():
    -        dev_dict = config.get_device_dictionary()
    -        for key in dev_dict:
    -            dev_id, dev_ip, dev_key = key
    -            if dev_key == const.USERNAME:
    -                try:
    -                    cdb.add_credential(
    -                        dev_ip,
    -                        dev_dict[dev_id, dev_ip, const.USERNAME],
    -                        dev_dict[dev_id, dev_ip, const.PASSWORD],
    -                        dev_id)
    -                except cexc.CredentialAlreadyExists:
    -                    # We are quietly ignoring this, since it only happens
    -                    # if this class module is loaded more than once, in
    -                    # which case, the credentials are already populated
    -                    pass
    -
    -    @staticmethod
    -    def get_username(cred_name):
    -        """Get the username."""
    -        credential = cdb.get_credential_name(cred_name)
    -        return credential[const.CREDENTIAL_USERNAME]
    -
    -    @staticmethod
    -    def get_password(cred_name):
    -        """Get the password."""
    -        credential = cdb.get_credential_name(cred_name)
    -        return credential[const.CREDENTIAL_PASSWORD]
    
  • neutron/plugins/cisco/common/cisco_exceptions.py+0 236 removed
    @@ -1,236 +0,0 @@
    -# Copyright 2011 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -"""Exceptions used by the Cisco plugin."""
    -
    -from neutron.common import exceptions
    -
    -
    -class NetworkSegmentIDNotFound(exceptions.NeutronException):
    -    """Segmentation ID for network is not found."""
    -    message = _("Segmentation ID for network %(net_id)s is not found.")
    -
    -
    -class NoMoreNics(exceptions.NeutronException):
    -    """No more dynamic NICs are available in the system."""
    -    message = _("Unable to complete operation. No more dynamic NICs are "
    -                "available in the system.")
    -
    -
    -class NetworkVlanBindingAlreadyExists(exceptions.NeutronException):
    -    """Binding cannot be created, since it already exists."""
    -    message = _("NetworkVlanBinding for %(vlan_id)s and network "
    -                "%(network_id)s already exists.")
    -
    -
    -class VlanIDNotFound(exceptions.NeutronException):
    -    """VLAN ID cannot be found."""
    -    message = _("Vlan ID %(vlan_id)s not found.")
    -
    -
    -class VlanIDOutsidePool(exceptions.NeutronException):
    -    """VLAN ID cannot be allocated, since it is outside the configured pool."""
    -    message = _("Unable to complete operation. VLAN ID exists outside of the "
    -                "configured network segment range.")
    -
    -
    -class VlanIDNotAvailable(exceptions.NeutronException):
    -    """No VLAN ID available."""
    -    message = _("No Vlan ID available.")
    -
    -
    -class QosNotFound(exceptions.NeutronException):
    -    """QoS level with this ID cannot be found."""
    -    message = _("QoS level %(qos_id)s could not be found "
    -                "for tenant %(tenant_id)s.")
    -
    -
    -class QosNameAlreadyExists(exceptions.NeutronException):
    -    """QoS Name already exists."""
    -    message = _("QoS level with name %(qos_name)s already exists "
    -                "for tenant %(tenant_id)s.")
    -
    -
    -class CredentialNotFound(exceptions.NeutronException):
    -    """Credential with this ID cannot be found."""
    -    message = _("Credential %(credential_id)s could not be found.")
    -
    -
    -class CredentialNameNotFound(exceptions.NeutronException):
    -    """Credential Name could not be found."""
    -    message = _("Credential %(credential_name)s could not be found.")
    -
    -
    -class CredentialAlreadyExists(exceptions.NeutronException):
    -    """Credential already exists."""
    -    message = _("Credential %(credential_name)s already exists.")
    -
    -
    -class ProviderNetworkExists(exceptions.NeutronException):
    -    """Provider network already exists."""
    -    message = _("Provider network %s already exists")
    -
    -
    -class NexusComputeHostNotConfigured(exceptions.NeutronException):
    -    """Connection to compute host is not configured."""
    -    message = _("Connection to %(host)s is not configured.")
    -
    -
    -class NexusConnectFailed(exceptions.NeutronException):
    -    """Failed to connect to Nexus switch."""
    -    message = _("Unable to connect to Nexus %(nexus_host)s. Reason: %(exc)s.")
    -
    -
    -class NexusConfigFailed(exceptions.NeutronException):
    -    """Failed to configure Nexus switch."""
    -    message = _("Failed to configure Nexus: %(config)s. Reason: %(exc)s.")
    -
    -
    -class NexusPortBindingNotFound(exceptions.NeutronException):
    -    """NexusPort Binding is not present."""
    -    message = _("Nexus Port Binding (%(filters)s) is not present.")
    -
    -    def __init__(self, **kwargs):
    -        filters = ','.join('%s=%s' % i for i in kwargs.items())
    -        super(NexusPortBindingNotFound, self).__init__(filters=filters)
    -
    -
    -class NoNexusSviSwitch(exceptions.NeutronException):
    -    """No usable nexus switch found."""
    -    message = _("No usable Nexus switch found to create SVI interface.")
    -
    -
    -class PortVnicBindingAlreadyExists(exceptions.NeutronException):
    -    """PortVnic Binding already exists."""
    -    message = _("PortVnic Binding %(port_id)s already exists.")
    -
    -
    -class PortVnicNotFound(exceptions.NeutronException):
    -    """PortVnic Binding is not present."""
    -    message = _("PortVnic Binding %(port_id)s is not present.")
    -
    -
    -class SubnetNotSpecified(exceptions.NeutronException):
    -    """Subnet id not specified."""
    -    message = _("No subnet_id specified for router gateway.")
    -
    -
    -class SubnetInterfacePresent(exceptions.NeutronException):
    -    """Subnet SVI interface already exists."""
    -    message = _("Subnet %(subnet_id)s has an interface on %(router_id)s.")
    -
    -
    -class PortIdForNexusSvi(exceptions.NeutronException):
    -        """Port Id specified for Nexus SVI."""
    -        message = _('Nexus hardware router gateway only uses Subnet Ids.')
    -
    -
    -class InvalidDetach(exceptions.NeutronException):
    -    message = _("Unable to unplug the attachment %(att_id)s from port "
    -                "%(port_id)s for network %(net_id)s. The attachment "
    -                "%(att_id)s does not exist.")
    -
    -
    -class PolicyProfileAlreadyExists(exceptions.NeutronException):
    -    """Policy Profile cannot be created since it already exists."""
    -    message = _("Policy Profile %(profile_id)s "
    -                "already exists.")
    -
    -
    -class PolicyProfileIdNotFound(exceptions.NotFound):
    -    """Policy Profile with the given UUID cannot be found."""
    -    message = _("Policy Profile %(profile_id)s could not be found.")
    -
    -
    -class PolicyProfileNameNotFound(exceptions.NotFound):
    -    """Policy Profile with the given name cannot be found."""
    -    message = _("Policy Profile %(profile_name)s could not be found.")
    -
    -
    -class NetworkProfileAlreadyExists(exceptions.NeutronException):
    -    """Network Profile cannot be created since it already exists."""
    -    message = _("Network Profile %(profile_id)s "
    -                "already exists.")
    -
    -
    -class NetworkProfileNotFound(exceptions.NotFound):
    -    """Network Profile with the given UUID/name cannot be found."""
    -    message = _("Network Profile %(profile)s could not be found.")
    -
    -
    -class NetworkProfileInUse(exceptions.InUse):
    -    """Network Profile with the given UUID is in use."""
    -    message = _("One or more network segments belonging to network "
    -                "profile %(profile)s is in use.")
    -
    -
    -class NoMoreNetworkSegments(exceptions.NoNetworkAvailable):
    -    """Network segments exhausted for the given network profile."""
    -    message = _("No more segments available in network segment pool "
    -                "%(network_profile_name)s.")
    -
    -
    -class VMNetworkNotFound(exceptions.NotFound):
    -    """VM Network with the given name cannot be found."""
    -    message = _("VM Network %(name)s could not be found.")
    -
    -
    -class VxlanIDInUse(exceptions.InUse):
    -    """VXLAN ID is in use."""
    -    message = _("Unable to create the network. "
    -                "The VXLAN ID %(vxlan_id)s is in use.")
    -
    -
    -class VxlanIDNotFound(exceptions.NotFound):
    -    """VXLAN ID cannot be found."""
    -    message = _("Vxlan ID %(vxlan_id)s not found.")
    -
    -
    -class VxlanIDOutsidePool(exceptions.NeutronException):
    -    """VXLAN ID cannot be allocated, as it is outside the configured pool."""
    -    message = _("Unable to complete operation. VXLAN ID exists outside of the "
    -                "configured network segment range.")
    -
    -
    -class VSMConnectionFailed(exceptions.ServiceUnavailable):
    -    """Connection to VSM failed."""
    -    message = _("Connection to VSM failed: %(reason)s.")
    -
    -
    -class VSMError(exceptions.NeutronException):
    -    """Error has occurred on the VSM."""
    -    message = _("Internal VSM Error: %(reason)s.")
    -
    -
    -class NetworkBindingNotFound(exceptions.NotFound):
    -    """Network Binding for network cannot be found."""
    -    message = _("Network Binding for network %(network_id)s could "
    -                "not be found.")
    -
    -
    -class PortBindingNotFound(exceptions.NotFound):
    -    """Port Binding for port cannot be found."""
    -    message = _("Port Binding for port %(port_id)s could "
    -                "not be found.")
    -
    -
    -class ProfileTenantBindingNotFound(exceptions.NotFound):
    -    """Profile to Tenant binding for given profile ID cannot be found."""
    -    message = _("Profile-Tenant binding for profile %(profile_id)s could "
    -                "not be found.")
    -
    -
    -class NoClusterFound(exceptions.NotFound):
    -    """No service cluster found to perform multi-segment bridging."""
    -    message = _("No service cluster found to perform multi-segment bridging.")
    
  • neutron/plugins/cisco/common/cisco_faults.py+0 134 removed
    @@ -1,134 +0,0 @@
    -# Copyright 2011 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import webob.dec
    -
    -from neutron import wsgi
    -
    -
    -class Fault(webob.exc.HTTPException):
    -    """Error codes for API faults."""
    -
    -    _fault_names = {
    -        400: "malformedRequest",
    -        401: "unauthorized",
    -        451: "CredentialNotFound",
    -        452: "QoSNotFound",
    -        453: "NovatenantNotFound",
    -        454: "MultiportNotFound",
    -        470: "serviceUnavailable",
    -        471: "pluginFault"
    -    }
    -
    -    def __init__(self, exception):
    -        """Create a Fault for the given webob.exc.exception."""
    -        self.wrapped_exc = exception
    -
    -    @webob.dec.wsgify(RequestClass=wsgi.Request)
    -    def __call__(self, req):
    -        """Generate a WSGI response.
    -
    -        Response is generated based on the exception passed to constructor.
    -        """
    -        # Replace the body with fault details.
    -        code = self.wrapped_exc.status_int
    -        fault_name = self._fault_names.get(code, "neutronServiceFault")
    -        fault_data = {
    -            fault_name: {
    -                'code': code,
    -                'message': self.wrapped_exc.explanation}}
    -        # 'code' is an attribute on the fault tag itself
    -        content_type = req.best_match_content_type()
    -        self.wrapped_exc.body = wsgi.Serializer().serialize(
    -            fault_data, content_type)
    -        self.wrapped_exc.content_type = content_type
    -        return self.wrapped_exc
    -
    -
    -class PortNotFound(webob.exc.HTTPClientError):
    -    """PortNotFound exception.
    -
    -    subclass of :class:`~HTTPClientError`
    -
    -    This indicates that the server did not find the port specified
    -    in the HTTP request for a given network
    -
    -    code: 430, title: Port not Found
    -    """
    -    code = 430
    -    title = _('Port not Found')
    -    explanation = _('Unable to find a port with the specified identifier.')
    -
    -
    -class CredentialNotFound(webob.exc.HTTPClientError):
    -    """CredentialNotFound exception.
    -
    -    subclass of :class:`~HTTPClientError`
    -
    -    This indicates that the server did not find the Credential specified
    -    in the HTTP request
    -
    -    code: 451, title: Credential not Found
    -    """
    -    code = 451
    -    title = _('Credential Not Found')
    -    explanation = _('Unable to find a Credential with'
    -                    ' the specified identifier.')
    -
    -
    -class QosNotFound(webob.exc.HTTPClientError):
    -    """QosNotFound exception.
    -
    -    subclass of :class:`~HTTPClientError`
    -
    -    This indicates that the server did not find the QoS specified
    -    in the HTTP request
    -
    -    code: 452, title: QoS not Found
    -    """
    -    code = 452
    -    title = _('QoS Not Found')
    -    explanation = _('Unable to find a QoS with'
    -                    ' the specified identifier.')
    -
    -
    -class NovatenantNotFound(webob.exc.HTTPClientError):
    -    """NovatenantNotFound exception.
    -
    -    subclass of :class:`~HTTPClientError`
    -
    -    This indicates that the server did not find the Novatenant specified
    -    in the HTTP request
    -
    -    code: 453, title: Nova tenant not Found
    -    """
    -    code = 453
    -    title = _('Nova tenant Not Found')
    -    explanation = _('Unable to find a Novatenant with'
    -                    ' the specified identifier.')
    -
    -
    -class RequestedStateInvalid(webob.exc.HTTPClientError):
    -    """RequestedStateInvalid exception.
    -
    -    subclass of :class:`~HTTPClientError`
    -
    -    This indicates that the server could not update the port state
    -    to the request value
    -
    -    code: 431, title: Requested State Invalid
    -    """
    -    code = 431
    -    title = _('Requested State Invalid')
    -    explanation = _('Unable to update port state with specified value.')
    
  • neutron/plugins/cisco/common/config.py+0 138 removed
    @@ -1,138 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from oslo_config import cfg
    -
    -
    -cisco_opts = [
    -    cfg.StrOpt('vlan_name_prefix', default='q-',
    -               help=_("VLAN Name prefix")),
    -    cfg.StrOpt('provider_vlan_name_prefix', default='p-',
    -               help=_("VLAN Name prefix for provider vlans")),
    -    cfg.BoolOpt('provider_vlan_auto_create', default=True,
    -                help=_('Provider VLANs are automatically created as needed '
    -                       'on the Nexus switch')),
    -    cfg.BoolOpt('provider_vlan_auto_trunk', default=True,
    -                help=_('Provider VLANs are automatically trunked as needed '
    -                       'on the ports of the Nexus switch')),
    -    cfg.BoolOpt('nexus_l3_enable', default=False,
    -                help=_("Enable L3 support on the Nexus switches")),
    -    cfg.BoolOpt('svi_round_robin', default=False,
    -                help=_("Distribute SVI interfaces over all switches")),
    -    cfg.StrOpt('model_class',
    -               default='neutron.plugins.cisco.models.virt_phy_sw_v2.'
    -                       'VirtualPhysicalSwitchModelV2',
    -               help=_("Model Class")),
    -]
    -
    -cisco_n1k_opts = [
    -    cfg.StrOpt('integration_bridge', default='br-int',
    -               help=_("N1K Integration Bridge")),
    -    cfg.BoolOpt('enable_tunneling', default=True,
    -                help=_("N1K Enable Tunneling")),
    -    cfg.StrOpt('tunnel_bridge', default='br-tun',
    -               help=_("N1K Tunnel Bridge")),
    -    cfg.StrOpt('local_ip', default='10.0.0.3',
    -               help=_("N1K Local IP")),
    -    cfg.StrOpt('tenant_network_type', default='local',
    -               help=_("N1K Tenant Network Type")),
    -    cfg.StrOpt('bridge_mappings', default='',
    -               help=_("N1K Bridge Mappings")),
    -    cfg.StrOpt('vxlan_id_ranges', default='5000:10000',
    -               help=_("N1K VXLAN ID Ranges")),
    -    cfg.StrOpt('network_vlan_ranges', default='vlan:1:4095',
    -               help=_("N1K Network VLAN Ranges")),
    -    cfg.StrOpt('default_network_profile', default='default_network_profile',
    -               help=_("N1K default network profile")),
    -    cfg.StrOpt('default_policy_profile', default='service_profile',
    -               help=_("N1K default policy profile")),
    -    cfg.StrOpt('network_node_policy_profile', default='dhcp_pp',
    -               help=_("N1K policy profile for network node")),
    -    cfg.IntOpt('poll_duration', default=60,
    -               help=_("N1K Policy profile polling duration in seconds")),
    -    cfg.BoolOpt('restrict_policy_profiles', default=False,
    -               help=_("Restrict the visibility of policy profiles to the "
    -                      "tenants")),
    -    cfg.IntOpt('http_pool_size', default=4,
    -               help=_("Number of threads to use to make HTTP requests")),
    -    cfg.IntOpt('http_timeout', default=15,
    -               help=_("N1K http timeout duration in seconds")),
    -    cfg.BoolOpt('restrict_network_profiles', default=True,
    -               help=_("Restrict tenants from accessing network profiles "
    -                      "belonging to some other tenant")),
    -
    -]
    -
    -cfg.CONF.register_opts(cisco_opts, "CISCO")
    -cfg.CONF.register_opts(cisco_n1k_opts, "CISCO_N1K")
    -
    -# shortcuts
    -CONF = cfg.CONF
    -CISCO = cfg.CONF.CISCO
    -CISCO_N1K = cfg.CONF.CISCO_N1K
    -
    -#
    -# device_dictionary - Contains all external device configuration.
    -#
    -# When populated the device dictionary format is:
    -# {('<device ID>', '<device ipaddr>', '<keyword>'): '<value>', ...}
    -#
    -# Example:
    -# {('NEXUS_SWITCH', '1.1.1.1', 'username'): 'admin',
    -#  ('NEXUS_SWITCH', '1.1.1.1', 'password'): 'mySecretPassword',
    -#  ('NEXUS_SWITCH', '1.1.1.1', 'compute1'): '1/1', ...}
    -#
    -device_dictionary = {}
    -
    -#
    -# first_device_ip - IP address of first switch discovered in config
    -#
    -# Used for SVI placement when round-robin placement is disabled
    -#
    -first_device_ip = None
    -
    -
    -class CiscoConfigOptions(object):
    -    """Cisco Configuration Options Class."""
    -
    -    def __init__(self):
    -        self._create_device_dictionary()
    -
    -    def _create_device_dictionary(self):
    -        """
    -        Create the device dictionary from the cisco_plugins.ini
    -        device supported sections. Ex. NEXUS_SWITCH, N1KV.
    -        """
    -
    -        global first_device_ip
    -
    -        multi_parser = cfg.MultiConfigParser()
    -        read_ok = multi_parser.read(CONF.config_file)
    -
    -        if len(read_ok) != len(CONF.config_file):
    -            raise cfg.Error(_("Some config files were not parsed properly"))
    -
    -        first_device_ip = None
    -        for parsed_file in multi_parser.parsed:
    -            for parsed_item in parsed_file.keys():
    -                dev_id, sep, dev_ip = parsed_item.partition(':')
    -                if dev_id.lower() == 'n1kv':
    -                    for dev_key, value in parsed_file[parsed_item].items():
    -                        if dev_ip and not first_device_ip:
    -                            first_device_ip = dev_ip
    -                        device_dictionary[dev_id, dev_ip, dev_key] = value[0]
    -
    -
    -def get_device_dictionary():
    -    return device_dictionary
    
  • neutron/plugins/cisco/db/__init__.py+0 0 removed
  • neutron/plugins/cisco/db/l3/__init__.py+0 0 removed
  • neutron/plugins/cisco/db/l3/l3_models.py+0 97 removed
    @@ -1,97 +0,0 @@
    -# Copyright 2014 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import sqlalchemy as sa
    -from sqlalchemy import orm
    -
    -from neutron.db import agents_db
    -from neutron.db import l3_db
    -from neutron.db import model_base
    -from neutron.db import models_v2
    -
    -
    -class HostingDevice(model_base.BASEV2, models_v2.HasId, models_v2.HasTenant):
    -    """Represents an appliance hosting Neutron router(s).
    -
    -       When the hosting device is a Nova VM 'id' is uuid of that VM.
    -    """
    -    __tablename__ = 'cisco_hosting_devices'
    -
    -    # complementary id to enable identification of associated Neutron resources
    -    complementary_id = sa.Column(sa.String(36))
    -    # manufacturer id of the device, e.g., its serial number
    -    device_id = sa.Column(sa.String(255))
    -    admin_state_up = sa.Column(sa.Boolean, nullable=False, default=True)
    -    # 'management_port_id' is the Neutron Port used for management interface
    -    management_port_id = sa.Column(sa.String(36),
    -                                   sa.ForeignKey('ports.id',
    -                                                 ondelete="SET NULL"))
    -    management_port = orm.relationship(models_v2.Port)
    -    # 'protocol_port' is udp/tcp port of hosting device. May be empty.
    -    protocol_port = sa.Column(sa.Integer)
    -    cfg_agent_id = sa.Column(sa.String(36),
    -                             sa.ForeignKey('agents.id'),
    -                             nullable=True)
    -    cfg_agent = orm.relationship(agents_db.Agent)
    -    # Service VMs take time to boot so we store creation time
    -    # so we can give preference to older ones when scheduling
    -    created_at = sa.Column(sa.DateTime, nullable=False)
    -    status = sa.Column(sa.String(16))
    -
    -
    -class HostedHostingPortBinding(model_base.BASEV2):
    -    """Represents binding of logical resource's port to its hosting port."""
    -    __tablename__ = 'cisco_port_mappings'
    -
    -    logical_resource_id = sa.Column(sa.String(36), primary_key=True)
    -    logical_port_id = sa.Column(sa.String(36),
    -                                sa.ForeignKey('ports.id',
    -                                              ondelete="CASCADE"),
    -                                primary_key=True)
    -    logical_port = orm.relationship(
    -        models_v2.Port,
    -        primaryjoin='Port.id==HostedHostingPortBinding.logical_port_id',
    -        backref=orm.backref('hosting_info', cascade='all', uselist=False))
    -    # type of hosted port, e.g., router_interface, ..._gateway, ..._floatingip
    -    port_type = sa.Column(sa.String(32))
    -    # type of network the router port belongs to
    -    network_type = sa.Column(sa.String(32))
    -    hosting_port_id = sa.Column(sa.String(36),
    -                                sa.ForeignKey('ports.id',
    -                                              ondelete='CASCADE'))
    -    hosting_port = orm.relationship(
    -        models_v2.Port,
    -        primaryjoin='Port.id==HostedHostingPortBinding.hosting_port_id')
    -    # VLAN tag for trunk ports
    -    segmentation_id = sa.Column(sa.Integer, autoincrement=False)
    -
    -
    -class RouterHostingDeviceBinding(model_base.BASEV2):
    -    """Represents binding between Neutron routers and their hosting devices."""
    -    __tablename__ = 'cisco_router_mappings'
    -
    -    router_id = sa.Column(sa.String(36),
    -                          sa.ForeignKey('routers.id', ondelete='CASCADE'),
    -                          primary_key=True)
    -    router = orm.relationship(
    -        l3_db.Router,
    -        backref=orm.backref('hosting_info', cascade='all', uselist=False))
    -    # If 'auto_schedule' is True then router is automatically scheduled
    -    # if it lacks a hosting device or its hosting device fails.
    -    auto_schedule = sa.Column(sa.Boolean, default=True, nullable=False)
    -    # id of hosting device hosting this router, None/NULL if unscheduled.
    -    hosting_device_id = sa.Column(sa.String(36),
    -                                  sa.ForeignKey('cisco_hosting_devices.id',
    -                                                ondelete='SET NULL'))
    -    hosting_device = orm.relationship(HostingDevice)
    
  • neutron/plugins/cisco/db/n1kv_db_v2.py+0 1673 removed
    @@ -1,1673 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import re
    -
    -import netaddr
    -from oslo_log import log as logging
    -from sqlalchemy.orm import exc
    -from sqlalchemy import sql
    -
    -from neutron.api.v2 import attributes
    -from neutron.common import exceptions as n_exc
    -import neutron.db.api as db
    -from neutron.db import models_v2
    -from neutron.i18n import _LW
    -from neutron.plugins.cisco.common import cisco_constants as c_const
    -from neutron.plugins.cisco.common import cisco_exceptions as c_exc
    -from neutron.plugins.cisco.common import config as c_conf
    -from neutron.plugins.cisco.db import n1kv_models_v2
    -from neutron.plugins.common import constants as p_const
    -
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -def del_trunk_segment_binding(db_session, trunk_segment_id, segment_pairs):
    -    """
    -    Delete a trunk network binding.
    -
    -    :param db_session: database session
    -    :param trunk_segment_id: UUID representing the trunk network
    -    :param segment_pairs: List of segment UUIDs in pair
    -                          representing the segments that are trunked
    -    """
    -    with db_session.begin(subtransactions=True):
    -        for (segment_id, dot1qtag) in segment_pairs:
    -            (db_session.query(n1kv_models_v2.N1kvTrunkSegmentBinding).
    -             filter_by(trunk_segment_id=trunk_segment_id,
    -                       segment_id=segment_id,
    -                       dot1qtag=dot1qtag).delete())
    -        alloc = (db_session.query(n1kv_models_v2.
    -                 N1kvTrunkSegmentBinding).
    -                 filter_by(trunk_segment_id=trunk_segment_id).first())
    -        if not alloc:
    -            binding = get_network_binding(db_session, trunk_segment_id)
    -            binding.physical_network = None
    -
    -
    -def del_multi_segment_binding(db_session, multi_segment_id, segment_pairs):
    -    """
    -    Delete a multi-segment network binding.
    -
    -    :param db_session: database session
    -    :param multi_segment_id: UUID representing the multi-segment network
    -    :param segment_pairs: List of segment UUIDs in pair
    -                          representing the segments that are bridged
    -    """
    -    with db_session.begin(subtransactions=True):
    -        for (segment1_id, segment2_id) in segment_pairs:
    -            (db_session.query(n1kv_models_v2.
    -             N1kvMultiSegmentNetworkBinding).filter_by(
    -                 multi_segment_id=multi_segment_id,
    -                 segment1_id=segment1_id,
    -                 segment2_id=segment2_id).delete())
    -
    -
    -def add_trunk_segment_binding(db_session, trunk_segment_id, segment_pairs):
    -    """
    -    Create a trunk network binding.
    -
    -    :param db_session: database session
    -    :param trunk_segment_id: UUID representing the multi-segment network
    -    :param segment_pairs: List of segment UUIDs in pair
    -                          representing the segments to be trunked
    -    """
    -    with db_session.begin(subtransactions=True):
    -        binding = get_network_binding(db_session, trunk_segment_id)
    -        for (segment_id, tag) in segment_pairs:
    -            if not binding.physical_network:
    -                member_seg_binding = get_network_binding(db_session,
    -                                                         segment_id)
    -                binding.physical_network = member_seg_binding.physical_network
    -            trunk_segment_binding = (
    -                n1kv_models_v2.N1kvTrunkSegmentBinding(
    -                    trunk_segment_id=trunk_segment_id,
    -                    segment_id=segment_id, dot1qtag=tag))
    -            db_session.add(trunk_segment_binding)
    -
    -
    -def add_multi_segment_binding(db_session, multi_segment_id, segment_pairs):
    -    """
    -    Create a multi-segment network binding.
    -
    -    :param db_session: database session
    -    :param multi_segment_id: UUID representing the multi-segment network
    -    :param segment_pairs: List of segment UUIDs in pair
    -                          representing the segments to be bridged
    -    """
    -    with db_session.begin(subtransactions=True):
    -        for (segment1_id, segment2_id) in segment_pairs:
    -            multi_segment_binding = (
    -                n1kv_models_v2.N1kvMultiSegmentNetworkBinding(
    -                    multi_segment_id=multi_segment_id,
    -                    segment1_id=segment1_id,
    -                    segment2_id=segment2_id))
    -            db_session.add(multi_segment_binding)
    -
    -
    -def add_multi_segment_encap_profile_name(db_session, multi_segment_id,
    -                                         segment_pair, profile_name):
    -    """
    -    Add the encapsulation profile name to the multi-segment network binding.
    -
    -    :param db_session: database session
    -    :param multi_segment_id: UUID representing the multi-segment network
    -    :param segment_pair: set containing the segment UUIDs that are bridged
    -    """
    -    with db_session.begin(subtransactions=True):
    -        binding = get_multi_segment_network_binding(db_session,
    -                                                    multi_segment_id,
    -                                                    segment_pair)
    -        binding.encap_profile_name = profile_name
    -
    -
    -def get_multi_segment_network_binding(db_session,
    -                                      multi_segment_id, segment_pair):
    -    """
    -    Retrieve multi-segment network binding.
    -
    -    :param db_session: database session
    -    :param multi_segment_id: UUID representing the trunk network whose binding
    -                             is to fetch
    -    :param segment_pair: set containing the segment UUIDs that are bridged
    -    :returns: binding object
    -    """
    -    try:
    -        (segment1_id, segment2_id) = segment_pair
    -        return (db_session.query(
    -                n1kv_models_v2.N1kvMultiSegmentNetworkBinding).
    -                filter_by(multi_segment_id=multi_segment_id,
    -                          segment1_id=segment1_id,
    -                          segment2_id=segment2_id)).one()
    -    except exc.NoResultFound:
    -        raise c_exc.NetworkBindingNotFound(network_id=multi_segment_id)
    -
    -
    -def get_multi_segment_members(db_session, multi_segment_id):
    -    """
    -    Retrieve all the member segments of a multi-segment network.
    -
    -    :param db_session: database session
    -    :param multi_segment_id: UUID representing the multi-segment network
    -    :returns: a list of tuples representing the mapped segments
    -    """
    -    with db_session.begin(subtransactions=True):
    -        allocs = (db_session.query(
    -                  n1kv_models_v2.N1kvMultiSegmentNetworkBinding).
    -                  filter_by(multi_segment_id=multi_segment_id))
    -        return [(a.segment1_id, a.segment2_id) for a in allocs]
    -
    -
    -def get_multi_segment_encap_dict(db_session, multi_segment_id):
    -    """
    -    Retrieve the encapsulation profiles for every segment pairs bridged.
    -
    -    :param db_session: database session
    -    :param multi_segment_id: UUID representing the multi-segment network
    -    :returns: a dictionary of lists containing the segment pairs in sets
    -    """
    -    with db_session.begin(subtransactions=True):
    -        encap_dict = {}
    -        allocs = (db_session.query(
    -                  n1kv_models_v2.N1kvMultiSegmentNetworkBinding).
    -                  filter_by(multi_segment_id=multi_segment_id))
    -        for alloc in allocs:
    -            if alloc.encap_profile_name not in encap_dict:
    -                encap_dict[alloc.encap_profile_name] = []
    -            seg_pair = (alloc.segment1_id, alloc.segment2_id)
    -            encap_dict[alloc.encap_profile_name].append(seg_pair)
    -        return encap_dict
    -
    -
    -def get_trunk_network_binding(db_session, trunk_segment_id, segment_pair):
    -    """
    -    Retrieve trunk network binding.
    -
    -    :param db_session: database session
    -    :param trunk_segment_id: UUID representing the trunk network whose binding
    -                             is to fetch
    -    :param segment_pair: set containing the segment_id and dot1qtag
    -    :returns: binding object
    -    """
    -    try:
    -        (segment_id, dot1qtag) = segment_pair
    -        return (db_session.query(n1kv_models_v2.N1kvTrunkSegmentBinding).
    -                filter_by(trunk_segment_id=trunk_segment_id,
    -                          segment_id=segment_id,
    -                          dot1qtag=dot1qtag)).one()
    -    except exc.NoResultFound:
    -        raise c_exc.NetworkBindingNotFound(network_id=trunk_segment_id)
    -
    -
    -def get_trunk_members(db_session, trunk_segment_id):
    -    """
    -    Retrieve all the member segments of a trunk network.
    -
    -    :param db_session: database session
    -    :param trunk_segment_id: UUID representing the trunk network
    -    :returns: a list of tuples representing the segment and their
    -              corresponding dot1qtag
    -    """
    -    with db_session.begin(subtransactions=True):
    -        allocs = (db_session.query(n1kv_models_v2.N1kvTrunkSegmentBinding).
    -                  filter_by(trunk_segment_id=trunk_segment_id))
    -        return [(a.segment_id, a.dot1qtag) for a in allocs]
    -
    -
    -def is_trunk_member(db_session, segment_id):
    -    """
    -    Checks if a segment is a member of a trunk segment.
    -
    -    :param db_session: database session
    -    :param segment_id: UUID of the segment to be checked
    -    :returns: boolean
    -    """
    -    with db_session.begin(subtransactions=True):
    -        ret = (db_session.query(n1kv_models_v2.N1kvTrunkSegmentBinding).
    -               filter_by(segment_id=segment_id).first())
    -        return bool(ret)
    -
    -
    -def is_multi_segment_member(db_session, segment_id):
    -    """
    -    Checks if a segment is a member of a multi-segment network.
    -
    -    :param db_session: database session
    -    :param segment_id: UUID of the segment to be checked
    -    :returns: boolean
    -    """
    -    with db_session.begin(subtransactions=True):
    -        ret1 = (db_session.query(
    -                n1kv_models_v2.N1kvMultiSegmentNetworkBinding).
    -                filter_by(segment1_id=segment_id).first())
    -        ret2 = (db_session.query(
    -                n1kv_models_v2.N1kvMultiSegmentNetworkBinding).
    -                filter_by(segment2_id=segment_id).first())
    -        return bool(ret1 or ret2)
    -
    -
    -def get_network_binding(db_session, network_id):
    -    """
    -    Retrieve network binding.
    -
    -    :param db_session: database session
    -    :param network_id: UUID representing the network whose binding is
    -                       to fetch
    -    :returns: binding object
    -    """
    -    try:
    -        return (db_session.query(n1kv_models_v2.N1kvNetworkBinding).
    -                filter_by(network_id=network_id).
    -                one())
    -    except exc.NoResultFound:
    -        raise c_exc.NetworkBindingNotFound(network_id=network_id)
    -
    -
    -def add_network_binding(db_session, network_id, network_type,
    -                        physical_network, segmentation_id,
    -                        multicast_ip, network_profile_id, add_segments):
    -    """
    -    Create network binding.
    -
    -    :param db_session: database session
    -    :param network_id: UUID representing the network
    -    :param network_type: string representing type of network (VLAN, OVERLAY,
    -                         MULTI_SEGMENT or TRUNK)
    -    :param physical_network: Only applicable for VLAN networks. It
    -                             represents a L2 Domain
    -    :param segmentation_id: integer representing VLAN or VXLAN ID
    -    :param multicast_ip: Native VXLAN technology needs a multicast IP to be
    -                         associated with every VXLAN ID to deal with broadcast
    -                         packets. A single multicast IP can be shared by
    -                         multiple VXLAN IDs.
    -    :param network_profile_id: network profile ID based on which this network
    -                               is created
    -    :param add_segments: List of segment UUIDs in pairs to be added to either a
    -                         multi-segment or trunk network
    -    """
    -    with db_session.begin(subtransactions=True):
    -        binding = n1kv_models_v2.N1kvNetworkBinding(
    -            network_id=network_id,
    -            network_type=network_type,
    -            physical_network=physical_network,
    -            segmentation_id=segmentation_id,
    -            multicast_ip=multicast_ip,
    -            profile_id=network_profile_id)
    -        db_session.add(binding)
    -        if add_segments is None:
    -            pass
    -        elif network_type == c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -            add_multi_segment_binding(db_session, network_id, add_segments)
    -        elif network_type == c_const.NETWORK_TYPE_TRUNK:
    -            add_trunk_segment_binding(db_session, network_id, add_segments)
    -
    -
    -def get_segment_range(network_profile):
    -    """
    -    Get the segment range min and max for a network profile.
    -
    -    :params network_profile: object of type network profile
    -    :returns: integer values representing minimum and maximum segment
    -              range value
    -    """
    -    # Sort the range to ensure min, max is in order
    -    seg_min, seg_max = sorted(
    -        int(i) for i in network_profile.segment_range.split('-'))
    -    LOG.debug("seg_min %(seg_min)s, seg_max %(seg_max)s",
    -              {'seg_min': seg_min, 'seg_max': seg_max})
    -    return seg_min, seg_max
    -
    -
    -def get_multicast_ip(network_profile):
    -    """
    -    Retrieve a multicast ip from the defined pool.
    -
    -    :params network_profile: object of type network profile
    -    :returns: string representing multicast IP
    -    """
    -    # Round robin multicast ip allocation
    -    min_ip, max_ip = _get_multicast_ip_range(network_profile)
    -    addr_list = list((netaddr.iter_iprange(min_ip, max_ip)))
    -    mul_ip_str = str(addr_list[network_profile.multicast_ip_index])
    -
    -    network_profile.multicast_ip_index += 1
    -    if network_profile.multicast_ip_index == len(addr_list):
    -        network_profile.multicast_ip_index = 0
    -    return mul_ip_str
    -
    -
    -def _get_multicast_ip_range(network_profile):
    -    """
    -    Helper method to retrieve minimum and maximum multicast ip.
    -
    -    :params network_profile: object of type network profile
    -    :returns: two strings representing minimum multicast ip and
    -              maximum multicast ip
    -    """
    -    # Assumption: ip range belongs to the same subnet
    -    # Assumption: ip range is already sorted
    -    return network_profile.multicast_ip_range.split('-')
    -
    -
    -def get_port_binding(db_session, port_id):
    -    """
    -    Retrieve port binding.
    -
    -    :param db_session: database session
    -    :param port_id: UUID representing the port whose binding is to fetch
    -    :returns: port binding object
    -    """
    -    try:
    -        return (db_session.query(n1kv_models_v2.N1kvPortBinding).
    -                filter_by(port_id=port_id).
    -                one())
    -    except exc.NoResultFound:
    -        raise c_exc.PortBindingNotFound(port_id=port_id)
    -
    -
    -def add_port_binding(db_session, port_id, policy_profile_id):
    -    """
    -    Create port binding.
    -
    -    Bind the port with policy profile.
    -    :param db_session: database session
    -    :param port_id: UUID of the port
    -    :param policy_profile_id: UUID of the policy profile
    -    """
    -    with db_session.begin(subtransactions=True):
    -        binding = n1kv_models_v2.N1kvPortBinding(port_id=port_id,
    -                                                 profile_id=policy_profile_id)
    -        db_session.add(binding)
    -
    -
    -def delete_segment_allocations(db_session, net_p):
    -    """
    -    Delete the segment allocation entry from the table.
    -
    -    :params db_session: database session
    -    :params net_p: network profile object
    -    """
    -    with db_session.begin(subtransactions=True):
    -        seg_min, seg_max = get_segment_range(net_p)
    -        if net_p['segment_type'] == c_const.NETWORK_TYPE_VLAN:
    -            db_session.query(n1kv_models_v2.N1kvVlanAllocation).filter(
    -                (n1kv_models_v2.N1kvVlanAllocation.physical_network ==
    -                 net_p['physical_network']),
    -                (n1kv_models_v2.N1kvVlanAllocation.vlan_id >= seg_min),
    -                (n1kv_models_v2.N1kvVlanAllocation.vlan_id <=
    -                 seg_max)).delete()
    -        elif net_p['segment_type'] == c_const.NETWORK_TYPE_OVERLAY:
    -            db_session.query(n1kv_models_v2.N1kvVxlanAllocation).filter(
    -                (n1kv_models_v2.N1kvVxlanAllocation.vxlan_id >= seg_min),
    -                (n1kv_models_v2.N1kvVxlanAllocation.vxlan_id <=
    -                 seg_max)).delete()
    -
    -
    -def sync_vlan_allocations(db_session, net_p):
    -    """
    -    Synchronize vlan_allocations table with configured VLAN ranges.
    -
    -    Sync the network profile range with the vlan_allocations table for each
    -    physical network.
    -    :param db_session: database session
    -    :param net_p: network profile dictionary
    -    """
    -    with db_session.begin(subtransactions=True):
    -        seg_min, seg_max = get_segment_range(net_p)
    -        for vlan_id in range(seg_min, seg_max + 1):
    -            try:
    -                get_vlan_allocation(db_session,
    -                                    net_p['physical_network'],
    -                                    vlan_id)
    -            except c_exc.VlanIDNotFound:
    -                alloc = n1kv_models_v2.N1kvVlanAllocation(
    -                    physical_network=net_p['physical_network'],
    -                    vlan_id=vlan_id,
    -                    network_profile_id=net_p['id'])
    -                db_session.add(alloc)
    -
    -
    -def get_vlan_allocation(db_session, physical_network, vlan_id):
    -    """
    -    Retrieve vlan allocation.
    -
    -    :param db_session: database session
    -    :param physical network: string name for the physical network
    -    :param vlan_id: integer representing the VLAN ID.
    -    :returns: allocation object for given physical network and VLAN ID
    -    """
    -    try:
    -        return (db_session.query(n1kv_models_v2.N1kvVlanAllocation).
    -                filter_by(physical_network=physical_network,
    -                          vlan_id=vlan_id).one())
    -    except exc.NoResultFound:
    -        raise c_exc.VlanIDNotFound(vlan_id=vlan_id)
    -
    -
    -def reserve_vlan(db_session, network_profile):
    -    """
    -    Reserve a VLAN ID within the range of the network profile.
    -
    -    :param db_session: database session
    -    :param network_profile: network profile object
    -    """
    -    seg_min, seg_max = get_segment_range(network_profile)
    -    segment_type = c_const.NETWORK_TYPE_VLAN
    -
    -    with db_session.begin(subtransactions=True):
    -        alloc = (db_session.query(n1kv_models_v2.N1kvVlanAllocation).
    -                 filter(sql.and_(
    -                        n1kv_models_v2.N1kvVlanAllocation.vlan_id >= seg_min,
    -                        n1kv_models_v2.N1kvVlanAllocation.vlan_id <= seg_max,
    -                        n1kv_models_v2.N1kvVlanAllocation.physical_network ==
    -                        network_profile['physical_network'],
    -                        n1kv_models_v2.N1kvVlanAllocation.allocated ==
    -                        sql.false())
    -                        )).first()
    -        if alloc:
    -            segment_id = alloc.vlan_id
    -            physical_network = alloc.physical_network
    -            alloc.allocated = True
    -            return (physical_network, segment_type, segment_id, "0.0.0.0")
    -        raise c_exc.NoMoreNetworkSegments(
    -            network_profile_name=network_profile.name)
    -
    -
    -def reserve_vxlan(db_session, network_profile):
    -    """
    -    Reserve a VXLAN ID within the range of the network profile.
    -
    -    :param db_session: database session
    -    :param network_profile: network profile object
    -    """
    -    seg_min, seg_max = get_segment_range(network_profile)
    -    segment_type = c_const.NETWORK_TYPE_OVERLAY
    -    physical_network = ""
    -
    -    with db_session.begin(subtransactions=True):
    -        alloc = (db_session.query(n1kv_models_v2.N1kvVxlanAllocation).
    -                 filter(sql.and_(
    -                        n1kv_models_v2.N1kvVxlanAllocation.vxlan_id >=
    -                        seg_min,
    -                        n1kv_models_v2.N1kvVxlanAllocation.vxlan_id <=
    -                        seg_max,
    -                        n1kv_models_v2.N1kvVxlanAllocation.allocated ==
    -                        sql.false())
    -                        ).first())
    -        if alloc:
    -            segment_id = alloc.vxlan_id
    -            alloc.allocated = True
    -            if network_profile.sub_type == (c_const.
    -                                            NETWORK_SUBTYPE_NATIVE_VXLAN):
    -                return (physical_network, segment_type,
    -                        segment_id, get_multicast_ip(network_profile))
    -            else:
    -                return (physical_network, segment_type, segment_id, "0.0.0.0")
    -        raise n_exc.NoNetworkAvailable()
    -
    -
    -def alloc_network(db_session, network_profile_id, tenant_id):
    -    """
    -    Allocate network using first available free segment ID in segment range.
    -
    -    :param db_session: database session
    -    :param network_profile_id: UUID representing the network profile
    -    """
    -    with db_session.begin(subtransactions=True):
    -        network_profile = get_network_profile(db_session,
    -                                              network_profile_id, tenant_id)
    -        if network_profile.segment_type == c_const.NETWORK_TYPE_VLAN:
    -            return reserve_vlan(db_session, network_profile)
    -        if network_profile.segment_type == c_const.NETWORK_TYPE_OVERLAY:
    -            return reserve_vxlan(db_session, network_profile)
    -        return (None, network_profile.segment_type, 0, "0.0.0.0")
    -
    -
    -def reserve_specific_vlan(db_session, physical_network, vlan_id):
    -    """
    -    Reserve a specific VLAN ID for the network.
    -
    -    :param db_session: database session
    -    :param physical_network: string representing the name of physical network
    -    :param vlan_id: integer value of the segmentation ID to be reserved
    -    """
    -    with db_session.begin(subtransactions=True):
    -        try:
    -            alloc = (db_session.query(n1kv_models_v2.N1kvVlanAllocation).
    -                     filter_by(physical_network=physical_network,
    -                               vlan_id=vlan_id).
    -                     one())
    -            if alloc.allocated:
    -                if vlan_id == c_const.FLAT_VLAN_ID:
    -                    raise n_exc.FlatNetworkInUse(
    -                        physical_network=physical_network)
    -                else:
    -                    raise n_exc.VlanIdInUse(vlan_id=vlan_id,
    -                                            physical_network=physical_network)
    -            LOG.debug("Reserving specific vlan %(vlan)s on physical network "
    -                      "%(network)s from pool",
    -                      {"vlan": vlan_id, "network": physical_network})
    -            alloc.allocated = True
    -            db_session.add(alloc)
    -        except exc.NoResultFound:
    -            raise c_exc.VlanIDOutsidePool()
    -
    -
    -def release_vlan(db_session, physical_network, vlan_id):
    -    """
    -    Release a given VLAN ID.
    -
    -    :param db_session: database session
    -    :param physical_network: string representing the name of physical network
    -    :param vlan_id: integer value of the segmentation ID to be released
    -    """
    -    with db_session.begin(subtransactions=True):
    -        try:
    -            alloc = (db_session.query(n1kv_models_v2.N1kvVlanAllocation).
    -                     filter_by(physical_network=physical_network,
    -                               vlan_id=vlan_id).
    -                     one())
    -            alloc.allocated = False
    -        except exc.NoResultFound:
    -            LOG.warning(_LW("vlan_id %(vlan)s on physical network %(network)s "
    -                            "not found"),
    -                        {"vlan": vlan_id, "network": physical_network})
    -
    -
    -def sync_vxlan_allocations(db_session, net_p):
    -    """
    -    Synchronize vxlan_allocations table with configured vxlan ranges.
    -
    -    :param db_session: database session
    -    :param net_p: network profile dictionary
    -    """
    -    seg_min, seg_max = get_segment_range(net_p)
    -    if seg_max + 1 - seg_min > c_const.MAX_VXLAN_RANGE:
    -        msg = (_("Unreasonable vxlan ID range %(vxlan_min)s - %(vxlan_max)s") %
    -               {"vxlan_min": seg_min, "vxlan_max": seg_max})
    -        raise n_exc.InvalidInput(error_message=msg)
    -    with db_session.begin(subtransactions=True):
    -        for vxlan_id in range(seg_min, seg_max + 1):
    -            try:
    -                get_vxlan_allocation(db_session, vxlan_id)
    -            except c_exc.VxlanIDNotFound:
    -                alloc = n1kv_models_v2.N1kvVxlanAllocation(
    -                    network_profile_id=net_p['id'], vxlan_id=vxlan_id)
    -                db_session.add(alloc)
    -
    -
    -def get_vxlan_allocation(db_session, vxlan_id):
    -    """
    -    Retrieve VXLAN allocation for the given VXLAN ID.
    -
    -    :param db_session: database session
    -    :param vxlan_id: integer value representing the segmentation ID
    -    :returns: allocation object
    -    """
    -    try:
    -        return (db_session.query(n1kv_models_v2.N1kvVxlanAllocation).
    -                filter_by(vxlan_id=vxlan_id).one())
    -    except exc.NoResultFound:
    -        raise c_exc.VxlanIDNotFound(vxlan_id=vxlan_id)
    -
    -
    -def reserve_specific_vxlan(db_session, vxlan_id):
    -    """
    -    Reserve a specific VXLAN ID.
    -
    -    :param db_session: database session
    -    :param vxlan_id: integer value representing the segmentation ID
    -    """
    -    with db_session.begin(subtransactions=True):
    -        try:
    -            alloc = (db_session.query(n1kv_models_v2.N1kvVxlanAllocation).
    -                     filter_by(vxlan_id=vxlan_id).
    -                     one())
    -            if alloc.allocated:
    -                raise c_exc.VxlanIDInUse(vxlan_id=vxlan_id)
    -            LOG.debug("Reserving specific vxlan %s from pool", vxlan_id)
    -            alloc.allocated = True
    -            db_session.add(alloc)
    -        except exc.NoResultFound:
    -            raise c_exc.VxlanIDOutsidePool()
    -
    -
    -def release_vxlan(db_session, vxlan_id):
    -    """
    -    Release a given VXLAN ID.
    -
    -    :param db_session: database session
    -    :param vxlan_id: integer value representing the segmentation ID
    -    """
    -    with db_session.begin(subtransactions=True):
    -        try:
    -            alloc = (db_session.query(n1kv_models_v2.N1kvVxlanAllocation).
    -                     filter_by(vxlan_id=vxlan_id).
    -                     one())
    -            alloc.allocated = False
    -        except exc.NoResultFound:
    -            LOG.warning(_LW("vxlan_id %s not found"), vxlan_id)
    -
    -
    -def set_port_status(port_id, status):
    -    """
    -    Set the status of the port.
    -
    -    :param port_id: UUID representing the port
    -    :param status: string representing the new status
    -    """
    -    db_session = db.get_session()
    -    try:
    -        port = db_session.query(models_v2.Port).filter_by(id=port_id).one()
    -        port.status = status
    -    except exc.NoResultFound:
    -        raise n_exc.PortNotFound(port_id=port_id)
    -
    -
    -def get_vm_network(db_session, policy_profile_id, network_id):
    -    """
    -    Retrieve a vm_network based on policy profile and network id.
    -
    -    :param db_session: database session
    -    :param policy_profile_id: UUID representing policy profile
    -    :param network_id: UUID representing network
    -    :returns: VM network object
    -    """
    -    try:
    -        return (db_session.query(n1kv_models_v2.N1kVmNetwork).
    -                filter_by(profile_id=policy_profile_id,
    -                          network_id=network_id).one())
    -    except exc.NoResultFound:
    -        name = (c_const.VM_NETWORK_NAME_PREFIX + policy_profile_id
    -                + "_" + network_id)
    -        raise c_exc.VMNetworkNotFound(name=name)
    -
    -
    -def add_vm_network(db_session,
    -                   name,
    -                   policy_profile_id,
    -                   network_id,
    -                   port_count):
    -    """
    -    Create a VM network.
    -
    -    Add a VM network for a unique combination of network and
    -    policy profile. All ports having the same policy profile
    -    on one network will be associated with one VM network.
    -    :param db_session: database session
    -    :param name: string representing the name of the VM network
    -    :param policy_profile_id: UUID representing policy profile
    -    :param network_id: UUID representing a network
    -    :param port_count: integer representing the number of ports on vm network
    -    """
    -    with db_session.begin(subtransactions=True):
    -        vm_network = n1kv_models_v2.N1kVmNetwork(
    -            name=name,
    -            profile_id=policy_profile_id,
    -            network_id=network_id,
    -            port_count=port_count)
    -        db_session.add(vm_network)
    -        return vm_network
    -
    -
    -def update_vm_network_port_count(db_session, name, port_count):
    -    """
    -    Update a VM network with new port count.
    -
    -    :param db_session: database session
    -    :param name: string representing the name of the VM network
    -    :param port_count: integer representing the number of ports on VM network
    -    """
    -    try:
    -        with db_session.begin(subtransactions=True):
    -            vm_network = (db_session.query(n1kv_models_v2.N1kVmNetwork).
    -                          filter_by(name=name).one())
    -            if port_count is not None:
    -                vm_network.port_count = port_count
    -            return vm_network
    -    except exc.NoResultFound:
    -        raise c_exc.VMNetworkNotFound(name=name)
    -
    -
    -def delete_vm_network(db_session, policy_profile_id, network_id):
    -    """
    -    Delete a VM network.
    -
    -    :param db_session: database session
    -    :param policy_profile_id: UUID representing a policy profile
    -    :param network_id: UUID representing a network
    -    :returns: deleted VM network object
    -    """
    -    with db_session.begin(subtransactions=True):
    -        try:
    -            vm_network = get_vm_network(db_session,
    -                                        policy_profile_id,
    -                                        network_id)
    -            db_session.delete(vm_network)
    -            db_session.query(n1kv_models_v2.N1kVmNetwork).filter_by(
    -                name=vm_network["name"]).delete()
    -            return vm_network
    -        except exc.NoResultFound:
    -            name = (c_const.VM_NETWORK_NAME_PREFIX + policy_profile_id +
    -                    "_" + network_id)
    -            raise c_exc.VMNetworkNotFound(name=name)
    -
    -
    -def create_network_profile(db_session, network_profile):
    -    """Create a network profile."""
    -    LOG.debug("create_network_profile()")
    -    with db_session.begin(subtransactions=True):
    -        kwargs = {"name": network_profile["name"],
    -                  "segment_type": network_profile["segment_type"]}
    -        if network_profile["segment_type"] == c_const.NETWORK_TYPE_VLAN:
    -            kwargs["physical_network"] = network_profile["physical_network"]
    -            kwargs["segment_range"] = network_profile["segment_range"]
    -        elif network_profile["segment_type"] == c_const.NETWORK_TYPE_OVERLAY:
    -            kwargs["multicast_ip_index"] = 0
    -            kwargs["multicast_ip_range"] = network_profile[
    -                "multicast_ip_range"]
    -            kwargs["segment_range"] = network_profile["segment_range"]
    -            kwargs["sub_type"] = network_profile["sub_type"]
    -        elif network_profile["segment_type"] == c_const.NETWORK_TYPE_TRUNK:
    -            kwargs["sub_type"] = network_profile["sub_type"]
    -        net_profile = n1kv_models_v2.NetworkProfile(**kwargs)
    -        db_session.add(net_profile)
    -        return net_profile
    -
    -
    -def delete_network_profile(db_session, id, tenant_id=None):
    -    """Delete Network Profile."""
    -    LOG.debug("delete_network_profile()")
    -    with db_session.begin(subtransactions=True):
    -        try:
    -            network_profile = get_network_profile(db_session, id, tenant_id)
    -            db_session.delete(network_profile)
    -            (db_session.query(n1kv_models_v2.ProfileBinding).
    -             filter_by(profile_id=id).delete())
    -            return network_profile
    -        except exc.NoResultFound:
    -            raise c_exc.ProfileTenantBindingNotFound(profile_id=id)
    -
    -
    -def update_network_profile(db_session, id, network_profile, tenant_id=None):
    -    """Update Network Profile."""
    -    LOG.debug("update_network_profile()")
    -    with db_session.begin(subtransactions=True):
    -        profile = get_network_profile(db_session, id, tenant_id)
    -        profile.update(network_profile)
    -        return profile
    -
    -
    -def get_network_profile(db_session, id, tenant_id=None):
    -    """Get Network Profile."""
    -    LOG.debug("get_network_profile()")
    -    if tenant_id and c_conf.CISCO_N1K.restrict_network_profiles:
    -        if _profile_binding_exists(db_session=db_session,
    -                                   tenant_id=tenant_id,
    -                                   profile_id=id,
    -                                   profile_type=c_const.NETWORK) is None:
    -            raise c_exc.ProfileTenantBindingNotFound(profile_id=id)
    -    try:
    -        return db_session.query(n1kv_models_v2.NetworkProfile).filter_by(
    -                   id=id).one()
    -    except exc.NoResultFound:
    -        raise c_exc.NetworkProfileNotFound(profile=id)
    -
    -
    -def _get_network_profiles(db_session=None, physical_network=None):
    -    """
    -    Retrieve all network profiles.
    -
    -    Get Network Profiles on a particular physical network, if physical
    -    network is specified. If no physical network is specified, return
    -    all network profiles.
    -    """
    -    db_session = db_session or db.get_session()
    -    if physical_network:
    -        return (db_session.query(n1kv_models_v2.NetworkProfile).
    -                filter_by(physical_network=physical_network))
    -    return db_session.query(n1kv_models_v2.NetworkProfile)
    -
    -
    -def create_policy_profile(policy_profile):
    -    """Create Policy Profile."""
    -    LOG.debug("create_policy_profile()")
    -    db_session = db.get_session()
    -    with db_session.begin(subtransactions=True):
    -        p_profile = n1kv_models_v2.PolicyProfile(id=policy_profile["id"],
    -                                                 name=policy_profile["name"])
    -        db_session.add(p_profile)
    -        return p_profile
    -
    -
    -def delete_policy_profile(id):
    -    """Delete Policy Profile."""
    -    LOG.debug("delete_policy_profile()")
    -    db_session = db.get_session()
    -    with db_session.begin(subtransactions=True):
    -        policy_profile = get_policy_profile(db_session, id)
    -        db_session.delete(policy_profile)
    -
    -
    -def update_policy_profile(db_session, id, policy_profile):
    -    """Update a policy profile."""
    -    LOG.debug("update_policy_profile()")
    -    with db_session.begin(subtransactions=True):
    -        _profile = get_policy_profile(db_session, id)
    -        _profile.update(policy_profile)
    -        return _profile
    -
    -
    -def get_policy_profile(db_session, id):
    -    """Get Policy Profile."""
    -    LOG.debug("get_policy_profile()")
    -    try:
    -        return db_session.query(
    -            n1kv_models_v2.PolicyProfile).filter_by(id=id).one()
    -    except exc.NoResultFound:
    -        raise c_exc.PolicyProfileIdNotFound(profile_id=id)
    -
    -
    -def get_policy_profiles():
    -    """Retrieve all policy profiles."""
    -    db_session = db.get_session()
    -    with db_session.begin(subtransactions=True):
    -        return db_session.query(n1kv_models_v2.PolicyProfile)
    -
    -
    -def create_profile_binding(db_session, tenant_id, profile_id, profile_type):
    -    """Create Network/Policy Profile association with a tenant."""
    -    db_session = db_session or db.get_session()
    -    if profile_type not in ["network", "policy"]:
    -        raise n_exc.NeutronException(_("Invalid profile type"))
    -
    -    if _profile_binding_exists(db_session,
    -                               tenant_id,
    -                               profile_id,
    -                               profile_type):
    -        return get_profile_binding(db_session, tenant_id, profile_id)
    -
    -    with db_session.begin(subtransactions=True):
    -        binding = n1kv_models_v2.ProfileBinding(profile_type=profile_type,
    -                                                profile_id=profile_id,
    -                                                tenant_id=tenant_id)
    -        db_session.add(binding)
    -        return binding
    -
    -
    -def _profile_binding_exists(db_session, tenant_id, profile_id, profile_type):
    -    """Check if the profile-tenant binding exists."""
    -    LOG.debug("_profile_binding_exists()")
    -    db_session = db_session or db.get_session()
    -    return (db_session.query(n1kv_models_v2.ProfileBinding).
    -            filter_by(tenant_id=tenant_id, profile_id=profile_id,
    -                      profile_type=profile_type).first())
    -
    -
    -def get_profile_binding(db_session, tenant_id, profile_id):
    -    """Get Network/Policy Profile - Tenant binding."""
    -    LOG.debug("get_profile_binding()")
    -    try:
    -        return (db_session.query(n1kv_models_v2.ProfileBinding).filter_by(
    -            tenant_id=tenant_id, profile_id=profile_id).one())
    -    except exc.NoResultFound:
    -        raise c_exc.ProfileTenantBindingNotFound(profile_id=profile_id)
    -
    -
    -def delete_profile_binding(db_session, tenant_id, profile_id):
    -    """Delete Policy Binding."""
    -    LOG.debug("delete_profile_binding()")
    -    db_session = db_session or db.get_session()
    -    try:
    -        binding = get_profile_binding(db_session, tenant_id, profile_id)
    -        with db_session.begin(subtransactions=True):
    -            db_session.delete(binding)
    -    except c_exc.ProfileTenantBindingNotFound:
    -        LOG.debug("Profile-Tenant binding missing for profile ID "
    -                  "%(profile_id)s and tenant ID %(tenant_id)s",
    -                  {"profile_id": profile_id, "tenant_id": tenant_id})
    -        return
    -
    -
    -def update_profile_binding(db_session, profile_id, tenants, profile_type):
    -    """Updating Profile Binding."""
    -    LOG.debug('update_profile_binding()')
    -    if profile_type not in ("network", "policy"):
    -        raise n_exc.NeutronException(_("Invalid profile type"))
    -    db_session = db_session or db.get_session()
    -    with db_session.begin(subtransactions=True):
    -        db_session.query(n1kv_models_v2.ProfileBinding).filter_by(
    -            profile_id=profile_id, profile_type=profile_type).delete()
    -        new_tenants_set = set(tenants)
    -        for tenant_id in new_tenants_set:
    -            tenant = n1kv_models_v2.ProfileBinding(profile_type=profile_type,
    -                                                   tenant_id=tenant_id,
    -                                                   profile_id=profile_id)
    -            db_session.add(tenant)
    -
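`update_profile_binding` replaces, rather than merges, a profile's tenant bindings: it deletes every existing binding for the profile/type, then recreates one binding per unique tenant. A toy sketch of that semantics over a plain list of tuples (hypothetical stand-in for the `ProfileBinding` table):

```python
def update_profile_binding(bindings, profile_id, tenants, profile_type):
    """bindings: list of (profile_type, tenant_id, profile_id) tuples,
    a toy stand-in for the ProfileBinding table."""
    if profile_type not in ("network", "policy"):
        raise ValueError("Invalid profile type")
    # Delete every existing binding for this profile/type ...
    bindings[:] = [b for b in bindings
                   if not (b[2] == profile_id and b[0] == profile_type)]
    # ... then recreate one binding per unique tenant (duplicates collapse).
    for tenant_id in set(tenants):
        bindings.append((profile_type, tenant_id, profile_id))
```

Note that bindings of a different profile type for the same profile are untouched, matching the `filter_by(profile_id=..., profile_type=...)` delete in the original.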
    -
    -def _get_profile_bindings(db_session, profile_type=None):
    -    """
    -    Retrieve a list of profile bindings.
    -
    -    Get all profile-tenant bindings based on profile type.
    -    If profile type is None, return profile-tenant binding for all
    -    profile types.
    -    """
    -    if profile_type:
    -        return (db_session.query(n1kv_models_v2.ProfileBinding).
    -                filter_by(profile_type=profile_type))
    -    return db_session.query(n1kv_models_v2.ProfileBinding)
    -
    -
    -def _get_profile_bindings_by_uuid(db_session, profile_id):
    -    """
    -    Retrieve a list of profile bindings.
    -
    -    Get all profile-tenant bindings based on profile UUID.
    -    """
    -    return (db_session.query(n1kv_models_v2.ProfileBinding).
    -            filter_by(profile_id=profile_id))
    -
    -
    -class NetworkProfile_db_mixin(object):
    -
    -    """Network Profile Mixin."""
    -
    -    def _replace_fake_tenant_id_with_real(self, context):
    -        """
    -        Replace default tenant-id with admin tenant-ids.
    -
    -        Default tenant-ids are populated in profile bindings when plugin is
    -        initialized. Replace these tenant-ids with admin's tenant-id.
    -        :param context: neutron api request context
    -        """
    -        if context.is_admin and context.tenant_id:
    -            tenant_id = context.tenant_id
    -            db_session = context.session
    -            with db_session.begin(subtransactions=True):
    -                (db_session.query(n1kv_models_v2.ProfileBinding).
    -                 filter_by(tenant_id=c_const.TENANT_ID_NOT_SET).
    -                 update({'tenant_id': tenant_id}))
    -
    -    def _get_network_collection_for_tenant(self, db_session, model, tenant_id):
    -        net_profile_ids = (db_session.query(n1kv_models_v2.ProfileBinding.
    -                                            profile_id).
    -                           filter_by(tenant_id=tenant_id).
    -                           filter_by(profile_type=c_const.NETWORK).all())
    -        if not net_profile_ids:
    -            return []
    -        network_profiles = (db_session.query(model).filter(model.id.in_(
    -            pid[0] for pid in net_profile_ids)))
    -        return [self._make_network_profile_dict(p) for p in network_profiles]
    -
    -    def _make_profile_bindings_dict(self, profile_binding, fields=None):
    -        res = {"profile_id": profile_binding["profile_id"],
    -               "tenant_id": profile_binding["tenant_id"]}
    -        return self._fields(res, fields)
    -
    -    def _make_network_profile_dict(self, network_profile, fields=None):
    -        res = {"id": network_profile["id"],
    -               "name": network_profile["name"],
    -               "segment_type": network_profile["segment_type"],
    -               "sub_type": network_profile["sub_type"],
    -               "segment_range": network_profile["segment_range"],
    -               "multicast_ip_index": network_profile["multicast_ip_index"],
    -               "multicast_ip_range": network_profile["multicast_ip_range"],
    -               "physical_network": network_profile["physical_network"]}
    -        return self._fields(res, fields)
    -
    -    def _segment_in_use(self, db_session, network_profile):
    -        """Verify whether a segment is allocated for given network profile."""
    -        with db_session.begin(subtransactions=True):
    -            return (db_session.query(n1kv_models_v2.N1kvNetworkBinding).
    -                    filter_by(profile_id=network_profile['id'])).first()
    -
    -    def get_network_profile_bindings(self, context, filters=None, fields=None):
    -        """
    -        Retrieve a list of profile bindings for network profiles.
    -
    -        :param context: neutron api request context
    -        :param filters: a dictionary with keys that are valid keys for a
    -                        profile bindings object. Values in this dictionary are
    -                        an iterable containing values that will be used for an
    -                        exact match comparison for that value. Each result
    -                        returned by this function will have matched one of the
    -                        values for each key in filters
    -        :param fields: a list of strings that are valid keys in a profile
    -                        bindings dictionary. Only these fields will be returned
    -        :returns: list of profile bindings
    -        """
    -        if context.is_admin:
    -            profile_bindings = _get_profile_bindings(
    -                context.session,
    -                profile_type=c_const.NETWORK)
    -            return [self._make_profile_bindings_dict(pb)
    -                    for pb in profile_bindings]
    -
    -    def create_network_profile(self, context, network_profile):
    -        """
    -        Create a network profile.
    -
    -        :param context: neutron api request context
    -        :param network_profile: network profile dictionary
    -        :returns: network profile dictionary
    -        """
    -        self._replace_fake_tenant_id_with_real(context)
    -        p = network_profile["network_profile"]
    -        self._validate_network_profile_args(context, p)
    -        with context.session.begin(subtransactions=True):
    -            net_profile = create_network_profile(context.session, p)
    -            if net_profile.segment_type == c_const.NETWORK_TYPE_VLAN:
    -                sync_vlan_allocations(context.session, net_profile)
    -            elif net_profile.segment_type == c_const.NETWORK_TYPE_OVERLAY:
    -                sync_vxlan_allocations(context.session, net_profile)
    -            create_profile_binding(context.session,
    -                                   context.tenant_id,
    -                                   net_profile.id,
    -                                   c_const.NETWORK)
    -            if p.get(c_const.ADD_TENANTS):
    -                for tenant in p[c_const.ADD_TENANTS]:
    -                    self.add_network_profile_tenant(context.session,
    -                                                    net_profile.id,
    -                                                    tenant)
    -        return self._make_network_profile_dict(net_profile)
    -
    -    def delete_network_profile(self, context, id):
    -        """
    -        Delete a network profile.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing network profile to delete
    -        :returns: deleted network profile dictionary
    -        """
    -        # Check whether the network profile is in use.
    -        if self._segment_in_use(context.session,
    -                                get_network_profile(context.session, id,
    -                                                    context.tenant_id)):
    -            raise c_exc.NetworkProfileInUse(profile=id)
    -        # Delete and return the network profile if it is not in use.
    -        _profile = delete_network_profile(context.session, id,
    -                                          context.tenant_id)
    -        return self._make_network_profile_dict(_profile)
    -
    -    def update_network_profile(self, context, id, network_profile):
    -        """
    -        Update a network profile.
    -
    -        Add/remove network profile to tenant-id binding for the corresponding
    -        options and if user is admin.
    -        :param context: neutron api request context
    -        :param id: UUID representing network profile to update
    -        :param network_profile: network profile dictionary
    -        :returns: updated network profile dictionary
    -        """
    -        # Flag to check whether network profile is updated or not.
    -        is_updated = False
    -        p = network_profile["network_profile"]
    -        original_net_p = get_network_profile(context.session, id,
    -                                             context.tenant_id)
    -        # Update network profile to tenant id binding.
    -        if context.is_admin and c_const.ADD_TENANTS in p:
    -            profile_bindings = _get_profile_bindings_by_uuid(context.session,
    -                                                             profile_id=id)
    -            for bindings in profile_bindings:
    -                p[c_const.ADD_TENANTS].append(bindings.tenant_id)
    -            update_profile_binding(context.session, id,
    -                                   p[c_const.ADD_TENANTS], c_const.NETWORK)
    -            is_updated = True
    -        if context.is_admin and c_const.REMOVE_TENANTS in p:
    -            for remove_tenant in p[c_const.REMOVE_TENANTS]:
    -                if remove_tenant == context.tenant_id:
    -                    continue
    -                delete_profile_binding(context.session, remove_tenant, id)
    -            is_updated = True
    -        if original_net_p.segment_type == c_const.NETWORK_TYPE_TRUNK:
    -            # TODO(abhraut): Remove check when Trunk supports segment range.
    -            if p.get('segment_range'):
    -                msg = _("segment_range not required for TRUNK")
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -        if original_net_p.segment_type in [c_const.NETWORK_TYPE_VLAN,
    -                                           c_const.NETWORK_TYPE_TRUNK]:
    -            if p.get("multicast_ip_range"):
    -                msg = _("multicast_ip_range not required")
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -        # Update segment range if network profile is not in use.
    -        if (p.get("segment_range") and
    -            p.get("segment_range") != original_net_p.segment_range):
    -            if not self._segment_in_use(context.session, original_net_p):
    -                delete_segment_allocations(context.session, original_net_p)
    -                updated_net_p = update_network_profile(context.session, id, p,
    -                                                       context.tenant_id)
    -                self._validate_segment_range_uniqueness(context,
    -                                                        updated_net_p, id)
    -                if original_net_p.segment_type == c_const.NETWORK_TYPE_VLAN:
    -                    sync_vlan_allocations(context.session, updated_net_p)
    -                if original_net_p.segment_type == c_const.NETWORK_TYPE_OVERLAY:
    -                    sync_vxlan_allocations(context.session, updated_net_p)
    -                is_updated = True
    -            else:
    -                raise c_exc.NetworkProfileInUse(profile=id)
    -        if (p.get('multicast_ip_range') and
    -            (p.get("multicast_ip_range") !=
    -             original_net_p.get("multicast_ip_range"))):
    -            self._validate_multicast_ip_range(p)
    -            if not self._segment_in_use(context.session, original_net_p):
    -                is_updated = True
    -            else:
    -                raise c_exc.NetworkProfileInUse(profile=id)
    -        # Update network profile if name is updated and the network profile
    -        # is not yet updated.
    -        if "name" in p and not is_updated:
    -            is_updated = True
    -        # Return network profile if it is successfully updated.
    -        if is_updated:
    -            return self._make_network_profile_dict(
    -                update_network_profile(context.session, id, p,
    -                                       context.tenant_id))
    -
    -    def get_network_profile(self, context, id, fields=None):
    -        """
    -        Retrieve a network profile.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing the network profile to retrieve
    -        :param fields: a list of strings that are valid keys in a network
    -                        profile dictionary. Only these fields will be returned
    -        :returns: network profile dictionary
    -        """
    -        profile = get_network_profile(context.session, id, context.tenant_id)
    -        return self._make_network_profile_dict(profile, fields)
    -
    -    def get_network_profiles(self, context, filters=None, fields=None):
    -        """
    -        Retrieve a list of all network profiles.
    -
    -        Retrieve all network profiles if tenant is admin. For a non-admin
    -        tenant, retrieve all network profiles belonging to this tenant only.
    -        :param context: neutron api request context
    -        :param filters: a dictionary with keys that are valid keys for a
    -                        network profile object. Values in this dictionary are
    -                        an iterable containing values that will be used for an
    -                        exact match comparison for that value. Each result
    -                        returned by this function will have matched one of the
    -                        values for each key in filters
    -        :param fields: a list of strings that are valid keys in a network
    -                        profile dictionary. Only these fields will be returned
    -        :returns: list of all network profiles
    -        """
    -        if context.is_admin:
    -            return self._get_collection(context, n1kv_models_v2.NetworkProfile,
    -                                        self._make_network_profile_dict,
    -                                        filters=filters, fields=fields)
    -        return self._get_network_collection_for_tenant(context.session,
    -                                                       n1kv_models_v2.
    -                                                       NetworkProfile,
    -                                                       context.tenant_id)
    -
    -    def add_network_profile_tenant(self,
    -                                   db_session,
    -                                   network_profile_id,
    -                                   tenant_id):
    -        """
    -        Add a tenant to a network profile.
    -
    -        :param db_session: database session
    -        :param network_profile_id: UUID representing network profile
    -        :param tenant_id: UUID representing the tenant
    -        :returns: profile binding object
    -        """
    -        return create_profile_binding(db_session,
    -                                      tenant_id,
    -                                      network_profile_id,
    -                                      c_const.NETWORK)
    -
    -    def network_profile_exists(self, context, id):
    -        """
    -        Verify whether a network profile for given id exists.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing network profile
    -        :returns: True if the network profile exists, else False
    -        """
    -        try:
    -            get_network_profile(context.session, id, context.tenant_id)
    -            return True
    -        except c_exc.NetworkProfileNotFound:
    -            return False
    -
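In an `except` clause Python matches against the exception *class*; writing `except SomeError(arg):` constructs an instance there and the handler never fires (Python 3 raises `TypeError` at match time). A standalone sketch of the lookup-to-boolean pattern used by `network_profile_exists`, with toy stand-in names:

```python
class NetworkProfileNotFound(Exception):
    """Toy stand-in for c_exc.NetworkProfileNotFound."""
    def __init__(self, profile):
        super().__init__("network profile %s could not be found" % profile)

def get_network_profile(profile_id, profiles):
    # profiles: dict of id -> profile, standing in for the DB session query.
    try:
        return profiles[profile_id]
    except KeyError:
        raise NetworkProfileNotFound(profile=profile_id)

def network_profile_exists(profile_id, profiles):
    """Return True if the profile can be fetched, False if lookup fails."""
    try:
        get_network_profile(profile_id, profiles)
        return True
    except NetworkProfileNotFound:  # the class itself, never an instance
        return False
```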
    -    def _get_segment_range(self, data):
    -        return (int(seg) for seg in data.split("-")[:2])
    -
    -    def _validate_network_profile_args(self, context, p):
    -        """
    -        Validate completeness of Nexus1000V network profile arguments.
    -
    -        :param context: neutron api request context
    -        :param p: network profile object
    -        """
    -        self._validate_network_profile(p)
    -        segment_type = p['segment_type'].lower()
    -        if segment_type != c_const.NETWORK_TYPE_TRUNK:
    -            self._validate_segment_range_uniqueness(context, p)
    -
    -    def _validate_segment_range(self, network_profile):
    -        """
    -        Validate segment range values.
    -
    -        :param network_profile: network profile object
    -        """
    -        if not re.match(r"(\d+)\-(\d+)", network_profile["segment_range"]):
    -            msg = _("Invalid segment range. example range: 500-550")
    -            raise n_exc.InvalidInput(error_message=msg)
    -
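As a side note on the range validation above: an unanchored `re.match(r"(\d+)\-(\d+)", ...)` accepts trailing garbage such as `"500-550x"`, since `re.match` only anchors at the start. A hedged sketch of the same parse with an anchored pattern (hypothetical helper name, ordering check added for illustration):

```python
import re

# Anchored at both ends, unlike the pattern in the deleted code.
SEGMENT_RANGE_RE = re.compile(r"^(\d+)-(\d+)$")

def parse_segment_range(data):
    """Return (low, high) ints for a range like '500-550', or raise ValueError."""
    m = SEGMENT_RANGE_RE.match(data)
    if m is None:
        raise ValueError("Invalid segment range. example range: 500-550")
    low, high = int(m.group(1)), int(m.group(2))
    if low > high:
        raise ValueError("segment range must run from low to high")
    return low, high
```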
    -    def _validate_multicast_ip_range(self, network_profile):
    -        """
    -        Validate multicast ip range values.
    -
    -        :param network_profile: network profile object
    -        """
    -        try:
    -            min_ip, max_ip = (network_profile
    -                              ['multicast_ip_range'].split('-', 1))
    -        except ValueError:
    -            msg = _("Invalid multicast ip address range. "
    -                    "example range: 224.1.1.1-224.1.1.10")
    -            LOG.error(msg)
    -            raise n_exc.InvalidInput(error_message=msg)
    -        for ip in [min_ip, max_ip]:
    -            try:
    -                if not netaddr.IPAddress(ip).is_multicast():
    -                    msg = _("%s is not a valid multicast ip address") % ip
    -                    LOG.error(msg)
    -                    raise n_exc.InvalidInput(error_message=msg)
    -                if netaddr.IPAddress(ip) <= netaddr.IPAddress('224.0.0.255'):
    -                    msg = _("%s is a reserved multicast ip address") % ip
    -                    LOG.error(msg)
    -                    raise n_exc.InvalidInput(error_message=msg)
    -            except netaddr.AddrFormatError:
    -                msg = _("%s is not a valid ip address") % ip
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -        if netaddr.IPAddress(min_ip) > netaddr.IPAddress(max_ip):
    -            msg = (_("Invalid multicast IP range '%(min_ip)s-%(max_ip)s':"
    -                     " Range should be from low address to high address") %
    -                   {'min_ip': min_ip, 'max_ip': max_ip})
    -            LOG.error(msg)
    -            raise n_exc.InvalidInput(error_message=msg)
    -
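The multicast-range checks above boil down to four rules: the string splits into two addresses, both are valid multicast IPs, neither falls in the reserved link-local block at or below 224.0.0.255, and the range runs low to high. A self-contained sketch using the stdlib `ipaddress` module in place of `netaddr` (hypothetical function name; `ValueError` stands in for `InvalidInput`):

```python
import ipaddress

# Addresses at or below 224.0.0.255 are link-local multicast, reserved.
RESERVED_TOP = ipaddress.IPv4Address("224.0.0.255")

def validate_multicast_range(range_str):
    """Validate a 'low-high' multicast range; return the (low, high)
    IPv4Address pair or raise ValueError, mirroring the checks above."""
    try:
        low_s, high_s = range_str.split("-", 1)
    except ValueError:
        raise ValueError("Invalid multicast ip address range. "
                         "example range: 224.1.1.1-224.1.1.10")
    try:
        low = ipaddress.IPv4Address(low_s)
        high = ipaddress.IPv4Address(high_s)
    except ipaddress.AddressValueError:
        raise ValueError("not a valid ip address")
    for ip in (low, high):
        if not ip.is_multicast:  # a property in ipaddress, unlike netaddr
            raise ValueError("%s is not a valid multicast ip address" % ip)
        if ip <= RESERVED_TOP:
            raise ValueError("%s is a reserved multicast ip address" % ip)
    if low > high:
        raise ValueError("Range should be from low address to high address")
    return low, high
```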
    -    def _validate_network_profile(self, net_p):
    -        """
    -        Validate completeness of a network profile arguments.
    -
    -        :param net_p: network profile object
    -        """
    -        if net_p["segment_type"] == "":
    -            msg = _("Argument segment_type missing"
    -                    " for network profile")
    -            LOG.error(msg)
    -            raise n_exc.InvalidInput(error_message=msg)
    -        segment_type = net_p["segment_type"].lower()
    -        if segment_type not in [c_const.NETWORK_TYPE_VLAN,
    -                                c_const.NETWORK_TYPE_OVERLAY,
    -                                c_const.NETWORK_TYPE_TRUNK,
    -                                c_const.NETWORK_TYPE_MULTI_SEGMENT]:
    -            msg = _("segment_type should either be vlan, overlay, "
    -                    "multi-segment or trunk")
    -            LOG.error(msg)
    -            raise n_exc.InvalidInput(error_message=msg)
    -        if segment_type == c_const.NETWORK_TYPE_VLAN:
    -            if "physical_network" not in net_p:
    -                msg = _("Argument physical_network missing "
    -                        "for network profile")
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -        if segment_type == c_const.NETWORK_TYPE_TRUNK:
    -            if net_p["segment_range"]:
    -                msg = _("segment_range not required for trunk")
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -        if segment_type in [c_const.NETWORK_TYPE_TRUNK,
    -                            c_const.NETWORK_TYPE_OVERLAY]:
    -            if not attributes.is_attr_set(net_p.get("sub_type")):
    -                msg = _("Argument sub_type missing "
    -                        "for network profile")
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -        if segment_type in [c_const.NETWORK_TYPE_VLAN,
    -                            c_const.NETWORK_TYPE_OVERLAY]:
    -            if "segment_range" not in net_p:
    -                msg = _("Argument segment_range missing "
    -                        "for network profile")
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -            self._validate_segment_range(net_p)
    -        if segment_type == c_const.NETWORK_TYPE_OVERLAY:
    -            if net_p['sub_type'] != c_const.NETWORK_SUBTYPE_NATIVE_VXLAN:
    -                net_p['multicast_ip_range'] = '0.0.0.0'
    -            else:
    -                multicast_ip_range = net_p.get("multicast_ip_range")
    -                if not attributes.is_attr_set(multicast_ip_range):
    -                    msg = _("Argument multicast_ip_range missing"
    -                            " for VXLAN multicast network profile")
    -                    LOG.error(msg)
    -                    raise n_exc.InvalidInput(error_message=msg)
    -                self._validate_multicast_ip_range(net_p)
    -        else:
    -            net_p['multicast_ip_range'] = '0.0.0.0'
    -
    -    def _validate_segment_range_uniqueness(self, context, net_p, id=None):
    -        """
    -        Validate that segment range doesn't overlap.
    -
    -        :param context: neutron api request context
    -        :param net_p: network profile dictionary
    -        :param id: UUID representing the network profile being updated
    -        """
    -        segment_type = net_p["segment_type"].lower()
    -        seg_min, seg_max = self._get_segment_range(net_p['segment_range'])
    -        if segment_type == c_const.NETWORK_TYPE_VLAN:
    -            if not ((seg_min <= seg_max) and
    -                    ((seg_min in range(p_const.MIN_VLAN_TAG,
    -                                       c_const.N1KV_VLAN_RESERVED_MIN) and
    -                      seg_max in range(p_const.MIN_VLAN_TAG,
    -                                       c_const.N1KV_VLAN_RESERVED_MIN)) or
    -                     (seg_min in range(c_const.N1KV_VLAN_RESERVED_MAX + 1,
    -                                       p_const.MAX_VLAN_TAG) and
    -                      seg_max in range(c_const.N1KV_VLAN_RESERVED_MAX + 1,
    -                                       p_const.MAX_VLAN_TAG)))):
    -                msg = (_("Segment range is invalid, select from "
    -                         "%(min)s-%(nmin)s, %(nmax)s-%(max)s") %
    -                       {"min": p_const.MIN_VLAN_TAG,
    -                        "nmin": c_const.N1KV_VLAN_RESERVED_MIN - 1,
    -                        "nmax": c_const.N1KV_VLAN_RESERVED_MAX + 1,
    -                        "max": p_const.MAX_VLAN_TAG - 1})
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -            profiles = _get_network_profiles(
    -                db_session=context.session,
    -                physical_network=net_p["physical_network"]
    -            )
    -        elif segment_type in [c_const.NETWORK_TYPE_OVERLAY,
    -                              c_const.NETWORK_TYPE_MULTI_SEGMENT,
    -                              c_const.NETWORK_TYPE_TRUNK]:
    -            if (seg_min > seg_max or
    -                seg_min < c_const.N1KV_VXLAN_MIN or
    -                seg_max > c_const.N1KV_VXLAN_MAX):
    -                msg = (_("segment range is invalid. Valid range is : "
    -                         "%(min)s-%(max)s") %
    -                       {"min": c_const.N1KV_VXLAN_MIN,
    -                        "max": c_const.N1KV_VXLAN_MAX})
    -                LOG.error(msg)
    -                raise n_exc.InvalidInput(error_message=msg)
    -            profiles = _get_network_profiles(db_session=context.session)
    -        if profiles:
    -            for profile in profiles:
    -                if id and profile.id == id:
    -                    continue
    -                name = profile.name
    -                segment_range = profile.segment_range
    -                if net_p["name"] == name:
    -                    msg = (_("NetworkProfile name %s already exists") %
    -                           net_p["name"])
    -                    LOG.error(msg)
    -                    raise n_exc.InvalidInput(error_message=msg)
    -                if (c_const.NETWORK_TYPE_MULTI_SEGMENT in
    -                    [profile.segment_type, net_p["segment_type"]] or
    -                    c_const.NETWORK_TYPE_TRUNK in
    -                    [profile.segment_type, net_p["segment_type"]]):
    -                    continue
    -                seg_min, seg_max = self._get_segment_range(
    -                    net_p["segment_range"])
    -                profile_seg_min, profile_seg_max = self._get_segment_range(
    -                    segment_range)
    -                if ((profile_seg_min <= seg_min <= profile_seg_max) or
    -                    (profile_seg_min <= seg_max <= profile_seg_max) or
    -                    ((seg_min <= profile_seg_min) and
    -                     (seg_max >= profile_seg_max))):
    -                    msg = _("Segment range overlaps with another profile")
    -                    LOG.error(msg)
    -                    raise n_exc.InvalidInput(error_message=msg)
    -
    -    def _get_network_profile_by_name(self, db_session, name):
    -        """
    -        Retrieve network profile based on name.
    -
    -        :param db_session: database session
    -        :param name: string representing the name for the network profile
    -        :returns: network profile object
    -        """
    -        with db_session.begin(subtransactions=True):
    -            try:
    -                return (db_session.query(n1kv_models_v2.NetworkProfile).
    -                        filter_by(name=name).one())
    -            except exc.NoResultFound:
    -                raise c_exc.NetworkProfileNotFound(profile=name)
    -
    -
    -class PolicyProfile_db_mixin(object):
    -
    -    """Policy Profile Mixin."""
    -
    -    def _get_policy_collection_for_tenant(self, db_session, model, tenant_id):
    -        profile_ids = (db_session.query(n1kv_models_v2.
    -                       ProfileBinding.profile_id)
    -                       .filter_by(tenant_id=tenant_id).
    -                       filter_by(profile_type=c_const.POLICY).all())
    -        if not profile_ids:
    -            return []
    -        profiles = db_session.query(model).filter(model.id.in_(
    -            pid[0] for pid in profile_ids))
    -        return [self._make_policy_profile_dict(p) for p in profiles]
    -
    -    def _make_policy_profile_dict(self, policy_profile, fields=None):
    -        res = {"id": policy_profile["id"], "name": policy_profile["name"]}
    -        return self._fields(res, fields)
    -
    -    def _make_profile_bindings_dict(self, profile_binding, fields=None):
    -        res = {"profile_id": profile_binding["profile_id"],
    -               "tenant_id": profile_binding["tenant_id"]}
    -        return self._fields(res, fields)
    -
    -    def _policy_profile_exists(self, id):
    -        db_session = db.get_session()
    -        return (db_session.query(n1kv_models_v2.PolicyProfile).
    -                filter_by(id=id).first())
    -
    -    def get_policy_profile(self, context, id, fields=None):
    -        """
    -        Retrieve a policy profile for the given UUID.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing policy profile to fetch
    -        :params fields: a list of strings that are valid keys in a policy
    -                        profile dictionary. Only these fields will be returned
    -        :returns: policy profile dictionary
    -        """
    -        profile = get_policy_profile(context.session, id)
    -        return self._make_policy_profile_dict(profile, fields)
    -
    -    def get_policy_profiles(self, context, filters=None, fields=None):
    -        """
    -        Retrieve a list of policy profiles.
    -
    -        Retrieve all policy profiles if tenant is admin. For a non-admin
    -        tenant, retrieve all policy profiles belonging to this tenant only.
    -        :param context: neutron api request context
    -        :param filters: a dictionary with keys that are valid k
    ... [truncated]
    
  • neutron/plugins/cisco/db/n1kv_models_v2.py+0 185 removed
    @@ -1,185 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from oslo_log import log as logging
    -import sqlalchemy as sa
    -from sqlalchemy import sql
    -
    -from neutron.db import model_base
    -from neutron.db import models_v2
    -from neutron.plugins.cisco.common import cisco_constants
    -
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -class N1kvVlanAllocation(model_base.BASEV2):
    -
    -    """Represents allocation state of vlan_id on physical network."""
    -    __tablename__ = 'cisco_n1kv_vlan_allocations'
    -
    -    physical_network = sa.Column(sa.String(64),
    -                                 nullable=False,
    -                                 primary_key=True)
    -    vlan_id = sa.Column(sa.Integer, nullable=False, primary_key=True,
    -                        autoincrement=False)
    -    allocated = sa.Column(sa.Boolean, nullable=False, default=False,
    -                          server_default=sql.false())
    -    network_profile_id = sa.Column(sa.String(36),
    -                                   sa.ForeignKey('cisco_network_profiles.id',
    -                                                 ondelete="CASCADE"),
    -                                   nullable=False)
    -
    -
    -class N1kvVxlanAllocation(model_base.BASEV2):
    -
    -    """Represents allocation state of vxlan_id."""
    -    __tablename__ = 'cisco_n1kv_vxlan_allocations'
    -
    -    vxlan_id = sa.Column(sa.Integer, nullable=False, primary_key=True,
    -                         autoincrement=False)
    -    allocated = sa.Column(sa.Boolean, nullable=False, default=False,
    -                          server_default=sql.false())
    -    network_profile_id = sa.Column(sa.String(36),
    -                                   sa.ForeignKey('cisco_network_profiles.id',
    -                                                 ondelete="CASCADE"),
    -                                   nullable=False)
    -
    -
    -class N1kvPortBinding(model_base.BASEV2):
    -
    -    """Represents binding of ports to policy profile."""
    -    __tablename__ = 'cisco_n1kv_port_bindings'
    -
    -    port_id = sa.Column(sa.String(36),
    -                        sa.ForeignKey('ports.id', ondelete="CASCADE"),
    -                        primary_key=True)
    -    profile_id = sa.Column(sa.String(36),
    -                           sa.ForeignKey('cisco_policy_profiles.id'))
    -
    -
    -class N1kvNetworkBinding(model_base.BASEV2):
    -
    -    """Represents binding of virtual network to physical realization."""
    -    __tablename__ = 'cisco_n1kv_network_bindings'
    -
    -    network_id = sa.Column(sa.String(36),
    -                           sa.ForeignKey('networks.id', ondelete="CASCADE"),
    -                           primary_key=True)
    -    network_type = sa.Column(sa.String(32), nullable=False)
    -    physical_network = sa.Column(sa.String(64))
    -    segmentation_id = sa.Column(sa.Integer)
    -    multicast_ip = sa.Column(sa.String(32))
    -    profile_id = sa.Column(sa.String(36),
    -                           sa.ForeignKey('cisco_network_profiles.id'))
    -
    -
    -class N1kVmNetwork(model_base.BASEV2):
    -
    -    """Represents VM Network information."""
    -    __tablename__ = 'cisco_n1kv_vmnetworks'
    -
    -    name = sa.Column(sa.String(80), primary_key=True)
    -    profile_id = sa.Column(sa.String(36),
    -                           sa.ForeignKey('cisco_policy_profiles.id'))
    -    network_id = sa.Column(sa.String(36))
    -    port_count = sa.Column(sa.Integer)
    -
    -
    -class NetworkProfile(model_base.BASEV2, models_v2.HasId):
    -
    -    """
    -    Nexus1000V Network Profiles
    -
    -        segment_type - VLAN, OVERLAY, TRUNK, MULTI_SEGMENT
    -        sub_type - TRUNK_VLAN, TRUNK_VXLAN, native_vxlan, enhanced_vxlan
    -        segment_range - '<integer>-<integer>'
    -        multicast_ip_index - <integer>
    -        multicast_ip_range - '<ip>-<ip>'
    -        physical_network - Name for the physical network
    -    """
    -    __tablename__ = 'cisco_network_profiles'
    -
    -    name = sa.Column(sa.String(255))
    -    segment_type = sa.Column(sa.Enum(cisco_constants.NETWORK_TYPE_VLAN,
    -                                     cisco_constants.NETWORK_TYPE_OVERLAY,
    -                                     cisco_constants.NETWORK_TYPE_TRUNK,
    -                                     cisco_constants.
    -                                     NETWORK_TYPE_MULTI_SEGMENT,
    -                                     name='segment_type'),
    -                             nullable=False)
    -    sub_type = sa.Column(sa.String(255))
    -    segment_range = sa.Column(sa.String(255))
    -    multicast_ip_index = sa.Column(sa.Integer, default=0,
    -                                   server_default='0')
    -    multicast_ip_range = sa.Column(sa.String(255))
    -    physical_network = sa.Column(sa.String(255))
    -
    -
    -class PolicyProfile(model_base.BASEV2):
    -
    -    """
    -    Nexus1000V Network Profiles
    -
    -        Both 'id' and 'name' are coming from Nexus1000V switch
    -    """
    -    __tablename__ = 'cisco_policy_profiles'
    -
    -    id = sa.Column(sa.String(36), primary_key=True)
    -    name = sa.Column(sa.String(255))
    -
    -
    -class ProfileBinding(model_base.BASEV2):
    -
    -    """
    -    Represents a binding of Network Profile
    -    or Policy Profile to tenant_id
    -    """
    -    __tablename__ = 'cisco_n1kv_profile_bindings'
    -
    -    profile_type = sa.Column(sa.Enum(cisco_constants.NETWORK,
    -                                     cisco_constants.POLICY,
    -                                     name='profile_type'))
    -    tenant_id = sa.Column(sa.String(36),
    -                          primary_key=True,
    -                          default=cisco_constants.TENANT_ID_NOT_SET,
    -                          server_default=cisco_constants.TENANT_ID_NOT_SET)
    -    profile_id = sa.Column(sa.String(36), primary_key=True)
    -
    -
    -class N1kvTrunkSegmentBinding(model_base.BASEV2):
    -
    -    """Represents binding of segments in trunk networks."""
    -    __tablename__ = 'cisco_n1kv_trunk_segments'
    -
    -    trunk_segment_id = sa.Column(sa.String(36),
    -                                 sa.ForeignKey('networks.id',
    -                                               ondelete="CASCADE"),
    -                                 primary_key=True)
    -    segment_id = sa.Column(sa.String(36), nullable=False, primary_key=True)
    -    dot1qtag = sa.Column(sa.String(36), nullable=False, primary_key=True)
    -
    -
    -class N1kvMultiSegmentNetworkBinding(model_base.BASEV2):
    -
    -    """Represents binding of segments in multi-segment networks."""
    -    __tablename__ = 'cisco_n1kv_multi_segments'
    -
    -    multi_segment_id = sa.Column(sa.String(36),
    -                                 sa.ForeignKey('networks.id',
    -                                               ondelete="CASCADE"),
    -                                 primary_key=True)
    -    segment1_id = sa.Column(sa.String(36), nullable=False, primary_key=True)
    -    segment2_id = sa.Column(sa.String(36), nullable=False, primary_key=True)
    -    encap_profile_name = sa.Column(sa.String(36))
    
  • neutron/plugins/cisco/db/network_db_v2.py+0 280 removed
    @@ -1,280 +0,0 @@
    -# Copyright 2012, Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from oslo_log import log as logging
    -from oslo_utils import uuidutils
    -from sqlalchemy.orm import exc
    -
    -from neutron.db import api as db
    -from neutron.plugins.cisco.common import cisco_constants as const
    -from neutron.plugins.cisco.common import cisco_exceptions as c_exc
    -from neutron.plugins.cisco.db import network_models_v2
    -
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -def get_all_qoss(tenant_id):
    -    """Lists all the qos to tenant associations."""
    -    LOG.debug("get_all_qoss() called")
    -    session = db.get_session()
    -    return (session.query(network_models_v2.QoS).
    -            filter_by(tenant_id=tenant_id).all())
    -
    -
    -def get_qos(tenant_id, qos_id):
    -    """Lists the qos given a tenant_id and qos_id."""
    -    LOG.debug("get_qos() called")
    -    session = db.get_session()
    -    try:
    -        return (session.query(network_models_v2.QoS).
    -                filter_by(tenant_id=tenant_id).
    -                filter_by(qos_id=qos_id).one())
    -    except exc.NoResultFound:
    -        raise c_exc.QosNotFound(qos_id=qos_id,
    -                                tenant_id=tenant_id)
    -
    -
    -def add_qos(tenant_id, qos_name, qos_desc):
    -    """Adds a qos to tenant association."""
    -    LOG.debug("add_qos() called")
    -    session = db.get_session()
    -    try:
    -        qos = (session.query(network_models_v2.QoS).
    -               filter_by(tenant_id=tenant_id).
    -               filter_by(qos_name=qos_name).one())
    -        raise c_exc.QosNameAlreadyExists(qos_name=qos_name,
    -                                         tenant_id=tenant_id)
    -    except exc.NoResultFound:
    -        qos = network_models_v2.QoS(qos_id=uuidutils.generate_uuid(),
    -                                    tenant_id=tenant_id,
    -                                    qos_name=qos_name,
    -                                    qos_desc=qos_desc)
    -        session.add(qos)
    -        session.flush()
    -        return qos
    -
    -
    -def remove_qos(tenant_id, qos_id):
    -    """Removes a qos to tenant association."""
    -    session = db.get_session()
    -    try:
    -        qos = (session.query(network_models_v2.QoS).
    -               filter_by(tenant_id=tenant_id).
    -               filter_by(qos_id=qos_id).one())
    -        session.delete(qos)
    -        session.flush()
    -        return qos
    -    except exc.NoResultFound:
    -        pass
    -
    -
    -def update_qos(tenant_id, qos_id, new_qos_name=None):
    -    """Updates a qos to tenant association."""
    -    session = db.get_session()
    -    try:
    -        qos = (session.query(network_models_v2.QoS).
    -               filter_by(tenant_id=tenant_id).
    -               filter_by(qos_id=qos_id).one())
    -        if new_qos_name:
    -            qos["qos_name"] = new_qos_name
    -        session.merge(qos)
    -        session.flush()
    -        return qos
    -    except exc.NoResultFound:
    -        raise c_exc.QosNotFound(qos_id=qos_id,
    -                                tenant_id=tenant_id)
    -
    -
    -def get_all_credentials():
    -    """Lists all the creds for a tenant."""
    -    session = db.get_session()
    -    return (session.query(network_models_v2.Credential).all())
    -
    -
    -def get_credential(credential_id):
    -    """Lists the creds for given a cred_id."""
    -    session = db.get_session()
    -    try:
    -        return (session.query(network_models_v2.Credential).
    -                filter_by(credential_id=credential_id).one())
    -    except exc.NoResultFound:
    -        raise c_exc.CredentialNotFound(credential_id=credential_id)
    -
    -
    -def get_credential_name(credential_name):
    -    """Lists the creds for given a cred_name."""
    -    session = db.get_session()
    -    try:
    -        return (session.query(network_models_v2.Credential).
    -                filter_by(credential_name=credential_name).one())
    -    except exc.NoResultFound:
    -        raise c_exc.CredentialNameNotFound(credential_name=credential_name)
    -
    -
    -def add_credential(credential_name, user_name, password, type):
    -    """Create a credential."""
    -    session = db.get_session()
    -    try:
    -        cred = (session.query(network_models_v2.Credential).
    -                filter_by(credential_name=credential_name).one())
    -        raise c_exc.CredentialAlreadyExists(credential_name=credential_name)
    -    except exc.NoResultFound:
    -        cred = network_models_v2.Credential(
    -            credential_id=uuidutils.generate_uuid(),
    -            credential_name=credential_name,
    -            user_name=user_name,
    -            password=password,
    -            type=type)
    -        session.add(cred)
    -        session.flush()
    -        return cred
    -
    -
    -def remove_credential(credential_id):
    -    """Removes a credential."""
    -    session = db.get_session()
    -    try:
    -        cred = (session.query(network_models_v2.Credential).
    -                filter_by(credential_id=credential_id).one())
    -        session.delete(cred)
    -        session.flush()
    -        return cred
    -    except exc.NoResultFound:
    -        pass
    -
    -
    -def update_credential(credential_id,
    -                      new_user_name=None, new_password=None):
    -    """Updates a credential for a tenant."""
    -    session = db.get_session()
    -    try:
    -        cred = (session.query(network_models_v2.Credential).
    -                filter_by(credential_id=credential_id).one())
    -        if new_user_name:
    -            cred["user_name"] = new_user_name
    -        if new_password:
    -            cred["password"] = new_password
    -        session.merge(cred)
    -        session.flush()
    -        return cred
    -    except exc.NoResultFound:
    -        raise c_exc.CredentialNotFound(credential_id=credential_id)
    -
    -
    -def get_all_n1kv_credentials():
    -    session = db.get_session()
    -    return (session.query(network_models_v2.Credential).
    -            filter_by(type='n1kv'))
    -
    -
    -def delete_all_n1kv_credentials():
    -    session = db.get_session()
    -    session.query(network_models_v2.Credential).filter_by(type='n1kv').delete()
    -
    -
    -def add_provider_network(network_id, network_type, segmentation_id):
    -    """Add a network to the provider network table."""
    -    session = db.get_session()
    -    if session.query(network_models_v2.ProviderNetwork).filter_by(
    -            network_id=network_id).first():
    -        raise c_exc.ProviderNetworkExists(network_id)
    -    pnet = network_models_v2.ProviderNetwork(network_id=network_id,
    -                                             network_type=network_type,
    -                                             segmentation_id=segmentation_id)
    -    session.add(pnet)
    -    session.flush()
    -
    -
    -def remove_provider_network(network_id):
    -    """Remove network_id from the provider network table.
    -
    -    :param network_id: Any network id. If it is not in the table, do nothing.
    -    :return: network_id if it was in the table and successfully removed.
    -    """
    -    session = db.get_session()
    -    pnet = (session.query(network_models_v2.ProviderNetwork).
    -            filter_by(network_id=network_id).first())
    -    if pnet:
    -        session.delete(pnet)
    -        session.flush()
    -        return network_id
    -
    -
    -def is_provider_network(network_id):
    -    """Return True if network_id is in the provider network table."""
    -    session = db.get_session()
    -    if session.query(network_models_v2.ProviderNetwork).filter_by(
    -            network_id=network_id).first():
    -        return True
    -
    -
    -def is_provider_vlan(vlan_id):
    -    """Check for a for a vlan provider network with the specified vland_id.
    -
    -    Returns True if the provider network table contains a vlan network
    -    with the specified vlan_id.
    -    """
    -    session = db.get_session()
    -    if (session.query(network_models_v2.ProviderNetwork).
    -            filter_by(network_type=const.NETWORK_TYPE_VLAN,
    -                      segmentation_id=vlan_id).first()):
    -        return True
    -
    -
    -class Credential_db_mixin(object):
    -
    -    """Mixin class for Cisco Credentials as a resource."""
    -
    -    def _make_credential_dict(self, credential, fields=None):
    -        res = {'credential_id': credential['credential_id'],
    -               'credential_name': credential['credential_name'],
    -               'user_name': credential['user_name'],
    -               'password': credential['password'],
    -               'type': credential['type']}
    -        return self._fields(res, fields)
    -
    -    def create_credential(self, context, credential):
    -        """Create a credential."""
    -        c = credential['credential']
    -        cred = add_credential(c['credential_name'],
    -                              c['user_name'],
    -                              c['password'],
    -                              c['type'])
    -        return self._make_credential_dict(cred)
    -
    -    def get_credentials(self, context, filters=None, fields=None):
    -        """Retrieve a list of credentials."""
    -        return self._get_collection(context,
    -                                    network_models_v2.Credential,
    -                                    self._make_credential_dict,
    -                                    filters=filters,
    -                                    fields=fields)
    -
    -    def get_credential(self, context, id, fields=None):
    -        """Retireve the requested credential based on its id."""
    -        credential = get_credential(id)
    -        return self._make_credential_dict(credential, fields)
    -
    -    def update_credential(self, context, id, credential):
    -        """Update a credential based on its id."""
    -        c = credential['credential']
    -        cred = update_credential(id,
    -                                 c['user_name'],
    -                                 c['password'])
    -        return self._make_credential_dict(cred)
    -
    -    def delete_credential(self, context, id):
    -        """Delete a credential based on its id."""
    -        return remove_credential(id)
    
  • neutron/plugins/cisco/db/network_models_v2.py+0 52 removed
    @@ -1,52 +0,0 @@
    -# Copyright 2012, Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import sqlalchemy as sa
    -
    -from neutron.db import model_base
    -
    -
    -class QoS(model_base.BASEV2):
    -    """Represents QoS policies for a tenant."""
    -
    -    __tablename__ = 'cisco_qos_policies'
    -
    -    qos_id = sa.Column(sa.String(255))
    -    tenant_id = sa.Column(sa.String(255), primary_key=True)
    -    qos_name = sa.Column(sa.String(255), primary_key=True)
    -    qos_desc = sa.Column(sa.String(255))
    -
    -
    -class Credential(model_base.BASEV2):
    -    """Represents credentials for a tenant to control Cisco switches."""
    -
    -    __tablename__ = 'cisco_credentials'
    -
    -    credential_id = sa.Column(sa.String(255))
    -    credential_name = sa.Column(sa.String(255), primary_key=True)
    -    user_name = sa.Column(sa.String(255))
    -    password = sa.Column(sa.String(255))
    -    type = sa.Column(sa.String(255))
    -
    -
    -class ProviderNetwork(model_base.BASEV2):
    -    """Represents networks that were created as provider networks."""
    -
    -    __tablename__ = 'cisco_provider_networks'
    -
    -    network_id = sa.Column(sa.String(36),
    -                           sa.ForeignKey('networks.id', ondelete="CASCADE"),
    -                           primary_key=True)
    -    network_type = sa.Column(sa.String(255), nullable=False)
    -    segmentation_id = sa.Column(sa.Integer, nullable=False)
    
  • neutron/plugins/cisco/extensions/credential.py+0 74 removed
    @@ -1,74 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from neutron.api import extensions
    -from neutron.api.v2 import attributes
    -from neutron.api.v2 import base
    -from neutron import manager
    -
    -
    -# Attribute Map
    -RESOURCE_ATTRIBUTE_MAP = {
    -    'credentials': {
    -        'credential_id': {'allow_post': False, 'allow_put': False,
    -                          'validate': {'type:regex': attributes.UUID_PATTERN},
    -                          'is_visible': True},
    -        'credential_name': {'allow_post': True, 'allow_put': True,
    -                            'is_visible': True, 'default': ''},
    -        'tenant_id': {'allow_post': True, 'allow_put': False,
    -                      'is_visible': False, 'default': ''},
    -        'type': {'allow_post': True, 'allow_put': True,
    -                 'is_visible': True, 'default': ''},
    -        'user_name': {'allow_post': True, 'allow_put': True,
    -                      'is_visible': True, 'default': ''},
    -        'password': {'allow_post': True, 'allow_put': True,
    -                     'is_visible': True, 'default': ''},
    -    },
    -}
    -
    -
    -class Credential(extensions.ExtensionDescriptor):
    -
    -    @classmethod
    -    def get_name(cls):
    -        """Returns Extended Resource Name."""
    -        return "Cisco Credential"
    -
    -    @classmethod
    -    def get_alias(cls):
    -        """Returns Extended Resource Alias."""
    -        return "credential"
    -
    -    @classmethod
    -    def get_description(cls):
    -        """Returns Extended Resource Description."""
    -        return "Credential include username and password"
    -
    -    @classmethod
    -    def get_updated(cls):
    -        """Returns Extended Resource Update Time."""
    -        return "2011-07-25T13:25:27-06:00"
    -
    -    @classmethod
    -    def get_resources(cls):
    -        """Returns Extended Resources."""
    -        resource_name = "credential"
    -        collection_name = resource_name + "s"
    -        plugin = manager.NeutronManager.get_plugin()
    -        params = RESOURCE_ATTRIBUTE_MAP.get(collection_name, dict())
    -        controller = base.create_resource(collection_name,
    -                                          resource_name,
    -                                          plugin, params)
    -        return [extensions.ResourceExtension(collection_name,
    -                                             controller)]
    
  • neutron/plugins/cisco/extensions/_credential_view.py+0 47 removed
    @@ -1,47 +0,0 @@
    -# Copyright 2011 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -
    -def get_view_builder(req):
    -    base_url = req.application_url
    -    return ViewBuilder(base_url)
    -
    -
    -class ViewBuilder(object):
    -    """ViewBuilder for Credential, derived from neutron.views.networks."""
    -
    -    def __init__(self, base_url):
    -        """Initialize builder.
    -
    -        :param base_url: url of the root wsgi application
    -        """
    -        self.base_url = base_url
    -
    -    def build(self, credential_data, is_detail=False):
    -        """Generic method used to generate a credential entity."""
    -        if is_detail:
    -            credential = self._build_detail(credential_data)
    -        else:
    -            credential = self._build_simple(credential_data)
    -        return credential
    -
    -    def _build_simple(self, credential_data):
    -        """Return a simple description of credential."""
    -        return dict(credential=dict(id=credential_data['credential_id']))
    -
    -    def _build_detail(self, credential_data):
    -        """Return a detailed description of credential."""
    -        return dict(credential=dict(id=credential_data['credential_id'],
    -                                    name=credential_data['user_name'],
    -                                    password=credential_data['password']))
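The removed `_credential_view.py` above follows a simple/detail view-builder idiom: one builder object, a `build()` dispatcher, and two private formatters. A self-contained sketch of that pattern (standalone illustration only, not importable against neutron; note that the original detail view returned the password in cleartext):

```python
# Minimal sketch of the simple/detail ViewBuilder pattern used by the
# removed Cisco view modules. Standalone; does not import neutron.
def get_view_builder(base_url):
    return ViewBuilder(base_url)


class ViewBuilder(object):
    def __init__(self, base_url):
        # base_url of the root WSGI application (unused in this sketch,
        # kept to mirror the removed module's constructor).
        self.base_url = base_url

    def build(self, credential_data, is_detail=False):
        # Dispatch to the detailed or summary representation.
        if is_detail:
            return self._build_detail(credential_data)
        return self._build_simple(credential_data)

    def _build_simple(self, data):
        # Summary view: id only.
        return {'credential': {'id': data['credential_id']}}

    def _build_detail(self, data):
        # Detail view; the original exposed the password in cleartext here.
        return {'credential': {'id': data['credential_id'],
                               'name': data['user_name'],
                               'password': data['password']}}


builder = get_view_builder('http://localhost:9696/')
record = {'credential_id': 'c1', 'user_name': 'admin', 'password': 's3cret'}
summary = builder.build(record)
detail = builder.build(record, is_detail=True)
```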
    
  • neutron/plugins/cisco/extensions/__init__.py+0 0 removed
  • neutron/plugins/cisco/extensions/n1kv.py+0 95 removed
    @@ -1,95 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from neutron.api import extensions
    -from neutron.api.v2 import attributes
    -
    -
    -PROFILE_ID = 'n1kv:profile_id'
    -MULTICAST_IP = 'n1kv:multicast_ip'
    -SEGMENT_ADD = 'n1kv:segment_add'
    -SEGMENT_DEL = 'n1kv:segment_del'
    -MEMBER_SEGMENTS = 'n1kv:member_segments'
    -
    -EXTENDED_ATTRIBUTES_2_0 = {
    -    'networks': {
    -        PROFILE_ID: {'allow_post': True, 'allow_put': False,
    -                     'validate': {'type:regex': attributes.UUID_PATTERN},
    -                     'default': attributes.ATTR_NOT_SPECIFIED,
    -                     'is_visible': True},
    -        MULTICAST_IP: {'allow_post': True, 'allow_put': True,
    -                       'default': attributes.ATTR_NOT_SPECIFIED,
    -                       'is_visible': True},
    -        SEGMENT_ADD: {'allow_post': True, 'allow_put': True,
    -                      'default': attributes.ATTR_NOT_SPECIFIED,
    -                      'is_visible': True},
    -        SEGMENT_DEL: {'allow_post': True, 'allow_put': True,
    -                      'default': attributes.ATTR_NOT_SPECIFIED,
    -                      'is_visible': True},
    -        MEMBER_SEGMENTS: {'allow_post': True, 'allow_put': True,
    -                          'default': attributes.ATTR_NOT_SPECIFIED,
    -                          'is_visible': True},
    -    },
    -    'ports': {
    -        PROFILE_ID: {'allow_post': True, 'allow_put': False,
    -                     'validate': {'type:regex': attributes.UUID_PATTERN},
    -                     'default': attributes.ATTR_NOT_SPECIFIED,
    -                     'is_visible': True}
    -    }
    -}
    -
    -
    -class N1kv(extensions.ExtensionDescriptor):
    -
    -    """Extension class supporting N1kv profiles.
    -
    -    This class is used by neutron's extension framework to make
    -    metadata about the n1kv profile extension available to
    -    clients. No new resources are defined by this extension. Instead,
    -    the existing network resource's request and response messages are
    -    extended with attributes in the n1kv profile namespace.
    -
    -    To create a network based on n1kv profile using the CLI with admin rights:
    -
    -       (shell) net-create --tenant_id <tenant-id> <net-name> \
    -       --n1kv:profile_id <id>
    -       (shell) port-create --tenant_id <tenant-id> <net-name> \
    -       --n1kv:profile_id <id>
    -
    -
    -    With admin rights, network dictionaries returned from CLI commands
    -    will also include n1kv profile attributes.
    -    """
    -
    -    @classmethod
    -    def get_name(cls):
    -        return "n1kv"
    -
    -    @classmethod
    -    def get_alias(cls):
    -        return "n1kv"
    -
    -    @classmethod
    -    def get_description(cls):
    -        return "Expose network profile"
    -
    -    @classmethod
    -    def get_updated(cls):
    -        return "2012-11-15T10:00:00-00:00"
    -
    -    def get_extended_resources(self, version):
    -        if version == "2.0":
    -            return EXTENDED_ATTRIBUTES_2_0
    -        else:
    -            return {}
    
  • neutron/plugins/cisco/extensions/network_profile.py+0 97 removed
    @@ -1,97 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from neutron.api import extensions
    -from neutron.api.v2 import attributes
    -from neutron.api.v2 import base
    -from neutron import manager
    -
    -
    -# Attribute Map
    -RESOURCE_ATTRIBUTE_MAP = {
    -    'network_profiles': {
    -        'id': {'allow_post': False, 'allow_put': False,
    -               'validate': {'type:regex': attributes.UUID_PATTERN},
    -               'is_visible': True},
    -        'name': {'allow_post': True, 'allow_put': True,
    -                 'is_visible': True, 'default': ''},
    -        'segment_type': {'allow_post': True, 'allow_put': False,
    -                         'is_visible': True, 'default': ''},
    -        'sub_type': {'allow_post': True, 'allow_put': False,
    -                     'is_visible': True,
    -                     'default': attributes.ATTR_NOT_SPECIFIED},
    -        'segment_range': {'allow_post': True, 'allow_put': True,
    -                          'is_visible': True, 'default': ''},
    -        'multicast_ip_range': {'allow_post': True, 'allow_put': True,
    -                               'is_visible': True,
    -                               'default': attributes.ATTR_NOT_SPECIFIED},
    -        'multicast_ip_index': {'allow_post': False, 'allow_put': False,
    -                               'is_visible': False, 'default': '0'},
    -        'physical_network': {'allow_post': True, 'allow_put': False,
    -                             'is_visible': True, 'default': ''},
    -        'tenant_id': {'allow_post': True, 'allow_put': False,
    -                      'is_visible': False, 'default': ''},
    -        'add_tenants': {'allow_post': True, 'allow_put': True,
    -                        'is_visible': True, 'default': None,
    -                        'convert_to': attributes.convert_none_to_empty_list},
    -        'remove_tenants': {
    -            'allow_post': True, 'allow_put': True,
    -            'is_visible': True, 'default': None,
    -            'convert_to': attributes.convert_none_to_empty_list,
    -        },
    -    },
    -    'network_profile_bindings': {
    -        'profile_id': {'allow_post': False, 'allow_put': False,
    -                       'validate': {'type:regex': attributes.UUID_PATTERN},
    -                       'is_visible': True},
    -        'tenant_id': {'allow_post': True, 'allow_put': False,
    -                      'is_visible': True},
    -    },
    -}
    -
    -
    -class Network_profile(extensions.ExtensionDescriptor):
    -
    -    @classmethod
    -    def get_name(cls):
    -        return "Cisco N1kv Network Profiles"
    -
    -    @classmethod
    -    def get_alias(cls):
    -        return 'network_profile'
    -
    -    @classmethod
    -    def get_description(cls):
    -        return ("Profile includes the type of profile for N1kv")
    -
    -    @classmethod
    -    def get_updated(cls):
    -        return "2012-07-20T10:00:00-00:00"
    -
    -    @classmethod
    -    def get_resources(cls):
    -        """Returns Extended Resources."""
    -        exts = []
    -        plugin = manager.NeutronManager.get_plugin()
    -        for resource_name in ['network_profile', 'network_profile_binding']:
    -            collection_name = resource_name + "s"
    -            controller = base.create_resource(
    -                collection_name,
    -                resource_name,
    -                plugin,
    -                RESOURCE_ATTRIBUTE_MAP.get(collection_name))
    -            ex = extensions.ResourceExtension(collection_name,
    -                                              controller)
    -            exts.append(ex)
    -        return exts
    
  • neutron/plugins/cisco/extensions/policy_profile.py+0 76 removed
    @@ -1,76 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from neutron.api import extensions
    -from neutron.api.v2 import attributes
    -from neutron.api.v2 import base
    -from neutron import manager
    -
    -# Attribute Map
    -RESOURCE_ATTRIBUTE_MAP = {
    -    'policy_profiles': {
    -        'id': {'allow_post': False, 'allow_put': False,
    -               'validate': {'type:regex': attributes.UUID_PATTERN},
    -               'is_visible': True},
    -        'name': {'allow_post': False, 'allow_put': False,
    -                 'is_visible': True, 'default': ''},
    -        'add_tenant': {'allow_post': True, 'allow_put': True,
    -                       'is_visible': True, 'default': None},
    -        'remove_tenant': {'allow_post': True, 'allow_put': True,
    -                          'is_visible': True, 'default': None},
    -    },
    -    'policy_profile_bindings': {
    -        'profile_id': {'allow_post': False, 'allow_put': False,
    -                       'validate': {'type:regex': attributes.UUID_PATTERN},
    -                       'is_visible': True},
    -        'tenant_id': {'allow_post': True, 'allow_put': False,
    -                      'is_visible': True},
    -    },
    -}
    -
    -
    -class Policy_profile(extensions.ExtensionDescriptor):
    -
    -    @classmethod
    -    def get_name(cls):
    -        return "Cisco Nexus1000V Policy Profiles"
    -
    -    @classmethod
    -    def get_alias(cls):
    -        return 'policy_profile'
    -
    -    @classmethod
    -    def get_description(cls):
    -        return "Profile includes the type of profile for N1kv"
    -
    -    @classmethod
    -    def get_updated(cls):
    -        return "2012-07-20T10:00:00-00:00"
    -
    -    @classmethod
    -    def get_resources(cls):
    -        """Returns Extended Resources."""
    -        exts = []
    -        plugin = manager.NeutronManager.get_plugin()
    -        for resource_name in ['policy_profile', 'policy_profile_binding']:
    -            collection_name = resource_name + "s"
    -            controller = base.create_resource(
    -                collection_name,
    -                resource_name,
    -                plugin,
    -                RESOURCE_ATTRIBUTE_MAP.get(collection_name))
    -            ex = extensions.ResourceExtension(collection_name,
    -                                              controller)
    -            exts.append(ex)
    -        return exts
    
  • neutron/plugins/cisco/extensions/qos.py+0 146 removed
    @@ -1,146 +0,0 @@
    -# Copyright 2011 Cisco Systems, Inc.
    -# All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from webob import exc
    -
    -from neutron.api import api_common as common
    -from neutron.api import extensions
    -from neutron import manager
    -from neutron.plugins.cisco.common import cisco_exceptions as exception
    -from neutron.plugins.cisco.common import cisco_faults as faults
    -from neutron.plugins.cisco.extensions import _qos_view as qos_view
    -from neutron import wsgi
    -
    -
    -class Qos(extensions.ExtensionDescriptor):
    -    """Qos extension file."""
    -
    -    @classmethod
    -    def get_name(cls):
    -        """Returns Ext Resource Name."""
    -        return "Cisco qos"
    -
    -    @classmethod
    -    def get_alias(cls):
    -        """Returns Ext Resource Alias."""
    -        return "Cisco qos"
    -
    -    @classmethod
    -    def get_description(cls):
    -        """Returns Ext Resource Description."""
    -        return "qos includes qos_name and qos_desc"
    -
    -    @classmethod
    -    def get_updated(cls):
    -        """Returns Ext Resource update."""
    -        return "2011-07-25T13:25:27-06:00"
    -
    -    @classmethod
    -    def get_resources(cls):
    -        """Returns Ext Resources."""
    -        parent_resource = dict(member_name="tenant",
    -                               collection_name="extensions/csco/tenants")
    -
    -        controller = QosController(manager.NeutronManager.get_plugin())
    -        return [extensions.ResourceExtension('qoss', controller,
    -                                             parent=parent_resource)]
    -
    -
    -class QosController(common.NeutronController, wsgi.Controller):
    -    """qos API controller based on NeutronController."""
    -
    -    _qos_ops_param_list = [
    -        {'param-name': 'qos_name', 'required': True},
    -        {'param-name': 'qos_desc', 'required': True},
    -    ]
    -
    -    _serialization_metadata = {
    -        "application/xml": {
    -            "attributes": {
    -                "qos": ["id", "name"],
    -            },
    -        },
    -    }
    -
    -    def __init__(self, plugin):
    -        self._resource_name = 'qos'
    -        self._plugin = plugin
    -
    -    def index(self, request, tenant_id):
    -        """Returns a list of qos ids."""
    -        return self._items(request, tenant_id, is_detail=False)
    -
    -    def _items(self, request, tenant_id, is_detail):
    -        """Returns a list of qoss."""
    -        qoss = self._plugin.get_all_qoss(tenant_id)
    -        builder = qos_view.get_view_builder(request)
    -        result = [builder.build(qos, is_detail)['qos'] for qos in qoss]
    -        return dict(qoss=result)
    -
    -    # pylint: disable=no-member
    -    def show(self, request, tenant_id, id):
    -        """Returns qos details for the given qos id."""
    -        try:
    -            qos = self._plugin.get_qos_details(tenant_id, id)
    -            builder = qos_view.get_view_builder(request)
    -            #build response with details
    -            result = builder.build(qos, True)
    -            return dict(qoss=result)
    -        except exception.QosNotFound as exp:
    -            return faults.Fault(faults.QosNotFound(exp))
    -
    -    def create(self, request, tenant_id):
    -        """Creates a new qos for a given tenant."""
    -        #look for qos name in request
    -        try:
    -            body = self._deserialize(request.body, request.get_content_type())
    -            req_body = self._prepare_request_body(body,
    -                                                  self._qos_ops_param_list)
    -            req_params = req_body[self._resource_name]
    -        except exc.HTTPError as exp:
    -            return faults.Fault(exp)
    -        qos = self._plugin.create_qos(tenant_id,
    -                                      req_params['qos_name'],
    -                                      req_params['qos_desc'])
    -        builder = qos_view.get_view_builder(request)
    -        result = builder.build(qos)
    -        return dict(qoss=result)
    -
    -    def update(self, request, tenant_id, id):
    -        """Updates the name for the qos with the given id."""
    -        try:
    -            body = self._deserialize(request.body, request.get_content_type())
    -            req_body = self._prepare_request_body(body,
    -                                                  self._qos_ops_param_list)
    -            req_params = req_body[self._resource_name]
    -        except exc.HTTPError as exp:
    -            return faults.Fault(exp)
    -        try:
    -            qos = self._plugin.rename_qos(tenant_id, id,
    -                                          req_params['qos_name'])
    -
    -            builder = qos_view.get_view_builder(request)
    -            result = builder.build(qos, True)
    -            return dict(qoss=result)
    -        except exception.QosNotFound as exp:
    -            return faults.Fault(faults.QosNotFound(exp))
    -
    -    def delete(self, request, tenant_id, id):
    -        """Destroys the qos with the given id."""
    -        try:
    -            self._plugin.delete_qos(tenant_id, id)
    -            return exc.HTTPOk()
    -        except exception.QosNotFound as exp:
    -            return faults.Fault(faults.QosNotFound(exp))
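The removed `QosController` above validates request bodies against a required-parameter list (`_qos_ops_param_list`) before touching the plugin. A standalone sketch of that validation idiom, using a plain dict body and a hypothetical `prepare_request_body` helper (the real neutron `_prepare_request_body` also handles deserialization and WSGI faults):

```python
# Sketch of the required-parameter validation pattern from the removed
# QosController. `prepare_request_body` is a hypothetical stand-in for
# neutron's _prepare_request_body; names mirror the removed module.
class BadRequest(Exception):
    pass


QOS_PARAM_LIST = [
    {'param-name': 'qos_name', 'required': True},
    {'param-name': 'qos_desc', 'required': True},
]


def prepare_request_body(body, params, resource='qos'):
    """Reject requests that omit any required parameter."""
    data = body.get(resource) or {}
    for spec in params:
        name = spec['param-name']
        if spec.get('required') and name not in data:
            raise BadRequest("missing required parameter: %s" % name)
    return {resource: data}


ok = prepare_request_body({'qos': {'qos_name': 'gold', 'qos_desc': 'fast'}},
                          QOS_PARAM_LIST)
```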
    
  • neutron/plugins/cisco/extensions/_qos_view.py+0 47 removed
    @@ -1,47 +0,0 @@
    -# Copyright 2011 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -
    -def get_view_builder(req):
    -    base_url = req.application_url
    -    return ViewBuilder(base_url)
    -
    -
    -class ViewBuilder(object):
    -    """ViewBuilder for QoS, derived from neutron.views.networks."""
    -
    -    def __init__(self, base_url):
    -        """Initialize builder.
    -
    -        :param base_url: url of the root wsgi application
    -        """
    -        self.base_url = base_url
    -
    -    def build(self, qos_data, is_detail=False):
    -        """Generic method used to generate a QoS entity."""
    -        if is_detail:
    -            qos = self._build_detail(qos_data)
    -        else:
    -            qos = self._build_simple(qos_data)
    -        return qos
    -
    -    def _build_simple(self, qos_data):
    -        """Return a simple description of qos."""
    -        return dict(qos=dict(id=qos_data['qos_id']))
    -
    -    def _build_detail(self, qos_data):
    -        """Return a detailed description of qos."""
    -        return dict(qos=dict(id=qos_data['qos_id'],
    -                             name=qos_data['qos_name'],
    -                             description=qos_data['qos_desc']))
    
  • neutron/plugins/cisco/l2device_plugin_base.py+0 171 removed
    @@ -1,171 +0,0 @@
    -# Copyright 2012 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import abc
    -import inspect
    -import six
    -
    -
    -@six.add_metaclass(abc.ABCMeta)
    -class L2DevicePluginBase(object):
    -    """Base class for a device-specific plugin.
    -
    -    An example of a device-specific plugin is a Nexus switch plugin.
    -    The network model relies on device-category-specific plugins to perform
    -    the configuration on each device.
    -    """
    -
    -    @abc.abstractmethod
    -    def create_network(self, tenant_id, net_name, net_id, vlan_name, vlan_id,
    -                       **kwargs):
    -        """Create network.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def delete_network(self, tenant_id, net_id, **kwargs):
    -        """Delete network.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def update_network(self, tenant_id, net_id, name, **kwargs):
    -        """Update network.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def create_port(self, tenant_id, net_id, port_state, port_id, **kwargs):
    -        """Create port.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def delete_port(self, tenant_id, net_id, port_id, **kwargs):
    -        """Delete port.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def update_port(self, tenant_id, net_id, port_id, **kwargs):
    -        """Update port.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def plug_interface(self, tenant_id, net_id, port_id, remote_interface_id,
    -                       **kwargs):
    -        """Plug interface.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @abc.abstractmethod
    -    def unplug_interface(self, tenant_id, net_id, port_id, **kwargs):
    -        """Unplug interface.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    def create_subnet(self, tenant_id, net_id, ip_version,
    -                      subnet_cidr, **kwargs):
    -        """Create subnet.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    def get_subnets(self, tenant_id, net_id, **kwargs):
    -        """Get subnets.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    def get_subnet(self, tenant_id, net_id, subnet_id, **kwargs):
    -        """Get subnet.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    def update_subnet(self, tenant_id, net_id, subnet_id, **kwargs):
    -        """Update subnet.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    def delete_subnet(self, tenant_id, net_id, subnet_id, **kwargs):
    -        """Delete subnet.
    -
    -        :returns:
    -        :raises:
    -        """
    -        pass
    -
    -    @classmethod
    -    def __subclasshook__(cls, klass):
    -        """Check plugin class.
    -
    -        The __subclasshook__ method is a class method
    -        that will be called every time a class is tested
    -        using issubclass(klass, Plugin).
    -        In that case, it will check that every method
    -        marked with the abstractmethod decorator is
    -        provided by the plugin class.
    -        """
    -        if cls is L2DevicePluginBase:
    -            for method in cls.__abstractmethods__:
    -                method_ok = False
    -                for base in klass.__mro__:
    -                    if method in base.__dict__:
    -                        fn_obj = base.__dict__[method]
    -                        if inspect.isfunction(fn_obj):
    -                            abstract_fn_obj = cls.__dict__[method]
    -                            arg_count = fn_obj.__code__.co_argcount
    -                            expected_arg_count = \
    -                                abstract_fn_obj.__code__.co_argcount
    -                            method_ok = arg_count == expected_arg_count
    -                if method_ok:
    -                    continue
    -                return NotImplemented
    -            return True
    -        return NotImplemented
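The `__subclasshook__` in the removed `L2DevicePluginBase` above makes `issubclass()` a duck-typing check: any class providing every abstract method with a matching argument count passes, with no inheritance required. A self-contained sketch of the same technique (Python 3 `abc.ABC` in place of the original's `six.add_metaclass`):

```python
# Duck-typed subclass check, mirroring the removed L2DevicePluginBase.
import abc
import inspect


class PluginBase(abc.ABC):
    """issubclass(klass, PluginBase) succeeds for any class that defines
    every abstract method with the expected argument count, even if it
    never inherits from PluginBase."""

    @abc.abstractmethod
    def create_network(self, tenant_id, net_id):
        pass

    @classmethod
    def __subclasshook__(cls, klass):
        if cls is PluginBase:
            for method in cls.__abstractmethods__:
                method_ok = False
                for base in klass.__mro__:
                    fn_obj = base.__dict__.get(method)
                    if inspect.isfunction(fn_obj):
                        expected = cls.__dict__[method].__code__.co_argcount
                        method_ok = fn_obj.__code__.co_argcount == expected
                if method_ok:
                    continue
                return NotImplemented
            return True
        return NotImplemented


class GoodPlugin(object):      # no inheritance, matching signature
    def create_network(self, tenant_id, net_id):
        return (tenant_id, net_id)


class BadPlugin(object):       # wrong arity: rejected by the hook
    def create_network(self, tenant_id):
        return tenant_id
```

Returning `NotImplemented` (rather than `False`) lets Python fall back to the normal subclass machinery for classes the hook cannot vouch for.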
    
  • neutron/plugins/cisco/models/__init__.py+0 0 removed
  • neutron/plugins/cisco/models/virt_phy_sw_v2.py+0 320 removed
    @@ -1,320 +0,0 @@
    -# Copyright 2012 Cisco Systems, Inc.
    -# All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import inspect
    -
    -from oslo_log import log as logging
    -from oslo_utils import excutils
    -from oslo_utils import importutils
    -
    -from neutron.api.v2 import attributes
    -from neutron.extensions import portbindings
    -from neutron.extensions import providernet as provider
    -from neutron.i18n import _LE, _LI
    -from neutron import neutron_plugin_base_v2
    -from neutron.plugins.cisco.common import cisco_constants as const
    -from neutron.plugins.cisco.common import cisco_credentials_v2 as cred
    -from neutron.plugins.cisco.common import config as conf
    -from neutron.plugins.cisco.db import network_db_v2 as cdb
    -
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -class VirtualPhysicalSwitchModelV2(neutron_plugin_base_v2.NeutronPluginBaseV2):
    -    """Virtual Physical Switch Model.
    -
    -    This implementation works with n1kv sub-plugin for the
    -    following topology:
    -    One or more servers to a n1kv switch.
    -    """
    -    __native_bulk_support = True
    -    supported_extension_aliases = ["provider", "binding"]
    -    _methods_to_delegate = ['create_network_bulk',
    -                            'get_network', 'get_networks',
    -                            'create_port_bulk',
    -                            'get_port', 'get_ports',
    -                            'create_subnet', 'create_subnet_bulk',
    -                            'delete_subnet', 'update_subnet',
    -                            'get_subnet', 'get_subnets',
    -                            'create_or_update_agent', 'report_state']
    -
    -    def __init__(self):
    -        """Initialize the segmentation manager.
    -
    -    Checks which device plugins are configured, and loads the inventories
    -    of those device plugins for which an inventory is configured.
    -        """
    -        conf.CiscoConfigOptions()
    -
    -        self._plugins = {}
    -        self._plugins['vswitch_plugin'] = importutils.import_object(
    -            'neutron.plugins.cisco.n1kv.n1kv_neutron_plugin.'
    -            'N1kvNeutronPluginV2')
    -
    -        if ((const.VSWITCH_PLUGIN in self._plugins) and
    -            hasattr(self._plugins[const.VSWITCH_PLUGIN],
    -                    "supported_extension_aliases")):
    -            self.supported_extension_aliases.extend(
    -                self._plugins[const.VSWITCH_PLUGIN].
    -                supported_extension_aliases)
    -
    -        # Initialize credential store after database initialization
    -        cred.Store.initialize()
    -        LOG.debug("%(module)s.%(name)s init done",
    -                  {'module': __name__,
    -                   'name': self.__class__.__name__})
    -
    -    def __getattribute__(self, name):
    -        """Delegate calls to sub-plugin.
    -
    -        This delegates the calls to the methods implemented by the
    -    sub-plugin. Note: Currently, bulking is handled by the caller
    -    (PluginV2), and this model class expects to receive only non-bulking
    -    calls. If, however, a bulking call is made, this method will
    -    delegate the call to the sub-plugin.
    -        """
    -        super_getattribute = super(VirtualPhysicalSwitchModelV2,
    -                                   self).__getattribute__
    -        methods = super_getattribute('_methods_to_delegate')
    -
    -        if name in methods:
    -            plugin = super_getattribute('_plugins')[const.VSWITCH_PLUGIN]
    -            return getattr(plugin, name)
    -
    -        try:
    -            return super_getattribute(name)
    -        except AttributeError:
    -            plugin = super_getattribute('_plugins')[const.VSWITCH_PLUGIN]
    -            return getattr(plugin, name)
    -
    -    def _func_name(self, offset=0):
    -        """Get the name of the calling function."""
    -        frame_record = inspect.stack()[1 + offset]
    -        func_name = frame_record[3]
    -        return func_name
    -
    -    def _invoke_plugin_per_device(self, plugin_key, function_name,
    -                                  args, **kwargs):
    -        """Invoke plugin per device.
    -
    -        Invokes a device plugin's relevant functions (based on the
    -        plugin implementation) for completing this operation.
    -        """
    -        if plugin_key not in self._plugins:
    -            LOG.info(_LI("No %s Plugin loaded"), plugin_key)
    -            LOG.info(_LI("%(plugin_key)s: %(function_name)s with args "
    -                         "%(args)s ignored"),
    -                     {'plugin_key': plugin_key,
    -                      'function_name': function_name,
    -                      'args': args})
    -        else:
    -            func = getattr(self._plugins[plugin_key], function_name)
    -            return func(*args, **kwargs)
    -
    -    def _get_provider_vlan_id(self, network):
    -        if (all(attributes.is_attr_set(network.get(attr))
    -                for attr in (provider.NETWORK_TYPE,
    -                             provider.PHYSICAL_NETWORK,
    -                             provider.SEGMENTATION_ID))
    -            and
    -                network[provider.NETWORK_TYPE] == const.NETWORK_TYPE_VLAN):
    -            return network[provider.SEGMENTATION_ID]
    -
    -    def create_network(self, context, network):
    -        """Create network.
    -
    -        Perform this operation in the context of the configured device
    -        plugins.
    -        """
    -        LOG.debug("create_network() called")
    -        provider_vlan_id = self._get_provider_vlan_id(network[const.NETWORK])
    -        args = [context, network]
    -        switch_output = self._invoke_plugin_per_device(const.VSWITCH_PLUGIN,
    -                                                       self._func_name(),
    -                                                       args)
    -        # The vswitch plugin did all the verification. If it's a provider
    -        # vlan network, save it for the sub-plugin to use later.
    -        if provider_vlan_id:
    -            network_id = switch_output[const.NET_ID]
    -            cdb.add_provider_network(network_id,
    -                                     const.NETWORK_TYPE_VLAN,
    -                                     provider_vlan_id)
    -            LOG.debug("Provider network added to DB: %(network_id)s, "
    -                      "%(vlan_id)s",
    -                      {'network_id': network_id, 'vlan_id': provider_vlan_id})
    -        return switch_output
    -
    -    def update_network(self, context, id, network):
    -        """Update network.
    -
    -        Perform this operation in the context of the configured device
    -        plugins.
    -        """
    -        LOG.debug("update_network() called")
    -
    -        # We can only support updating of provider attributes if all the
    -        # configured sub-plugins support it. Currently we have no method
    -        # in place for checking whether a sub-plugin supports it,
    -        # so assume not.
    -        provider._raise_if_updates_provider_attributes(network['network'])
    -
    -        args = [context, id, network]
    -        return self._invoke_plugin_per_device(const.VSWITCH_PLUGIN,
    -                                              self._func_name(),
    -                                              args)
    -
    -    def delete_network(self, context, id):
    -        """Delete network.
    -
    -        Perform this operation in the context of the configured device
    -        plugins.
    -        """
    -        args = [context, id]
    -        switch_output = self._invoke_plugin_per_device(const.VSWITCH_PLUGIN,
    -                                                       self._func_name(),
    -                                                       args)
    -        if cdb.remove_provider_network(id):
    -            LOG.debug("Provider network removed from DB: %s", id)
    -        return switch_output
    -
    -    def get_network(self, context, id, fields=None):
    -        """Get network. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def get_networks(self, context, filters=None, fields=None,
    -                     sorts=None, limit=None, marker=None, page_reverse=False):
    -        """Get networks. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def _check_valid_port_device_owner(self, port):
    -        """Check the port for valid device_owner.
    -
    -        Don't call the sub-plugin for router and dhcp
    -        port owners.
    -        """
    -        return port['device_owner'].startswith('compute')
    -
    -    def _get_port_host_id_from_bindings(self, port):
    -        """Get host_id from portbindings."""
    -        host_id = None
    -
    -        if (portbindings.HOST_ID in port and
    -            attributes.is_attr_set(port[portbindings.HOST_ID])):
    -            host_id = port[portbindings.HOST_ID]
    -
    -        return host_id
    -
    -    def create_port(self, context, port):
    -        """Create port.
    -
    -        Perform this operation in the context of the configured device
    -        plugins.
    -        """
    -        LOG.debug("create_port() called")
    -        args = [context, port]
    -        return self._invoke_plugin_per_device(const.VSWITCH_PLUGIN,
    -                                              self._func_name(),
    -                                              args)
    -
    -    def get_port(self, context, id, fields=None):
    -        """Get port. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def get_ports(self, context, filters=None, fields=None):
    -        """Get ports. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def update_port(self, context, id, port):
    -        """Update port.
    -
    -        Perform this operation in the context of the configured device
    -        plugins.
    -        """
    -        LOG.debug("update_port() called")
    -        args = [context, id, port]
    -        return self._invoke_plugin_per_device(const.VSWITCH_PLUGIN,
    -                                              self._func_name(),
    -                                              args)
    -
    -    def delete_port(self, context, id, l3_port_check=True):
    -        """Delete port.
    -
    -        Perform this operation in the context of the configured device
    -        plugins.
    -        """
    -        LOG.debug("delete_port() called")
    -        port = self.get_port(context, id)
    -
    -        try:
    -            args = [context, id]
    -            switch_output = self._invoke_plugin_per_device(
    -                const.VSWITCH_PLUGIN, self._func_name(),
    -                args, l3_port_check=l3_port_check)
    -        except Exception as e:
    -            with excutils.save_and_reraise_exception():
    -                LOG.error(_LE("Unable to delete port '%(pname)s' on switch. "
    -                              "Exception: %(exp)s"), {'pname': port['name'],
    -                                                      'exp': e})
    -
    -        return switch_output
    -
    -    def create_subnet(self, context, subnet):
    -        """Create subnet. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def update_subnet(self, context, id, subnet):
    -        """Update subnet. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def get_subnet(self, context, id, fields=None):
    -        """Get subnet. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def delete_subnet(self, context, id, kwargs):
    -        """Delete subnet. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    -
    -    def get_subnets(self, context, filters=None, fields=None,
    -                    sorts=None, limit=None, marker=None, page_reverse=False):
    -        """Get subnets. This method is delegated to the vswitch plugin.
    -
    -        This method is included here to satisfy abstract method requirements.
    -        """
    -        pass  # pragma no cover
    
  • neutron/plugins/cisco/n1kv/__init__.py+0 0 removed
  • neutron/plugins/cisco/n1kv/n1kv_client.py+0 558 removed
    @@ -1,558 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import base64
    -
    -import eventlet
    -import netaddr
    -from oslo_log import log as logging
    -from oslo_serialization import jsonutils
    -import requests
    -import six
    -
    -from neutron.common import exceptions as n_exc
    -from neutron.extensions import providernet
    -from neutron.plugins.cisco.common import cisco_constants as c_const
    -from neutron.plugins.cisco.common import cisco_credentials_v2 as c_cred
    -from neutron.plugins.cisco.common import cisco_exceptions as c_exc
    -from neutron.plugins.cisco.common import config as c_conf
    -from neutron.plugins.cisco.db import network_db_v2
    -from neutron.plugins.cisco.extensions import n1kv
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -def safe_b64_encode(s):
    -    if six.PY3:
    -        method = base64.encodebytes
    -    else:
    -        method = base64.encodestring
    -
    -    if isinstance(s, six.text_type):
    -        s = s.encode('utf-8')
    -    encoded_string = method(s).rstrip()
    -
    -    if six.PY3:
    -        return encoded_string.decode('utf-8')
    -    else:
    -        return encoded_string
    -
    -
    -class Client(object):
    -
    -    """
    -    Client for the Cisco Nexus1000V Neutron Plugin.
    -
    -    This client implements functions to communicate with
    -    Cisco Nexus1000V VSM.
    -
    -    For every Neutron object, Cisco Nexus1000V Neutron Plugin
    -    creates a corresponding object in the controller (Cisco
    -    Nexus1000V VSM).
    -
    -    CONCEPTS:
    -
    -    Following are a few concepts used in Nexus1000V VSM:
    -
    -    port-profiles:
    -    Policy profiles correspond to port profiles on Nexus1000V VSM.
    -    Port profiles are the primary mechanism by which network policy is
    -    defined and applied to switch interfaces in a Nexus 1000V system.
    -
    -    network-segment:
    -    Each network-segment represents a broadcast domain.
    -
    -    network-segment-pool:
    -    A network-segment-pool contains one or more network-segments.
    -
    -    logical-network:
    -    A logical-network contains one or more network-segment-pools.
    -
    -    bridge-domain:
    -    A bridge-domain is created when the network-segment is of type VXLAN.
    -    Each VXLAN <--> VLAN combination can be thought of as a bridge domain.
    -
    -    ip-pool:
    -    Each ip-pool represents a subnet on the Nexus1000V VSM.
    -
    -    vm-network:
    -    vm-network refers to a network-segment and policy-profile.
    -    It maintains a list of ports that uses the network-segment and
    -    policy-profile this vm-network refers to.
    -
    -    events:
    -    Events correspond to commands that are logged on Nexus1000V VSM.
    -    Events are used to poll for a certain resource on Nexus1000V VSM.
    -    Event type of port_profile: Return all updates/create/deletes
    -    of port profiles from the VSM.
    -    Event type of port_profile_update: Return only updates regarding
    -    policy-profiles.
    -    Event type of port_profile_delete: Return only deleted policy profiles.
    -
    -
    -    WORK FLOW:
    -
    -    For every network profile a corresponding logical-network and
    -    a network-segment-pool, under this logical-network, will be created.
    -
    -    For every network created from a given network profile, a
    -    network-segment will be added to the network-segment-pool corresponding
    -    to that network profile.
    -
    -    A port is created on a network and associated with a policy-profile.
    -    Hence for every unique combination of a network and a policy-profile, a
    -    unique vm-network will be created and a reference to the port will be
    -    added. If the same combination of network and policy-profile is used by
    -    another port, the references to that port will be added to the same
    -    vm-network.
    -
    -
    -    """
    -
    -    # Define paths for the URI where the client connects for HTTP requests.
    -    port_profiles_path = "/virtual-port-profile"
    -    network_segment_path = "/network-segment/%s"
    -    network_segment_pool_path = "/network-segment-pool/%s"
    -    ip_pool_path = "/ip-pool-template/%s"
    -    ports_path = "/kvm/vm-network/%s/ports"
    -    port_path = "/kvm/vm-network/%s/ports/%s"
    -    vm_networks_path = "/kvm/vm-network"
    -    vm_network_path = "/kvm/vm-network/%s"
    -    bridge_domains_path = "/kvm/bridge-domain"
    -    bridge_domain_path = "/kvm/bridge-domain/%s"
    -    logical_network_path = "/logical-network/%s"
    -    events_path = "/kvm/events"
    -    clusters_path = "/cluster"
    -    encap_profiles_path = "/encapsulation-profile"
    -    encap_profile_path = "/encapsulation-profile/%s"
    -
    -    pool = eventlet.GreenPool(c_conf.CISCO_N1K.http_pool_size)
    -
    -    def __init__(self, **kwargs):
    -        """Initialize a new client for the plugin."""
    -        self.format = 'json'
    -        self.hosts = self._get_vsm_hosts()
    -        self.action_prefix = 'http://%s/api/n1k' % self.hosts[0]
    -        self.timeout = c_conf.CISCO_N1K.http_timeout
    -
    -    def list_port_profiles(self):
    -        """
    -        Fetch all policy profiles from the VSM.
    -
    -        :returns: JSON string
    -        """
    -        return self._get(self.port_profiles_path)
    -
    -    def create_bridge_domain(self, network, overlay_subtype):
    -        """
    -        Create a bridge domain on VSM.
    -
    -        :param network: network dict
    -        :param overlay_subtype: string representing subtype of overlay network
    -        """
    -        body = {'name': network['id'] + c_const.BRIDGE_DOMAIN_SUFFIX,
    -                'segmentId': network[providernet.SEGMENTATION_ID],
    -                'subType': overlay_subtype,
    -                'tenantId': network['tenant_id']}
    -        if overlay_subtype == c_const.NETWORK_SUBTYPE_NATIVE_VXLAN:
    -            body['groupIp'] = network[n1kv.MULTICAST_IP]
    -        return self._post(self.bridge_domains_path,
    -                          body=body)
    -
    -    def delete_bridge_domain(self, name):
    -        """
    -        Delete a bridge domain on VSM.
    -
    -        :param name: name of the bridge domain to be deleted
    -        """
    -        return self._delete(self.bridge_domain_path % name)
    -
    -    def create_network_segment(self, network, network_profile):
    -        """
    -        Create a network segment on the VSM.
    -
    -        :param network: network dict
    -        :param network_profile: network profile dict
    -        """
    -        body = {'publishName': network['id'],
    -                'description': network['name'],
    -                'id': network['id'],
    -                'tenantId': network['tenant_id'],
    -                'networkSegmentPool': network_profile['id'], }
    -        if network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_VLAN:
    -            body['vlan'] = network[providernet.SEGMENTATION_ID]
    -        elif network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_OVERLAY:
    -            body['bridgeDomain'] = (network['id'] +
    -                                    c_const.BRIDGE_DOMAIN_SUFFIX)
    -        if network_profile['segment_type'] == c_const.NETWORK_TYPE_TRUNK:
    -            body['mode'] = c_const.NETWORK_TYPE_TRUNK
    -            body['segmentType'] = network_profile['sub_type']
    -            if network_profile['sub_type'] == c_const.NETWORK_TYPE_VLAN:
    -                body['addSegments'] = network['add_segment_list']
    -                body['delSegments'] = network['del_segment_list']
    -            else:
    -                body['encapProfile'] = (network['id'] +
    -                                        c_const.ENCAPSULATION_PROFILE_SUFFIX)
    -        else:
    -            body['mode'] = 'access'
    -            body['segmentType'] = network_profile['segment_type']
    -        return self._post(self.network_segment_path % network['id'],
    -                          body=body)
    -
    -    def update_network_segment(self, network_segment_id, body):
    -        """
    -        Update a network segment on the VSM.
    -
    -        Network segment on VSM can be updated to associate it with an ip-pool
    -        or update its description and segment id.
    -
    -        :param network_segment_id: UUID representing the network segment
    -        :param body: dict of arguments to be updated
    -        """
    -        return self._post(self.network_segment_path % network_segment_id,
    -                          body=body)
    -
    -    def delete_network_segment(self, network_segment_id):
    -        """
    -        Delete a network segment on the VSM.
    -
    -        :param network_segment_id: UUID representing the network segment
    -        """
    -        return self._delete(self.network_segment_path % network_segment_id)
    -
    -    def create_logical_network(self, network_profile, tenant_id):
    -        """
    -        Create a logical network on the VSM.
    -
    -        :param network_profile: network profile dict
    -        :param tenant_id: UUID representing the tenant
    -        """
    -        LOG.debug("Logical network")
    -        body = {'description': network_profile['name'],
    -                'tenantId': tenant_id}
    -        logical_network_name = (network_profile['id'] +
    -                                c_const.LOGICAL_NETWORK_SUFFIX)
    -        return self._post(self.logical_network_path % logical_network_name,
    -                          body=body)
    -
    -    def delete_logical_network(self, logical_network_name):
    -        """
    -        Delete a logical network on VSM.
    -
    -        :param logical_network_name: string representing name of the logical
    -                                     network
    -        """
    -        return self._delete(
    -            self.logical_network_path % logical_network_name)
    -
    -    def create_network_segment_pool(self, network_profile, tenant_id):
    -        """
    -        Create a network segment pool on the VSM.
    -
    -        :param network_profile: network profile dict
    -        :param tenant_id: UUID representing the tenant
    -        """
    -        LOG.debug("network_segment_pool")
    -        logical_network_name = (network_profile['id'] +
    -                                c_const.LOGICAL_NETWORK_SUFFIX)
    -        body = {'name': network_profile['name'],
    -                'description': network_profile['name'],
    -                'id': network_profile['id'],
    -                'logicalNetwork': logical_network_name,
    -                'tenantId': tenant_id}
    -        if network_profile['segment_type'] == c_const.NETWORK_TYPE_OVERLAY:
    -            body['subType'] = network_profile['sub_type']
    -        return self._post(
    -            self.network_segment_pool_path % network_profile['id'],
    -            body=body)
    -
    -    def update_network_segment_pool(self, network_profile):
    -        """
    -        Update a network segment pool on the VSM.
    -
    -        :param network_profile: network profile dict
    -        """
    -        body = {'name': network_profile['name'],
    -                'description': network_profile['name']}
    -        return self._post(self.network_segment_pool_path %
    -                          network_profile['id'], body=body)
    -
    -    def delete_network_segment_pool(self, network_segment_pool_id):
    -        """
    -        Delete a network segment pool on the VSM.
    -
    -        :param network_segment_pool_id: UUID representing the network
    -                                        segment pool
    -        """
    -        return self._delete(self.network_segment_pool_path %
    -                            network_segment_pool_id)
    -
    -    def create_ip_pool(self, subnet):
    -        """
    -        Create an ip-pool on the VSM.
    -
    -        :param subnet: subnet dict
    -        """
    -        if subnet['cidr']:
    -            try:
    -                ip = netaddr.IPNetwork(subnet['cidr'])
    -                netmask = str(ip.netmask)
    -                network_address = str(ip.network)
    -            except (ValueError, netaddr.AddrFormatError):
    -                msg = _("Invalid input for CIDR")
    -                raise n_exc.InvalidInput(error_message=msg)
    -        else:
    -            netmask = network_address = ""
    -
    -        if subnet['allocation_pools']:
    -            address_range_start = subnet['allocation_pools'][0]['start']
    -            address_range_end = subnet['allocation_pools'][0]['end']
    -        else:
    -            address_range_start = None
    -            address_range_end = None
    -
    -        body = {'addressRangeStart': address_range_start,
    -                'addressRangeEnd': address_range_end,
    -                'ipAddressSubnet': netmask,
    -                'description': subnet['name'],
    -                'gateway': subnet['gateway_ip'],
    -                'dhcp': subnet['enable_dhcp'],
    -                'dnsServersList': subnet['dns_nameservers'],
    -                'networkAddress': network_address,
    -                'netSegmentName': subnet['network_id'],
    -                'id': subnet['id'],
    -                'tenantId': subnet['tenant_id']}
    -        return self._post(self.ip_pool_path % subnet['id'],
    -                          body=body)
    -
    -    def update_ip_pool(self, subnet):
    -        """
    -        Update an ip-pool on the VSM.
    -
    -        :param subnet: subnet dictionary
    -        """
    -        body = {'description': subnet['name'],
    -                'dhcp': subnet['enable_dhcp'],
    -                'dnsServersList': subnet['dns_nameservers']}
    -        return self._post(self.ip_pool_path % subnet['id'],
    -                          body=body)
    -
    -    def delete_ip_pool(self, subnet_id):
    -        """
    -        Delete an ip-pool on the VSM.
    -
    -        :param subnet_id: UUID representing the subnet
    -        """
    -        return self._delete(self.ip_pool_path % subnet_id)
    -
    -    def create_vm_network(self,
    -                          port,
    -                          vm_network_name,
    -                          policy_profile):
    -        """
    -        Create a VM network on the VSM.
    -
    -        :param port: port dict
    -        :param vm_network_name: name of the VM network
    -        :param policy_profile: policy profile dict
    -        """
    -        body = {'name': vm_network_name,
    -                'networkSegmentId': port['network_id'],
    -                'networkSegment': port['network_id'],
    -                'portProfile': policy_profile['name'],
    -                'portProfileId': policy_profile['id'],
    -                'tenantId': port['tenant_id'],
    -                'portId': port['id'],
    -                'macAddress': port['mac_address'],
    -                }
    -        if port.get('fixed_ips'):
    -            body['ipAddress'] = port['fixed_ips'][0]['ip_address']
    -            body['subnetId'] = port['fixed_ips'][0]['subnet_id']
    -        return self._post(self.vm_networks_path,
    -                          body=body)
    -
    -    def delete_vm_network(self, vm_network_name):
    -        """
    -        Delete a VM network on the VSM.
    -
    -        :param vm_network_name: name of the VM network
    -        """
    -        return self._delete(self.vm_network_path % vm_network_name)
    -
    -    def create_n1kv_port(self, port, vm_network_name):
    -        """
    -        Create a port on the VSM.
    -
    -        :param port: port dict
    -        :param vm_network_name: name of the VM network which imports this port
    -        """
    -        body = {'id': port['id'],
    -                'macAddress': port['mac_address']}
    -        if port.get('fixed_ips'):
    -            body['ipAddress'] = port['fixed_ips'][0]['ip_address']
    -            body['subnetId'] = port['fixed_ips'][0]['subnet_id']
    -        return self._post(self.ports_path % vm_network_name,
    -                          body=body)
    -
    -    def update_n1kv_port(self, vm_network_name, port_id, body):
    -        """
    -        Update a port on the VSM.
    -
    -        Update the mac address associated with the port
    -
    -        :param vm_network_name: name of the VM network which imports this port
    -        :param port_id: UUID of the port
    -        :param body: dict of the arguments to be updated
    -        """
    -        return self._post(self.port_path % (vm_network_name, port_id),
    -                          body=body)
    -
    -    def delete_n1kv_port(self, vm_network_name, port_id):
    -        """
    -        Delete a port on the VSM.
    -
    -        :param vm_network_name: name of the VM network which imports this port
    -        :param port_id: UUID of the port
    -        """
    -        return self._delete(self.port_path % (vm_network_name, port_id))
    -
    -    def _do_request(self, method, action, body=None,
    -                    headers=None):
    -        """
    -        Perform the HTTP request.
    -
    -        The response is in either JSON format or plain text. A GET method will
    -        invoke a JSON response while a PUT/POST/DELETE returns message from the
    -        VSM in plain text format.
    -        Exception is raised when VSM replies with an INTERNAL SERVER ERROR HTTP
    -        status code (500) i.e. an error has occurred on the VSM or SERVICE
    -        UNAVAILABLE (503) i.e. VSM is not reachable.
    -
    -        :param method: type of the HTTP request. POST, GET, PUT or DELETE
    -        :param action: path to which the client makes request
    -        :param body: dict for arguments which are sent as part of the request
    -        :param headers: header for the HTTP request
    -        :returns: JSON or plain text in HTTP response
    -        """
    -        action = self.action_prefix + action
    -        if not headers and self.hosts:
    -            headers = self._get_auth_header(self.hosts[0])
    -        headers['Content-Type'] = self._set_content_type('json')
    -        headers['Accept'] = self._set_content_type('json')
    -        if body:
    -            body = jsonutils.dumps(body, indent=2)
    -            LOG.debug("req: %s", body)
    -        try:
    -            resp = self.pool.spawn(requests.request,
    -                                   method,
    -                                   url=action,
    -                                   data=body,
    -                                   headers=headers,
    -                                   timeout=self.timeout).wait()
    -        except Exception as e:
    -            raise c_exc.VSMConnectionFailed(reason=e)
    -        LOG.debug("status_code %s", resp.status_code)
    -        if resp.status_code == requests.codes.OK:
    -            if 'application/json' in resp.headers['content-type']:
    -                try:
    -                    return resp.json()
    -                except ValueError:
    -                    return {}
    -            elif 'text/plain' in resp.headers['content-type']:
    -                LOG.debug("VSM: %s", resp.text)
    -        else:
    -            raise c_exc.VSMError(reason=resp.text)
    -
    -    def _set_content_type(self, format=None):
    -        """
    -        Set the mime-type to either 'xml' or 'json'.
    -
    -        :param format: format to be set.
    -        :return: mime-type string
    -        """
    -        if not format:
    -            format = self.format
    -        return "application/%s" % format
    -
    -    def _delete(self, action, body=None, headers=None):
    -        return self._do_request("DELETE", action, body=body,
    -                                headers=headers)
    -
    -    def _get(self, action, body=None, headers=None):
    -        return self._do_request("GET", action, body=body,
    -                                headers=headers)
    -
    -    def _post(self, action, body=None, headers=None):
    -        return self._do_request("POST", action, body=body,
    -                                headers=headers)
    -
    -    def _put(self, action, body=None, headers=None):
    -        return self._do_request("PUT", action, body=body,
    -                                headers=headers)
    -
    -    def _get_vsm_hosts(self):
    -        """
    -        Retrieve a list of VSM ip addresses.
    -
    -        :return: list of host ip addresses
    -        """
    -        return [cr[c_const.CREDENTIAL_NAME] for cr in
    -                network_db_v2.get_all_n1kv_credentials()]
    -
    -    def _get_auth_header(self, host_ip):
    -        """
    -        Retrieve header with auth info for the VSM.
    -
    -        :param host_ip: IP address of the VSM
    -        :return: authorization header dict
    -        """
    -        username = c_cred.Store.get_username(host_ip)
    -        password = c_cred.Store.get_password(host_ip)
    -        auth = safe_b64_encode("%s:%s" % (username, password))
    -        header = {"Authorization": "Basic %s" % auth}
    -        return header
    -
    -    def get_clusters(self):
    -        """Fetches a list of all vxlan gateway clusters."""
    -        return self._get(self.clusters_path)
    -
    -    def create_encapsulation_profile(self, encap):
    -        """
    -        Create an encapsulation profile on VSM.
    -
    -        :param encap: encapsulation dict
    -        """
    -        body = {'name': encap['name'],
    -                'addMappings': encap['add_segment_list'],
    -                'delMappings': encap['del_segment_list']}
    -        return self._post(self.encap_profiles_path,
    -                          body=body)
    -
    -    def update_encapsulation_profile(self, context, profile_name, body):
    -        """
    -        Adds a vlan to bridge-domain mapping to an encapsulation profile.
    -
    -        :param profile_name: Name of the encapsulation profile
    -        :param body: mapping dictionary
    -        """
    -        return self._post(self.encap_profile_path
    -                          % profile_name, body=body)
    -
    -    def delete_encapsulation_profile(self, name):
    -        """
    -        Delete an encapsulation profile on VSM.
    -
    -        :param name: name of the encapsulation profile to be deleted
    -        """
    -        return self._delete(self.encap_profile_path % name)
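The removed client wraps every VSM call in the same request pattern: a Basic auth header built from stored credentials, JSON `Content-Type`/`Accept` headers, and a body serialized with `jsonutils.dumps(body, indent=2)`. A minimal standalone sketch of that pattern, using only the standard library (the credentials and paths here are hypothetical; the real client pulls them from the Cisco credential store and dispatches through an eventlet pool):

```python
import base64
import json

def basic_auth_header(username, password):
    # Build an HTTP Basic Authorization header, as the removed
    # _get_auth_header did via safe_b64_encode("user:password").
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8"))
    return {"Authorization": "Basic %s" % token.decode("ascii")}

def build_request(method, action_prefix, action, body=None):
    # Mirror the removed _do_request setup: prefix the action path,
    # set JSON content-type headers, and serialize the body.
    # "admin"/"secret" are placeholder credentials, not real config.
    headers = basic_auth_header("admin", "secret")
    headers["Content-Type"] = "application/json"
    headers["Accept"] = "application/json"
    data = json.dumps(body, indent=2) if body else None
    return {"method": method, "url": action_prefix + action,
            "headers": headers, "data": data}
```

The real `_do_request` then hands these pieces to `requests.request(...)` and raises `VSMConnectionFailed` on transport errors or `VSMError` on non-200 replies.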
    
  • neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py+0 1429 removed
    @@ -1,1429 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -import eventlet
    -from oslo_config import cfg as o_conf
    -from oslo_log import log as logging
    -from oslo_utils import excutils
    -from oslo_utils import importutils
    -from oslo_utils import uuidutils
    -
    -from neutron.api.rpc.agentnotifiers import dhcp_rpc_agent_api
    -from neutron.api.rpc.handlers import dhcp_rpc
    -from neutron.api.rpc.handlers import metadata_rpc
    -from neutron.api.v2 import attributes
    -from neutron.common import constants
    -from neutron.common import exceptions as n_exc
    -from neutron.common import rpc as n_rpc
    -from neutron.common import topics
    -from neutron.db import agents_db
    -from neutron.db import agentschedulers_db
    -from neutron.db import db_base_plugin_v2
    -from neutron.db import external_net_db
    -from neutron.db import portbindings_db
    -from neutron.db.quota import driver
    -from neutron.extensions import portbindings
    -from neutron.extensions import providernet
    -from neutron.i18n import _LW
    -from neutron import manager
    -from neutron.plugins.cisco.common import cisco_constants as c_const
    -from neutron.plugins.cisco.common import cisco_credentials_v2 as c_cred
    -from neutron.plugins.cisco.common import cisco_exceptions
    -from neutron.plugins.cisco.common import config as c_conf
    -from neutron.plugins.cisco.db import n1kv_db_v2
    -from neutron.plugins.cisco.db import network_db_v2
    -from neutron.plugins.cisco.extensions import n1kv
    -from neutron.plugins.cisco.n1kv import n1kv_client
    -from neutron.plugins.common import constants as svc_constants
    -from neutron.plugins.common import utils
    -
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -class N1kvNeutronPluginV2(db_base_plugin_v2.NeutronDbPluginV2,
    -                          external_net_db.External_net_db_mixin,
    -                          portbindings_db.PortBindingMixin,
    -                          n1kv_db_v2.NetworkProfile_db_mixin,
    -                          n1kv_db_v2.PolicyProfile_db_mixin,
    -                          network_db_v2.Credential_db_mixin,
    -                          agentschedulers_db.DhcpAgentSchedulerDbMixin,
    -                          driver.DbQuotaDriver):
    -
    -    """
    -    Implement the Neutron abstractions using Cisco Nexus1000V.
    -
    -    Refer README file for the architecture, new features, and
    -    workflow
    -
    -    """
    -
    -    # This attribute specifies whether the plugin supports or not
    -    # bulk operations.
    -    __native_bulk_support = False
    -    supported_extension_aliases = ["provider", "agent",
    -                                   "n1kv", "network_profile",
    -                                   "policy_profile", "external-net",
    -                                   "binding", "credential", "quotas",
    -                                   "dhcp_agent_scheduler"]
    -
    -    def __init__(self, configfile=None):
    -        """
    -        Initialize Nexus1000V Neutron plugin.
    -
    -        1. Initialize VIF type to OVS
    -        2. clear N1kv credential
    -        3. Initialize Nexus1000v and Credential DB
    -        4. Establish communication with Cisco Nexus1000V
    -        """
    -        super(N1kvNeutronPluginV2, self).__init__()
    -        self.base_binding_dict = {
    -            portbindings.VIF_TYPE: portbindings.VIF_TYPE_OVS,
    -            portbindings.VIF_DETAILS: {
    -                # TODO(rkukura): Replace with new VIF security details
    -                portbindings.CAP_PORT_FILTER:
    -                'security-group' in self.supported_extension_aliases}}
    -        network_db_v2.delete_all_n1kv_credentials()
    -        c_cred.Store.initialize()
    -        self._setup_vsm()
    -        self._setup_rpc()
    -        self.network_scheduler = importutils.import_object(
    -            o_conf.CONF.network_scheduler_driver
    -        )
    -        self.start_periodic_dhcp_agent_status_check()
    -
    -    def _setup_rpc(self):
    -        # RPC support
    -        self.service_topics = {svc_constants.CORE: topics.PLUGIN}
    -        self.conn = n_rpc.create_connection(new=True)
    -        self.endpoints = [dhcp_rpc.DhcpRpcCallback(),
    -                          agents_db.AgentExtRpcCallback(),
    -                          metadata_rpc.MetadataRpcCallback()]
    -        for svc_topic in self.service_topics.values():
    -            self.conn.create_consumer(svc_topic, self.endpoints, fanout=False)
    -        self.dhcp_agent_notifier = dhcp_rpc_agent_api.DhcpAgentNotifyAPI()
    -        # Consume from all consumers in threads
    -        self.conn.consume_in_threads()
    -
    -    def _setup_vsm(self):
    -        """
    -        Setup Cisco Nexus 1000V related parameters and pull policy profiles.
    -
    -        Retrieve all the policy profiles from the VSM when the plugin
    -        is instantiated for the first time and then continue to poll for
    -        policy profile updates.
    -        """
    -        LOG.debug('_setup_vsm')
    -        self.agent_vsm = True
    -        # Poll VSM for create/delete of policy profile.
    -        eventlet.spawn(self._poll_policy_profiles)
    -
    -    def _poll_policy_profiles(self):
    -        """Start a green thread to pull policy profiles from VSM."""
    -        while True:
    -            self._populate_policy_profiles()
    -            eventlet.sleep(c_conf.CISCO_N1K.poll_duration)
    -
    -    def _populate_policy_profiles(self):
    -        """
    -        Populate all the policy profiles from VSM.
    -
    -        The tenant id is not available when the policy profiles are polled
    -        from the VSM. Hence we associate the policy profiles with fake
    -        tenant-ids.
    -        """
    -        LOG.debug('_populate_policy_profiles')
    -        try:
    -            n1kvclient = n1kv_client.Client()
    -            policy_profiles = n1kvclient.list_port_profiles()
    -            vsm_profiles = {}
    -            plugin_profiles_set = set()
    -            # Fetch policy profiles from VSM
    -            for profile_name in policy_profiles:
    -                profile_id = (policy_profiles
    -                              [profile_name][c_const.PROPERTIES][c_const.ID])
    -                vsm_profiles[profile_id] = profile_name
    -            # Fetch policy profiles previously populated
    -            for profile in n1kv_db_v2.get_policy_profiles():
    -                plugin_profiles_set.add(profile.id)
    -            vsm_profiles_set = set(vsm_profiles)
    -            # Update database if the profile sets differ.
    -            if vsm_profiles_set ^ plugin_profiles_set:
    -                # Add profiles in database if new profiles were created in VSM
    -                for pid in vsm_profiles_set - plugin_profiles_set:
    -                    self._add_policy_profile(vsm_profiles[pid], pid)
    -
    -                # Delete profiles from database if profiles were deleted in VSM
    -                for pid in plugin_profiles_set - vsm_profiles_set:
    -                    self._delete_policy_profile(pid)
    -            self._remove_all_fake_policy_profiles()
    -        except (cisco_exceptions.VSMError,
    -                cisco_exceptions.VSMConnectionFailed):
    -            LOG.warning(_LW('No policy profile populated from VSM'))
    -
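The polling loop above reconciles the plugin database against the VSM with plain set arithmetic: profile IDs present only on the VSM are added, and IDs present only in the database are deleted. A self-contained sketch of that diff step (the function name is illustrative; the removed code performed the adds and deletes inline):

```python
def sync_profiles(vsm_profiles, db_profile_ids):
    # vsm_profiles: {profile_id: profile_name} fetched from the VSM.
    # db_profile_ids: iterable of profile IDs already in the plugin DB.
    vsm_ids = set(vsm_profiles)
    db_ids = set(db_profile_ids)
    # Profiles newly created on the VSM, to be added to the DB.
    to_add = {pid: vsm_profiles[pid] for pid in vsm_ids - db_ids}
    # Profiles deleted on the VSM, to be removed from the DB.
    to_delete = db_ids - vsm_ids
    return to_add, to_delete
```

When the two sets already match (`vsm_ids ^ db_ids` is empty), the removed code skipped the update entirely, which is what the symmetric-difference guard at the top of the loop checks.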
    -    def _extend_network_dict_provider(self, context, network):
    -        """Add extended network parameters."""
    -        binding = n1kv_db_v2.get_network_binding(context.session,
    -                                                 network['id'])
    -        network[providernet.NETWORK_TYPE] = binding.network_type
    -        if binding.network_type == c_const.NETWORK_TYPE_OVERLAY:
    -            network[providernet.PHYSICAL_NETWORK] = None
    -            network[providernet.SEGMENTATION_ID] = binding.segmentation_id
    -            network[n1kv.MULTICAST_IP] = binding.multicast_ip
    -        elif binding.network_type == c_const.NETWORK_TYPE_VLAN:
    -            network[providernet.PHYSICAL_NETWORK] = binding.physical_network
    -            network[providernet.SEGMENTATION_ID] = binding.segmentation_id
    -        elif binding.network_type == c_const.NETWORK_TYPE_TRUNK:
    -            network[providernet.PHYSICAL_NETWORK] = binding.physical_network
    -            network[providernet.SEGMENTATION_ID] = None
    -            network[n1kv.MULTICAST_IP] = None
    -        elif binding.network_type == c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -            network[providernet.PHYSICAL_NETWORK] = None
    -            network[providernet.SEGMENTATION_ID] = None
    -            network[n1kv.MULTICAST_IP] = None
    -
    -    def _process_provider_create(self, context, attrs):
    -        network_type = attrs.get(providernet.NETWORK_TYPE)
    -        physical_network = attrs.get(providernet.PHYSICAL_NETWORK)
    -        segmentation_id = attrs.get(providernet.SEGMENTATION_ID)
    -
    -        network_type_set = attributes.is_attr_set(network_type)
    -        physical_network_set = attributes.is_attr_set(physical_network)
    -        segmentation_id_set = attributes.is_attr_set(segmentation_id)
    -
    -        if not (network_type_set or physical_network_set or
    -                segmentation_id_set):
    -            return (None, None, None)
    -
    -        if not network_type_set:
    -            msg = _("provider:network_type required")
    -            raise n_exc.InvalidInput(error_message=msg)
    -        elif network_type == c_const.NETWORK_TYPE_VLAN:
    -            if not segmentation_id_set:
    -                msg = _("provider:segmentation_id required")
    -                raise n_exc.InvalidInput(error_message=msg)
    -            if segmentation_id < 1 or segmentation_id > 4094:
    -                msg = _("provider:segmentation_id out of range "
    -                        "(1 through 4094)")
    -                raise n_exc.InvalidInput(error_message=msg)
    -        elif network_type == c_const.NETWORK_TYPE_OVERLAY:
    -            if physical_network_set:
    -                msg = _("provider:physical_network specified for Overlay "
    -                        "network")
    -                raise n_exc.InvalidInput(error_message=msg)
    -            else:
    -                physical_network = None
    -            if not segmentation_id_set:
    -                msg = _("provider:segmentation_id required")
    -                raise n_exc.InvalidInput(error_message=msg)
    -            if segmentation_id < 5000:
    -                msg = _("provider:segmentation_id out of range "
    -                        "(5000+)")
    -                raise n_exc.InvalidInput(error_message=msg)
    -        else:
    -            msg = _("provider:network_type %s not supported"), network_type
    -            raise n_exc.InvalidInput(error_message=msg)
    -
    -        if network_type == c_const.NETWORK_TYPE_VLAN:
    -            if physical_network_set:
    -                network_profiles = n1kv_db_v2.get_network_profiles()
    -                for network_profile in network_profiles:
    -                    if physical_network == network_profile[
    -                        'physical_network']:
    -                        break
    -                else:
    -                    msg = (_("Unknown provider:physical_network %s"),
    -                           physical_network)
    -                    raise n_exc.InvalidInput(error_message=msg)
    -            else:
    -                msg = _("provider:physical_network required")
    -                raise n_exc.InvalidInput(error_message=msg)
    -
    -        return (network_type, physical_network, segmentation_id)
    -
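The removed `_process_provider_create` enforces per-type segmentation-id ranges: 1 through 4094 for VLAN segments and 5000 or higher for overlay (VXLAN) segments, rejecting any other network type. A hypothetical standalone version of just those range checks (string type names stand in for the `c_const` constants, and the translation/exception machinery is replaced with `ValueError`):

```python
def validate_provider_segment(network_type, segmentation_id):
    # Range checks as in the removed _process_provider_create;
    # physical-network validation is omitted for brevity.
    if network_type == "vlan":
        if not 1 <= segmentation_id <= 4094:
            raise ValueError(
                "provider:segmentation_id out of range (1 through 4094)")
    elif network_type == "overlay":
        if segmentation_id < 5000:
            raise ValueError(
                "provider:segmentation_id out of range (5000+)")
    else:
        raise ValueError(
            "provider:network_type %s not supported" % network_type)
    return network_type, segmentation_id
```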
    -    def _check_provider_update(self, context, attrs):
    -        """Handle Provider network updates."""
    -        network_type = attrs.get(providernet.NETWORK_TYPE)
    -        physical_network = attrs.get(providernet.PHYSICAL_NETWORK)
    -        segmentation_id = attrs.get(providernet.SEGMENTATION_ID)
    -
    -        network_type_set = attributes.is_attr_set(network_type)
    -        physical_network_set = attributes.is_attr_set(physical_network)
    -        segmentation_id_set = attributes.is_attr_set(segmentation_id)
    -
    -        if not (network_type_set or physical_network_set or
    -                segmentation_id_set):
    -            return
    -
    -        # TBD : Need to handle provider network updates
    -        msg = _("Plugin does not support updating provider attributes")
    -        raise n_exc.InvalidInput(error_message=msg)
    -
    -    def _get_cluster(self, segment1, segment2, clusters):
    -        """
    -        Returns a cluster to apply the segment mapping
    -
    -        :param segment1: UUID of segment to be mapped
    -        :param segment2: UUID of segment to be mapped
    -        :param clusters: List of clusters
    -        """
    -        for cluster in sorted(clusters, key=lambda k: k['size']):
    -            for mapping in cluster[c_const.MAPPINGS]:
    -                for segment in mapping[c_const.SEGMENTS]:
    -                    if segment1 in segment or segment2 in segment:
    -                        break
    -                else:
    -                    cluster['size'] += 2
    -                    return cluster['encapProfileName']
    -                break
    -        return
    -
    -    def _extend_mapping_dict(self, context, mapping_dict, segment):
    -        """
    -        Extend a mapping dictionary with dot1q tag and bridge-domain name.
    -
    -        :param context: neutron api request context
    -        :param mapping_dict: dictionary to populate values
    -        :param segment: id of the segment being populated
    -        """
    -        net = self.get_network(context, segment)
    -        if net[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_VLAN:
    -            mapping_dict['dot1q'] = str(net[providernet.SEGMENTATION_ID])
    -        else:
    -            mapping_dict['bridgeDomain'] = (net['name'] +
    -                                            c_const.BRIDGE_DOMAIN_SUFFIX)
    -
    -    def _send_add_multi_segment_request(self, context, net_id, segment_pairs):
    -        """
    -        Send Add multi-segment network request to VSM.
    -
    -        :param context: neutron api request context
    -        :param net_id: UUID of the multi-segment network
    -        :param segment_pairs: List of segments in UUID pairs
    -                              that need to be bridged
    -        """
    -
    -        if not segment_pairs:
    -            return
    -
    -        session = context.session
    -        n1kvclient = n1kv_client.Client()
    -        clusters = n1kvclient.get_clusters()
    -        online_clusters = []
    -        encap_dict = {}
    -        for cluster in clusters['body'][c_const.SET]:
    -            cluster = cluster[c_const.PROPERTIES]
    -            if cluster[c_const.STATE] == c_const.ONLINE:
    -                cluster['size'] = 0
    -                for mapping in cluster[c_const.MAPPINGS]:
    -                    cluster['size'] += (
    -                        len(mapping[c_const.SEGMENTS]))
    -                online_clusters.append(cluster)
    -        for (segment1, segment2) in segment_pairs:
    -            encap_profile = self._get_cluster(segment1, segment2,
    -                                              online_clusters)
    -            if encap_profile is not None:
    -                if encap_profile in encap_dict:
    -                    profile_dict = encap_dict[encap_profile]
    -                else:
    -                    profile_dict = {'name': encap_profile,
    -                                    'addMappings': [],
    -                                    'delMappings': []}
    -                    encap_dict[encap_profile] = profile_dict
    -                mapping_dict = {}
    -                self._extend_mapping_dict(context,
    -                                          mapping_dict, segment1)
    -                self._extend_mapping_dict(context,
    -                                          mapping_dict, segment2)
    -                profile_dict['addMappings'].append(mapping_dict)
    -                n1kv_db_v2.add_multi_segment_encap_profile_name(session,
    -                                                                net_id,
    -                                                                (segment1,
    -                                                                 segment2),
    -                                                                encap_profile)
    -            else:
    -                raise cisco_exceptions.NoClusterFound()
    -
    -        for profile in encap_dict:
    -            n1kvclient.update_encapsulation_profile(context, profile,
    -                                                    encap_dict[profile])
    -
    -    def _send_del_multi_segment_request(self, context, net_id, segment_pairs):
    -        """
    -        Send Delete multi-segment network request to VSM.
    -
    -        :param context: neutron api request context
    -        :param net_id: UUID of the multi-segment network
    -        :param segment_pairs: List of segments in UUID pairs
    -                              whose bridging needs to be removed
    -        """
    -        if not segment_pairs:
    -            return
    -        session = context.session
    -        encap_dict = {}
    -        n1kvclient = n1kv_client.Client()
    -        for (segment1, segment2) in segment_pairs:
    -            binding = (
    -                n1kv_db_v2.get_multi_segment_network_binding(session, net_id,
    -                                                             (segment1,
    -                                                              segment2)))
    -            encap_profile = binding['encap_profile_name']
    -            if encap_profile in encap_dict:
    -                profile_dict = encap_dict[encap_profile]
    -            else:
    -                profile_dict = {'name': encap_profile,
    -                                'addMappings': [],
    -                                'delMappings': []}
    -                encap_dict[encap_profile] = profile_dict
    -            mapping_dict = {}
    -            self._extend_mapping_dict(context,
    -                                      mapping_dict, segment1)
    -            self._extend_mapping_dict(context,
    -                                      mapping_dict, segment2)
    -            profile_dict['delMappings'].append(mapping_dict)
    -
    -        for profile in encap_dict:
    -            n1kvclient.update_encapsulation_profile(context, profile,
    -                                                    encap_dict[profile])
    -
    -    def _get_encap_segments(self, context, segment_pairs):
    -        """
    -        Get the list of segments in encapsulation profile format.
    -
    -        :param context: neutron api request context
    -        :param segment_pairs: List of segments that need to be bridged
    -        """
    -        member_list = []
    -        for pair in segment_pairs:
    -            (segment, dot1qtag) = pair
    -            member_dict = {}
    -            net = self.get_network(context, segment)
    -            member_dict['bridgeDomain'] = (net['name'] +
    -                                           c_const.BRIDGE_DOMAIN_SUFFIX)
    -            member_dict['dot1q'] = dot1qtag
    -            member_list.append(member_dict)
    -        return member_list
    -
    -    def _populate_member_segments(self, context, network, segment_pairs, oper):
    -        """
    -        Populate trunk network dict with member segments.
    -
    -        :param context: neutron api request context
    -        :param network: Dictionary containing the trunk network information
    -        :param segment_pairs: List of segments in UUID pairs
    -                              that needs to be trunked
    -        :param oper: Operation to be performed
    -        """
    -        LOG.debug('_populate_member_segments %s', segment_pairs)
    -        trunk_list = []
    -        for (segment, dot1qtag) in segment_pairs:
    -            net = self.get_network(context, segment)
    -            member_dict = {'segment': net['name'],
    -                           'dot1qtag': dot1qtag}
    -            trunk_list.append(member_dict)
    -        if oper == n1kv.SEGMENT_ADD:
    -            network['add_segment_list'] = trunk_list
    -        elif oper == n1kv.SEGMENT_DEL:
    -            network['del_segment_list'] = trunk_list
    -
    -    def _parse_multi_segments(self, context, attrs, param):
    -        """
    -        Parse the multi-segment network attributes.
    -
    -        :param context: neutron api request context
    -        :param attrs: Attributes of the network
    -        :param param: Additional parameter indicating an add
    -                      or del operation
    -        :returns: List of segment UUIDs in set pairs
    -        """
    -        pair_list = []
    -        valid_seg_types = [c_const.NETWORK_TYPE_VLAN,
    -                           c_const.NETWORK_TYPE_OVERLAY]
    -        segments = attrs.get(param)
    -        if not attributes.is_attr_set(segments):
    -            return pair_list
    -        for pair in segments.split(','):
    -            segment1, sep, segment2 = pair.partition(':')
    -            if (uuidutils.is_uuid_like(segment1) and
    -                    uuidutils.is_uuid_like(segment2)):
    -                binding1 = n1kv_db_v2.get_network_binding(context.session,
    -                                                          segment1)
    -                binding2 = n1kv_db_v2.get_network_binding(context.session,
    -                                                          segment2)
    -                if (binding1.network_type not in valid_seg_types or
    -                        binding2.network_type not in valid_seg_types or
    -                        binding1.network_type == binding2.network_type):
    -                    msg = _("Invalid pairing supplied")
    -                    raise n_exc.InvalidInput(error_message=msg)
    -                else:
    -                    pair_list.append((segment1, segment2))
    -            else:
    -                LOG.debug('Invalid UUID supplied in %s', pair)
    -                msg = _("Invalid UUID supplied")
    -                raise n_exc.InvalidInput(error_message=msg)
    -        return pair_list
    -
    -    def _parse_trunk_segments(self, context, attrs, param, physical_network,
    -                              sub_type):
    -        """
    -        Parse the trunk network attributes.
    -
    -        :param context: neutron api request context
    -        :param attrs: Attributes of the network
    -        :param param: Additional parameter indicating an add
    -                      or del operation
    -        :param physical_network: Physical network of the trunk segment
    -        :param sub_type: Sub-type of the trunk segment
    -        :returns: List of segment UUIDs and dot1qtag (for vxlan) in set pairs
    -        """
    -        pair_list = []
    -        segments = attrs.get(param)
    -        if not attributes.is_attr_set(segments):
    -            return pair_list
    -        for pair in segments.split(','):
    -            segment, sep, dot1qtag = pair.partition(':')
    -            if sub_type == c_const.NETWORK_TYPE_VLAN:
    -                dot1qtag = ''
    -            if uuidutils.is_uuid_like(segment):
    -                binding = n1kv_db_v2.get_network_binding(context.session,
    -                                                         segment)
    -                if binding.network_type == c_const.NETWORK_TYPE_TRUNK:
    -                    msg = _("Cannot add a trunk segment '%s' as a member of "
    -                            "another trunk segment") % segment
    -                    raise n_exc.InvalidInput(error_message=msg)
    -                elif binding.network_type == c_const.NETWORK_TYPE_VLAN:
    -                    if sub_type == c_const.NETWORK_TYPE_OVERLAY:
    -                        msg = _("Cannot add vlan segment '%s' as a member of "
    -                                "a vxlan trunk segment") % segment
    -                        raise n_exc.InvalidInput(error_message=msg)
    -                    if not physical_network:
    -                        physical_network = binding.physical_network
    -                    elif physical_network != binding.physical_network:
    -                        msg = _("Network UUID '%s' belongs to a different "
    -                                "physical network") % segment
    -                        raise n_exc.InvalidInput(error_message=msg)
    -                elif binding.network_type == c_const.NETWORK_TYPE_OVERLAY:
    -                    if sub_type == c_const.NETWORK_TYPE_VLAN:
    -                        msg = _("Cannot add vxlan segment '%s' as a member of "
    -                                "a vlan trunk segment") % segment
    -                        raise n_exc.InvalidInput(error_message=msg)
    -                    try:
    -                        if not utils.is_valid_vlan_tag(int(dot1qtag)):
    -                            msg = _("Vlan tag '%s' is out of range") % dot1qtag
    -                            raise n_exc.InvalidInput(error_message=msg)
    -                    except ValueError:
    -                        msg = _("Vlan tag '%s' is not an integer "
    -                                "value") % dot1qtag
    -                        raise n_exc.InvalidInput(error_message=msg)
    -                pair_list.append((segment, dot1qtag))
    -            else:
    -                LOG.debug('%s is not a valid uuid', segment)
    -                msg = _("'%s' is not a valid UUID") % segment
    -                raise n_exc.InvalidInput(error_message=msg)
    -        return pair_list
    -
    -    def _extend_network_dict_member_segments(self, context, network):
    -        """Add the extended parameter member segments to the network."""
    -        members = []
    -        binding = n1kv_db_v2.get_network_binding(context.session,
    -                                                 network['id'])
    -        if binding.network_type == c_const.NETWORK_TYPE_TRUNK:
    -            members = n1kv_db_v2.get_trunk_members(context.session,
    -                                                   network['id'])
    -        elif binding.network_type == c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -            members = n1kv_db_v2.get_multi_segment_members(context.session,
    -                                                           network['id'])
    -        network[n1kv.MEMBER_SEGMENTS] = members
    -
    -    def _extend_network_dict_profile(self, context, network):
    -        """Add the extended parameter network profile to the network."""
    -        binding = n1kv_db_v2.get_network_binding(context.session,
    -                                                 network['id'])
    -        network[n1kv.PROFILE_ID] = binding.profile_id
    -
    -    def _extend_port_dict_profile(self, context, port):
    -        """Add the extended parameter port profile to the port."""
    -        binding = n1kv_db_v2.get_port_binding(context.session,
    -                                              port['id'])
    -        port[n1kv.PROFILE_ID] = binding.profile_id
    -
    -    def _process_network_profile(self, context, network):
    -        """Validate network profile exists."""
    -        profile_id = network.get(n1kv.PROFILE_ID)
    -        profile_id_set = attributes.is_attr_set(profile_id)
    -        if not profile_id_set:
    -            profile_name = c_conf.CISCO_N1K.default_network_profile
    -            net_p = self._get_network_profile_by_name(context.session,
    -                                                      profile_name)
    -            profile_id = net_p['id']
    -            network['n1kv:profile_id'] = profile_id
    -        return profile_id
    -
    -    def _process_policy_profile(self, context, attrs):
    -        """Validate that the policy profile exists."""
    -        profile_id = attrs.get(n1kv.PROFILE_ID)
    -        profile_id_set = attributes.is_attr_set(profile_id)
    -        if not profile_id_set:
    -            msg = _("n1kv:profile_id does not exist")
    -            raise n_exc.InvalidInput(error_message=msg)
    -        if not self._policy_profile_exists(profile_id):
    -            msg = _("n1kv:profile_id does not exist")
    -            raise n_exc.InvalidInput(error_message=msg)
    -
    -        return profile_id
    -
    -    def _send_create_logical_network_request(self, network_profile, tenant_id):
    -        """
    -        Send create logical network request to VSM.
    -
    -        :param network_profile: network profile dictionary
    -        :param tenant_id: UUID representing the tenant
    -        """
    -        LOG.debug('_send_create_logical_network')
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.create_logical_network(network_profile, tenant_id)
    -
    -    def _send_delete_logical_network_request(self, network_profile):
    -        """
    -        Send delete logical network request to VSM.
    -
    -        :param network_profile: network profile dictionary
    -        """
    -        LOG.debug('_send_delete_logical_network')
    -        n1kvclient = n1kv_client.Client()
    -        logical_network_name = (network_profile['id'] +
    -                                c_const.LOGICAL_NETWORK_SUFFIX)
    -        n1kvclient.delete_logical_network(logical_network_name)
    -
    -    def _send_create_network_profile_request(self, context, profile):
    -        """
    -        Send create network profile request to VSM.
    -
    -        :param context: neutron api request context
    -        :param profile: network profile dictionary
    -        """
    -        LOG.debug('_send_create_network_profile_request: %s', profile['id'])
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.create_network_segment_pool(profile, context.tenant_id)
    -
    -    def _send_update_network_profile_request(self, profile):
    -        """
    -        Send update network profile request to VSM.
    -
    -        :param profile: network profile dictionary
    -        """
    -        LOG.debug('_send_update_network_profile_request: %s', profile['id'])
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.update_network_segment_pool(profile)
    -
    -    def _send_delete_network_profile_request(self, profile):
    -        """
    -        Send delete network profile request to VSM.
    -
    -        :param profile: network profile dictionary
    -        """
    -        LOG.debug('_send_delete_network_profile_request: %s',
    -                  profile['name'])
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.delete_network_segment_pool(profile['id'])
    -
    -    def _send_create_network_request(self, context, network, segment_pairs):
    -        """
    -        Send create network request to VSM.
    -
    -        Create a bridge domain for network of type Overlay.
    -        :param context: neutron api request context
    -        :param network: network dictionary
    -        :param segment_pairs: List of segments in UUID pairs
    -                              that need to be bridged
    -        """
    -        LOG.debug('_send_create_network_request: %s', network['id'])
    -        profile = self.get_network_profile(context,
    -                                           network[n1kv.PROFILE_ID])
    -        n1kvclient = n1kv_client.Client()
    -        if network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_OVERLAY:
    -            n1kvclient.create_bridge_domain(network, profile['sub_type'])
    -        if network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_TRUNK:
    -            self._populate_member_segments(context, network, segment_pairs,
    -                                           n1kv.SEGMENT_ADD)
    -            network['del_segment_list'] = []
    -            if profile['sub_type'] == c_const.NETWORK_TYPE_OVERLAY:
    -                encap_dict = {'name': (network['name'] +
    -                                       c_const.ENCAPSULATION_PROFILE_SUFFIX),
    -                              'add_segment_list': (
    -                                  self._get_encap_segments(context,
    -                                                           segment_pairs)),
    -                              'del_segment_list': []}
    -                n1kvclient.create_encapsulation_profile(encap_dict)
    -        n1kvclient.create_network_segment(network, profile)
    -
    -    def _send_update_network_request(self, context, network, add_segments,
    -                                     del_segments):
    -        """
    -        Send update network request to VSM.
    -
    -        :param context: neutron api request context
    -        :param network: network dictionary
    -        :param add_segments: List of segment bindings
    -                             that need to be added
    -        :param del_segments: List of segment bindings
    -                             that need to be deleted
    -        """
    -        LOG.debug('_send_update_network_request: %s', network['id'])
    -        db_session = context.session
    -        profile = n1kv_db_v2.get_network_profile(
    -            db_session, network[n1kv.PROFILE_ID], context.tenant_id)
    -        n1kvclient = n1kv_client.Client()
    -        body = {'description': network['name'],
    -                'id': network['id'],
    -                'networkSegmentPool': profile['id'],
    -                'vlan': network[providernet.SEGMENTATION_ID],
    -                'mode': 'access',
    -                'segmentType': profile['segment_type'],
    -                'addSegments': [],
    -                'delSegments': []}
    -        if network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_TRUNK:
    -            self._populate_member_segments(context, network, add_segments,
    -                                           n1kv.SEGMENT_ADD)
    -            self._populate_member_segments(context, network, del_segments,
    -                                           n1kv.SEGMENT_DEL)
    -            body['mode'] = c_const.NETWORK_TYPE_TRUNK
    -            body['segmentType'] = profile['sub_type']
    -            body['addSegments'] = network['add_segment_list']
    -            body['delSegments'] = network['del_segment_list']
    -            LOG.debug('add_segments=%s', body['addSegments'])
    -            LOG.debug('del_segments=%s', body['delSegments'])
    -            if profile['sub_type'] == c_const.NETWORK_TYPE_OVERLAY:
    -                encap_profile = (network['id'] +
    -                                 c_const.ENCAPSULATION_PROFILE_SUFFIX)
    -                encap_dict = {'name': encap_profile,
    -                              'addMappings': (
    -                                  self._get_encap_segments(context,
    -                                                           add_segments)),
    -                              'delMappings': (
    -                                  self._get_encap_segments(context,
    -                                                           del_segments))}
    -                n1kvclient.update_encapsulation_profile(context, encap_profile,
    -                                                        encap_dict)
    -        n1kvclient.update_network_segment(network['id'], body)
    -
    -    def _send_delete_network_request(self, context, network):
    -        """
    -        Send delete network request to VSM.
    -
    -        Delete bridge domain if network is of type Overlay.
    -        Delete encapsulation profile if network is of type OVERLAY Trunk.
    -        :param context: neutron api request context
    -        :param network: network dictionary
    -        """
    -        LOG.debug('_send_delete_network_request: %s', network['id'])
    -        n1kvclient = n1kv_client.Client()
    -        session = context.session
    -        if network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_OVERLAY:
    -            name = network['id'] + c_const.BRIDGE_DOMAIN_SUFFIX
    -            n1kvclient.delete_bridge_domain(name)
    -        elif network[providernet.NETWORK_TYPE] == c_const.NETWORK_TYPE_TRUNK:
    -            profile = self.get_network_profile(
    -                context, network[n1kv.PROFILE_ID])
    -            if profile['sub_type'] == c_const.NETWORK_TYPE_OVERLAY:
    -                profile_name = (network['id'] +
    -                                c_const.ENCAPSULATION_PROFILE_SUFFIX)
    -                n1kvclient.delete_encapsulation_profile(profile_name)
    -        elif (network[providernet.NETWORK_TYPE] ==
    -                c_const.NETWORK_TYPE_MULTI_SEGMENT):
    -            encap_dict = n1kv_db_v2.get_multi_segment_encap_dict(session,
    -                                                                 network['id'])
    -            for profile in encap_dict:
    -                profile_dict = {'name': profile,
    -                                'addSegments': [],
    -                                'delSegments': []}
    -                for segment_pair in encap_dict[profile]:
    -                    mapping_dict = {}
    -                    (segment1, segment2) = segment_pair
    -                    self._extend_mapping_dict(context,
    -                                              mapping_dict, segment1)
    -                    self._extend_mapping_dict(context,
    -                                              mapping_dict, segment2)
    -                    profile_dict['delSegments'].append(mapping_dict)
    -                n1kvclient.update_encapsulation_profile(context, profile,
    -                                                        profile_dict)
    -        n1kvclient.delete_network_segment(network['id'])
    -
    -    def _send_create_subnet_request(self, context, subnet):
    -        """
    -        Send create subnet request to VSM.
    -
    -        :param context: neutron api request context
    -        :param subnet: subnet dictionary
    -        """
    -        LOG.debug('_send_create_subnet_request: %s', subnet['id'])
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.create_ip_pool(subnet)
    -
    -    def _send_update_subnet_request(self, subnet):
    -        """
    -        Send update subnet request to VSM.
    -
    -        :param subnet: subnet dictionary
    -        """
    -        LOG.debug('_send_update_subnet_request: %s', subnet['name'])
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.update_ip_pool(subnet)
    -
    -    def _send_delete_subnet_request(self, context, subnet):
    -        """
    -        Send delete subnet request to VSM.
    -
    -        :param context: neutron api request context
    -        :param subnet: subnet dictionary
    -        """
    -        LOG.debug('_send_delete_subnet_request: %s', subnet['name'])
    -        body = {'ipPool': subnet['id'], 'deleteSubnet': True}
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.update_network_segment(subnet['network_id'], body=body)
    -        n1kvclient.delete_ip_pool(subnet['id'])
    -
    -    def _send_create_port_request(self,
    -                                  context,
    -                                  port,
    -                                  port_count,
    -                                  policy_profile,
    -                                  vm_network_name):
    -        """
    -        Send create port request to VSM.
    -
    -        Create a VM network for a network and policy profile combination.
    -        If the VM network already exists, bind this port to the existing
    -        VM network on the VSM.
    -        :param context: neutron api request context
    -        :param port: port dictionary
    -        :param port_count: integer representing the number of ports in one
    -                           VM Network
    -        :param policy_profile: object of type policy profile
    -        :param vm_network_name: string representing the name of the VM
    -                                network
    -        """
    -        LOG.debug('_send_create_port_request: %s', port)
    -        n1kvclient = n1kv_client.Client()
    -        if port_count == 1:
    -            n1kvclient.create_vm_network(port,
    -                                         vm_network_name,
    -                                         policy_profile)
    -        else:
    -            n1kvclient.create_n1kv_port(port, vm_network_name)
    -
    -    def _send_update_port_request(self, port_id, mac_address, vm_network_name):
    -        """
    -        Send update port request to VSM.
    -
    -        :param port_id: UUID representing port to update
    -        :param mac_address: string representing the mac address
    -        :param vm_network_name: VM network name to which the port is bound
    -        """
    -        LOG.debug('_send_update_port_request: %s', port_id)
    -        body = {'portId': port_id,
    -                'macAddress': mac_address}
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.update_n1kv_port(vm_network_name, port_id, body)
    -
    -    def _send_delete_port_request(self, context, port, vm_network):
    -        """
    -        Send delete port request to VSM.
    -
    -        Delete the port on the VSM.
    -        :param context: neutron api request context
    -        :param port: port object which is to be deleted
    -        :param vm_network: VM network object with which the port is associated
    -        """
    -        LOG.debug('_send_delete_port_request: %s', port['id'])
    -        n1kvclient = n1kv_client.Client()
    -        n1kvclient.delete_n1kv_port(vm_network['name'], port['id'])
    -
    -    def _get_segmentation_id(self, context, id):
    -        """
    -        Retrieve segmentation ID for a given network.
    -
    -        :param context: neutron api request context
    -        :param id: UUID of the network
    -        :returns: segmentation ID for the network
    -        """
    -        session = context.session
    -        binding = n1kv_db_v2.get_network_binding(session, id)
    -        return binding.segmentation_id
    -
    -    def create_network(self, context, network):
    -        """
    -        Create network based on network profile.
    -
    -        :param context: neutron api request context
    -        :param network: network dictionary
    -        :returns: network object
    -        """
    -        (network_type, physical_network,
    -         segmentation_id) = self._process_provider_create(context,
    -                                                          network['network'])
    -        profile_id = self._process_network_profile(context, network['network'])
    -        segment_pairs = None
    -        LOG.debug('Create network: profile_id=%s', profile_id)
    -        session = context.session
    -        with session.begin(subtransactions=True):
    -            if not network_type:
    -                # tenant network
    -                (physical_network, network_type, segmentation_id,
    -                    multicast_ip) = n1kv_db_v2.alloc_network(session,
    -                                                             profile_id,
    -                                                             context.tenant_id)
    -                LOG.debug('Physical_network %(phy_net)s, '
    -                          'seg_type %(net_type)s, '
    -                          'seg_id %(seg_id)s, '
    -                          'multicast_ip %(multicast_ip)s',
    -                          {'phy_net': physical_network,
    -                           'net_type': network_type,
    -                           'seg_id': segmentation_id,
    -                           'multicast_ip': multicast_ip})
    -                if network_type == c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -                    segment_pairs = (
    -                        self._parse_multi_segments(context, network['network'],
    -                                                   n1kv.SEGMENT_ADD))
    -                    LOG.debug('Seg list %s ', segment_pairs)
    -                elif network_type == c_const.NETWORK_TYPE_TRUNK:
    -                    network_profile = self.get_network_profile(context,
    -                                                               profile_id)
    -                    segment_pairs = (
    -                        self._parse_trunk_segments(context, network['network'],
    -                                                   n1kv.SEGMENT_ADD,
    -                                                   physical_network,
    -                                                   network_profile['sub_type']
    -                                                   ))
    -                    LOG.debug('Seg list %s ', segment_pairs)
    -                else:
    -                    if not segmentation_id:
    -                        raise n_exc.TenantNetworksDisabled()
    -            else:
    -                # provider network
    -                if network_type == c_const.NETWORK_TYPE_VLAN:
    -                    network_profile = self.get_network_profile(context,
    -                                                               profile_id)
    -                    seg_min, seg_max = self._get_segment_range(
    -                        network_profile['segment_range'])
    -                    if not seg_min <= segmentation_id <= seg_max:
    -                        raise cisco_exceptions.VlanIDOutsidePool()
    -                    n1kv_db_v2.reserve_specific_vlan(session,
    -                                                     physical_network,
    -                                                     segmentation_id)
    -                    multicast_ip = "0.0.0.0"
    -            net = super(N1kvNeutronPluginV2, self).create_network(context,
    -                                                                  network)
    -            n1kv_db_v2.add_network_binding(session,
    -                                           net['id'],
    -                                           network_type,
    -                                           physical_network,
    -                                           segmentation_id,
    -                                           multicast_ip,
    -                                           profile_id,
    -                                           segment_pairs)
    -            self._process_l3_create(context, net, network['network'])
    -            self._extend_network_dict_provider(context, net)
    -            self._extend_network_dict_profile(context, net)
    -        try:
    -            if network_type == c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -                self._send_add_multi_segment_request(context, net['id'],
    -                                                     segment_pairs)
    -            else:
    -                self._send_create_network_request(context, net, segment_pairs)
    -        except(cisco_exceptions.VSMError,
    -               cisco_exceptions.VSMConnectionFailed):
    -            with excutils.save_and_reraise_exception():
    -                self._delete_network_db(context, net['id'])
    -        else:
    -            LOG.debug("Created network: %s", net['id'])
    -            return net
    -
    -    def update_network(self, context, id, network):
    -        """
    -        Update network parameters.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing the network to update
    -        :returns: updated network object
    -        """
    -        self._check_provider_update(context, network['network'])
    -        add_segments = []
    -        del_segments = []
    -
    -        session = context.session
    -        with session.begin(subtransactions=True):
    -            net = super(N1kvNeutronPluginV2, self).update_network(context, id,
    -                                                                  network)
    -            self._process_l3_update(context, net, network['network'])
    -            binding = n1kv_db_v2.get_network_binding(session, id)
    -            if binding.network_type == c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -                add_segments = (
    -                    self._parse_multi_segments(context, network['network'],
    -                                               n1kv.SEGMENT_ADD))
    -                n1kv_db_v2.add_multi_segment_binding(session,
    -                                                     net['id'], add_segments)
    -                del_segments = (
    -                    self._parse_multi_segments(context, network['network'],
    -                                               n1kv.SEGMENT_DEL))
    -                self._send_add_multi_segment_request(context, net['id'],
    -                                                     add_segments)
    -                self._send_del_multi_segment_request(context, net['id'],
    -                                                     del_segments)
    -                n1kv_db_v2.del_multi_segment_binding(session,
    -                                                     net['id'], del_segments)
    -            elif binding.network_type == c_const.NETWORK_TYPE_TRUNK:
    -                network_profile = self.get_network_profile(context,
    -                                                           binding.profile_id)
    -                add_segments = (
    -                    self._parse_trunk_segments(context, network['network'],
    -                                               n1kv.SEGMENT_ADD,
    -                                               binding.physical_network,
    -                                               network_profile['sub_type']))
    -                n1kv_db_v2.add_trunk_segment_binding(session,
    -                                                     net['id'], add_segments)
    -                del_segments = (
    -                    self._parse_trunk_segments(context, network['network'],
    -                                               n1kv.SEGMENT_DEL,
    -                                               binding.physical_network,
    -                                               network_profile['sub_type']))
    -                n1kv_db_v2.del_trunk_segment_binding(session,
    -                                                     net['id'], del_segments)
    -            self._extend_network_dict_provider(context, net)
    -            self._extend_network_dict_profile(context, net)
    -            if binding.network_type != c_const.NETWORK_TYPE_MULTI_SEGMENT:
    -                self._send_update_network_request(context, net, add_segments,
    -                                                  del_segments)
    -            LOG.debug("Updated network: %s", net['id'])
    -            return net
    -
    -    def delete_network(self, context, id):
    -        """
    -        Delete a network.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing the network to delete
    -        """
    -        session = context.session
    -        with session.begin(subtransactions=True):
    -            network = self.get_network(context, id)
    -            if network['subnets']:
    -                msg = _("Cannot delete network '%s', "
    -                        "delete the associated subnet first") % network['name']
    -                raise n_exc.InvalidInput(error_message=msg)
    -            if n1kv_db_v2.is_trunk_member(session, id):
    -                msg = _("Cannot delete network '%s' "
    -                        "that is member of a trunk segment") % network['name']
    -                raise n_exc.InvalidInput(error_message=msg)
    -            if n1kv_db_v2.is_multi_segment_member(session, id):
    -                msg = _("Cannot delete network '%s' that is a member of a "
    -                        "multi-segment network") % network['name']
    -                raise n_exc.InvalidInput(error_message=msg)
    -            self._delete_network_db(context, id)
    -            # the network_binding record is deleted via cascade from
    -            # the network record, so explicit removal is not necessary
    -        self._send_delete_network_request(context, network)
    -        LOG.debug("Deleted network: %s", id)
    -
    -    def _delete_network_db(self, context, id):
    -        session = context.session
    -        with session.begin(subtransactions=True):
    -            binding = n1kv_db_v2.get_network_binding(session, id)
    -            if binding.network_type == c_const.NETWORK_TYPE_OVERLAY:
    -                n1kv_db_v2.release_vxlan(session, binding.segmentation_id)
    -            elif binding.network_type == c_const.NETWORK_TYPE_VLAN:
    -                n1kv_db_v2.release_vlan(session, binding.physical_network,
    -                                        binding.segmentation_id)
    -            super(N1kvNeutronPluginV2, self).delete_network(context, id)
    -
    -    def get_network(self, context, id, fields=None):
    -        """
    -        Retrieve a Network.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing the network to fetch
    -        :returns: requested network dictionary
    -        """
    -        LOG.debug("Get network: %s", id)
    -        net = super(N1kvNeutronPluginV2, self).get_network(context, id, None)
    -        self._extend_network_dict_provider(context, net)
    -        self._extend_network_dict_profile(context, net)
    -        self._extend_network_dict_member_segments(context, net)
    -        return self._fields(net, fields)
    -
    -    def get_networks(self, context, filters=None, fields=None):
    -        """
    -        Retrieve a list of networks.
    -
    -        :param context: neutron api request context
    -        :param filters: a dictionary with keys that are valid keys for a
    -                        network object. Values in this dictionary are an
    -                        iterable containing values that will be used for an
    -                        exact match comparison for that value. Each result
    -                        returned by this function will have matched one of the
    -                        values for each key in filters
    -        :param fields: a list of strings that are valid keys in a network
    -                        dictionary. Only these fields will be returned.
    -        :returns: list of network dictionaries.
    -        """
    -        LOG.debug("Get networks")
    -        nets = super(N1kvNeutronPluginV2, self).get_networks(context, filters,
    -                                                             None)
    -        for net in nets:
    -            self._extend_network_dict_provider(context, net)
    -            self._extend_network_dict_profile(context, net)
    -
    -        return [self._fields(net, fields) for net in nets]
    -
    -    def create_port(self, context, port):
    -        """
    -        Create neutron port.
    -
    -        Create a port. Use a default policy profile for ports created for dhcp
    -        and router interface. Default policy profile name is configured in the
    -        /etc/neutron/cisco_plugins.ini file.
    -
    -        :param context: neutron api request context
    -        :param port: port dictionary
    -        :returns: port object
    -        """
    -        p_profile = None
    -        port_count = None
    -        vm_network = None
    -        vm_network_name = None
    -        profile_id_set = False
    -
    -        # Set the network policy profile id for auto generated L3/DHCP ports
    -        if ('device_id' in port['port'] and port['port']['device_owner'] in
    -            [constants.DEVICE_OWNER_DHCP, constants.DEVICE_OWNER_ROUTER_INTF,
    -             constants.DEVICE_OWNER_ROUTER_GW,
    -             constants.DEVICE_OWNER_FLOATINGIP]):
    -            p_profile_name = c_conf.CISCO_N1K.network_node_policy_profile
    -            p_profile = self._get_policy_profile_by_name(p_profile_name)
    -            if p_profile:
    -                port['port']['n1kv:profile_id'] = p_profile['id']
    -
    -        if n1kv.PROFILE_ID in port['port']:
    -            profile_id = port['port'].get(n1kv.PROFILE_ID)
    -            profile_id_set = attributes.is_attr_set(profile_id)
    -
    -        # Set the default policy profile id for ports if no id is set
    -        if not profile_id_set:
    -            p_profile_name = c_conf.CISCO_N1K.default_policy_profile
    -            p_profile = self._get_policy_profile_by_name(p_profile_name)
    -            if p_profile:
    -                port['port']['n1kv:profile_id'] = p_profile['id']
    -                profile_id_set = True
    -
    -        profile_id = self._process_policy_profile(context,
    -                                                  port['port'])
    -        LOG.debug('Create port: profile_id=%s', profile_id)
    -        session = context.session
    -        with session.begin(subtransactions=True):
    -            pt = super(N1kvNeutronPluginV2, self).create_port(context,
    -                                                              port)
    -            n1kv_db_v2.add_port_binding(session, pt['id'], profile_id)
    -            self._extend_port_dict_profile(context, pt)
    -            try:
    -                vm_network = n1kv_db_v2.get_vm_network(
    -                    context.session,
    -                    profile_id,
    -                    pt['network_id'])
    -            except cisco_exceptions.VMNetworkNotFound:
    -                # Create a VM Network if no VM network exists.
    -                vm_network_name = "%s%s_%s" % (c_const.VM_NETWORK_NAME_PREFIX,
    -                                               profile_id,
    -                                               pt['network_id'])
    -                port_count = 1
    -                vm_network = n1kv_db_v2.add_vm_network(context.session,
    -                                                       vm_network_name,
    -                                                       profile_id,
    -                                                       pt['network_id'],
    -                                                       port_count)
    -            else:
    -                # Update port count of the VM network.
    -                vm_network_name = vm_network['name']
    -                port_count = vm_network['port_count'] + 1
    -                n1kv_db_v2.update_vm_network_port_count(context.session,
    -                                                        vm_network_name,
    -                                                        port_count)
    -            self._process_portbindings_create_and_update(context,
    -                                                         port['port'],
    -                                                         pt)
    -            # Extract policy profile for VM network create in VSM.
    -            if not p_profile:
    -                p_profile = n1kv_db_v2.get_policy_profile(session, profile_id)
    -        try:
    -            self._send_create_port_request(context,
    -                                           pt,
    -                                           port_count,
    -                                           p_profile,
    -                                           vm_network_name)
    -        except(cisco_exceptions.VSMError,
    -               cisco_exceptions.VSMConnectionFailed):
    -            with excutils.save_and_reraise_exception():
    -                self._delete_port_db(context, pt, vm_network)
    -        else:
    -            LOG.debug("Created port: %s", pt)
    -            return pt
    -
    -    def update_port(self, context, id, port):
    -        """
    -        Update port parameters.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing the port to update
    -        :returns: updated port object
    -        """
    -        LOG.debug("Update port: %s", id)
    -        with context.session.begin(subtransactions=True):
    -            updated_port = super(N1kvNeutronPluginV2,
    -                                 self).update_port(context, id, port)
    -            self._process_portbindings_create_and_update(context,
    -                                                         port['port'],
    -                                                         updated_port)
    -            self._extend_port_dict_profile(context, updated_port)
    -        return updated_port
    -
    -    @property
    -    def l3plugin(self):
    -        try:
    -            return self._l3plugin
    -        except AttributeError:
    -            self._l3plugin = manager.NeutronManager.get_service_plugins().get(
    -                svc_constants.L3_ROUTER_NAT)
    -            return self._l3plugin
    -
    -    def delete_port(self, context, id, l3_port_check=True):
    -        """
    -        Delete a port.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing the port to delete
    -        """
    -        # if needed, check to see if this is a port owned by
    -        # and l3-router.  If so, we should prevent deletion.
    -        if self.l3plugin and l3_port_check:
    -            self.l3plugin.prevent_l3_port_deletion(context, id)
    -        with context.session.begin(subtransactions=True):
    -            port = self.get_port(context, id)
    -            vm_network = n1kv_db_v2.get_vm_network(context.session,
    -                                                   port[n1kv.PROFILE_ID],
    -                                                   port['network_id'])
    -            if self.l3plugin:
    -                self.l3plugin.disassociate_floatingips(context, id,
    -                                                       do_notify=False)
    -            self._delete_port_db(context, port, vm_network)
    -
    -        self._send_delete_port_request(context, port, vm_network)
    -
    -    def _delete_port_db(self, context, port, vm_network):
    -        with context.session.begin(subtransactions=True):
    -            vm_network['port_count'] -= 1
    -            n1kv_db_v2.update_vm_network_port_count(context.session,
    -                                                    vm_network['name'],
    -                                                    vm_network['port_count'])
    -            if vm_network['port_count'] == 0:
    -                n1kv_db_v2.delete_vm_network(context.session,
    -                                             port[n1kv.PROFILE_ID],
    -                                             port['network_id'])
    -            super(N1kvNeutronPluginV2, self).delete_port(context, port['id'])
    -
    -    def get_port(self, context, id, fields=None):
    -        """
    -        Retrieve a port.
    -        :param context: neutron api request context
    -        :param id: UUID representing the port to retrieve
    -        :param fields: a list of strings that are valid keys in a port
    -                       dictionary. Only these fields will be returned.
    -        :returns: port dictionary
    -        """
    -        LOG.debug("Get port: %s", id)
    -        port = super(N1kvNeutronPluginV2, self).get_port(context, id, None)
    -        self._extend_port_dict_profile(context, port)
    -        return self._fields(port, fields)
    -
    -    def get_ports(self, context, filters=None, fields=None):
    -        """
    -        Retrieve a list of ports.
    -
    -        :param context: neutron api request context
    -        :param filters: a dictionary with keys that are valid keys for a
    -                        port object. Values in this dictionary are an
    -                        iterable containing values that will be used for an
    -                        exact match comparison for that value. Each result
    -                        returned by this function will have matched one of the
    -                        values for each key in filters
    -        :params fields: a list of strings that are valid keys in a port
    -                        dictionary. Only these fields will be returned.
    -        :returns: list of port dictionaries
    -        """
    -        LOG.debug("Get ports")
    -        ports = super(N1kvNeutronPluginV2, self).get_ports(context, filters,
    -                                                           None)
    -        for port in ports:
    -            self._extend_port_dict_profile(context, port)
    -
    -        return [self._fields(port, fields) for port in ports]
    -
    -    def create_subnet(self, context, subnet):
    -        """
    -        Create subnet for a given network.
    -
    -        :param context: neutron api request context
    -        :param subnet: subnet dictionary
    -        :returns: subnet object
    -        """
    -        LOG.debug('Create subnet')
    -        sub = super(N1kvNeutronPluginV2, self).create_subnet(context, subnet)
    -        try:
    -            self._send_create_subnet_request(context, sub)
    -        except(cisco_exceptions.VSMError,
    -               cisco_exceptions.VSMConnectionFailed):
    -            with excutils.save_and_reraise_exception():
    -                super(N1kvNeutronPluginV2,
    -                      self).delete_subnet(context, sub['id'])
    -        else:
    -            LOG.debug("Created subnet: %s", sub['id'])
    -            return sub
    -
    -    def update_subnet(self, context, id, subnet):
    -        """
    -        Update a subnet.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing subnet to update
    -        :returns: updated subnet object
    -        """
    -        LOG.debug('Update subnet')
    -        sub = super(N1kvNeutronPluginV2, self).update_subnet(context,
    -                                                             id,
    -                                                             subnet)
    -        self._send_update_subnet_request(sub)
    -        return sub
    -
    -    def delete_subnet(self, context, id):
    -        """
    -        Delete a subnet.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing subnet to delete
    -        :returns: deleted subnet object
    -        """
    -        LOG.debug('Delete subnet: %s', id)
    -        subnet = self.get_subnet(context, id)
    -        self._send_delete_subnet_request(context, subnet)
    -        return super(N1kvNeutronPluginV2, self).delete_subnet(context, id)
    -
    -    def get_subnet(self, context, id, fields=None):
    -        """
    -        Retrieve a subnet.
    -
    -        :param context: neutron api request context
    -        :param id: UUID representing subnet to retrieve
    -        :params fields: a list of strings that are valid keys in a subnet
    -                        dictionary. Only these fields will be returned.
    -        :returns: subnet object
    -        """
    -        LOG.debug("Get subnet: %s", id)
    -        subnet = super(N1kvNeutronPluginV2, self).get_subnet(context, id,
    -                                                             None)
    -        return self._fields(subnet, fields)
    -
    -    def get_subnets(self, context, filters=None, fields=None):
    -        """
    -        Retrieve a list of subnets.
    -
    -        :param context: neutron api request context
    -        :param filters: a dictionary with keys that are valid keys for a
    -                        subnet object. Values in this dictionary are an
    -                        iterable containing values that will be used for an
    -                        exact match comparison for that value. Each result
    -                        returned by this function will have matched one of the
    -                        values for each key in filters
    -        :params fields: a list of strings that are valid keys in a subnet
    -                        dictionary. Only these fields will be returned.
    -        :returns: list of dictionaries of subnets
    -        """
    -        LOG.debug("Get subnets")
    -        subnets = super(N1kvNeutronPluginV2, self).get_subnets(context,
    -                                                               filters,
    -                                                               None)
    -        return [self._fields(subnet, fields) for subnet in subnets]
    -
    -    def create_network_profile(self, context, network_profile):
    -        """
    -        Create a network profile.
    -
    -        Create a network profile, which represents a po
    ... [truncated]
    
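The removed N1kv code above refcounts ports per VM network: creating a port increments `port_count` (creating the VM network record at a count of 1 if none exists), and deleting a port decrements it and drops the record at zero. A standalone sketch of that bookkeeping, with illustrative class and key names (not Neutron's):

```python
class VMNetworkStore:
    """Refcount ports per (profile_id, network_id) pair."""

    def __init__(self):
        self._networks = {}

    def add_port(self, profile_id, network_id):
        key = (profile_id, network_id)
        if key not in self._networks:
            # First port on this pair: create the VM network at count 1.
            self._networks[key] = {"name": "vmn_%s_%s" % key, "port_count": 1}
        else:
            # VM network already exists: just bump the port count.
            self._networks[key]["port_count"] += 1
        return self._networks[key]

    def remove_port(self, profile_id, network_id):
        key = (profile_id, network_id)
        vm_net = self._networks[key]
        vm_net["port_count"] -= 1
        if vm_net["port_count"] == 0:
            # Last port gone: delete the VM network record.
            del self._networks[key]


store = VMNetworkStore()
store.add_port("p1", "n1")
store.add_port("p1", "n1")
store.remove_port("p1", "n1")
print(len(store._networks))  # 1
```

The real plugin wraps the same create/increment and decrement/delete steps in a database session and rolls them back if the VSM request fails.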
  • neutron/plugins/cisco/network_plugin.py+0 171 removed
    @@ -1,171 +0,0 @@
    -# Copyright 2012 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from oslo_log import log as logging
    -from oslo_utils import importutils
    -import webob.exc as wexc
    -
    -from neutron.api import extensions as neutron_extensions
    -from neutron.api.v2 import base
    -from neutron.db import db_base_plugin_v2
    -from neutron.plugins.cisco.common import cisco_exceptions as cexc
    -from neutron.plugins.cisco.common import config
    -from neutron.plugins.cisco.db import network_db_v2 as cdb
    -from neutron.plugins.cisco import extensions
    -
    -LOG = logging.getLogger(__name__)
    -
    -
    -class PluginV2(db_base_plugin_v2.NeutronDbPluginV2):
    -    """Meta-Plugin with v2 API support for multiple sub-plugins."""
    -    _supported_extension_aliases = ["credential", "Cisco qos"]
    -    _methods_to_delegate = ['create_network',
    -                            'delete_network', 'update_network', 'get_network',
    -                            'get_networks',
    -                            'create_port', 'delete_port',
    -                            'update_port', 'get_port', 'get_ports',
    -                            'create_subnet',
    -                            'delete_subnet', 'update_subnet',
    -                            'get_subnet', 'get_subnets', ]
    -
    -    CISCO_FAULT_MAP = {
    -        cexc.CredentialAlreadyExists: wexc.HTTPBadRequest,
    -        cexc.CredentialNameNotFound: wexc.HTTPNotFound,
    -        cexc.CredentialNotFound: wexc.HTTPNotFound,
    -        cexc.NetworkSegmentIDNotFound: wexc.HTTPNotFound,
    -        cexc.NetworkVlanBindingAlreadyExists: wexc.HTTPBadRequest,
    -        cexc.NexusComputeHostNotConfigured: wexc.HTTPNotFound,
    -        cexc.NexusConfigFailed: wexc.HTTPBadRequest,
    -        cexc.NexusConnectFailed: wexc.HTTPServiceUnavailable,
    -        cexc.NexusPortBindingNotFound: wexc.HTTPNotFound,
    -        cexc.NoMoreNics: wexc.HTTPBadRequest,
    -        cexc.PortIdForNexusSvi: wexc.HTTPBadRequest,
    -        cexc.PortVnicBindingAlreadyExists: wexc.HTTPBadRequest,
    -        cexc.PortVnicNotFound: wexc.HTTPNotFound,
    -        cexc.QosNameAlreadyExists: wexc.HTTPBadRequest,
    -        cexc.QosNotFound: wexc.HTTPNotFound,
    -        cexc.SubnetNotSpecified: wexc.HTTPBadRequest,
    -        cexc.VlanIDNotAvailable: wexc.HTTPNotFound,
    -        cexc.VlanIDNotFound: wexc.HTTPNotFound,
    -    }
    -
    -    @property
    -    def supported_extension_aliases(self):
    -        if not hasattr(self, '_aliases'):
    -            aliases = self._supported_extension_aliases[:]
    -            if hasattr(self._model, "supported_extension_aliases"):
    -                aliases.extend(self._model.supported_extension_aliases)
    -            self._aliases = aliases
    -        return self._aliases
    -
    -    def __init__(self):
    -        """Load the model class."""
    -        self._model_name = config.CISCO.model_class
    -        self._model = importutils.import_object(self._model_name)
    -        native_bulk_attr_name = ("_%s__native_bulk_support"
    -                                 % self._model.__class__.__name__)
    -        self.__native_bulk_support = getattr(self._model,
    -                                             native_bulk_attr_name, False)
    -
    -        neutron_extensions.append_api_extensions_path(extensions.__path__)
    -
    -        # Extend the fault map
    -        self._extend_fault_map()
    -
    -        LOG.debug("Plugin initialization complete")
    -
    -    def __getattribute__(self, name):
    -        """Delegate core API calls to the model class.
    -
    -        Core API calls are delegated directly to the configured model class.
    -        Note: Bulking calls will be handled by this class, and turned into
    -        non-bulking calls to be considered for delegation.
    -        """
    -        methods = object.__getattribute__(self, "_methods_to_delegate")
    -        if name in methods:
    -            return getattr(object.__getattribute__(self, "_model"),
    -                           name)
    -        else:
    -            return object.__getattribute__(self, name)
    -
    -    def __getattr__(self, name):
    -        """Delegate calls to the extensions.
    -
    -        This delegates the calls to the extensions explicitly implemented by
    -        the model.
    -        """
    -        if hasattr(self._model, name):
    -            return getattr(self._model, name)
    -        else:
    -            # Must make sure we re-raise the error that led us here, since
    -            # otherwise getattr() and even hasattr() doesn't work correctly.
    -            raise AttributeError(
    -                _("'%(model)s' object has no attribute '%(name)s'") %
    -                {'model': self._model_name, 'name': name})
    -
    -    def _extend_fault_map(self):
    -        """Extend the Neutron Fault Map for Cisco exceptions.
    -
    -        Map exceptions which are specific to the Cisco Plugin
    -        to standard HTTP exceptions.
    -
    -        """
    -        base.FAULT_MAP.update(self.CISCO_FAULT_MAP)
    -
    -    #
    -    # Extension API implementation
    -    #
    -    def get_all_qoss(self, tenant_id):
    -        """Get all QoS levels."""
    -        LOG.debug("get_all_qoss() called")
    -        qoslist = cdb.get_all_qoss(tenant_id)
    -        return qoslist
    -
    -    def get_qos_details(self, tenant_id, qos_id):
    -        """Get QoS Details."""
    -        LOG.debug("get_qos_details() called")
    -        return cdb.get_qos(tenant_id, qos_id)
    -
    -    def create_qos(self, tenant_id, qos_name, qos_desc):
    -        """Create a QoS level."""
    -        LOG.debug("create_qos() called")
    -        qos = cdb.add_qos(tenant_id, qos_name, str(qos_desc))
    -        return qos
    -
    -    def delete_qos(self, tenant_id, qos_id):
    -        """Delete a QoS level."""
    -        LOG.debug("delete_qos() called")
    -        return cdb.remove_qos(tenant_id, qos_id)
    -
    -    def rename_qos(self, tenant_id, qos_id, new_name):
    -        """Rename QoS level."""
    -        LOG.debug("rename_qos() called")
    -        return cdb.update_qos(tenant_id, qos_id, new_name)
    -
    -    def get_all_credentials(self):
    -        """Get all credentials."""
    -        LOG.debug("get_all_credentials() called")
    -        credential_list = cdb.get_all_credentials()
    -        return credential_list
    -
    -    def get_credential_details(self, credential_id):
    -        """Get a particular credential."""
    -        LOG.debug("get_credential_details() called")
    -        return cdb.get_credential(credential_id)
    -
    -    def rename_credential(self, credential_id, new_name, new_password):
    -        """Rename the particular credential resource."""
    -        LOG.debug("rename_credential() called")
    -        return cdb.update_credential(credential_id, new_name,
    -                                     new_password=new_password)
    
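The removed meta-plugin routes a fixed list of core API calls to a configured model class via `__getattribute__`, while everything else resolves normally. A minimal sketch of that delegation pattern, with illustrative class names (only the `__getattribute__` mechanics mirror the code above):

```python
class Model:
    """Stand-in for the configured sub-plugin model."""

    def get_network(self, network_id):
        return {"id": network_id, "source": "model"}


class MetaPlugin:
    """Delegates a fixed set of core calls to the model instance."""

    _methods_to_delegate = ["get_network"]

    def __init__(self):
        self._model = Model()

    def __getattribute__(self, name):
        # Look the delegate list up via object.__getattribute__ to
        # avoid recursing back into this method.
        methods = object.__getattribute__(self, "_methods_to_delegate")
        if name in methods:
            # Core API calls bypass this class entirely.
            return getattr(object.__getattribute__(self, "_model"), name)
        return object.__getattribute__(self, name)


plugin = MetaPlugin()
print(plugin.get_network("net-1")["source"])  # model
```

Using `__getattribute__` (rather than `__getattr__`) means the delegation wins even when the meta-plugin's base class defines the same method.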
  • neutron/plugins/cisco/README+0 7 removed
    @@ -1,7 +0,0 @@
    -Cisco Neutron Virtual Network Plugin
    -
    -This plugin implements Neutron v2 APIs and helps configure
    -topologies consisting of virtual and physical switches.
    -
    -For more details on use please refer to:
    -http://wiki.openstack.org/cisco-neutron
    
  • neutron/plugins/cisco/service_plugins/cisco_router_plugin.py+0 24 removed
    @@ -1,24 +0,0 @@
    -# Copyright 2015 Cisco Systems, Inc.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from networking_cisco.plugins.cisco.service_plugins import cisco_router_plugin
    -
    -
    -class CiscoRouterPluginRpcCallbacks(
    -    cisco_router_plugin.CiscoRouterPluginRpcCallbacks):
    -    pass
    -
    -
    -class CiscoRouterPlugin(cisco_router_plugin.CiscoRouterPlugin):
    -    pass
    
  • neutron/plugins/cisco/service_plugins/__init__.py+0 0 removed
  • neutron/plugins/cisco/service_plugins/requirements.txt+0 1 removed
    @@ -1 +0,0 @@
    -networking-cisco
    
  • neutron/plugins/ml2/driver_api.py+8 0 modified
    @@ -888,6 +888,14 @@ def check_vlan_transparency(self, context):
             """
             pass
     
    +    def get_workers(self):
    +        """Get any NeutronWorker instances that should have their own process
    +
    +        Any driver that needs to run processes separate from the API or RPC
    +        workers, can return a sequence of NeutronWorker instances.
    +        """
    +        return ()
    +
     
     @six.add_metaclass(abc.ABCMeta)
     class ExtensionDriver(object):
    
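The new `get_workers()` hook lets a mechanism driver hand back worker objects that should run in their own processes, separate from the API and RPC workers; the default returns an empty tuple. A hedged sketch of how a driver might override it (`PollingWorker` and `MyMechDriver` are hypothetical names, not part of Neutron):

```python
class NeutronWorkerBase:
    """Minimal stand-in for a Neutron worker interface."""

    def start(self):
        raise NotImplementedError


class PollingWorker(NeutronWorkerBase):
    """Hypothetical worker that would poll an external controller."""

    def start(self):
        print("polling worker started")


class MyMechDriver:
    def get_workers(self):
        # Return a sequence of workers that need their own process;
        # drivers with no such needs keep the default empty tuple.
        return (PollingWorker(),)
```

Returning a tuple (even when empty) keeps the contract uniform: the service layer can always iterate the result without a None check.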
  • neutron/plugins/ml2/drivers/l2pop/mech_driver.py+4 4 modified
    @@ -76,7 +76,7 @@ def _fixed_ips_changed(self, context, orig, port, diff_ips):
                 context, orig, agent_host)
             if not port_infos:
                 return
    -        agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
    +        agent, agent_ip, segment, port_fdb_entries = port_infos
     
             orig_mac_ip = [l2pop_rpc.PortInfo(mac_address=port['mac_address'],
                                               ip_address=ip)
    @@ -182,7 +182,7 @@ def _get_port_infos(self, context, port, agent_host):
     
             fdb_entries = self._get_port_fdb_entries(port)
     
    -        return agent, agent_host, agent_ip, segment, fdb_entries
    +        return agent, agent_ip, segment, fdb_entries
     
         def _create_agent_fdb(self, session, agent, segment, network_id):
             agent_fdb_entries = {network_id:
    @@ -227,7 +227,7 @@ def _update_port_up(self, context):
             port_infos = self._get_port_infos(context, port, agent_host)
             if not port_infos:
                 return
    -        agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
    +        agent, agent_ip, segment, port_fdb_entries = port_infos
     
             network_id = port['network_id']
     
    @@ -268,7 +268,7 @@ def _get_agent_fdb(self, context, port, agent_host):
             port_infos = self._get_port_infos(context, port, agent_host)
             if not port_infos:
                 return
    -        agent, agent_host, agent_ip, segment, port_fdb_entries = port_infos
    +        agent, agent_ip, segment, port_fdb_entries = port_infos
     
             network_id = port['network_id']
     
    
  • neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py+7 0 modified
    @@ -18,6 +18,7 @@
     from oslo_log import log as logging
     
     from neutron.agent.linux import ip_lib
    +from neutron.common import utils
     from neutron.i18n import _LI
     
     LOG = logging.getLogger(__name__)
    @@ -32,6 +33,12 @@ def setup_arp_spoofing_protection(vif, port_details):
             LOG.info(_LI("Skipping ARP spoofing rules for port '%s' because "
                          "it has port security disabled"), vif)
             return
    +    if utils.is_port_trusted(port_details):
    +        # clear any previous entries related to this port
    +        delete_arp_spoofing_protection([vif], current_rules)
    +        LOG.debug("Skipping ARP spoofing rules for network owned port "
    +                  "'%s'.", vif)
    +        return
         # collect all of the addresses and cidrs that belong to the port
         addresses = {f['ip_address'] for f in port_details['fixed_ips']}
         if port_details.get('allowed_address_pairs'):
    
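This hunk is the heart of the fix for this advisory: the agent now skips (and clears) ARP anti-spoofing rules only for ports the networking service itself owns, as determined by `neutron.common.utils.is_port_trusted`. A simplified sketch of that check, assuming the `device_owner` prefix convention described in the CVE text:

```python
# Prefix carried by ports owned by the networking service itself
# (routers, DHCP agents, floating IPs, ...).
DEVICE_OWNER_NETWORK_PREFIX = "network:"


def is_port_trusted(port):
    """Return True if the port belongs to network infrastructure.

    Trusted ports are exempt from anti-spoofing rules, which is why
    CVE-2015-5240 hinged on a tenant flipping device_owner to the
    "network:" prefix before security group rules were applied.
    """
    return port["device_owner"].startswith(DEVICE_OWNER_NETWORK_PREFIX)


print(is_port_trusted({"device_owner": "network:router_interface"}))  # True
print(is_port_trusted({"device_owner": "compute:nova"}))              # False
```

The companion server-side fix (enforced in the patched releases) restricts who may set a `network:`-prefixed `device_owner` in the first place, closing the race this check would otherwise widen.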
  • neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py+4 0 modified
    @@ -16,6 +16,7 @@
     
     from neutron.agent.common import config
     
    +DEFAULT_BRIDGE_MAPPINGS = []
     DEFAULT_INTERFACE_MAPPINGS = []
     DEFAULT_VXLAN_GROUP = '224.0.0.1'
     
    @@ -47,6 +48,9 @@
         cfg.ListOpt('physical_interface_mappings',
                     default=DEFAULT_INTERFACE_MAPPINGS,
                     help=_("List of <physical_network>:<physical_interface>")),
    +    cfg.ListOpt('bridge_mappings',
    +                default=DEFAULT_BRIDGE_MAPPINGS,
    +                help=_("List of <physical_network>:<physical_bridge>")),
     ]
     
     agent_opts = [
    
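The new `bridge_mappings` option uses the same `<physical_network>:<name>` list form as `physical_interface_mappings`. A simplified sketch of how such a list can be folded into a lookup dict, mirroring (not reproducing) what Neutron's mapping-parsing helper does:

```python
def parse_mappings(mapping_list):
    """Turn ["physnet1:br-eth1", ...] into {"physnet1": "br-eth1", ...}."""
    mappings = {}
    for entry in mapping_list:
        physnet, _, bridge = entry.partition(":")
        physnet, bridge = physnet.strip(), bridge.strip()
        if not physnet or not bridge:
            raise ValueError("invalid mapping: %r" % entry)
        if physnet in mappings:
            raise ValueError("duplicate physical network: %r" % physnet)
        mappings[physnet] = bridge
    return mappings


print(parse_mappings(["physnet1:br-eth1"]))  # {'physnet1': 'br-eth1'}
```

Rejecting duplicate physical networks up front matters here: the agent's `validate_bridge_mappings` later assumes each physnet maps to exactly one pre-existing bridge.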
  • neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py+107 29 modified
    @@ -77,9 +77,11 @@ def __init__(self, network_type, physical_network, segmentation_id):
     
     
     class LinuxBridgeManager(object):
    -    def __init__(self, interface_mappings):
    +    def __init__(self, bridge_mappings, interface_mappings):
    +        self.bridge_mappings = bridge_mappings
             self.interface_mappings = interface_mappings
             self.validate_interface_mappings()
    +        self.validate_bridge_mappings()
             self.ip = ip_lib.IPWrapper()
             # VXLAN related parameters:
             self.local_ip = cfg.CONF.VXLAN.local_ip
    @@ -104,13 +106,26 @@ def validate_interface_mappings(self):
                               {'intf': interface, 'net': physnet})
                     sys.exit(1)
     
    +    def validate_bridge_mappings(self):
    +        for physnet, bridge in self.bridge_mappings.items():
    +            if not ip_lib.device_exists(bridge):
    +                LOG.error(_LE("Bridge %(brq)s for physical network %(net)s"
    +                              " does not exist. Agent terminated!"),
    +                          {'brq': bridge, 'net': physnet})
    +                sys.exit(1)
    +
         def interface_exists_on_bridge(self, bridge, interface):
             directory = '/sys/class/net/%s/brif' % bridge
             for filename in os.listdir(directory):
                 if filename == interface:
                     return True
             return False
     
    +    def get_existing_bridge_name(self, physical_network):
    +        if not physical_network:
    +            return None
    +        return self.bridge_mappings.get(physical_network)
    +
         def get_bridge_name(self, network_id):
             if not network_id:
                 LOG.warning(_LW("Invalid Network ID, will lead to incorrect "
    @@ -160,6 +175,11 @@ def get_all_neutron_bridges(self):
             for bridge in bridge_list:
                 if bridge.startswith(BRIDGE_NAME_PREFIX):
                     neutron_bridge_list.append(bridge)
    +
    +        # NOTE(nick-ma-z): Add pre-existing user-defined bridges
    +        for bridge_name in self.bridge_mappings.values():
    +            if bridge_name not in neutron_bridge_list:
    +                neutron_bridge_list.append(bridge_name)
             return neutron_bridge_list
     
         def get_interfaces_on_bridge(self, bridge_name):
    @@ -197,13 +217,17 @@ def is_device_on_bridge(self, device_name):
                     DEVICE_NAME_PLACEHOLDER, device_name)
                 return os.path.exists(bridge_port_path)
     
    -    def ensure_vlan_bridge(self, network_id, physical_interface, vlan_id):
    +    def ensure_vlan_bridge(self, network_id, phy_bridge_name,
    +                           physical_interface, vlan_id):
             """Create a vlan and bridge unless they already exist."""
             interface = self.ensure_vlan(physical_interface, vlan_id)
    -        bridge_name = self.get_bridge_name(network_id)
    -        ips, gateway = self.get_interface_details(interface)
    -        if self.ensure_bridge(bridge_name, interface, ips, gateway):
    -            return interface
    +        if phy_bridge_name:
    +            return self.ensure_bridge(phy_bridge_name)
    +        else:
    +            bridge_name = self.get_bridge_name(network_id)
    +            ips, gateway = self.get_interface_details(interface)
    +            if self.ensure_bridge(bridge_name, interface, ips, gateway):
    +                return interface
     
         def ensure_vxlan_bridge(self, network_id, segmentation_id):
             """Create a vxlan and bridge unless they already exist."""
    @@ -225,16 +249,24 @@ def get_interface_details(self, interface):
             gateway = device.route.get_gateway(scope='global')
             return ips, gateway
     
    -    def ensure_flat_bridge(self, network_id, physical_interface):
    +    def ensure_flat_bridge(self, network_id, phy_bridge_name,
    +                           physical_interface):
             """Create a non-vlan bridge unless it already exists."""
    -        bridge_name = self.get_bridge_name(network_id)
    -        ips, gateway = self.get_interface_details(physical_interface)
    -        if self.ensure_bridge(bridge_name, physical_interface, ips, gateway):
    -            return physical_interface
    +        if phy_bridge_name:
    +            return self.ensure_bridge(phy_bridge_name)
    +        else:
    +            bridge_name = self.get_bridge_name(network_id)
    +            ips, gateway = self.get_interface_details(physical_interface)
    +            if self.ensure_bridge(bridge_name, physical_interface, ips,
    +                                  gateway):
    +                return physical_interface
     
    -    def ensure_local_bridge(self, network_id):
    +    def ensure_local_bridge(self, network_id, phy_bridge_name):
             """Create a local bridge unless it already exists."""
    -        bridge_name = self.get_bridge_name(network_id)
    +        if phy_bridge_name:
    +            bridge_name = phy_bridge_name
    +        else:
    +            bridge_name = self.get_bridge_name(network_id)
             return self.ensure_bridge(bridge_name)
     
         def ensure_vlan(self, physical_interface, vlan_id):
    @@ -389,15 +421,20 @@ def ensure_physical_in_bridge(self, network_id,
                     return
                 return self.ensure_vxlan_bridge(network_id, segmentation_id)
     
    +        # NOTE(nick-ma-z): Obtain mappings of physical bridge and interfaces
    +        physical_bridge = self.get_existing_bridge_name(physical_network)
             physical_interface = self.interface_mappings.get(physical_network)
    -        if not physical_interface:
    -            LOG.error(_LE("No mapping for physical network %s"),
    +        if not physical_bridge and not physical_interface:
    +            LOG.error(_LE("No bridge or interface mappings"
    +                          " for physical network %s"),
                           physical_network)
                 return
             if network_type == p_const.TYPE_FLAT:
    -            return self.ensure_flat_bridge(network_id, physical_interface)
    +            return self.ensure_flat_bridge(network_id, physical_bridge,
    +                                           physical_interface)
             elif network_type == p_const.TYPE_VLAN:
    -            return self.ensure_vlan_bridge(network_id, physical_interface,
    +            return self.ensure_vlan_bridge(network_id, physical_bridge,
    +                                           physical_interface,
                                                segmentation_id)
             else:
                 LOG.error(_LE("Unknown network_type %(network_type)s for network "
    @@ -416,9 +453,13 @@ def add_tap_interface(self, network_id, network_type, physical_network,
                           "this host, skipped", tap_device_name)
                 return False
     
    -        bridge_name = self.get_bridge_name(network_id)
    +        if physical_network:
    +            bridge_name = self.get_existing_bridge_name(physical_network)
    +        else:
    +            bridge_name = self.get_bridge_name(network_id)
    +
             if network_type == p_const.TYPE_LOCAL:
    -            self.ensure_local_bridge(network_id)
    +            self.ensure_local_bridge(network_id, bridge_name)
             else:
                 phy_dev_name = self.ensure_physical_in_bridge(network_id,
                                                               network_type,
    @@ -495,6 +536,11 @@ def delete_bridge(self, bridge_name):
     
         def remove_empty_bridges(self):
             for network_id in list(self.network_map.keys()):
    +            # NOTE(nick-ma-z): Don't remove pre-existing user-defined bridges
    +            phy_net = self.network_map[network_id].physical_network
    +            if phy_net and phy_net in self.bridge_mappings:
    +                continue
    +
                 bridge_name = self.get_bridge_name(network_id)
                 if not self.get_tap_devices_count(bridge_name):
                     self.delete_bridge(bridge_name)
    @@ -678,6 +724,19 @@ def __init__(self, context, agent, sg_agent):
         def network_delete(self, context, **kwargs):
             LOG.debug("network_delete received")
             network_id = kwargs.get('network_id')
    +
    +        # NOTE(nick-ma-z): Don't remove pre-existing user-defined bridges
    +        if network_id in self.agent.br_mgr.network_map:
    +            phynet = self.agent.br_mgr.network_map[network_id].physical_network
    +            if phynet and phynet in self.agent.br_mgr.bridge_mappings:
    +                LOG.info(_LI("Physical network %s is defined in "
    +                             "bridge_mappings and cannot be deleted."),
    +                         network_id)
    +                return
    +        else:
    +            LOG.error(_LE("Network %s is not available."), network_id)
    +            return
    +
             bridge_name = self.agent.br_mgr.get_bridge_name(network_id)
             LOG.debug("Delete %s", bridge_name)
             self.agent.br_mgr.delete_bridge(bridge_name)
    @@ -773,10 +832,12 @@ def fdb_update(self, context, fdb_entries):
     
     class LinuxBridgeNeutronAgentRPC(service.Service):
     
    -    def __init__(self, interface_mappings, polling_interval,
    +    def __init__(self, bridge_mappings, interface_mappings, polling_interval,
                      quitting_rpc_timeout):
             """Constructor.
     
    +        :param bridge_mappings: dict mapping physical_networks to
    +               physical_bridges.
             :param interface_mappings: dict mapping physical_networks to
                    physical_interfaces.
             :param polling_interval: interval (secs) to poll DB.
    @@ -785,13 +846,15 @@ def __init__(self, interface_mappings, polling_interval,
             """
             super(LinuxBridgeNeutronAgentRPC, self).__init__()
             self.interface_mappings = interface_mappings
    +        self.bridge_mappings = bridge_mappings
             self.polling_interval = polling_interval
             self.quitting_rpc_timeout = quitting_rpc_timeout
     
         def start(self):
             self.prevent_arp_spoofing = cfg.CONF.AGENT.prevent_arp_spoofing
    -        self.setup_linux_bridge(self.interface_mappings)
    -        configurations = {'interface_mappings': self.interface_mappings}
    +        self.setup_linux_bridge(self.bridge_mappings, self.interface_mappings)
    +        configurations = {'bridge_mappings': self.bridge_mappings,
    +                          'interface_mappings': self.interface_mappings}
             if self.br_mgr.vxlan_mode != lconst.VXLAN_NONE:
                 configurations['tunneling_ip'] = self.br_mgr.local_ip
                 configurations['tunnel_types'] = [p_const.TYPE_VXLAN]
    @@ -858,8 +921,7 @@ def setup_rpc(self, physical_interfaces):
                          [topics.NETWORK, topics.DELETE],
                          [topics.SECURITY_GROUP, topics.UPDATE]]
             if cfg.CONF.VXLAN.l2_population:
    -            consumers.append([topics.L2POPULATION,
    -                              topics.UPDATE, cfg.CONF.host])
    +            consumers.append([topics.L2POPULATION, topics.UPDATE])
             self.connection = agent_rpc.create_consumers(self.endpoints,
                                                          self.topic,
                                                          consumers)
    @@ -869,11 +931,15 @@ def setup_rpc(self, physical_interfaces):
                     self._report_state)
                 heartbeat.start(interval=report_interval)
     
    -    def setup_linux_bridge(self, interface_mappings):
    -        self.br_mgr = LinuxBridgeManager(interface_mappings)
    +    def setup_linux_bridge(self, bridge_mappings, interface_mappings):
    +        self.br_mgr = LinuxBridgeManager(bridge_mappings, interface_mappings)
     
    -    def remove_port_binding(self, network_id, interface_id):
    -        bridge_name = self.br_mgr.get_bridge_name(network_id)
    +    def remove_port_binding(self, network_id, physical_network, interface_id):
    +        if physical_network:
    +            bridge_name = self.br_mgr.get_existing_bridge_name(
    +                physical_network)
    +        else:
    +            bridge_name = self.br_mgr.get_bridge_name(network_id)
             tap_device_name = self.br_mgr.get_tap_device_name(interface_id)
             return self.br_mgr.remove_interface(bridge_name, tap_device_name)
     
    @@ -941,7 +1007,9 @@ def treat_devices_added_updated(self, devices):
                                                                self.agent_id,
                                                                cfg.CONF.host)
                     else:
    +                    physical_network = device_details['physical_network']
                         self.remove_port_binding(device_details['network_id'],
    +                                             physical_network,
                                                  device_details['port_id'])
                 else:
                     LOG.info(_LI("Device %s not defined on plugin"), device)
    @@ -1073,9 +1141,19 @@ def main():
             sys.exit(1)
         LOG.info(_LI("Interface mappings: %s"), interface_mappings)
     
    +    try:
    +        bridge_mappings = n_utils.parse_mappings(
    +            cfg.CONF.LINUX_BRIDGE.bridge_mappings)
    +    except ValueError as e:
    +        LOG.error(_LE("Parsing bridge_mappings failed: %s. "
    +                      "Agent terminated!"), e)
    +        sys.exit(1)
    +    LOG.info(_LI("Bridge mappings: %s"), bridge_mappings)
    +
         polling_interval = cfg.CONF.AGENT.polling_interval
         quitting_rpc_timeout = cfg.CONF.AGENT.quitting_rpc_timeout
    -    agent = LinuxBridgeNeutronAgentRPC(interface_mappings,
    +    agent = LinuxBridgeNeutronAgentRPC(bridge_mappings,
    +                                       interface_mappings,
                                            polling_interval,
                                            quitting_rpc_timeout)
         LOG.info(_LI("Agent initialized successfully, now running... "))
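The `main()` hunk above feeds `bridge_mappings` through `n_utils.parse_mappings` before constructing the agent. As a rough, simplified sketch of what that parsing does (this is a stand-in, not Neutron's actual implementation), each `physical_network:bridge` entry becomes one dict item, and duplicate physical networks are rejected:

```python
def parse_mappings(mapping_list):
    """Simplified sketch of 'key:value' mapping parsing.

    Mirrors the behavior the hunk above relies on: each entry is
    'physical_network:bridge', and duplicate keys raise ValueError.
    """
    mappings = {}
    for entry in mapping_list:
        entry = entry.strip()
        if not entry:
            continue
        try:
            key, value = entry.split(':', 1)
        except ValueError:
            raise ValueError("Invalid mapping: %s" % entry)
        key, value = key.strip(), value.strip()
        if key in mappings:
            raise ValueError("Duplicate key in mapping: %s" % key)
        mappings[key] = value
    return mappings
```

With this, `["physnet1:br-eth1", "physnet2:br-eth2"]` yields a dict keyed by physical network, which is the shape `LinuxBridgeNeutronAgentRPC` expects.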
    
  • neutron/plugins/ml2/drivers/linuxbridge/mech_driver/mech_linuxbridge.py+3 1 modified
    @@ -47,7 +47,9 @@ def get_allowed_network_types(self, agent):
                      p_constants.TYPE_VLAN])
     
         def get_mappings(self, agent):
    -        return agent['configurations'].get('interface_mappings', {})
    +        mappings = dict(agent['configurations'].get('interface_mappings', {}),
    +                        **agent['configurations'].get('bridge_mappings', {}))
    +        return mappings
     
         def check_vlan_transparency(self, context):
             """Linuxbridge driver vlan transparency support."""
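The new `get_mappings` merges the two dicts with `dict(a, **b)`, so bridge mappings take precedence when a physical network appears in both. A standalone illustration (the mapping values are hypothetical):

```python
interface_mappings = {"physnet1": "eth1", "physnet2": "eth2"}
bridge_mappings = {"physnet2": "br-ex"}

# dict(a, **b): start from a's entries, then apply b's, so a bridge
# mapping overrides an interface mapping for the same physical network.
mappings = dict(interface_mappings, **bridge_mappings)
print(mappings)  # {'physnet1': 'eth1', 'physnet2': 'br-ex'}
```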
    
  • neutron/plugins/ml2/drivers/mech_sriov/agent/common/exceptions.py+4 0 modified
    @@ -28,5 +28,9 @@ class IpCommandError(SriovNicError):
         message = _("ip command failed on device %(dev_name)s: %(reason)s")
     
     
    +class IpCommandOperationNotSupportedError(SriovNicError):
    +    message = _("Operation not supported on device %(dev_name)s")
    +
    +
     class InvalidPciSlotError(SriovNicError):
         message = _("Invalid pci slot %(pci_slot)s")
    
  • neutron/plugins/ml2/drivers/mech_sriov/agent/eswitch_manager.py+43 28 modified
    @@ -125,20 +125,25 @@ def get_pci_slot_list(self):
             """Get list of VF addresses."""
             return self.pci_slot_map.keys()
     
    -    def get_assigned_devices(self):
    -        """Get assigned Virtual Functions.
    +    def get_assigned_devices_info(self):
     +        """Get assigned Virtual Functions' MAC and PCI slot
     +        information and populate the vf_to_pci_slot mapping
     
    -        @return: list of VF mac addresses
    +        @return: list of VF pair (mac address, pci slot)
             """
    -        vf_list = []
    -        assigned_macs = []
    -        for vf_index in self.pci_slot_map.values():
    +        vf_to_pci_slot_mapping = {}
    +        assigned_devices_info = []
    +        for pci_slot, vf_index in self.pci_slot_map.items():
                 if not PciOsWrapper.is_assigned_vf(self.dev_name, vf_index):
                     continue
    -            vf_list.append(vf_index)
    -        if vf_list:
    -            assigned_macs = self.pci_dev_wrapper.get_assigned_macs(vf_list)
    -        return assigned_macs
    +            vf_to_pci_slot_mapping[vf_index] = pci_slot
    +        if vf_to_pci_slot_mapping:
    +            vf_to_mac_mapping = self.pci_dev_wrapper.get_assigned_macs(
    +                list(vf_to_pci_slot_mapping.keys()))
    +            for vf_index, mac in vf_to_mac_mapping.items():
    +                pci_slot = vf_to_pci_slot_mapping[vf_index]
    +                assigned_devices_info.append((mac, pci_slot))
    +        return assigned_devices_info
     
         def get_device_state(self, pci_slot):
             """Get device state.
    @@ -219,8 +224,7 @@ def get_pci_device(self, pci_slot):
             if vf_index is not None:
                 if PciOsWrapper.is_assigned_vf(self.dev_name, vf_index):
                     macs = self.pci_dev_wrapper.get_assigned_macs([vf_index])
    -                if macs:
    -                    mac = macs[0]
    +                mac = macs.get(vf_index)
             return mac
     
     
    @@ -247,12 +251,12 @@ def device_exists(self, device_mac, pci_slot):
                 return True
             return False
     
    -    def get_assigned_devices(self, phys_net=None):
    +    def get_assigned_devices_info(self, phys_net=None):
             """Get all assigned devices.
     
             Get all assigned devices belongs to given embedded switch
             @param phys_net: physical network, if none get all assigned devices
    -        @return: set of assigned VFs mac addresses
     +        @return: set of assigned VF (mac address, pci slot) pairs
             """
             if phys_net:
                 embedded_switch = self.emb_switches_map.get(phys_net, None)
    @@ -263,16 +267,16 @@ def get_assigned_devices(self, phys_net=None):
                 eswitch_objects = self.emb_switches_map.values()
             assigned_devices = set()
             for embedded_switch in eswitch_objects:
    -            for device_mac in embedded_switch.get_assigned_devices():
    -                assigned_devices.add(device_mac)
    +            for device in embedded_switch.get_assigned_devices_info():
    +                assigned_devices.add(device)
             return assigned_devices
     
         def get_device_state(self, device_mac, pci_slot):
             """Get device state.
     
             Get the device state (up/True or down/False)
             @param device_mac: device mac
    -        @param pci_slot: VF pci slot
    +        @param pci_slot: VF PCI slot
             @return: device state (True/False) None if failed
             """
             embedded_switch = self._get_emb_eswitch(device_mac, pci_slot)
    @@ -355,16 +359,27 @@ def _get_emb_eswitch(self, device_mac, pci_slot):
                     embedded_switch = None
             return embedded_switch
     
    -    def get_pci_slot_by_mac(self, device_mac):
    -        """Get pci slot by mac.
    +    def clear_max_rate(self, pci_slot):
    +        """Clear the max rate
     
    -        Get pci slot by device mac
    -        @param device_mac: device mac
     +        Clear the max rate configuration from the VF by setting it to 0
    +        @param pci_slot: VF PCI slot
             """
    -        result = None
    -        for pci_slot, embedded_switch in self.pci_slot_map.items():
    -            used_device_mac = embedded_switch.get_pci_device(pci_slot)
    -            if used_device_mac == device_mac:
    -                result = pci_slot
    -                break
    -        return result
     +        #(Note): we don't use self._get_emb_eswitch here, because when
     +        #clearing the VF it may not be assigned. This happens when libvirt
     +        #releases the VF back to the hypervisor on VM delete. Therefore we
     +        #should just clear the VF max rate according to pci_slot, whether
     +        #or not the VF is assigned.
    +        embedded_switch = self.pci_slot_map.get(pci_slot)
    +        if embedded_switch:
    +            #(Note): check the pci_slot is not assigned to some
    +            # other port before resetting the max rate.
    +            if embedded_switch.get_pci_device(pci_slot) is None:
    +                embedded_switch.set_device_max_rate(pci_slot, 0)
    +            else:
    +                LOG.warning(_LW("VF with PCI slot %(pci_slot)s is already "
    +                                "assigned; skipping reset maximum rate"),
    +                            {'pci_slot': pci_slot})
    +        else:
    +            LOG.error(_LE("PCI slot %(pci_slot)s has no mapping to Embedded "
    +                          "Switch; skipping"), {'pci_slot': pci_slot})
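The reworked `get_assigned_devices_info` pairs each assigned VF's MAC address with its PCI slot instead of returning MACs alone. A simplified, self-contained sketch of that pairing (the helper name and its inputs are illustrative, not Neutron's API):

```python
def assigned_devices_info(pci_slot_map, assigned_vfs, vf_to_mac):
    """Sketch of the (mac, pci_slot) pairing done in the hunk above.

    pci_slot_map: {pci_slot: vf_index}; assigned_vfs: set of VF
    indexes currently assigned; vf_to_mac: {vf_index: mac}.
    """
    # Invert {pci_slot: vf_index} for VFs that are currently assigned,
    # then join against the VF-to-MAC mapping to emit (mac, pci_slot).
    vf_to_pci_slot = {vf: slot for slot, vf in pci_slot_map.items()
                      if vf in assigned_vfs}
    return sorted((vf_to_mac[vf], slot)
                  for vf, slot in vf_to_pci_slot.items()
                  if vf in vf_to_mac)
```

Carrying the PCI slot alongside the MAC is what lets the SR-IOV agent later distinguish two ports that reuse the same MAC on different VFs.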
    
  • neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py+7 30 modified
    @@ -15,7 +15,7 @@
     from oslo_log import log as logging
     
     from neutron.agent.l2.extensions import qos
    -from neutron.i18n import _LE, _LI, _LW
    +from neutron.i18n import _LE, _LI
     from neutron.plugins.ml2.drivers.mech_sriov.agent.common import (
         exceptions as exc)
     from neutron.plugins.ml2.drivers.mech_sriov.agent import eswitch_manager as esm
    @@ -27,7 +27,7 @@
     
     class QosSRIOVAgentDriver(qos.QosAgentDriver):
     
    -    _SUPPORTED_RULES = (
    +    SUPPORTED_RULES = (
             mech_driver.SriovNicSwitchMechanismDriver.supported_qos_rule_types)
     
         def __init__(self):
    @@ -37,40 +37,17 @@ def __init__(self):
         def initialize(self):
             self.eswitch_mgr = esm.ESwitchManager()
     
    -    def create(self, port, qos_policy):
    -        self._handle_rules('create', port, qos_policy)
    +    def create_bandwidth_limit(self, port, rule):
    +        self.update_bandwidth_limit(port, rule)
     
    -    def update(self, port, qos_policy):
    -        self._handle_rules('update', port, qos_policy)
    -
    -    def delete(self, port, qos_policy):
    -        # TODO(QoS): consider optimizing flushing of all QoS rules from the
    -        # port by inspecting qos_policy.rules contents
    -        self._delete_bandwidth_limit(port)
    -
    -    def _handle_rules(self, action, port, qos_policy):
    -        for rule in qos_policy.rules:
    -            if rule.rule_type in self._SUPPORTED_RULES:
    -                handler_name = ("".join(("_", action, "_", rule.rule_type)))
    -                handler = getattr(self, handler_name)
    -                handler(port, rule)
    -            else:
    -                LOG.warning(_LW('Unsupported QoS rule type for %(rule_id)s: '
    -                            '%(rule_type)s; skipping'),
    -                            {'rule_id': rule.id, 'rule_type': rule.rule_type})
    -
    -    def _create_bandwidth_limit(self, port, rule):
    -        self._update_bandwidth_limit(port, rule)
    -
    -    def _update_bandwidth_limit(self, port, rule):
    +    def update_bandwidth_limit(self, port, rule):
             pci_slot = port['profile'].get('pci_slot')
             device = port['device']
             self._set_vf_max_rate(device, pci_slot, rule.max_kbps)
     
    -    def _delete_bandwidth_limit(self, port):
    +    def delete_bandwidth_limit(self, port):
             pci_slot = port['profile'].get('pci_slot')
    -        device = port['device']
    -        self._set_vf_max_rate(device, pci_slot)
    +        self.eswitch_mgr.clear_max_rate(pci_slot)
     
         def _set_vf_max_rate(self, device, pci_slot, max_kbps=0):
             if self.eswitch_mgr.device_exists(device, pci_slot):
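The removed `_handle_rules` relied on name-based dispatch: it built a handler name from the action and rule type and resolved it with `getattr`, which the diff replaces with explicit `create_bandwidth_limit`/`update_bandwidth_limit`/`delete_bandwidth_limit` methods. A minimal standalone sketch of the old pattern (class and rule names are illustrative):

```python
class RuleDispatcher:
    SUPPORTED_RULES = ("bandwidth_limit",)

    def handle(self, action, rule_type, port, rule):
        # Build a handler name such as 'update_bandwidth_limit' and
        # look it up on the instance, mirroring the getattr dispatch
        # the diff above removes in favor of explicit methods.
        if rule_type not in self.SUPPORTED_RULES:
            return None  # unsupported rule types are skipped
        handler = getattr(self, "%s_%s" % (action, rule_type))
        return handler(port, rule)

    def update_bandwidth_limit(self, port, rule):
        return (port, rule)
```

Explicit methods trade the flexibility of string-built names for something greppable and checkable by tools, which is presumably why the base `QosAgentDriver` took over the dispatch.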
    
  • neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py+34 27 modified
    @@ -38,6 +38,8 @@ class PciDeviceIPWrapper(ip_lib.IPWrapper):
         VF_LINE_FORMAT = VF_PATTERN + MAC_PATTERN + ANY_PATTERN + STATE_PATTERN
         VF_DETAILS_REG_EX = re.compile(VF_LINE_FORMAT)
     
    +    IP_LINK_OP_NOT_SUPPORTED = 'RTNETLINK answers: Operation not supported'
    +
         class LinkState(object):
             ENABLE = "enable"
             DISABLE = "disable"
    @@ -46,26 +48,51 @@ def __init__(self, dev_name):
             super(PciDeviceIPWrapper, self).__init__()
             self.dev_name = dev_name
     
    +    def _set_feature(self, vf_index, feature, value):
    +        """Sets vf feature
    +
    +        Checks if the feature is not supported or there's some
    +        general error during ip link invocation and raises
    +        exception accordingly.
    +
    +        :param vf_index: vf index
    +        :param feature: name of a feature to be passed to ip link,
    +                        such as 'state' or 'spoofchk'
    +        :param value: value of the feature setting
    +        """
    +        try:
    +            self._as_root([], "link", ("set", self.dev_name, "vf",
    +                                       str(vf_index), feature, value))
    +        except Exception as e:
    +            if self.IP_LINK_OP_NOT_SUPPORTED in str(e):
    +                raise exc.IpCommandOperationNotSupportedError(
    +                    dev_name=self.dev_name)
    +            else:
    +                raise exc.IpCommandError(dev_name=self.dev_name,
    +                                         reason=str(e))
    +
         def get_assigned_macs(self, vf_list):
             """Get assigned mac addresses for vf list.
     
             @param vf_list: list of vf indexes
    -        @return: list of assigned mac addresses
    +        @return: dict mapping of vf to mac
             """
             try:
                 out = self._as_root([], "link", ("show", self.dev_name))
             except Exception as e:
                 LOG.exception(_LE("Failed executing ip command"))
                 raise exc.IpCommandError(dev_name=self.dev_name,
                                          reason=e)
    +        vf_to_mac_mapping = {}
             vf_lines = self._get_vf_link_show(vf_list, out)
    -        vf_details_list = []
             if vf_lines:
                 for vf_line in vf_lines:
                     vf_details = self._parse_vf_link_show(vf_line)
                     if vf_details:
    -                    vf_details_list.append(vf_details)
    -        return [details.get("MAC") for details in vf_details_list]
    +                    vf_num = vf_details.get('vf')
    +                    vf_mac = vf_details.get("MAC")
    +                    vf_to_mac_mapping[vf_num] = vf_mac
    +        return vf_to_mac_mapping
     
         def get_vf_state(self, vf_index):
             """Get vf state {True/False}
    @@ -97,14 +124,7 @@ def set_vf_state(self, vf_index, state):
             """
             status_str = self.LinkState.ENABLE if state else \
                 self.LinkState.DISABLE
    -
    -        try:
    -            self._as_root([], "link", ("set", self.dev_name, "vf",
    -                                       str(vf_index), "state", status_str))
    -        except Exception as e:
    -            LOG.exception(_LE("Failed executing ip command"))
    -            raise exc.IpCommandError(dev_name=self.dev_name,
    -                                     reason=e)
    +        self._set_feature(vf_index, "state", status_str)
     
         def set_vf_spoofcheck(self, vf_index, enabled):
             """sets vf spoofcheck
    @@ -114,28 +134,15 @@ def set_vf_spoofcheck(self, vf_index, enabled):
                             False to disable
             """
             setting = "on" if enabled else "off"
    -
    -        try:
    -            self._as_root('', "link", ("set", self.dev_name, "vf",
    -                                       str(vf_index), "spoofchk", setting))
    -        except Exception as e:
    -            raise exc.IpCommandError(dev_name=self.dev_name,
    -                                     reason=str(e))
    +        self._set_feature(vf_index, "spoofchk", setting)
     
         def set_vf_max_rate(self, vf_index, max_tx_rate):
             """sets vf max rate.
     
             @param vf_index: vf index
             @param max_tx_rate: vf max tx rate in Mbps
             """
    -        try:
    -            self._as_root([], "link", ("set", self.dev_name, "vf",
    -                                       str(vf_index), "rate",
    -                                       str(max_tx_rate)))
    -        except Exception as e:
    -            LOG.exception(_LE("Failed executing ip command"))
    -            raise exc.IpCommandError(dev_name=self.dev_name,
    -                                     reason=e)
    +        self._set_feature(vf_index, "rate", str(max_tx_rate))
     
         def _get_vf_link_show(self, vf_list, link_show_out):
             """Get link show output for VFs
    
  • neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py+52 23 modified
    @@ -64,8 +64,20 @@ def port_update(self, context, **kwargs):
             # Do not store port details, as if they're used for processing
             # notifications there is no guarantee the notifications are
             # processed in the same order as the relevant API requests.
    -        self.agent.updated_devices.add(port['mac_address'])
    -        LOG.debug("port_update RPC received for port: %s", port['id'])
    +        mac = port['mac_address']
    +        pci_slot = None
    +        if port.get('binding:profile'):
    +            pci_slot = port['binding:profile'].get('pci_slot')
    +
    +        if pci_slot:
    +            self.agent.updated_devices.add((mac, pci_slot))
    +            LOG.debug("port_update RPC received for port: %(id)s with MAC "
     +                      "%(mac)s and PCI slot %(pci_slot)s",
    +                      {'id': port['id'], 'mac': mac, 'pci_slot': pci_slot})
    +        else:
    +            LOG.debug("No PCI Slot for port %(id)s with MAC %(mac)s; "
    +                      "skipping", {'id': port['id'], 'mac': mac,
    +                                   'pci_slot': pci_slot})
     
     
     class SriovNicSwitchAgent(object):
    @@ -87,6 +99,7 @@ def __init__(self, physical_devices_mappings, exclude_devices,
     
             # Stores port update notifications for processing in the main loop
             self.updated_devices = set()
    +        self.mac_to_port_id_mapping = {}
     
             self.context = context.get_admin_context_without_session()
             self.plugin_rpc = agent_rpc.PluginApi(topics.PLUGIN)
    @@ -128,7 +141,7 @@ def _setup_rpc(self):
     
         def _report_state(self):
             try:
    -            devices = len(self.eswitch_mgr.get_assigned_devices())
    +            devices = len(self.eswitch_mgr.get_assigned_devices_info())
                 self.agent_state.get('configurations')['devices'] = devices
                 self.state_rpc.report_state(self.context,
                                             self.agent_state)
    @@ -147,7 +160,7 @@ def setup_eswitch_mgr(self, device_mappings, exclude_devices={}):
             self.eswitch_mgr.discover_devices(device_mappings, exclude_devices)
     
         def scan_devices(self, registered_devices, updated_devices):
    -        curr_devices = self.eswitch_mgr.get_assigned_devices()
    +        curr_devices = self.eswitch_mgr.get_assigned_devices_info()
             device_info = {}
             device_info['current'] = curr_devices
             device_info['added'] = curr_devices - registered_devices
    @@ -197,8 +210,11 @@ def treat_device(self, device, pci_slot, admin_state_up, spoofcheck=True):
                 try:
                     self.eswitch_mgr.set_device_state(device, pci_slot,
                                                       admin_state_up)
    +            except exc.IpCommandOperationNotSupportedError:
    +                LOG.warning(_LW("Device %s does not support state change"),
    +                            device)
                 except exc.SriovNicError:
    -                LOG.exception(_LE("Failed to set device %s state"), device)
    +                LOG.warning(_LW("Failed to set device %s state"), device)
                     return
                 if admin_state_up:
                     # update plugin about port status
    @@ -214,14 +230,15 @@ def treat_device(self, device, pci_slot, admin_state_up, spoofcheck=True):
             else:
                 LOG.info(_LI("No device with MAC %s defined on agent."), device)
     
    -    def treat_devices_added_updated(self, devices):
    +    def treat_devices_added_updated(self, devices_info):
             try:
    +            macs_list = set([device_info[0] for device_info in devices_info])
                 devices_details_list = self.plugin_rpc.get_devices_details_list(
    -                self.context, devices, self.agent_id)
    +                self.context, macs_list, self.agent_id)
             except Exception as e:
                 LOG.debug("Unable to get port details for devices "
    -                      "with MAC address %(devices)s: %(e)s",
    -                      {'devices': devices, 'e': e})
    +                      "with MAC addresses %(devices)s: %(e)s",
    +                      {'devices': macs_list, 'e': e})
                 # resync is needed
                 return True
     
    @@ -232,9 +249,11 @@ def treat_devices_added_updated(self, devices):
                 if 'port_id' in device_details:
                     LOG.info(_LI("Port %(device)s updated. Details: %(details)s"),
                              {'device': device, 'details': device_details})
    +                port_id = device_details['port_id']
    +                self.mac_to_port_id_mapping[device] = port_id
                     profile = device_details['profile']
                     spoofcheck = device_details.get('port_security_enabled', True)
    -                self.treat_device(device_details['device'],
    +                self.treat_device(device,
                                       profile.get('pci_slot'),
                                       device_details['admin_state_up'],
                                       spoofcheck)
    @@ -247,31 +266,41 @@ def treat_devices_added_updated(self, devices):
         def treat_devices_removed(self, devices):
             resync = False
             for device in devices:
    -            LOG.info(_LI("Removing device with mac_address %s"), device)
    +            mac, pci_slot = device
    +            LOG.info(_LI("Removing device with MAC address %(mac)s and "
    +                         "PCI slot %(pci_slot)s"),
    +                     {'mac': mac, 'pci_slot': pci_slot})
                 try:
    -                pci_slot = self.eswitch_mgr.get_pci_slot_by_mac(device)
    -                if pci_slot:
    +                port_id = self.mac_to_port_id_mapping.get(mac)
    +                if port_id:
                         profile = {'pci_slot': pci_slot}
    -                    port = {'device': device, 'profile': profile}
    +                    port = {'port_id': port_id,
    +                            'device': mac,
    +                            'profile': profile}
                         self.ext_manager.delete_port(self.context, port)
    +                    del self.mac_to_port_id_mapping[mac]
                     else:
    -                    LOG.warning(_LW("Failed to find pci slot for device "
    -                                    "%(device)s; skipping extension port "
    -                                    "cleanup"), device)
    -
     +                    LOG.warning(_LW("port_id for device with MAC "
     +                                 "%s not found"), mac)
                     dev_details = self.plugin_rpc.update_device_down(self.context,
    -                                                                 device,
    +                                                                 mac,
                                                                      self.agent_id,
                                                                      cfg.CONF.host)
    +
                 except Exception as e:
    -                LOG.debug("Removing port failed for device %(device)s "
    -                          "due to %(exc)s", {'device': device, 'exc': e})
    +                LOG.debug("Removing port failed for device with MAC address "
    +                          "%(mac)s and PCI slot %(pci_slot)s due to %(exc)s",
    +                          {'mac': mac, 'pci_slot': pci_slot, 'exc': e})
                     resync = True
                     continue
                 if dev_details['exists']:
    -                LOG.info(_LI("Port %s updated."), device)
    +                LOG.info(_LI("Port with MAC %(mac)s and PCI slot "
    +                             "%(pci_slot)s updated."),
    +                         {'mac': mac, 'pci_slot': pci_slot})
                 else:
    -                LOG.debug("Device %s not defined on plugin", device)
    +                LOG.debug("Device with MAC %(mac)s and PCI slot "
    +                          "%(pci_slot)s not defined on plugin",
    +                          {'mac': mac, 'pci_slot': pci_slot})
             return resync
     
         def daemon_loop(self):
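With devices now keyed as `(mac, pci_slot)` tuples, the bookkeeping in `scan_devices` still reduces to plain set arithmetic. A minimal sketch of the `current`/`added`/`removed` computation shown in the hunk above:

```python
def scan_devices(current, registered):
    """Sketch of the set bookkeeping in scan_devices, with each
    device represented as a (mac, pci_slot) tuple."""
    return {
        'current': current,                 # everything assigned now
        'added': current - registered,      # new since last scan
        'removed': registered - current,    # gone since last scan
    }
```

Because tuples hash by value, a VF that keeps its MAC but moves to a different PCI slot shows up as one removal plus one addition, which is exactly the distinction the MAC-only keying could not make.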
    
  • neutron/plugins/ml2/drivers/ofagent/requirements.txt+0 1 removed
    @@ -1 +0,0 @@
    -networking-ofagent>=2015.1,<2015.2
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/common/config.py+21 2 modified
    @@ -45,12 +45,27 @@
         cfg.BoolOpt('use_veth_interconnection', default=False,
                     help=_("Use veths instead of patch ports to interconnect the "
                            "integration bridge to physical bridges.")),
    -    cfg.StrOpt('of_interface', default='ovs-ofctl', choices=['ovs-ofctl'],
    +    cfg.StrOpt('of_interface', default='ovs-ofctl',
    +               choices=['ovs-ofctl', 'native'],
                    help=_("OpenFlow interface to use.")),
         cfg.StrOpt('datapath_type', default=constants.OVS_DATAPATH_SYSTEM,
                    choices=[constants.OVS_DATAPATH_SYSTEM,
                             constants.OVS_DATAPATH_NETDEV],
                    help=_("OVS datapath to use.")),
    +    cfg.IPOpt('of_listen_address', default='127.0.0.1',
    +              help=_("Address to listen on for OpenFlow connections. "
    +                     "Used only for 'native' driver.")),
    +    cfg.IntOpt('of_listen_port', default=6633,
    +               help=_("Port to listen on for OpenFlow connections. "
    +                      "Used only for 'native' driver.")),
    +    cfg.IntOpt('of_connect_timeout', default=30,
    +               help=_("Timeout in seconds to wait for "
     +                      "the local switch to connect to the controller. "
    +                      "Used only for 'native' driver.")),
    +    cfg.IntOpt('of_request_timeout', default=10,
    +               help=_("Timeout in seconds to wait for a single "
    +                      "OpenFlow request. "
    +                      "Used only for 'native' driver.")),
     ]
     
     agent_opts = [
    @@ -104,7 +119,11 @@
                           "timeout won't be changed")),
         cfg.BoolOpt('drop_flows_on_start', default=False,
                     help=_("Reset flow table on start. Setting this to True will "
    -                       "cause brief traffic interruption."))
    +                       "cause brief traffic interruption.")),
     +    cfg.BoolOpt('tunnel_csum', default=False,
     +                help=_("Set or un-set the tunnel header checksum on "
     +                       "outgoing IP packets carrying GRE/VXLAN tunnels."))
    +
     ]
     
     
    
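The new `of_listen_*` and `of_*_timeout` options above only take effect with `of_interface = native`. An illustrative agent configuration showing them together (values mirror the defaults in the diff; section names assume the usual `[ovs]`/`[agent]` groups of the OVS agent ini, and are not taken from this commit):

```ini
[ovs]
of_interface = native
of_listen_address = 127.0.0.1
of_listen_port = 6633
of_connect_timeout = 30
of_request_timeout = 10

[agent]
tunnel_csum = False
```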
  • neutron/plugins/ml2/drivers/openvswitch/agent/common/constants.py+2 0 modified
    @@ -100,3 +100,5 @@
     # ovs datapath types
     OVS_DATAPATH_SYSTEM = 'system'
     OVS_DATAPATH_NETDEV = 'netdev'
    +
    +MAX_DEVICE_RETRIES = 5
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py+5 31 modified
    @@ -13,20 +13,16 @@
     #    under the License.
     
     from oslo_config import cfg
    -from oslo_log import log as logging
     
     from neutron.agent.common import ovs_lib
     from neutron.agent.l2.extensions import qos
    -from neutron.i18n import _LW
     from neutron.plugins.ml2.drivers.openvswitch.mech_driver import (
         mech_openvswitch)
     
    -LOG = logging.getLogger(__name__)
    -
     
     class QosOVSAgentDriver(qos.QosAgentDriver):
     
    -    _SUPPORTED_RULES = (
    +    SUPPORTED_RULES = (
             mech_openvswitch.OpenvswitchMechanismDriver.supported_qos_rule_types)
     
         def __init__(self):
    @@ -37,32 +33,10 @@ def __init__(self):
         def initialize(self):
             self.br_int = ovs_lib.OVSBridge(self.br_int_name)
     
    -    def create(self, port, qos_policy):
    -        self._handle_rules('create', port, qos_policy)
    -
    -    def update(self, port, qos_policy):
    -        self._handle_rules('update', port, qos_policy)
    -
    -    def delete(self, port, qos_policy):
    -        # TODO(QoS): consider optimizing flushing of all QoS rules from the
    -        # port by inspecting qos_policy.rules contents
    -        self._delete_bandwidth_limit(port)
    -
    -    def _handle_rules(self, action, port, qos_policy):
    -        for rule in qos_policy.rules:
    -            if rule.rule_type in self._SUPPORTED_RULES:
    -                handler_name = ("".join(("_", action, "_", rule.rule_type)))
    -                handler = getattr(self, handler_name)
    -                handler(port, rule)
    -            else:
    -                LOG.warning(_LW('Unsupported QoS rule type for %(rule_id)s: '
    -                            '%(rule_type)s; skipping'),
    -                            {'rule_id': rule.id, 'rule_type': rule.rule_type})
    -
    -    def _create_bandwidth_limit(self, port, rule):
    -        self._update_bandwidth_limit(port, rule)
    +    def create_bandwidth_limit(self, port, rule):
    +        self.update_bandwidth_limit(port, rule)
     
    -    def _update_bandwidth_limit(self, port, rule):
    +    def update_bandwidth_limit(self, port, rule):
             port_name = port['vif_port'].port_name
             max_kbps = rule.max_kbps
             max_burst_kbps = rule.max_burst_kbps
    @@ -71,6 +45,6 @@ def _update_bandwidth_limit(self, port, rule):
                                                         max_kbps,
                                                         max_burst_kbps)
     
    -    def _delete_bandwidth_limit(self, port):
    +    def delete_bandwidth_limit(self, port):
             port_name = port['vif_port'].port_name
             self.br_int.delete_egress_bw_limit_for_port(port_name)
    
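The diff strips the getattr-based `_handle_rules` dispatcher out of the OVS QoS driver and renames the per-rule methods public, which suggests the dispatch now lives in the base `QosAgentDriver`. A minimal sketch of the dispatch pattern being removed, with hypothetical class and rule names (not the real neutron classes):

```python
class RuleDispatchSketch(object):
    """Sketch of the getattr-based per-rule dispatch the patch removes."""

    SUPPORTED_RULES = ('bandwidth_limit',)

    def __init__(self):
        self.calls = []  # record invocations for illustration

    def _handle_rules(self, action, port, rule_types):
        for rule_type in rule_types:
            if rule_type in self.SUPPORTED_RULES:
                # build a handler name such as "update_bandwidth_limit"
                handler = getattr(self, '%s_%s' % (action, rule_type))
                handler(port)
            else:
                # unsupported rule types are skipped (the removed code
                # logged a warning here)
                self.calls.append(('skipped', rule_type))

    def update_bandwidth_limit(self, port):
        self.calls.append(('update_bandwidth_limit', port))
```

Moving this loop into the base class means each backend driver only implements the concrete `create_*`/`update_*`/`delete_*` methods, as the renamed public methods in this diff do.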
  • neutron/plugins/ml2/drivers/openvswitch/agent/main.py+2 0 modified
    @@ -33,6 +33,8 @@
     _main_modules = {
         'ovs-ofctl': 'neutron.plugins.ml2.drivers.openvswitch.agent.openflow.'
                      'ovs_ofctl.main',
    +    'native': 'neutron.plugins.ml2.drivers.openvswitch.agent.openflow.'
    +                 'native.main',
     }
     
     
    
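The `_main_modules` table maps the `of_interface` option value to the dotted path of the driver module to load. The selection mechanism can be sketched with `importlib`; stdlib module names stand in for the real driver packages here:

```python
import importlib

# Hypothetical stand-in for the agent's _main_modules table: the
# configured of_interface value selects which driver module to import.
# Real neutron maps to its openflow driver packages; stdlib modules
# stand in for them in this sketch.
_MAIN_MODULES = {
    'ovs-ofctl': 'json',
    'native': 'csv',
}


def load_main_module(of_interface):
    """Resolve and import the driver module for the configured interface."""
    return importlib.import_module(_MAIN_MODULES[of_interface])
```

An unknown `of_interface` value raises `KeyError` in this sketch; the real agent constrains the value via the `choices` list on the `of_interface` option shown earlier in this diff.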
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_dvr_process.py+113 0 added
    @@ -0,0 +1,113 @@
    +# Copyright (C) 2014,2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2014,2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from ryu.lib.packet import ether_types
    +from ryu.lib.packet import icmpv6
    +from ryu.lib.packet import in_proto
    +
    +
    +class OVSDVRProcessMixin(object):
    +    """Common logic for br-tun and br-phys' DVR_PROCESS tables.
    +
     +    Inheritors should provide self.dvr_process_table_id and
    +    self.dvr_process_next_table_id.
    +    """
    +
    +    @staticmethod
    +    def _dvr_process_ipv4_match(ofp, ofpp, vlan_tag, gateway_ip):
    +        return ofpp.OFPMatch(vlan_vid=vlan_tag | ofp.OFPVID_PRESENT,
    +                             eth_type=ether_types.ETH_TYPE_ARP,
    +                             arp_tpa=gateway_ip)
    +
    +    def install_dvr_process_ipv4(self, vlan_tag, gateway_ip):
    +        # block ARP
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_process_ipv4_match(ofp, ofpp,
    +            vlan_tag=vlan_tag, gateway_ip=gateway_ip)
    +        self.install_drop(table_id=self.dvr_process_table_id,
    +                          priority=3,
    +                          match=match)
    +
    +    def delete_dvr_process_ipv4(self, vlan_tag, gateway_ip):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_process_ipv4_match(ofp, ofpp,
    +            vlan_tag=vlan_tag, gateway_ip=gateway_ip)
    +        self.delete_flows(table_id=self.dvr_process_table_id, match=match)
    +
    +    @staticmethod
    +    def _dvr_process_ipv6_match(ofp, ofpp, vlan_tag, gateway_mac):
    +        return ofpp.OFPMatch(vlan_vid=vlan_tag | ofp.OFPVID_PRESENT,
    +                             eth_type=ether_types.ETH_TYPE_IPV6,
    +                             ip_proto=in_proto.IPPROTO_ICMPV6,
    +                             icmpv6_type=icmpv6.ND_ROUTER_ADVERT,
    +                             eth_src=gateway_mac)
    +
    +    def install_dvr_process_ipv6(self, vlan_tag, gateway_mac):
    +        # block RA
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_process_ipv6_match(ofp, ofpp,
    +            vlan_tag=vlan_tag, gateway_mac=gateway_mac)
    +        self.install_drop(table_id=self.dvr_process_table_id, priority=3,
    +                          match=match)
    +
    +    def delete_dvr_process_ipv6(self, vlan_tag, gateway_mac):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_process_ipv6_match(ofp, ofpp,
    +            vlan_tag=vlan_tag, gateway_mac=gateway_mac)
    +        self.delete_flows(table_id=self.dvr_process_table_id, match=match)
    +
    +    @staticmethod
    +    def _dvr_process_in_match(ofp, ofpp, vlan_tag, vif_mac):
    +        return ofpp.OFPMatch(vlan_vid=vlan_tag | ofp.OFPVID_PRESENT,
    +                             eth_dst=vif_mac)
    +
    +    @staticmethod
    +    def _dvr_process_out_match(ofp, ofpp, vlan_tag, vif_mac):
    +        return ofpp.OFPMatch(vlan_vid=vlan_tag | ofp.OFPVID_PRESENT,
    +                             eth_src=vif_mac)
    +
    +    def install_dvr_process(self, vlan_tag, vif_mac, dvr_mac_address):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_process_in_match(ofp, ofpp,
    +                                           vlan_tag=vlan_tag, vif_mac=vif_mac)
    +        table_id = self.dvr_process_table_id
    +        self.install_drop(table_id=table_id,
    +                          priority=2,
    +                          match=match)
    +        match = self._dvr_process_out_match(ofp, ofpp,
    +                                            vlan_tag=vlan_tag, vif_mac=vif_mac)
    +        actions = [
    +            ofpp.OFPActionSetField(eth_src=dvr_mac_address),
    +        ]
    +        instructions = [
    +            ofpp.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions),
    +            ofpp.OFPInstructionGotoTable(
    +                table_id=self.dvr_process_next_table_id),
    +        ]
    +        self.install_instructions(table_id=table_id,
    +                                  priority=1,
    +                                  match=match,
    +                                  instructions=instructions)
    +
    +    def delete_dvr_process(self, vlan_tag, vif_mac):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        table_id = self.dvr_process_table_id
    +        match = self._dvr_process_in_match(ofp, ofpp,
    +                                           vlan_tag=vlan_tag, vif_mac=vif_mac)
    +        self.delete_flows(table_id=table_id, match=match)
    +        match = self._dvr_process_out_match(ofp, ofpp,
    +                                            vlan_tag=vlan_tag, vif_mac=vif_mac)
    +        self.delete_flows(table_id=table_id, match=match)
    
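The mixin's docstring states its contract: it reads `dvr_process_table_id` and `dvr_process_next_table_id` from whatever bridge class mixes it in, so br-tun and br-phys reuse one implementation against different tables. The pattern in miniature, with hypothetical names and table ids:

```python
class OVSDVRProcessMixinSketch(object):
    """Sketch of the mixin contract: the concrete bridge class must
    define dvr_process_table_id and dvr_process_next_table_id."""

    def dvr_drop_rule(self):
        # the mixin only reads the ids; each bridge supplies its own
        return ('drop', self.dvr_process_table_id)

    def dvr_goto_rule(self):
        return ('goto', self.dvr_process_next_table_id)


class TunnelBridgeSketch(OVSDVRProcessMixinSketch):
    # hypothetical ids; the real bridges use constants.DVR_PROCESS
    # and constants.PATCH_LV_TO_TUN etc.
    dvr_process_table_id = 1
    dvr_process_next_table_id = 2
```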
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py+210 0 added
    @@ -0,0 +1,210 @@
    +# Copyright (C) 2014,2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2014,2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +"""
    +* references
    +** OVS agent https://wiki.openstack.org/wiki/Ovs-flow-logic
    +"""
    +
    +from oslo_log import log as logging
    +from ryu.lib.packet import ether_types
    +from ryu.lib.packet import icmpv6
    +from ryu.lib.packet import in_proto
    +
    +from neutron.i18n import _LE
    +from neutron.plugins.common import constants as p_const
    +from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import ovs_bridge
    +
    +
    +LOG = logging.getLogger(__name__)
    +
    +
    +class OVSIntegrationBridge(ovs_bridge.OVSAgentBridge):
    +    """openvswitch agent br-int specific logic."""
    +
    +    def setup_default_table(self):
    +        self.install_normal()
    +        self.setup_canary_table()
    +        self.install_drop(table_id=constants.ARP_SPOOF_TABLE)
    +
    +    def setup_canary_table(self):
    +        self.install_drop(constants.CANARY_TABLE)
    +
    +    def check_canary_table(self):
    +        try:
    +            flows = self.dump_flows(constants.CANARY_TABLE)
    +        except RuntimeError:
    +            LOG.exception(_LE("Failed to communicate with the switch"))
    +            return constants.OVS_DEAD
    +        if flows == []:
    +            return constants.OVS_RESTARTED
    +        return constants.OVS_NORMAL
    +
    +    @staticmethod
    +    def _local_vlan_match(_ofp, ofpp, port, vlan_vid):
    +        return ofpp.OFPMatch(in_port=port, vlan_vid=vlan_vid)
    +
    +    def provision_local_vlan(self, port, lvid, segmentation_id):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        if segmentation_id is None:
    +            vlan_vid = ofp.OFPVID_NONE
    +            actions = [ofpp.OFPActionPushVlan()]
    +        else:
    +            vlan_vid = segmentation_id | ofp.OFPVID_PRESENT
    +            actions = []
    +        match = self._local_vlan_match(ofp, ofpp, port, vlan_vid)
    +        actions += [
    +            ofpp.OFPActionSetField(vlan_vid=lvid | ofp.OFPVID_PRESENT),
    +            ofpp.OFPActionOutput(ofp.OFPP_NORMAL, 0),
    +        ]
    +        self.install_apply_actions(priority=3,
    +                                   match=match,
    +                                   actions=actions)
    +
    +    def reclaim_local_vlan(self, port, segmentation_id):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        if segmentation_id is None:
    +            vlan_vid = ofp.OFPVID_NONE
    +        else:
    +            vlan_vid = segmentation_id | ofp.OFPVID_PRESENT
    +        match = self._local_vlan_match(ofp, ofpp, port, vlan_vid)
    +        self.delete_flows(match=match)
    +
    +    @staticmethod
    +    def _dvr_to_src_mac_match(ofp, ofpp, vlan_tag, dst_mac):
    +        return ofpp.OFPMatch(vlan_vid=vlan_tag | ofp.OFPVID_PRESENT,
    +                             eth_dst=dst_mac)
    +
    +    @staticmethod
    +    def _dvr_to_src_mac_table_id(network_type):
    +        if network_type == p_const.TYPE_VLAN:
    +            return constants.DVR_TO_SRC_MAC_VLAN
    +        else:
    +            return constants.DVR_TO_SRC_MAC
    +
    +    def install_dvr_to_src_mac(self, network_type,
    +                               vlan_tag, gateway_mac, dst_mac, dst_port):
    +        table_id = self._dvr_to_src_mac_table_id(network_type)
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_to_src_mac_match(ofp, ofpp,
    +                                           vlan_tag=vlan_tag, dst_mac=dst_mac)
    +        actions = [
    +            ofpp.OFPActionPopVlan(),
    +            ofpp.OFPActionSetField(eth_src=gateway_mac),
    +            ofpp.OFPActionOutput(dst_port, 0),
    +        ]
    +        self.install_apply_actions(table_id=table_id,
    +                                   priority=4,
    +                                   match=match,
    +                                   actions=actions)
    +
    +    def delete_dvr_to_src_mac(self, network_type, vlan_tag, dst_mac):
    +        table_id = self._dvr_to_src_mac_table_id(network_type)
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._dvr_to_src_mac_match(ofp, ofpp,
    +                                           vlan_tag=vlan_tag, dst_mac=dst_mac)
    +        self.delete_flows(table_id=table_id, match=match)
    +
    +    def add_dvr_mac_vlan(self, mac, port):
    +        self.install_goto(table_id=constants.LOCAL_SWITCHING,
    +                          priority=4,
    +                          in_port=port,
    +                          eth_src=mac,
    +                          dest_table_id=constants.DVR_TO_SRC_MAC_VLAN)
    +
    +    def remove_dvr_mac_vlan(self, mac):
    +        # REVISIT(yamamoto): match in_port as well?
    +        self.delete_flows(table_id=constants.LOCAL_SWITCHING,
    +                          eth_src=mac)
    +
    +    def add_dvr_mac_tun(self, mac, port):
    +        self.install_goto(table_id=constants.LOCAL_SWITCHING,
    +                          priority=2,
    +                          in_port=port,
    +                          eth_src=mac,
    +                          dest_table_id=constants.DVR_TO_SRC_MAC)
    +
    +    def remove_dvr_mac_tun(self, mac, port):
    +        self.delete_flows(table_id=constants.LOCAL_SWITCHING,
    +                          in_port=port, eth_src=mac)
    +
    +    @staticmethod
    +    def _arp_reply_match(ofp, ofpp, port):
    +        return ofpp.OFPMatch(in_port=port,
    +                             eth_type=ether_types.ETH_TYPE_ARP)
    +
    +    @staticmethod
    +    def _icmpv6_reply_match(ofp, ofpp, port):
    +        return ofpp.OFPMatch(in_port=port,
    +                             eth_type=ether_types.ETH_TYPE_IPV6,
    +                             ip_proto=in_proto.IPPROTO_ICMPV6,
    +                             icmpv6_type=icmpv6.ND_NEIGHBOR_ADVERT)
    +
    +    def install_icmpv6_na_spoofing_protection(self, port, ip_addresses):
    +        # Allow neighbor advertisements as long as they match addresses
    +        # that actually belong to the port.
    +        for ip in ip_addresses:
    +            masked_ip = self._cidr_to_ryu(ip)
    +            self.install_normal(
    +                table_id=constants.ARP_SPOOF_TABLE, priority=2,
    +                eth_type=ether_types.ETH_TYPE_IPV6,
    +                ip_proto=in_proto.IPPROTO_ICMPV6,
    +                icmpv6_type=icmpv6.ND_NEIGHBOR_ADVERT,
    +                ipv6_nd_target=masked_ip, in_port=port)
    +
    +        # Now that the rules are ready, direct icmpv6 neighbor advertisement
    +        # traffic from the port into the anti-spoof table.
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._icmpv6_reply_match(ofp, ofpp, port=port)
    +        self.install_goto(table_id=constants.LOCAL_SWITCHING,
    +                          priority=10,
    +                          match=match,
    +                          dest_table_id=constants.ARP_SPOOF_TABLE)
    +
    +    def install_arp_spoofing_protection(self, port, ip_addresses):
    +        # allow ARP replies as long as they match addresses that actually
    +        # belong to the port.
    +        for ip in ip_addresses:
    +            masked_ip = self._cidr_to_ryu(ip)
    +            self.install_normal(table_id=constants.ARP_SPOOF_TABLE,
    +                                priority=2,
    +                                eth_type=ether_types.ETH_TYPE_ARP,
    +                                arp_spa=masked_ip,
    +                                in_port=port)
    +
    +        # Now that the rules are ready, direct ARP traffic from the port into
    +        # the anti-spoof table.
    +        # This strategy fails gracefully because OVS versions that can't match
    +        # on ARP headers will just process traffic normally.
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._arp_reply_match(ofp, ofpp, port=port)
    +        self.install_goto(table_id=constants.LOCAL_SWITCHING,
    +                          priority=10,
    +                          match=match,
    +                          dest_table_id=constants.ARP_SPOOF_TABLE)
    +
    +    def delete_arp_spoofing_protection(self, port):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._arp_reply_match(ofp, ofpp, port=port)
    +        self.delete_flows(table_id=constants.LOCAL_SWITCHING,
    +                          match=match)
    +        match = self._icmpv6_reply_match(ofp, ofpp, port=port)
    +        self.delete_flows(table_id=constants.LOCAL_SWITCHING,
    +                          match=match)
    +        self.delete_flows(table_id=constants.ARP_SPOOF_TABLE,
    +                          in_port=port)
    
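`provision_local_vlan` above encodes VLAN matches using OpenFlow 1.3 semantics: a tagged packet matches `segmentation_id | OFPVID_PRESENT`, while `OFPVID_NONE` matches untagged traffic. The bit arithmetic, sketched without ryu:

```python
OFPVID_PRESENT = 0x1000  # OpenFlow 1.3: set when a VLAN tag is present
OFPVID_NONE = 0x0000     # matches packets carrying no VLAN tag


def local_vlan_match_vid(segmentation_id):
    """vlan_vid match value, as provision_local_vlan computes it."""
    if segmentation_id is None:
        # flat network: match untagged traffic
        return OFPVID_NONE
    return segmentation_id | OFPVID_PRESENT
```

The same encoding appears throughout these bridge classes, e.g. `vlan_tag | ofp.OFPVID_PRESENT` in the DVR match helpers.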
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_phys.py+67 0 added
    @@ -0,0 +1,67 @@
    +# Copyright (C) 2014,2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2014,2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import br_dvr_process
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import ovs_bridge
    +
    +
    +class OVSPhysicalBridge(ovs_bridge.OVSAgentBridge,
    +                        br_dvr_process.OVSDVRProcessMixin):
    +    """openvswitch agent physical bridge specific logic."""
    +
    +    # Used by OVSDVRProcessMixin
    +    dvr_process_table_id = constants.DVR_PROCESS_VLAN
    +    dvr_process_next_table_id = constants.LOCAL_VLAN_TRANSLATION
    +
    +    def setup_default_table(self):
    +        self.delete_flows()
    +        self.install_normal()
    +
    +    @staticmethod
    +    def _local_vlan_match(ofp, ofpp, port, lvid):
    +        return ofpp.OFPMatch(in_port=port, vlan_vid=lvid | ofp.OFPVID_PRESENT)
    +
    +    def provision_local_vlan(self, port, lvid, segmentation_id, distributed):
    +        table_id = constants.LOCAL_VLAN_TRANSLATION if distributed else 0
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._local_vlan_match(ofp, ofpp, port, lvid)
    +        if segmentation_id is None:
    +            actions = [ofpp.OFPActionPopVlan()]
    +        else:
    +            vlan_vid = segmentation_id | ofp.OFPVID_PRESENT
    +            actions = [ofpp.OFPActionSetField(vlan_vid=vlan_vid)]
    +        actions += [ofpp.OFPActionOutput(ofp.OFPP_NORMAL, 0)]
    +        self.install_apply_actions(table_id=table_id,
    +                                   priority=4,
    +                                   match=match,
    +                                   actions=actions)
    +
    +    def reclaim_local_vlan(self, port, lvid):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._local_vlan_match(ofp, ofpp, port, lvid)
    +        self.delete_flows(match=match)
    +
    +    def add_dvr_mac_vlan(self, mac, port):
    +        self.install_output(table_id=constants.DVR_NOT_LEARN_VLAN,
    +            priority=2, eth_src=mac, port=port)
    +
    +    def remove_dvr_mac_vlan(self, mac):
    +        # REVISIT(yamamoto): match in_port as well?
    +        self.delete_flows(table_id=constants.DVR_NOT_LEARN_VLAN,
    +            eth_src=mac)
    
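br-phys's `provision_local_vlan` chooses its action list by network type: pop the local tag for flat networks (`segmentation_id is None`), otherwise rewrite the VID to the wire segmentation id, then forward normally. A sketch of that decision using illustrative string actions rather than real OpenFlow action objects:

```python
OFPVID_PRESENT = 0x1000  # OpenFlow 1.3 "VLAN tag present" bit


def phys_vlan_actions(segmentation_id):
    """Sketch of br-phys local-VLAN translation (hypothetical strings)."""
    if segmentation_id is None:
        # flat network: strip the local VLAN tag before the wire
        actions = ['pop_vlan']
    else:
        # VLAN network: rewrite the local VID to the segmentation id
        actions = ['set_field:vlan_vid=%d'
                   % (segmentation_id | OFPVID_PRESENT)]
    return actions + ['output:NORMAL']
```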
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_tun.py+288 0 added
    @@ -0,0 +1,288 @@
    +# Copyright (C) 2014,2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2014,2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +# Copyright 2011 VMware, Inc.
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from ryu.lib.packet import arp
    +from ryu.lib.packet import ether_types
    +
    +from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import br_dvr_process
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import ovs_bridge
    +
    +
    +class OVSTunnelBridge(ovs_bridge.OVSAgentBridge,
    +                      br_dvr_process.OVSDVRProcessMixin):
    +    """openvswitch agent tunnel bridge specific logic."""
    +
    +    # Used by OVSDVRProcessMixin
    +    dvr_process_table_id = constants.DVR_PROCESS
    +    dvr_process_next_table_id = constants.PATCH_LV_TO_TUN
    +
    +    def setup_default_table(self, patch_int_ofport, arp_responder_enabled):
    +        (dp, ofp, ofpp) = self._get_dp()
    +
    +        # Table 0 (default) will sort incoming traffic depending on in_port
    +        self.install_goto(dest_table_id=constants.PATCH_LV_TO_TUN,
    +                          priority=1,
    +                          in_port=patch_int_ofport)
    +        self.install_drop()  # default drop
    +
    +        if arp_responder_enabled:
    +            # ARP broadcast-ed request go to the local ARP_RESPONDER table to
    +            # be locally resolved
    +            # REVISIT(yamamoto): add arp_op=arp.ARP_REQUEST matcher?
    +            self.install_goto(dest_table_id=constants.ARP_RESPONDER,
    +                              table_id=constants.PATCH_LV_TO_TUN,
    +                              priority=1,
    +                              eth_dst="ff:ff:ff:ff:ff:ff",
    +                              eth_type=ether_types.ETH_TYPE_ARP)
    +
    +        # PATCH_LV_TO_TUN table will handle packets coming from patch_int
    +        # unicasts go to table UCAST_TO_TUN where remote addresses are learnt
    +        self.install_goto(dest_table_id=constants.UCAST_TO_TUN,
    +                          table_id=constants.PATCH_LV_TO_TUN,
    +                          eth_dst=('00:00:00:00:00:00',
    +                                   '01:00:00:00:00:00'))
    +
    +        # Broadcasts/multicasts go to table FLOOD_TO_TUN that handles flooding
    +        self.install_goto(dest_table_id=constants.FLOOD_TO_TUN,
    +                          table_id=constants.PATCH_LV_TO_TUN,
    +                          eth_dst=('01:00:00:00:00:00',
    +                                   '01:00:00:00:00:00'))
    +
    +        # Tables [tunnel_type]_TUN_TO_LV will set lvid depending on tun_id
    +        # for each tunnel type, and resubmit to table LEARN_FROM_TUN where
    +        # remote mac addresses will be learnt
    +        for tunnel_type in constants.TUNNEL_NETWORK_TYPES:
    +            self.install_drop(table_id=constants.TUN_TABLE[tunnel_type])
    +
    +        # LEARN_FROM_TUN table will have a single flow using a learn action to
    +        # dynamically set-up flows in UCAST_TO_TUN corresponding to remote mac
    +        # addresses (assumes that lvid has already been set by a previous flow)
    +        # Once remote mac addresses are learnt, output packet to patch_int
    +        flow_specs = [
    +            ofpp.NXFlowSpecMatch(src=('vlan_vid', 0),
    +                                 dst=('vlan_vid', 0),
    +                                 n_bits=12),
    +            ofpp.NXFlowSpecMatch(src=('eth_src', 0),
    +                                 dst=('eth_dst', 0),
    +                                 n_bits=48),
    +            ofpp.NXFlowSpecLoad(src=0,
    +                                dst=('vlan_vid', 0),
    +                                n_bits=12),
    +            ofpp.NXFlowSpecLoad(src=('tunnel_id', 0),
    +                                dst=('tunnel_id', 0),
    +                                n_bits=64),
    +            ofpp.NXFlowSpecOutput(src=('in_port', 0),
    +                                  dst='',
    +                                  n_bits=32),
    +        ]
    +        actions = [
    +            ofpp.NXActionLearn(table_id=constants.UCAST_TO_TUN,
    +                               cookie=self.agent_uuid_stamp,
    +                               priority=1,
    +                               hard_timeout=300,
    +                               specs=flow_specs),
    +            ofpp.OFPActionOutput(patch_int_ofport, 0),
    +        ]
    +        self.install_apply_actions(table_id=constants.LEARN_FROM_TUN,
    +                                   priority=1,
    +                                   actions=actions)
    +
    +        # Egress unicast will be handled in table UCAST_TO_TUN, where remote
    +        # mac addresses will be learned. For now, just add a default flow that
    +        # will resubmit unknown unicasts to table FLOOD_TO_TUN to treat them
    +        # as broadcasts/multicasts
    +        self.install_goto(dest_table_id=constants.FLOOD_TO_TUN,
    +                          table_id=constants.UCAST_TO_TUN)
    +
    +        if arp_responder_enabled:
    +            # If none of the ARP entries correspond to the requested IP, the
    +            # broadcast-ed packet is resubmitted to the flooding table
    +            self.install_goto(dest_table_id=constants.FLOOD_TO_TUN,
    +                              table_id=constants.ARP_RESPONDER)
    +
    +        # FLOOD_TO_TUN will handle flooding in tunnels based on lvid,
    +        # for now, add a default drop action
    +        self.install_drop(table_id=constants.FLOOD_TO_TUN)
    +
    +    @staticmethod
    +    def _local_vlan_match(_ofp, ofpp, tun_id):
    +        return ofpp.OFPMatch(tunnel_id=tun_id)
    +
    +    def provision_local_vlan(self, network_type, lvid, segmentation_id,
    +                             distributed=False):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._local_vlan_match(ofp, ofpp, segmentation_id)
    +        table_id = constants.TUN_TABLE[network_type]
    +        if distributed:
    +            dest_table_id = constants.DVR_NOT_LEARN
    +        else:
    +            dest_table_id = constants.LEARN_FROM_TUN
    +        actions = [
    +            ofpp.OFPActionPushVlan(),
    +            ofpp.OFPActionSetField(vlan_vid=lvid | ofp.OFPVID_PRESENT),
    +        ]
    +        instructions = [
    +            ofpp.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions),
    +            ofpp.OFPInstructionGotoTable(table_id=dest_table_id)]
    +        self.install_instructions(table_id=table_id,
    +                                  priority=1,
    +                                  match=match,
    +                                  instructions=instructions)
    +
    +    def reclaim_local_vlan(self, network_type, segmentation_id):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._local_vlan_match(ofp, ofpp, segmentation_id)
    +        table_id = constants.TUN_TABLE[network_type]
    +        self.delete_flows(table_id=table_id, match=match)
    +
    +    @staticmethod
    +    def _flood_to_tun_match(ofp, ofpp, vlan):
    +        return ofpp.OFPMatch(vlan_vid=vlan | ofp.OFPVID_PRESENT)
    +
    +    def install_flood_to_tun(self, vlan, tun_id, ports):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._flood_to_tun_match(ofp, ofpp, vlan)
    +        actions = [ofpp.OFPActionPopVlan(),
    +                   ofpp.OFPActionSetField(tunnel_id=tun_id)]
    +        for port in ports:
    +            actions.append(ofpp.OFPActionOutput(port, 0))
    +        self.install_apply_actions(table_id=constants.FLOOD_TO_TUN,
    +                                   priority=1,
    +                                   match=match,
    +                                   actions=actions)
    +
    +    def delete_flood_to_tun(self, vlan):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._flood_to_tun_match(ofp, ofpp, vlan)
    +        self.delete_flows(table_id=constants.FLOOD_TO_TUN, match=match)
    +
    +    @staticmethod
    +    def _unicast_to_tun_match(ofp, ofpp, vlan, mac):
    +        return ofpp.OFPMatch(vlan_vid=vlan | ofp.OFPVID_PRESENT, eth_dst=mac)
    +
    +    def install_unicast_to_tun(self, vlan, tun_id, port, mac):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        match = self._unicast_to_tun_match(ofp, ofpp, vlan, mac)
    +        actions = [ofpp.OFPActionPopVlan(),
    +                   ofpp.OFPActionSetField(tunnel_id=tun_id),
    +                   ofpp.OFPActionOutput(port, 0)]
    +        self.install_apply_actions(table_id=constants.UCAST_TO_TUN,
    +                                   priority=2,
    +                                   match=match,
    +                                   actions=actions)
    +
    +    def delete_unicast_to_tun(self, vlan, mac):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        if mac is None:
    +            match = ofpp.OFPMatch(vlan_vid=vlan | ofp.OFPVID_PRESENT)
    +        else:
    +            match = self._unicast_to_tun_match(ofp, ofpp, vlan, mac)
    +        self.delete_flows(table_id=constants.UCAST_TO_TUN, match=match)
    +
    +    @staticmethod
    +    def _arp_responder_match(ofp, ofpp, vlan, ip):
    +        # REVISIT(yamamoto): add arp_op=arp.ARP_REQUEST matcher?
    +        return ofpp.OFPMatch(vlan_vid=vlan | ofp.OFPVID_PRESENT,
    +                             eth_type=ether_types.ETH_TYPE_ARP,
    +                             arp_tpa=ip)
    +
    +    def install_arp_responder(self, vlan, ip, mac):
    +        (dp, ofp, ofpp) = self._get_dp()
    +        match = self._arp_responder_match(ofp, ofpp, vlan, ip)
    +        actions = [ofpp.OFPActionSetField(arp_op=arp.ARP_REPLY),
    +                   ofpp.NXActionRegMove(src_field='arp_sha',
    +                                        dst_field='arp_tha',
    +                                        n_bits=48),
    +                   ofpp.NXActionRegMove(src_field='arp_spa',
    +                                        dst_field='arp_tpa',
    +                                        n_bits=32),
    +                   ofpp.OFPActionSetField(arp_sha=mac),
    +                   ofpp.OFPActionSetField(arp_spa=ip),
    +                   ofpp.OFPActionOutput(ofp.OFPP_IN_PORT, 0)]
    +        self.install_apply_actions(table_id=constants.ARP_RESPONDER,
    +                                   priority=1,
    +                                   match=match,
    +                                   actions=actions)
    +
    +    def delete_arp_responder(self, vlan, ip):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        if ip is None:
    +            # REVISIT(yamamoto): add arp_op=arp.ARP_REQUEST matcher?
    +            match = ofpp.OFPMatch(vlan_vid=vlan | ofp.OFPVID_PRESENT,
    +                                  eth_type=ether_types.ETH_TYPE_ARP)
    +        else:
    +            match = self._arp_responder_match(ofp, ofpp, vlan, ip)
    +        self.delete_flows(table_id=constants.ARP_RESPONDER, match=match)
    +
    +    def setup_tunnel_port(self, network_type, port):
    +        self.install_goto(dest_table_id=constants.TUN_TABLE[network_type],
    +                          priority=1,
    +                          in_port=port)
    +
    +    def cleanup_tunnel_port(self, port):
    +        self.delete_flows(in_port=port)
    +
    +    def add_dvr_mac_tun(self, mac, port):
    +        self.install_output(table_id=constants.DVR_NOT_LEARN,
    +                            priority=1,
    +                            eth_src=mac,
    +                            port=port)
    +
    +    def remove_dvr_mac_tun(self, mac):
    +        # REVISIT(yamamoto): match in_port as well?
    +        self.delete_flows(table_id=constants.DVR_NOT_LEARN,
    +                          eth_src=mac)
    +
    +    def deferred(self):
    +        # REVISIT(yamamoto): This is for API compat with "ovs-ofctl"
    +        # interface.  Consider removing this mechanism when obsoleting
    +        # "ovs-ofctl" interface.
    +        # For "ovs-ofctl" interface, "deferred" mechanism would improve
    +        # performance by batching flow-mods with a single ovs-ofctl command
    +        # invocation.
    +        # On the other hand, for this "native" interface, the overhead of
    +        # each flow-mod is already minimal and batching doesn't buy much.
    +        # Thus this method is left as a no-op.
    +        # It might be possible to send multiple flow-mods with a single
    +        # barrier, but it's unclear whether that level of optimization is
    +        # worthwhile, and it would certainly complicate error handling.
    +        return self
    +
    +    def __enter__(self):
    +        # REVISIT(yamamoto): See the comment on deferred().
    +        return self
    +
    +    def __exit__(self, exc_type, exc_value, traceback):
    +        # REVISIT(yamamoto): See the comment on deferred().
    +        pass
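The action list in `install_arp_responder` above rewrites an ARP request into a reply in place: the requester's hardware/protocol addresses are moved into the target fields before the sender fields are overwritten with the responder's, and the packet is sent back out its ingress port. A minimal stdlib sketch of that field shuffle (a plain-dict model, not the Ryu API; ordering matters, as in the OpenFlow action list):

```python
# Illustrative model of the OFPActionSetField/NXActionRegMove chain in
# install_arp_responder(): turn a request into a reply in place.
ARP_REQUEST, ARP_REPLY = 1, 2

def respond(pkt, responder_ip, responder_mac):
    """Apply the same rewrites as the flow's action list: the requester
    becomes the target, the responder becomes the sender."""
    pkt = dict(pkt)                    # don't mutate the caller's packet
    pkt['arp_op'] = ARP_REPLY          # OFPActionSetField(arp_op=ARP_REPLY)
    pkt['arp_tha'] = pkt['arp_sha']    # NXActionRegMove arp_sha -> arp_tha
    pkt['arp_tpa'] = pkt['arp_spa']    # NXActionRegMove arp_spa -> arp_tpa
    pkt['arp_sha'] = responder_mac     # OFPActionSetField(arp_sha=mac)
    pkt['arp_spa'] = responder_ip      # OFPActionSetField(arp_spa=ip)
    return pkt                         # then OFPActionOutput(OFPP_IN_PORT)

request = {'arp_op': ARP_REQUEST,
           'arp_sha': 'fa:16:3e:00:00:01', 'arp_spa': '10.0.0.5',
           'arp_tha': '00:00:00:00:00:00', 'arp_tpa': '10.0.0.1'}
reply = respond(request, '10.0.0.1', 'fa:16:3e:00:00:02')
```

Note that `arp_tha` must be copied from `arp_sha` before `arp_sha` is overwritten, which is why the reg-move actions precede the set-field actions in the flow.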
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/__init__.py+0 0 renamed
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py+37 0 added
    @@ -0,0 +1,37 @@
    +# Copyright (C) 2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from oslo_config import cfg
    +from ryu.base import app_manager
    +from ryu import cfg as ryu_cfg
    +
    +
    +cfg.CONF.import_group(
    +    'OVS',
    +    'neutron.plugins.ml2.drivers.openvswitch.agent.common.config')
    +
    +
    +def init_config():
    +    ryu_cfg.CONF(project='ryu', args=[])
    +    ryu_cfg.CONF.ofp_listen_host = cfg.CONF.OVS.of_listen_address
    +    ryu_cfg.CONF.ofp_tcp_listen_port = cfg.CONF.OVS.of_listen_port
    +
    +
    +def main():
    +    app_manager.AppManager.run_apps([
    +        'neutron.plugins.ml2.drivers.openvswitch.agent.'
    +        'openflow.native.ovs_ryuapp',
    +    ])
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py+202 0 added
    @@ -0,0 +1,202 @@
    +# Copyright (C) 2014,2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2014,2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +import eventlet
    +import netaddr
    +from oslo_config import cfg
    +from oslo_log import log as logging
    +from oslo_utils import excutils
    +from oslo_utils import timeutils
    +import ryu.app.ofctl.api as ofctl_api
    +import ryu.exception as ryu_exc
    +
    +from neutron.i18n import _LE, _LW
    +
    +LOG = logging.getLogger(__name__)
    +
    +
    +class OpenFlowSwitchMixin(object):
    +    """Mixin providing common convenience routines for an OpenFlow switch.
    +
    +    NOTE(yamamoto): super() points to ovs_lib.OVSBridge.
    +    See ovs_bridge.py how this class is actually used.
    +    """
    +
    +    @staticmethod
    +    def _cidr_to_ryu(ip):
    +        n = netaddr.IPNetwork(ip)
    +        if n.hostmask:
    +            return (str(n.ip), str(n.netmask))
    +        return str(n.ip)
    +
    +    def __init__(self, *args, **kwargs):
    +        self._app = kwargs.pop('ryu_app')
    +        super(OpenFlowSwitchMixin, self).__init__(*args, **kwargs)
    +
    +    def _get_dp_by_dpid(self, dpid_int):
    +        """Get Ryu datapath object for the switch."""
    +        timeout_sec = cfg.CONF.OVS.of_connect_timeout
    +        start_time = timeutils.now()
    +        while True:
    +            dp = ofctl_api.get_datapath(self._app, dpid_int)
    +            if dp is not None:
    +                break
    +            # The switch has not yet established a connection to us.
    +            # Wait a little and retry.
    +            if timeutils.now() > start_time + timeout_sec:
    +                m = _LE("Switch connection timeout")
    +                LOG.error(m)
    +                # NOTE(yamamoto): use RuntimeError for compat with ovs_lib
    +                raise RuntimeError(m)
    +            eventlet.sleep(1)
    +        return dp
    +
    +    def _send_msg(self, msg, reply_cls=None, reply_multi=False):
    +        timeout_sec = cfg.CONF.OVS.of_request_timeout
    +        timeout = eventlet.timeout.Timeout(seconds=timeout_sec)
    +        try:
    +            result = ofctl_api.send_msg(self._app, msg, reply_cls, reply_multi)
    +        except ryu_exc.RyuException as e:
    +            m = _LE("ofctl request %(request)s error %(error)s") % {
    +                "request": msg,
    +                "error": e,
    +            }
    +            LOG.error(m)
    +            # NOTE(yamamoto): use RuntimeError for compat with ovs_lib
    +            raise RuntimeError(m)
    +        except eventlet.timeout.Timeout as e:
    +            with excutils.save_and_reraise_exception() as ctx:
    +                if e is timeout:
    +                    ctx.reraise = False
    +                    m = _LE("ofctl request %(request)s timed out") % {
    +                        "request": msg,
    +                    }
    +                    LOG.error(m)
    +                    # NOTE(yamamoto): use RuntimeError for compat with ovs_lib
    +                    raise RuntimeError(m)
    +        finally:
    +            timeout.cancel()
    +        LOG.debug("ofctl request %(request)s result %(result)s",
    +                  {"request": msg, "result": result})
    +        return result
    +
    +    @staticmethod
    +    def _match(_ofp, ofpp, match, **match_kwargs):
    +        if match is not None:
    +            return match
    +        return ofpp.OFPMatch(**match_kwargs)
    +
    +    def delete_flows(self, table_id=None, strict=False, priority=0,
    +                     cookie=0, cookie_mask=0,
    +                     match=None, **match_kwargs):
    +        (dp, ofp, ofpp) = self._get_dp()
    +        if table_id is None:
    +            table_id = ofp.OFPTT_ALL
    +        match = self._match(ofp, ofpp, match, **match_kwargs)
    +        if strict:
    +            cmd = ofp.OFPFC_DELETE_STRICT
    +        else:
    +            cmd = ofp.OFPFC_DELETE
    +        msg = ofpp.OFPFlowMod(dp,
    +                              command=cmd,
    +                              cookie=cookie,
    +                              cookie_mask=cookie_mask,
    +                              table_id=table_id,
    +                              match=match,
    +                              priority=priority,
    +                              out_group=ofp.OFPG_ANY,
    +                              out_port=ofp.OFPP_ANY)
    +        self._send_msg(msg)
    +
    +    def dump_flows(self, table_id=None):
    +        (dp, ofp, ofpp) = self._get_dp()
    +        if table_id is None:
    +            table_id = ofp.OFPTT_ALL
    +        msg = ofpp.OFPFlowStatsRequest(dp, table_id=table_id)
    +        replies = self._send_msg(msg,
    +                                 reply_cls=ofpp.OFPFlowStatsReply,
    +                                 reply_multi=True)
    +        flows = []
    +        for rep in replies:
    +            flows += rep.body
    +        return flows
    +
    +    def cleanup_flows(self):
    +        cookies = set([f.cookie for f in self.dump_flows()])
    +        for c in cookies:
    +            if c == self.agent_uuid_stamp:
    +                continue
    +            LOG.warn(_LW("Deleting flow with cookie 0x%(cookie)x") % {
    +                'cookie': c})
    +            self.delete_flows(cookie=c, cookie_mask=((1 << 64) - 1))
    +
    +    def install_goto_next(self, table_id):
    +        self.install_goto(table_id=table_id, dest_table_id=table_id + 1)
    +
    +    def install_output(self, port, table_id=0, priority=0,
    +                       match=None, **match_kwargs):
    +        (_dp, ofp, ofpp) = self._get_dp()
    +        actions = [ofpp.OFPActionOutput(port, 0)]
    +        instructions = [ofpp.OFPInstructionActions(
    +                        ofp.OFPIT_APPLY_ACTIONS, actions)]
    +        self.install_instructions(table_id=table_id, priority=priority,
    +                                  instructions=instructions,
    +                                  match=match, **match_kwargs)
    +
    +    def install_normal(self, table_id=0, priority=0,
    +                       match=None, **match_kwargs):
    +        (_dp, ofp, _ofpp) = self._get_dp()
    +        self.install_output(port=ofp.OFPP_NORMAL,
    +                            table_id=table_id, priority=priority,
    +                            match=match, **match_kwargs)
    +
    +    def install_goto(self, dest_table_id, table_id=0, priority=0,
    +                     match=None, **match_kwargs):
    +        (_dp, _ofp, ofpp) = self._get_dp()
    +        instructions = [ofpp.OFPInstructionGotoTable(table_id=dest_table_id)]
    +        self.install_instructions(table_id=table_id, priority=priority,
    +                                  instructions=instructions,
    +                                  match=match, **match_kwargs)
    +
    +    def install_drop(self, table_id=0, priority=0, match=None, **match_kwargs):
    +        self.install_instructions(table_id=table_id, priority=priority,
    +                                  instructions=[], match=match, **match_kwargs)
    +
    +    def install_instructions(self, instructions,
    +                             table_id=0, priority=0,
    +                             match=None, **match_kwargs):
    +        (dp, ofp, ofpp) = self._get_dp()
    +        match = self._match(ofp, ofpp, match, **match_kwargs)
    +        msg = ofpp.OFPFlowMod(dp,
    +                              table_id=table_id,
    +                              cookie=self.agent_uuid_stamp,
    +                              match=match,
    +                              priority=priority,
    +                              instructions=instructions)
    +        self._send_msg(msg)
    +
    +    def install_apply_actions(self, actions,
    +                              table_id=0, priority=0,
    +                              match=None, **match_kwargs):
    +        (dp, ofp, ofpp) = self._get_dp()
    +        instructions = [
    +            ofpp.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions),
    +        ]
    +        self.install_instructions(table_id=table_id,
    +                                  priority=priority,
    +                                  match=match,
    +                                  instructions=instructions,
    +                                  **match_kwargs)
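Every flow this mixin installs carries the agent's per-run cookie (`agent_uuid_stamp`), which is what lets `cleanup_flows()` sweep away flows left behind by a previous agent run: any cookie other than the current stamp is treated as stale and deleted wholesale. A stdlib sketch of that cookie filter (function and field names are illustrative, not the Ryu API):

```python
# Illustrative model of cleanup_flows(): collect the distinct cookies on
# the switch and delete every one that is not the agent's own stamp.
def stale_cookies(flows, agent_uuid_stamp):
    """Return the set of cookies cleanup_flows() would delete."""
    return {f['cookie'] for f in flows} - {agent_uuid_stamp}

flows = [{'cookie': 0xdead, 'match': 'in_port=1'},   # old agent run
         {'cookie': 0xbeef, 'match': 'in_port=2'},   # current run
         {'cookie': 0xdead, 'match': 'in_port=3'}]   # old agent run
doomed = stale_cookies(flows, agent_uuid_stamp=0xbeef)
```

In the real method each doomed cookie is then passed to `delete_flows()` with a full 64-bit `cookie_mask`, so only exact cookie matches are removed.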
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py+79 0 added
    @@ -0,0 +1,79 @@
    +# Copyright (C) 2014,2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2014,2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from oslo_log import log as logging
    +from oslo_utils import excutils
    +
    +from neutron.agent.common import ovs_lib
    +from neutron.i18n import _LI
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import ofswitch
    +
    +
    +LOG = logging.getLogger(__name__)
    +
    +
    +class OVSAgentBridge(ofswitch.OpenFlowSwitchMixin, ovs_lib.OVSBridge):
    +    """Common code for bridges used by OVS agent"""
    +
    +    _cached_dpid = None
    +
    +    def _get_dp(self):
    +        """Get (dp, ofp, ofpp) tuple for the switch.
    +
    +        A convenient method for openflow message composers.
    +        """
    +        while True:
    +            dpid_int = self._cached_dpid
    +            if dpid_int is None:
    +                dpid_str = self.get_datapath_id()
    +                LOG.info(_LI("Bridge %(br_name)s has datapath-ID %(dpid)s"),
    +                         {"br_name": self.br_name, "dpid": dpid_str})
    +                dpid_int = int(dpid_str, 16)
    +            try:
    +                dp = self._get_dp_by_dpid(dpid_int)
    +            except RuntimeError:
    +                with excutils.save_and_reraise_exception() as ctx:
    +                    self._cached_dpid = None
    +                    # Retry if the dpid has changed.
    +                    # NOTE(yamamoto): Open vSwitch changes its dpid on
    +                    # certain events.
    +                    # REVISIT(yamamoto): Consider setting the dpid
    +                    # statically.
    +                    new_dpid_str = self.get_datapath_id()
    +                    if new_dpid_str != dpid_str:
    +                        LOG.info(_LI("Bridge %(br_name)s changed its "
    +                                     "datapath-ID from %(old)s to %(new)s"), {
    +                            "br_name": self.br_name,
    +                            "old": dpid_str,
    +                            "new": new_dpid_str,
    +                        })
    +                        ctx.reraise = False
    +            else:
    +                self._cached_dpid = dpid_int
    +                return dp, dp.ofproto, dp.ofproto_parser
    +
    +    def setup_controllers(self, conf):
    +        controllers = [
    +            "tcp:%(address)s:%(port)s" % {
    +                "address": conf.OVS.of_listen_address,
    +                "port": conf.OVS.of_listen_port,
    +            }
    +        ]
    +        self.set_protocols("OpenFlow13")
    +        self.set_controller(controllers)
    +
    +    def drop_port(self, in_port):
    +        self.install_drop(priority=2, in_port=in_port)
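The `_get_dp()` loop above caches the bridge's datapath-ID and, on a connection failure, re-reads it: if the dpid moved under the agent (Open vSwitch can change it at runtime) the lookup is retried with the new value; if it is unchanged, the failure is real and is re-raised. A stdlib sketch of that retry policy, with a hypothetical `FakeBridge` standing in for `ovs_lib.OVSBridge`:

```python
# Sketch of the _get_dp() retry logic: only retry the controller lookup
# when the bridge's datapath-ID actually changed; otherwise propagate
# the failure.
class FakeBridge:
    def __init__(self, dpids, reachable_dpid):
        self._dpids = iter(dpids)      # successive get_datapath_id() results
        self._ok = reachable_dpid      # the only dpid connect() accepts
    def get_datapath_id(self):
        return next(self._dpids)
    def connect(self, dpid_int):       # stands in for _get_dp_by_dpid()
        if dpid_int != self._ok:
            raise RuntimeError("Switch connection timeout")
        return dpid_int

def get_dp(br):
    dpid_str = br.get_datapath_id()
    while True:
        try:
            return br.connect(int(dpid_str, 16))
        except RuntimeError:
            new_dpid_str = br.get_datapath_id()
            if new_dpid_str == dpid_str:
                raise                   # dpid unchanged: a real failure
            dpid_str = new_dpid_str     # dpid moved under us: retry
```

The real method additionally caches the successful dpid across calls and invalidates the cache on failure; the sketch keeps only the retry decision.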
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py+50 0 added
    @@ -0,0 +1,50 @@
    +# Copyright (C) 2015 VA Linux Systems Japan K.K.
    +# Copyright (C) 2015 YAMAMOTO Takashi <yamamoto at valinux co jp>
    +# All Rights Reserved.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +import functools
    +
    +import ryu.app.ofctl.api  # noqa
    +from ryu.base import app_manager
    +from ryu.lib import hub
    +from ryu.ofproto import ofproto_v1_3
    +
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import br_int
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import br_phys
    +from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native \
    +    import br_tun
    +from neutron.plugins.ml2.drivers.openvswitch.agent \
    +    import ovs_neutron_agent as ovs_agent
    +
    +
    +class OVSNeutronAgentRyuApp(app_manager.RyuApp):
    +    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    +
    +    def start(self):
    +        # Start Ryu event loop thread
    +        super(OVSNeutronAgentRyuApp, self).start()
    +
    +        def _make_br_cls(br_cls):
    +            return functools.partial(br_cls, ryu_app=self)
    +
    +        # Start agent main loop thread
    +        bridge_classes = {
    +            'br_int': _make_br_cls(br_int.OVSIntegrationBridge),
    +            'br_phys': _make_br_cls(br_phys.OVSPhysicalBridge),
    +            'br_tun': _make_br_cls(br_tun.OVSTunnelBridge),
    +        }
    +        return hub.spawn(ovs_agent.main, bridge_classes)
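`start()` pre-binds the Ryu application into each bridge class with `functools.partial`, so the agent main loop can instantiate bridges by name without ever seeing the Ryu app object. A stdlib sketch of that binding pattern (the `Bridge` class here is hypothetical, standing in for the `br_int`/`br_phys`/`br_tun` classes):

```python
# The agent receives partially-applied constructors: calling one looks
# like constructing a plain bridge, but ryu_app is already filled in.
import functools

class Bridge:
    def __init__(self, br_name, ryu_app):
        self.br_name = br_name
        self.ryu_app = ryu_app

app = object()                               # stands in for the RyuApp
make_br = functools.partial(Bridge, ryu_app=app)
br = make_br('br-int')                       # ryu_app supplied implicitly
```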
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_int.py+21 1 modified
    @@ -18,7 +18,7 @@
     * references
     ** OVS agent https://wiki.openstack.org/wiki/Ovs-flow-logic
     """
    -
    +from neutron.common import constants as const
     from neutron.plugins.common import constants as p_const
     from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
     from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl \
    @@ -110,6 +110,23 @@ def remove_dvr_mac_tun(self, mac, port):
             self.delete_flows(table_id=constants.LOCAL_SWITCHING,
                               in_port=port, eth_src=mac)
     
    +    def install_icmpv6_na_spoofing_protection(self, port, ip_addresses):
    +        # Allow neighbor advertisements as long as they match addresses
    +        # that actually belong to the port.
    +        for ip in ip_addresses:
    +            self.install_normal(
    +                table_id=constants.ARP_SPOOF_TABLE, priority=2,
    +                dl_type=const.ETHERTYPE_IPV6, nw_proto=const.PROTO_NUM_ICMP_V6,
    +                icmp_type=const.ICMPV6_TYPE_NA, nd_target=ip, in_port=port)
    +
    +        # Now that the rules are ready, direct icmpv6 neighbor advertisement
    +        # traffic from the port into the anti-spoof table.
    +        self.add_flow(table=constants.LOCAL_SWITCHING,
    +                      priority=10, dl_type=const.ETHERTYPE_IPV6,
    +                      nw_proto=const.PROTO_NUM_ICMP_V6,
    +                      icmp_type=const.ICMPV6_TYPE_NA, in_port=port,
    +                      actions=("resubmit(,%s)" % constants.ARP_SPOOF_TABLE))
    +
         def install_arp_spoofing_protection(self, port, ip_addresses):
             # allow ARPs as long as they match addresses that actually
             # belong to the port.
    @@ -129,5 +146,8 @@ def install_arp_spoofing_protection(self, port, ip_addresses):
         def delete_arp_spoofing_protection(self, port):
             self.delete_flows(table_id=constants.LOCAL_SWITCHING,
                               in_port=port, proto='arp')
    +        self.delete_flows(table_id=constants.LOCAL_SWITCHING,
    +                          in_port=port, nw_proto=const.PROTO_NUM_ICMP_V6,
    +                          icmp_type=const.ICMPV6_TYPE_NA)
             self.delete_flows(table_id=constants.ARP_SPOOF_TABLE,
                               in_port=port)
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_tun.py+1 1 modified
    @@ -62,7 +62,7 @@ def setup_default_table(self, patch_int_ofport, arp_responder_enabled):
                 if arp_responder_enabled:
                     # ARP broadcast-ed request go to the local ARP_RESPONDER
                     # table to be locally resolved
    -                # REVISIT(yamamoto): arp_op=arp.ARP_REQUEST
    +                # REVISIT(yamamoto): add arp_op=arp.ARP_REQUEST matcher?
                     deferred_br.add_flow(table=constants.PATCH_LV_TO_TUN,
                                          priority=1,
                                          proto='arp',
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py+0 3 modified
    @@ -352,9 +352,6 @@ def dvr_mac_address_update(self, dvr_macs):
         def in_distributed_mode(self):
             return self.dvr_mac_address is not None
     
    -    def is_dvr_router_interface(self, device_owner):
    -        return device_owner == n_const.DEVICE_OWNER_DVR_INTERFACE
    -
         def process_tunneled_network(self, network_type, lvid, segmentation_id):
             self.tun_br.provision_local_vlan(
                 network_type=network_type,
    
  • neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py+99 43 modified
    @@ -40,6 +40,7 @@
     from neutron.common import config
     from neutron.common import constants as n_const
     from neutron.common import exceptions
    +from neutron.common import ipv6_utils as ipv6
     from neutron.common import topics
     from neutron.common import utils as n_utils
     from neutron import context
    @@ -96,6 +97,10 @@ class OVSPluginApi(agent_rpc.PluginApi):
         pass
     
     
    +def has_zero_prefixlen_address(ip_addresses):
    +    return any(netaddr.IPNetwork(ip).prefixlen == 0 for ip in ip_addresses)
    +
    +
     class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
                           l2population_rpc.L2populationRpcCallBackTunnelMixin,
                           dvr_rpc.DVRAgentRpcCallbackMixin):
    @@ -254,6 +259,7 @@ def __init__(self, bridge_classes, integ_br, tun_br, local_ip,
             self.tunnel_count = 0
             self.vxlan_udp_port = self.conf.AGENT.vxlan_udp_port
             self.dont_fragment = self.conf.AGENT.dont_fragment
    +        self.tunnel_csum = cfg.CONF.AGENT.tunnel_csum
             self.tun_br = None
             self.patch_int_ofport = constants.OFPORT_INVALID
             self.patch_tun_ofport = constants.OFPORT_INVALID
    @@ -351,7 +357,8 @@ def _restore_local_vlan_map(self):
                     self.provision_local_vlan(local_vlan_map['net_uuid'],
                                               local_vlan_map['network_type'],
                                               local_vlan_map['physical_network'],
    -                                          local_vlan_map['segmentation_id'],
    +                                          int(local_vlan_map[
    +                                              'segmentation_id']),
                                               local_vlan)
     
         def setup_rpc(self):
    @@ -375,8 +382,7 @@ def setup_rpc(self):
                          [topics.DVR, topics.UPDATE],
                          [topics.NETWORK, topics.UPDATE]]
             if self.l2_pop:
    -            consumers.append([topics.L2POPULATION,
    -                              topics.UPDATE, self.conf.host])
    +            consumers.append([topics.L2POPULATION, topics.UPDATE])
             self.connection = agent_rpc.create_consumers(self.endpoints,
                                                          self.topic,
                                                          consumers,
    @@ -432,21 +438,24 @@ def process_deleted_ports(self, port_info):
             # they are already gone
             if 'removed' in port_info:
                 self.deleted_ports -= port_info['removed']
    +        deleted_ports = list(self.deleted_ports)
             while self.deleted_ports:
                 port_id = self.deleted_ports.pop()
    -            # Flush firewall rules and move to dead VLAN so deleted ports no
    -            # longer have access to the network
    -            self.sg_agent.remove_devices_filter([port_id])
                 port = self.int_br.get_vif_port_by_id(port_id)
                 self._clean_network_ports(port_id)
                 self.ext_manager.delete_port(self.context,
                                              {"vif_port": port,
                                               "port_id": port_id})
    +            # move to dead VLAN so deleted ports no
    +            # longer have access to the network
                 if port:
                     # don't log errors since there is a chance someone will be
                     # removing the port from the bridge at the same time
                     self.port_dead(port, log_errors=False)
                 self.port_unbound(port_id)
    +        # Flush firewall rules only after the ports have been moved to
    +        # the dead VLAN, so a deleted port is never left unfiltered
    +        self.sg_agent.remove_devices_filter(deleted_ports)
     
         def tunnel_update(self, context, **kwargs):
             LOG.debug("tunnel_update received")
    @@ -571,8 +580,12 @@ def setup_entry_for_arp_reply(self, br, action, local_vid, mac_address,
             if not self.arp_responder_enabled:
                 return
     
    +        ip = netaddr.IPAddress(ip_address)
    +        if ip.version == 6:
    +            return
    +
    +        ip = str(ip)
             mac = str(netaddr.EUI(mac_address, dialect=_mac_mydialect))
    -        ip = str(netaddr.IPAddress(ip_address))
     
             if action == 'add':
                 br.install_arp_responder(local_vid, ip, mac)
    @@ -792,7 +805,7 @@ def _bind_devices(self, need_binding_ports):
             devices_down = []
             port_names = [p['vif_port'].port_name for p in need_binding_ports]
             port_info = self.int_br.get_ports_attributes(
    -            "Port", columns=["name", "tag"], ports=port_names)
    +            "Port", columns=["name", "tag"], ports=port_names, if_exists=True)
             tags_by_name = {x['name']: x['tag'] for x in port_info}
             for port_detail in need_binding_ports:
                 lvm = self.local_vlan_map.get(port_detail['network_id'])
    @@ -804,6 +817,10 @@ def _bind_devices(self, need_binding_ports):
                 device = port_detail['device']
                 # Do not bind a port if it's already bound
                 cur_tag = tags_by_name.get(port.port_name)
    +            if cur_tag is None:
    +                LOG.info(_LI("Port %s was deleted concurrently, skipping it"),
    +                         port.port_name)
    +                continue
                 if cur_tag != lvm.vlan:
                     self.int_br.delete_flows(in_port=port.ofport)
                 if self.prevent_arp_spoofing:
    @@ -849,21 +866,41 @@ def setup_arp_spoofing_protection(bridge, vif, port_details):
                 LOG.info(_LI("Skipping ARP spoofing rules for port '%s' because "
                              "it has port security disabled"), vif.port_name)
                 return
    +        if port_details['device_owner'].startswith('network:'):
    +            LOG.debug("Skipping ARP spoofing rules for network owned port "
    +                      "'%s'.", vif.port_name)
    +            return
             # collect all of the addresses and cidrs that belong to the port
             addresses = {f['ip_address'] for f in port_details['fixed_ips']}
    +        mac_addresses = {vif.vif_mac}
             if port_details.get('allowed_address_pairs'):
                 addresses |= {p['ip_address']
                               for p in port_details['allowed_address_pairs']}
    -
    -        addresses = {ip for ip in addresses
    -                     if netaddr.IPNetwork(ip).version == 4}
    -        if any(netaddr.IPNetwork(ip).prefixlen == 0 for ip in addresses):
    -            # don't try to install protection because a /0 prefix allows any
    -            # address anyway and the ARP_SPA can only match on /1 or more.
    -            return
    -
    -        bridge.install_arp_spoofing_protection(port=vif.ofport,
    -                                               ip_addresses=addresses)
    +            mac_addresses |= {p['mac_address']
    +                              for p in port_details['allowed_address_pairs']
    +                              if p.get('mac_address')}
    +
    +        ipv6_addresses = {ip for ip in addresses
    +                          if netaddr.IPNetwork(ip).version == 6}
    +        # Allow neighbor advertisements for LLA address.
    +        ipv6_addresses |= {str(ipv6.get_ipv6_addr_by_EUI64(
    +                               n_const.IPV6_LLA_PREFIX, mac))
    +                           for mac in mac_addresses}
    +        if not has_zero_prefixlen_address(ipv6_addresses):
    +            # Install protection only when prefix is not zero because a /0
    +            # prefix allows any address anyway and the nd_target can only
    +            # match on /1 or more.
    +            bridge.install_icmpv6_na_spoofing_protection(port=vif.ofport,
    +                ip_addresses=ipv6_addresses)
    +
    +        ipv4_addresses = {ip for ip in addresses
    +                          if netaddr.IPNetwork(ip).version == 4}
    +        if not has_zero_prefixlen_address(ipv4_addresses):
    +            # Install protection only when prefix is not zero because a /0
    +            # prefix allows any address anyway and the ARP_SPA can only
    +            # match on /1 or more.
    +            bridge.install_arp_spoofing_protection(port=vif.ofport,
    +                                                   ip_addresses=ipv4_addresses)
     
         def port_unbound(self, vif_id, net_uuid=None):
             '''Unbind port.
    @@ -1005,9 +1042,13 @@ def get_peer_name(self, prefix, name):
             # Leave part of the bridge name on for easier identification
             hashlen = 6
             namelen = n_const.DEVICE_NAME_MAX_LEN - len(prefix) - hashlen
    +        if isinstance(name, six.text_type):
    +            hashed_name = hashlib.sha1(name.encode('utf-8'))
    +        else:
    +            hashed_name = hashlib.sha1(name)
             new_name = ('%(prefix)s%(truncated)s%(hash)s' %
                         {'prefix': prefix, 'truncated': name[0:namelen],
    -                     'hash': hashlib.sha1(name).hexdigest()[0:hashlen]})
    +                     'hash': hashed_name.hexdigest()[0:hashlen]})
             LOG.warning(_LW("Creating an interface named %(name)s exceeds the "
                             "%(limit)d character limitation. It was shortened to "
                             "%(new_name)s to fit."),
    @@ -1144,25 +1185,29 @@ def _get_ofport_moves(current, previous):
                     port_moves.append(name)
             return port_moves
     
    -    def _get_port_info(self, registered_ports, cur_ports):
    +    def _get_port_info(self, registered_ports, cur_ports,
    +                       readd_registered_ports):
             port_info = {'current': cur_ports}
             # FIXME(salv-orlando): It's not really necessary to return early
             # if nothing has changed.
    -        if cur_ports == registered_ports:
    -            # No added or removed ports to set, just return here
    +        if not readd_registered_ports and cur_ports == registered_ports:
                 return port_info
    -        port_info['added'] = cur_ports - registered_ports
    -        # Remove all the known ports not found on the integration bridge
    +
    +        if readd_registered_ports:
    +            port_info['added'] = cur_ports
    +        else:
    +            port_info['added'] = cur_ports - registered_ports
    +        # Update port_info with ports not found on the integration bridge
             port_info['removed'] = registered_ports - cur_ports
             return port_info
     
    -    def scan_ports(self, registered_ports, updated_ports=None):
    +    def scan_ports(self, registered_ports, sync, updated_ports=None):
             cur_ports = self.int_br.get_vif_port_set()
             self.int_br_device_count = len(cur_ports)
    -        port_info = self._get_port_info(registered_ports, cur_ports)
    +        port_info = self._get_port_info(registered_ports, cur_ports, sync)
             if updated_ports is None:
                 updated_ports = set()
    -        updated_ports.update(self.check_changed_vlans(registered_ports))
    +        updated_ports.update(self.check_changed_vlans())
             if updated_ports:
                 # Some updated ports might have been removed in the
                 # meanwhile, and therefore should not be processed.
    @@ -1173,13 +1218,13 @@ def scan_ports(self, registered_ports, updated_ports=None):
                     port_info['updated'] = updated_ports
             return port_info
     
    -    def scan_ancillary_ports(self, registered_ports):
    +    def scan_ancillary_ports(self, registered_ports, sync):
             cur_ports = set()
             for bridge in self.ancillary_brs:
                 cur_ports |= bridge.get_vif_port_set()
    -        return self._get_port_info(registered_ports, cur_ports)
    +        return self._get_port_info(registered_ports, cur_ports, sync)
     
    -    def check_changed_vlans(self, registered_ports):
    +    def check_changed_vlans(self):
             """Return ports which have lost their vlan tag.
     
             The returned value is a set of port ids of the ports concerned by a
    @@ -1188,19 +1233,18 @@ def check_changed_vlans(self, registered_ports):
             port_tags = self.int_br.get_port_tag_dict()
             changed_ports = set()
             for lvm in self.local_vlan_map.values():
    -            for port in registered_ports:
    +            for port in lvm.vif_ports.values():
                     if (
    -                    port in lvm.vif_ports
    -                    and lvm.vif_ports[port].port_name in port_tags
    -                    and port_tags[lvm.vif_ports[port].port_name] != lvm.vlan
    +                    port.port_name in port_tags
    +                    and port_tags[port.port_name] != lvm.vlan
                     ):
                         LOG.info(
                             _LI("Port '%(port_name)s' has lost "
                                 "its vlan tag '%(vlan_tag)d'!"),
    -                        {'port_name': lvm.vif_ports[port].port_name,
    +                        {'port_name': port.port_name,
                              'vlan_tag': lvm.vlan}
                         )
    -                    changed_ports.add(port)
    +                    changed_ports.add(port.vif_id)
             return changed_ports
     
         def treat_vif_port(self, vif_port, port_id, network_id, network_type,
    @@ -1232,7 +1276,8 @@ def _setup_tunnel_port(self, br, port_name, remote_ip, tunnel_type):
                                         self.local_ip,
                                         tunnel_type,
                                         self.vxlan_udp_port,
    -                                    self.dont_fragment)
    +                                    self.dont_fragment,
    +                                    self.tunnel_csum)
             if ofport == ovs_lib.INVALID_OFPORT:
                 LOG.error(_LE("Failed to set-up %(type)s tunnel port to %(ip)s"),
                           {'type': tunnel_type, 'ip': remote_ip})
    @@ -1539,6 +1584,7 @@ def tunnel_sync(self):
         def _agent_has_updates(self, polling_manager):
             return (polling_manager.is_polling_required or
                     self.updated_ports or
    +                self.deleted_ports or
                     self.sg_agent.firewall_refresh_needed())
     
         def _port_info_has_changes(self, port_info):
    @@ -1607,6 +1653,7 @@ def rpc_loop(self, polling_manager=None):
             ancillary_ports = set()
             tunnel_sync = True
             ovs_restarted = False
    +        consecutive_resyncs = 0
             while self._check_and_handle_signal():
                 port_info = {}
                 ancillary_port_info = {}
    @@ -1615,10 +1662,18 @@ def rpc_loop(self, polling_manager=None):
                           self.iter_num)
                 if sync:
                     LOG.info(_LI("Agent out of sync with plugin!"))
    -                ports.clear()
    -                ancillary_ports.clear()
    -                sync = False
                     polling_manager.force_polling()
    +                consecutive_resyncs = consecutive_resyncs + 1
    +                if consecutive_resyncs >= constants.MAX_DEVICE_RETRIES:
    +                    LOG.warn(_LW("Clearing cache of registered ports, retrials"
    +                                 " to resync were > %s"),
    +                             constants.MAX_DEVICE_RETRIES)
    +                    ports.clear()
    +                    ancillary_ports.clear()
    +                    sync = False
    +                    consecutive_resyncs = 0
    +            else:
    +                consecutive_resyncs = 0
                 ovs_status = self.check_ovs_status()
                 if ovs_status == constants.OVS_RESTARTED:
                     self.setup_integration_br()
    @@ -1663,7 +1718,8 @@ def rpc_loop(self, polling_manager=None):
                         updated_ports_copy = self.updated_ports
                         self.updated_ports = set()
                         reg_ports = (set() if ovs_restarted else ports)
    -                    port_info = self.scan_ports(reg_ports, updated_ports_copy)
    +                    port_info = self.scan_ports(reg_ports, sync,
    +                                                updated_ports_copy)
                         self.process_deleted_ports(port_info)
                         ofport_changed_ports = self.update_stale_ofport_rules()
                         if ofport_changed_ports:
    @@ -1674,16 +1730,16 @@ def rpc_loop(self, polling_manager=None):
                                   "Elapsed:%(elapsed).3f",
                                   {'iter_num': self.iter_num,
                                    'elapsed': time.time() - start})
    -
                         # Treat ancillary devices if they exist
                         if self.ancillary_brs:
                             ancillary_port_info = self.scan_ancillary_ports(
    -                            ancillary_ports)
    +                            ancillary_ports, sync)
                             LOG.debug("Agent rpc_loop - iteration:%(iter_num)d - "
                                       "ancillary port info retrieved. "
                                       "Elapsed:%(elapsed).3f",
                                       {'iter_num': self.iter_num,
                                        'elapsed': time.time() - start})
    +                    sync = False
                         # Secure and wire/unwire VIFs and update their status
                         # on Neutron server
                         if (self._port_info_has_changes(port_info) or
    
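The `setup_arp_spoofing_protection` hunk above is central to this CVE: it skips anti-spoofing rules for `network:`-owned ports (now guarded by policy elsewhere in the fix), collects the port's fixed IPs and allowed-address-pairs, splits them by IP version, and refuses to install per-address rules when a /0 entry would make them meaningless. A rough stand-alone sketch of that address-partitioning logic, using the stdlib `ipaddress` module instead of neutron's `netaddr` helpers and plain address strings instead of the port dicts the agent actually receives:

```python
import ipaddress

def partition_port_addresses(fixed_ips, allowed_address_pairs=()):
    """Split a port's permitted addresses by IP version, mirroring the
    patched agent logic (simplified sketch, not neutron's code)."""
    addresses = set(fixed_ips)
    addresses |= {p['ip_address'] for p in allowed_address_pairs}

    def version(addr):
        # allowed-address-pairs entries may carry a CIDR suffix
        return ipaddress.ip_network(addr, strict=False).version

    def has_zero_prefixlen(addrs):
        # a /0 entry allows any source address, so per-address
        # anti-spoofing flows would be pointless; skip installation
        return any(ipaddress.ip_network(a, strict=False).prefixlen == 0
                   for a in addrs)

    v4 = {a for a in addresses if version(a) == 4}
    v6 = {a for a in addresses if version(a) == 6}
    return (v4 if not has_zero_prefixlen(v4) else None,
            v6 if not has_zero_prefixlen(v6) else None)
```

In the real agent the IPv6 set additionally gains an EUI-64 link-local address per MAC so neighbor advertisements keep working.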
  • neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/netwrap+1 0 modified
    @@ -34,6 +34,7 @@ import XenAPIPlugin
     
     ALLOWED_CMDS = [
         'ip',
    +    # NOTE(yamamoto): of_interface=native doesn't use ovs-ofctl
         'ovs-ofctl',
         'ovs-vsctl',
         'ovsdb-client',
    
  • neutron/plugins/ml2/drivers/type_tunnel.py+2 1 modified
    @@ -124,7 +124,8 @@ def _parse_tunnel_ranges(self, tunnel_ranges, current_range):
                      {'type': self.get_type(), 'range': current_range})
     
         @oslo_db_api.wrap_db_retry(
    -        max_retries=db_api.MAX_RETRIES, retry_on_deadlock=True)
    +        max_retries=db_api.MAX_RETRIES,
    +        exception_checker=db_api.is_deadlock)
         def sync_allocations(self):
             # determine current configured allocatable tunnel ids
             tunnel_ids = set()
    
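The `type_tunnel.py` change replaces the boolean `retry_on_deadlock` flag with an `exception_checker` callable, so the decorator retries only on exceptions the caller explicitly recognizes. A generic sketch of that decorator shape (hypothetical stand-in, not oslo.db's actual implementation):

```python
import functools
import time

def wrap_db_retry(max_retries, exception_checker, delay=0):
    """Retry the wrapped function while exception_checker(exc) returns
    True; re-raise immediately for anything else or once retries are
    exhausted. Hypothetical sketch of the oslo.db-style decorator."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    if attempt == max_retries or not exception_checker(exc):
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator
```

The callable form lets one decorator cover several exception types, as the `ml2/plugin.py` hunk below does for `StaleDataError` and `DBDeadlock` together.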
  • neutron/plugins/ml2/extensions/port_security.py+2 2 modified
    @@ -14,6 +14,7 @@
     #    under the License.
     
     from neutron.api.v2 import attributes as attrs
    +from neutron.common import utils
     from neutron.db import common_db_mixin
     from neutron.db import portsecurity_db_common as ps_db_common
     from neutron.extensions import portsecurity as psec
    @@ -80,8 +81,7 @@ def _determine_port_security(self, context, port):
             otherwise the value associated with the network is returned.
             """
             # we don't apply security groups for dhcp, router
    -        if (port.get('device_owner') and
    -                port['device_owner'].startswith('network:')):
    +        if port.get('device_owner') and utils.is_port_trusted(port):
                 return False
     
             if attrs.is_attr_set(port.get(psec.PORTSECURITY)):
    
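This `port_security.py` hunk touches the heart of CVE-2015-5240: ports whose `device_owner` starts with `network:` are treated as neutron-internal (DHCP, routers) and skip security groups, which is exactly what a tenant could abuse by renaming a port's owner before rules were applied. The patch centralizes the test in `utils.is_port_trusted`; a minimal sketch of what such a helper presumably checks (the helper name comes from the diff, the body is assumed):

```python
# Ports whose device_owner carries this prefix are considered neutron
# infrastructure and bypass security groups / anti-spoofing filtering.
NETWORK_DEVICE_OWNER_PREFIX = 'network:'

def is_port_trusted(port):
    """Assumed sketch of neutron.common.utils.is_port_trusted: a port
    is 'trusted' when its device_owner marks it as network-owned."""
    return port['device_owner'].startswith(NETWORK_DEVICE_OWNER_PREFIX)
```

Because the check is this simple, the security fix is not the predicate itself but restricting who may set a `network:*` owner, plus the agent-side `device_owner` guard shown in the OVS agent hunk above.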
  • neutron/plugins/ml2/managers.py+6 0 modified
    @@ -749,6 +749,12 @@ def _check_driver_to_bind(self, driver, segments_to_bind, binding_levels):
                     return False
             return True
     
    +    def get_workers(self):
    +        workers = []
    +        for driver in self.ordered_mech_drivers:
    +            workers += driver.obj.get_workers()
    +        return workers
    +
     
     class ExtensionManager(stevedore.named.NamedExtensionManager):
         """Manage extension drivers using drivers."""
    
  • neutron/plugins/ml2/plugin.py+26 21 modified
    @@ -118,7 +118,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
                                         "multi-provider", "allowed-address-pairs",
                                         "extra_dhcp_opt", "subnet_allocation",
                                         "net-mtu", "vlan-transparent",
    -                                    "address-scope", "dns-integration"]
    +                                    "dns-integration"]
     
         @property
         def supported_extension_aliases(self):
    @@ -194,9 +194,9 @@ def _filter_nets_provider(self, context, networks, filters):
     
         def _get_host_port_if_changed(self, mech_context, attrs):
             binding = mech_context._binding
    -        host = attrs and attrs.get(portbindings.HOST_ID)
    -        if (attributes.is_attr_set(host) and binding.host != host):
    -            return mech_context.current
    +        if attrs and portbindings.HOST_ID in attrs:
    +            if binding.host != attrs.get(portbindings.HOST_ID):
    +                return mech_context.current
     
         def _check_mac_update_allowed(self, orig_port, port, binding):
             unplugged_types = (portbindings.VIF_TYPE_BINDING_FAILED,
    @@ -708,32 +708,30 @@ def get_networks(self, context, filters=None, fields=None,
     
             return [self._fields(net, fields) for net in nets]
     
    -    def _delete_ports(self, context, ports):
    -        for port in ports:
    +    def _delete_ports(self, context, port_ids):
    +        for port_id in port_ids:
                 try:
    -                self.delete_port(context, port.id)
    +                self.delete_port(context, port_id)
                 except (exc.PortNotFound, sa_exc.ObjectDeletedError):
    -                context.session.expunge(port)
                     # concurrent port deletion can be performed by
                     # release_dhcp_port caused by concurrent subnet_delete
    -                LOG.info(_LI("Port %s was deleted concurrently"), port.id)
    +                LOG.info(_LI("Port %s was deleted concurrently"), port_id)
                 except Exception:
                     with excutils.save_and_reraise_exception():
                         LOG.exception(_LE("Exception auto-deleting port %s"),
    -                                  port.id)
    +                                  port_id)
     
    -    def _delete_subnets(self, context, subnets):
    -        for subnet in subnets:
    +    def _delete_subnets(self, context, subnet_ids):
    +        for subnet_id in subnet_ids:
                 try:
    -                self.delete_subnet(context, subnet.id)
    +                self.delete_subnet(context, subnet_id)
                 except (exc.SubnetNotFound, sa_exc.ObjectDeletedError):
    -                context.session.expunge(subnet)
                     LOG.info(_LI("Subnet %s was deleted concurrently"),
    -                         subnet.id)
    +                         subnet_id)
                 except Exception:
                     with excutils.save_and_reraise_exception():
                         LOG.exception(_LE("Exception auto-deleting subnet %s"),
    -                                  subnet.id)
    +                                  subnet_id)
     
         def delete_network(self, context, id):
             # REVISIT(rkukura) The super(Ml2Plugin, self).delete_network()
    @@ -797,15 +795,18 @@ def delete_network(self, context, id):
                             # network record, so explicit removal is not necessary.
                             LOG.debug("Committing transaction")
                             break
    +
    +                    port_ids = [port.id for port in ports]
    +                    subnet_ids = [subnet.id for subnet in subnets]
                 except os_db_exception.DBError as e:
                     with excutils.save_and_reraise_exception() as ctxt:
                         if isinstance(e.inner_exception, sql_exc.IntegrityError):
                             ctxt.reraise = False
                             LOG.warning(_LW("A concurrent port creation has "
                                             "occurred"))
                             continue
    -            self._delete_ports(context, ports)
    -            self._delete_subnets(context, subnets)
    +            self._delete_ports(context, port_ids)
    +            self._delete_subnets(context, subnet_ids)
     
             try:
                 self.mechanism_manager.delete_network_postcommit(mech_context)
    @@ -1207,6 +1208,7 @@ def update_port(self, context, id, port):
                 'context': context,
                 'port': new_host_port,
                 'mac_address_updated': mac_address_updated,
    +            'original_port': original_port,
             }
             registry.notify(resources.PORT, events.AFTER_UPDATE, self, **kwargs)
     
    @@ -1441,9 +1443,9 @@ def get_bound_port_context(self, plugin_context, port_id, host=None,
             return self._bind_port_if_needed(port_context)
     
         @oslo_db_api.wrap_db_retry(
    -        max_retries=db_api.MAX_RETRIES,
    -        retry_on_deadlock=True, retry_on_request=True,
    -        exception_checker=lambda e: isinstance(e, sa_exc.StaleDataError)
    +        max_retries=db_api.MAX_RETRIES, retry_on_request=True,
    +        exception_checker=lambda e: isinstance(e, (sa_exc.StaleDataError,
    +                                                   os_db_exception.DBDeadlock))
         )
         def update_port_status(self, context, port_id, status, host=None,
                                network=None):
    @@ -1556,3 +1558,6 @@ def _device_to_port_id(context, device):
                 if port:
                     return port.id
             return device
    +
    +    def get_workers(self):
    +        return self.mechanism_manager.get_workers()
    
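The `_delete_ports`/`_delete_subnets` rewrite above swaps ORM objects for plain ids: the ids are snapshotted inside the transaction, and the later delete loop no longer needs `session.expunge()` when another worker wins the race, because there is no stale ORM instance to clean up. A toy illustration of the pattern (fake plugin, `KeyError` standing in for `exc.PortNotFound`):

```python
class FakePlugin:
    """Stand-in for the ML2 plugin, keyed by port id only."""
    def __init__(self, ports):
        self.ports = set(ports)

    def delete_port(self, context, port_id):
        if port_id not in self.ports:
            raise KeyError(port_id)  # stands in for exc.PortNotFound
        self.ports.remove(port_id)

def delete_ports(plugin, context, port_ids):
    """Delete by id; a concurrent deletion just surfaces as not-found,
    which we record and skip, as in the patched _delete_ports."""
    deleted, skipped = [], []
    for port_id in port_ids:
        try:
            plugin.delete_port(context, port_id)
            deleted.append(port_id)
        except KeyError:
            # another worker got there first; nothing stale to expunge
            skipped.append(port_id)
    return deleted, skipped
```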
  • neutron/plugins/ml2/rpc.py+4 4 modified
    @@ -109,9 +109,8 @@ def get_device_details(self, rpc_context, **kwargs):
                                               host,
                                               port_context.network.current)
     
    -        qos_policy_id = (port.get(qos_consts.QOS_POLICY_ID) or
    -                         port_context.network._network.get(
    -                             qos_consts.QOS_POLICY_ID))
    +        network_qos_policy_id = port_context.network._network.get(
    +            qos_consts.QOS_POLICY_ID)
             entry = {'device': device,
                      'network_id': port['network_id'],
                      'port_id': port['id'],
    @@ -124,7 +123,8 @@ def get_device_details(self, rpc_context, **kwargs):
                      'device_owner': port['device_owner'],
                      'allowed_address_pairs': port['allowed_address_pairs'],
                      'port_security_enabled': port.get(psec.PORTSECURITY, True),
    -                 'qos_policy_id': qos_policy_id,
    +                 'qos_policy_id': port.get(qos_consts.QOS_POLICY_ID),
    +                 'network_qos_policy_id': network_qos_policy_id,
                      'profile': port[portbindings.PROFILE]}
             if 'security_groups' in port:
                 entry['security_groups'] = port['security_groups']
    
  • neutron/policy.py+3 0 modified
    @@ -290,6 +290,7 @@ def __init__(self, kind, match):
     
             self.field = field
             self.value = conv_func(value)
    +        self.regex = re.compile(value[1:]) if value.startswith('~') else None
     
         def __call__(self, target_dict, cred_dict, enforcer):
             target_value = target_dict.get(self.field)
    @@ -299,6 +300,8 @@ def __call__(self, target_dict, cred_dict, enforcer):
                           "%(target_dict)s",
                           {'field': self.field, 'target_dict': target_dict})
                 return False
    +        if self.regex:
    +            return bool(self.regex.match(target_value))
             return target_value == self.value
     
     
    
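The `policy.py` hunk adds regex support to field checks: a match value prefixed with `~` is compiled once at rule-load time and matched against the target attribute, instead of plain equality. This is what allows a policy rule to constrain, for example, `device_owner` values beginning with `network:` as part of this fix. A condensed sketch of the matcher's behavior (class shape simplified from the diff, not neutron's full `FieldCheck`):

```python
import re

class FieldCheck:
    """Match a target field either by equality or, when the configured
    value starts with '~', by a precompiled regular expression."""
    def __init__(self, field, value):
        self.field = field
        self.value = value
        # compile once so per-request evaluation stays cheap
        self.regex = re.compile(value[1:]) if value.startswith('~') else None

    def __call__(self, target):
        target_value = target.get(self.field)
        if target_value is None:
            return False
        if self.regex:
            return bool(self.regex.match(target_value))
        return target_value == self.value
```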
  • neutron/quota/resource.py+36 32 modified
    @@ -102,7 +102,7 @@ def __init__(self, name, count, flag=None, plural_name=None):
             """Initializes a CountableResource.
     
             Countable resources are those resources which directly
    -        correspond to objects in the database, i.e., netowk, subnet,
    +        correspond to objects in the database, i.e., network, subnet,
             etc.,.  A CountableResource must be constructed with a counting
             function, which will be called to determine the current counts
             of the resource.
    @@ -114,7 +114,7 @@ def __init__(self, name, count, flag=None, plural_name=None):
     
             :param name: The name of the resource, i.e., "instances".
             :param count: A callable which returns the count of the
    -                      resource.  The arguments passed are as described
    +                      resource. The arguments passed are as described
                           above.
             :param flag: The name of the flag or configuration option
                          which specifies the default value of the quota
    @@ -131,7 +131,7 @@ def __init__(self, name, count, flag=None, plural_name=None):
                 name, flag=flag, plural_name=plural_name)
             self._count_func = count
     
    -    def count(self, context, plugin, tenant_id):
    +    def count(self, context, plugin, tenant_id, **kwargs):
             return self._count_func(context, plugin, self.plural_name, tenant_id)
     
     
    @@ -176,10 +176,10 @@ def __init__(self, name, model_class, flag, plural_name=None):
         def dirty(self):
             return self._dirty_tenants
     
    -    def mark_dirty(self, context, nested=False):
    +    def mark_dirty(self, context):
             if not self._dirty_tenants:
                 return
    -        with context.session.begin(nested=nested, subtransactions=True):
    +        with db_api.autonested_transaction(context.session):
                 # It is not necessary to protect this operation with a lock.
                 # Indeed when this method is called the request has been processed
                 # and therefore all resources created or deleted.
    @@ -194,7 +194,7 @@ def mark_dirty(self, context, nested=False):
                                "on resource:%(resource)s"),
                               {'tenant_id': tenant_id, 'resource': self.name})
             self._out_of_sync_tenants |= dirty_tenants_snap
    -        self._dirty_tenants = self._dirty_tenants - dirty_tenants_snap
    +        self._dirty_tenants -= dirty_tenants_snap
     
         def _db_event_handler(self, mapper, _conn, target):
             try:
    @@ -212,15 +212,15 @@ def _db_event_handler(self, mapper, _conn, target):
         @oslo_db_api.wrap_db_retry(
             max_retries=db_api.MAX_RETRIES,
             exception_checker=lambda exc:
    -        isinstance(exc, oslo_db_exception.DBDuplicateEntry))
    -    def _set_quota_usage(self, context, tenant_id, in_use, reserved):
    -        return quota_api.set_quota_usage(context, self.name, tenant_id,
    -                                         in_use=in_use, reserved=reserved)
    +        isinstance(exc, (oslo_db_exception.DBDuplicateEntry,
    +                         oslo_db_exception.DBDeadlock)))
    +    def _set_quota_usage(self, context, tenant_id, in_use):
    +        return quota_api.set_quota_usage(
    +            context, self.name, tenant_id, in_use=in_use)
     
    -    def _resync(self, context, tenant_id, in_use, reserved):
    +    def _resync(self, context, tenant_id, in_use):
             # Update quota usage
    -        usage_info = self._set_quota_usage(
    -            context, tenant_id, in_use, reserved)
    +        usage_info = self._set_quota_usage(context, tenant_id, in_use)
     
             self._dirty_tenants.discard(tenant_id)
             self._out_of_sync_tenants.discard(tenant_id)
    @@ -238,14 +238,17 @@ def resync(self, context, tenant_id):
             in_use = context.session.query(self._model_class).filter_by(
                 tenant_id=tenant_id).count()
             # Update quota usage
    -        return self._resync(context, tenant_id, in_use, reserved=0)
    +        return self._resync(context, tenant_id, in_use)
     
    -    def count(self, context, _plugin, tenant_id, resync_usage=False):
    +    def count(self, context, _plugin, tenant_id, resync_usage=True):
             """Return the current usage count for the resource.
     
    -        This method will fetch the information from resource usage data,
    -        unless usage data are marked as "dirty", in which case both used and
    -        reserved resource are explicitly counted.
    +        This method will fetch aggregate information for resource usage
    +        data, unless usage data are marked as "dirty".
    +        In the latter case resource usage will be calculated counting
    +        rows for tenant_id in the resource's database model.
    +        Active reserved amount are instead always calculated by summing
    +        amounts for matching records in the 'reservations' database model.
     
             The _plugin and _resource parameters are unused but kept for
             compatibility with the signature of the count method for
    @@ -254,6 +257,11 @@ def count(self, context, _plugin, tenant_id, resync_usage=False):
             # Load current usage data, setting a row-level lock on the DB
             usage_info = quota_api.get_quota_usage_by_resource_and_tenant(
                 context, self.name, tenant_id, lock_for_update=True)
    +        # Always fetch reservations, as they are not tracked by usage counters
    +        reservations = quota_api.get_reservations_for_resources(
    +            context, tenant_id, [self.name])
    +        reserved = reservations.get(self.name, 0)
    +
             # If dirty or missing, calculate actual resource usage querying
             # the database and set/create usage info data
             # NOTE: this routine "trusts" usage counters at service startup. This
    @@ -272,24 +280,20 @@ def count(self, context, _plugin, tenant_id, resync_usage=False):
                 # Update quota usage, if requested (by default do not do that, as
                 # typically one counts before adding a record, and that would mark
                 # the usage counter as dirty again)
    -            if resync_usage or not usage_info:
    -                usage_info = self._resync(context, tenant_id,
    -                                          in_use, reserved=0)
    +            if resync_usage:
    +                usage_info = self._resync(context, tenant_id, in_use)
                 else:
    -                # NOTE(salv-orlando): Passing 0 for reserved amount as
    -                # reservations are currently not supported
    -                usage_info = quota_api.QuotaUsageInfo(usage_info.resource,
    -                                                      usage_info.tenant_id,
    -                                                      in_use,
    -                                                      0,
    -                                                      usage_info.dirty)
    +                resource = usage_info.resource if usage_info else self.name
    +                tenant_id = usage_info.tenant_id if usage_info else tenant_id
    +                dirty = usage_info.dirty if usage_info else True
    +                usage_info = quota_api.QuotaUsageInfo(
    +                    resource, tenant_id, in_use, dirty)
     
                 LOG.debug(("Quota usage for %(resource)s was recalculated. "
    -                       "Used quota:%(used)d; Reserved quota:%(reserved)d"),
    +                       "Used quota:%(used)d."),
                           {'resource': self.name,
    -                       'used': usage_info.used,
    -                       'reserved': usage_info.reserved})
    -        return usage_info.total
    +                       'used': usage_info.used})
    +        return usage_info.used + reserved
     
         def register_events(self):
             event.listen(self._model_class, 'after_insert', self._db_event_handler)
    
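The `quota/resource.py` changes stop threading a `reserved` value through `_set_quota_usage` and instead always sum active reservations separately, returning `used + reserved` from `count()`; a "dirty" usage row still triggers an explicit recount. A compact sketch of that counting rule (all names hypothetical; the real code reads usage and reservations from the database under a row lock):

```python
def count_resource(usage, reservations, resource, dirty_recount=None):
    """Combine the tracked in-use counter with always-fetched
    reservations; recount in-use rows only when the usage data is
    marked dirty, as in the patched TrackedResource.count()."""
    reserved = reservations.get(resource, 0)
    in_use = usage['in_use']
    if usage.get('dirty') and dirty_recount is not None:
        # stale counter: recompute by querying the model's rows
        in_use = dirty_recount()
    return in_use + reserved
```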
  • neutron/quota/resource_registry.py+1 1 modified
    @@ -67,7 +67,7 @@ def set_resources_dirty(context):
         for res in get_all_resources().values():
             with context.session.begin(subtransactions=True):
                 if is_tracked(res.name) and res.dirty:
    -                res.mark_dirty(context, nested=True)
    +                res.mark_dirty(context)
     
     
     def resync_resource(context, resource_name, tenant_id):
    
  • neutron/scheduler/l3_agent_scheduler.py +2 −2 modified
    @@ -174,12 +174,12 @@ def _get_candidates(self, plugin, context, sync_router):
                               ' by L3 agent %(agent_id)s',
                               {'router_id': sync_router['id'],
                                'agent_id': l3_agents[0]['id']})
    -                return
    +                return []
     
                 active_l3_agents = plugin.get_l3_agents(context, active=True)
                 if not active_l3_agents:
                     LOG.warn(_LW('No active L3 agents'))
    -                return
    +                return []
                 new_l3agents = plugin.get_l3_agent_candidates(context,
                                                               sync_router,
                                                               active_l3_agents)
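The `return []` changes above fix the function's contract: callers iterate over the candidate list, and a bare `return` (i.e. `None`) would raise a `TypeError`. A hypothetical sketch of the corrected contract, not Neutron's actual scheduler code:

```python
def get_candidates(active_agents):
    # Always return a list, even on the early-exit path, so callers
    # can iterate or take len() without a None check.
    if not active_agents:
        return []  # previously a bare `return`, i.e. None
    return [agent for agent in active_agents if agent.get('alive')]

def schedule(active_agents):
    # A caller that would have crashed on None before the fix.
    return [agent['id'] for agent in get_candidates(active_agents)]
```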
    
  • neutron/server/__init__.py +4 −0 modified
    @@ -50,6 +50,10 @@ def main():
             else:
                 rpc_thread = pool.spawn(neutron_rpc.wait)
     
    +            plugin_workers = service.start_plugin_workers()
    +            for worker in plugin_workers:
    +                pool.spawn(worker.wait)
    +
                 # api and rpc should die together.  When one dies, kill the other.
                 rpc_thread.link(lambda gt: api_thread.kill())
                 api_thread.link(lambda gt: rpc_thread.kill())
    
  • neutron/service.py +30 −19 modified
    @@ -32,6 +32,7 @@
     from neutron.db import api as session
     from neutron.i18n import _LE, _LI
     from neutron import manager
    +from neutron import worker
     from neutron import wsgi
     
     
    @@ -44,7 +45,7 @@
                           'If not specified, the default is equal to the number '
                           'of CPUs available for best performance.')),
         cfg.IntOpt('rpc_workers',
    -               default=0,
    +               default=1,
                    help=_('Number of RPC worker processes for service')),
         cfg.IntOpt('periodic_fuzzy_delay',
                    default=5,
    @@ -108,13 +109,28 @@ def serve_wsgi(cls):
         return service
     
     
    -class RpcWorker(common_service.ServiceBase):
    +def start_plugin_workers():
    +    launchers = []
    +    # NOTE(twilson) get_service_plugins also returns the core plugin
    +    for plugin in manager.NeutronManager.get_unique_service_plugins():
    +        # TODO(twilson) Instead of defaulting here, come up with a good way to
    +        # share a common get_workers default between NeutronPluginBaseV2 and
    +        # ServicePluginBase
    +        for plugin_worker in getattr(plugin, 'get_workers', tuple)():
    +            launcher = common_service.ProcessLauncher(cfg.CONF)
    +            launcher.launch_service(plugin_worker)
    +            launchers.append(launcher)
    +    return launchers
    +
    +
    +class RpcWorker(worker.NeutronWorker):
         """Wraps a worker to be handled by ProcessLauncher"""
         def __init__(self, plugin):
             self._plugin = plugin
             self._servers = []
     
         def start(self):
    +        super(RpcWorker, self).start()
             self._servers = self._plugin.start_rpc_listeners()
     
         def wait(self):
    @@ -149,6 +165,9 @@ def reset():
     def serve_rpc():
         plugin = manager.NeutronManager.get_plugin()
     
    +    if cfg.CONF.rpc_workers < 1:
    +        cfg.CONF.set_override('rpc_workers', 1)
    +
         # If 0 < rpc_workers then start_rpc_listeners would be called in a
         # subprocess and we cannot simply catch the NotImplementedError.  It is
         # simpler to check this up front by testing whether the plugin supports
    @@ -164,22 +183,14 @@ def serve_rpc():
         try:
             rpc = RpcWorker(plugin)
     
    -        if cfg.CONF.rpc_workers < 1:
    -            LOG.debug('starting rpc directly, workers=%s',
    -                      cfg.CONF.rpc_workers)
    -            rpc.start()
    -            return rpc
    -        else:
    -            # dispose the whole pool before os.fork, otherwise there will
    -            # be shared DB connections in child processes which may cause
    -            # DB errors.
    -            LOG.debug('using launcher for rpc, workers=%s',
    -                      cfg.CONF.rpc_workers)
    -            session.dispose()
    -            launcher = common_service.ProcessLauncher(cfg.CONF,
    -                                                      wait_interval=1.0)
    -            launcher.launch_service(rpc, workers=cfg.CONF.rpc_workers)
    -            return launcher
    +        # dispose the whole pool before os.fork, otherwise there will
    +        # be shared DB connections in child processes which may cause
    +        # DB errors.
    +        LOG.debug('using launcher for rpc, workers=%s', cfg.CONF.rpc_workers)
    +        session.dispose()
    +        launcher = common_service.ProcessLauncher(cfg.CONF, wait_interval=1.0)
    +        launcher.launch_service(rpc, workers=cfg.CONF.rpc_workers)
    +        return launcher
         except Exception:
             with excutils.save_and_reraise_exception():
                 LOG.exception(_LE('Unrecoverable error: please check log for '
    @@ -188,7 +199,7 @@ def serve_rpc():
     
     def _get_api_workers():
         workers = cfg.CONF.api_workers
    -    if workers is None:
    +    if not workers:
             workers = processutils.get_worker_count()
         return workers
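The `getattr(plugin, 'get_workers', tuple)()` idiom in `start_plugin_workers` above degrades gracefully for plugins that predate the worker interface: `tuple` is called with no arguments and yields an empty tuple. A small sketch with hypothetical plugin classes:

```python
class LegacyPlugin(object):
    # No get_workers(): getattr falls back to `tuple`, whose call
    # returns an empty tuple, so the inner loop body never runs.
    pass

class QosLikePlugin(object):
    # Hypothetical plugin exposing one worker object.
    def get_workers(self):
        return ('qos-worker',)

def collect_plugin_workers(plugins):
    workers = []
    for plugin in plugins:
        for worker in getattr(plugin, 'get_workers', tuple)():
            workers.append(worker)
    return workers
```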
     
    
  • neutron/services/l3_router/l3_router_plugin.py +3 −1 modified
    @@ -31,9 +31,11 @@
     from neutron.db import l3_hascheduler_db
     from neutron.plugins.common import constants
     from neutron.quota import resource_registry
    +from neutron.services import service_base
     
     
    -class L3RouterPlugin(common_db_mixin.CommonDbMixin,
    +class L3RouterPlugin(service_base.ServicePluginBase,
    +                     common_db_mixin.CommonDbMixin,
                          extraroute_db.ExtraRoute_db_mixin,
                          l3_hamode_db.L3_HA_NAT_db_mixin,
                          l3_gwmode_db.L3_NAT_db_mixin,
    
  • neutron/services/metering/agents/metering_agent.py +7 −6 modified
    @@ -77,7 +77,6 @@ def __init__(self, host, conf=None):
             self.conf = conf or cfg.CONF
             self._load_drivers()
             self.context = context.get_admin_context_without_session()
    -        self.metering_info = {}
             self.metering_loop = loopingcall.FixedIntervalLoopingCall(
                 self._metering_loop
             )
    @@ -118,11 +117,13 @@ def _metering_notification(self):
                 info['time'] = 0
     
         def _purge_metering_info(self):
    -        ts = int(time.time())
    -        report_interval = self.conf.report_interval
    -        for label_id, info in self.metering_info.items():
    -            if info['last_update'] > ts + report_interval:
    -                del self.metering_info[label_id]
    +        deadline_timestamp = int(time.time()) - self.conf.report_interval
    +        label_ids = [
    +            label_id
    +            for label_id, info in self.metering_infos.items()
    +            if info['last_update'] < deadline_timestamp]
    +        for label_id in label_ids:
    +            del self.metering_infos[label_id]
     
         def _add_metering_info(self, label_id, pkts, bytes):
             ts = int(time.time())
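The purge rewrite above also fixes an inverted comparison: the old code tested `info['last_update'] > ts + report_interval`, which can never hold for a timestamp in the past, so stale entries were never purged. A standalone sketch of the corrected logic (function name is illustrative):

```python
import time

def purge_stale(metering_infos, report_interval, now=None):
    # Drop entries whose last_update is older than report_interval
    # seconds; collect ids first so deletion during iteration is safe.
    now = int(time.time()) if now is None else int(now)
    deadline = now - report_interval
    stale = [label_id for label_id, info in metering_infos.items()
             if info['last_update'] < deadline]
    for label_id in stale:
        del metering_infos[label_id]
```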
    
  • neutron/services/provider_configuration.py +26 −12 modified
    @@ -61,10 +61,7 @@ def installed(self):
         def module(self):
             return self.repo['mod']
     
    -    # Return an INI parser for the child module. oslo.config is a bit too
    -    # magical in its INI loading, and in one notable case, we need to merge
    -    # together the [service_providers] section across at least four
    -    # repositories.
    +    # Return an INI parser for the child module
         def ini(self):
             if self.repo['ini'] is None:
                 neutron_dir = None
    @@ -86,17 +83,34 @@ def ini(self):
             return self.repo['ini']
     
         def service_providers(self):
    -        ini = self.ini()
    -
    -        sp = []
    +        """Return the service providers for the extension module."""
    +        providers = []
    +        # Attempt to read the config from cfg.CONF; this is possible
    +        # when passing --config-dir. Since the multiStr config option
    +        # gets merged, extract only the providers pertinent to this
    +        # service module.
             try:
    -            for name, value in ini.items('service_providers'):
    -                if name == 'service_provider':
    -                    sp.append(value)
    -        except ConfigParser.NoSectionError:
    +            providers = [
    +                sp for sp in cfg.CONF.service_providers.service_provider
    +                if self.module_name in sp
    +            ]
    +        except cfg.NoSuchOptError:
                 pass
     
    -        return sp
    +        # Alternatively, if the option is not available, load the
    +        # config option using the module's built-in ini parser.
    +        # This may be necessary if modules are loaded on the fly
    +        # (DevStack may be an example).
    +        if not providers:
    +            ini = self.ini()
    +            try:
    +                for name, value in ini.items('service_providers'):
    +                    if name == 'service_provider':
    +                        providers.append(value)
    +            except ConfigParser.NoSectionError:
    +                pass
    +
    +        return providers
     
     
     #global scope function that should be used in service APIs
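The two-stage lookup above can be sketched without oslo.config: prefer providers already merged into the global config, then fall back to the module's own `[service_providers]` INI section. The names below are illustrative stand-ins, not Neutron's API:

```python
def service_providers(module_name, conf_providers, ini_items):
    # Stage 1: the merged multi-valued config option may already hold
    # every repository's providers; keep only those naming this module.
    providers = [sp for sp in conf_providers if module_name in sp]
    # Stage 2: otherwise read the module's raw [service_providers]
    # section, collecting each service_provider entry.
    if not providers:
        providers = [value for name, value in ini_items
                     if name == 'service_provider']
    return providers
```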
    
  • neutron/services/service_base.py +4 −0 modified
    @@ -46,6 +46,10 @@ def get_plugin_description(self):
             """Return string description of the plugin."""
             pass
     
    +    def get_workers(self):
    +        """Returns a collection of NeutronWorkers"""
    +        return ()
    +
     
     def load_drivers(service_type, plugin):
         """Loads drivers for specific service.
    
  • neutron/tests/api/admin/test_shared_network_extension.py +19 −9 modified
    @@ -182,18 +182,9 @@ class RBACSharedNetworksTest(base.BaseAdminNetworkTest):
         @classmethod
         def resource_setup(cls):
             super(RBACSharedNetworksTest, cls).resource_setup()
    -        extensions = cls.admin_client.list_extensions()
             if not test.is_extension_enabled('rbac_policies', 'network'):
                 msg = "rbac extension not enabled."
                 raise cls.skipException(msg)
    -        # NOTE(kevinbenton): the following test seems to be necessary
    -        # since the default is 'all' for the above check and these tests
    -        # need to get into the gate and be disabled until the service plugin
    -        # is enabled in devstack. Is there a better way to do this?
    -        if 'rbac-policies' not in [x['alias']
    -                                   for x in extensions['extensions']]:
    -            msg = "rbac extension is not in extension listing."
    -            raise cls.skipException(msg)
             creds = cls.isolated_creds.get_alt_creds()
             cls.client2 = clients.Manager(credentials=creds).network_client
     
    @@ -350,3 +341,22 @@ def test_regular_client_blocked_from_sharing_with_wildcard(self):
             with testtools.ExpectedException(lib_exc.Forbidden):
                 self.client.update_rbac_policy(pol['rbac_policy']['id'],
                                                target_tenant='*')
    +
    +    @test.attr(type='smoke')
    +    @test.idempotent_id('86c3529b-1231-40de-803c-aeeeeeee7fff')
    +    def test_filtering_works_with_rbac_records_present(self):
    +        resp = self._make_admin_net_and_subnet_shared_to_tenant_id(
    +            self.client.tenant_id)
    +        net = resp['network']
    +        sub = resp['subnet']
    +        self.admin_client.create_rbac_policy(
    +            object_type='network', object_id=net['id'],
    +            action='access_as_shared', target_tenant='*')
    +        for state, assertion in ((False, self.assertNotIn),
    +                                 (True, self.assertIn)):
    +            nets = [n['id'] for n in
    +                    self.admin_client.list_networks(shared=state)['networks']]
    +            assertion(net['id'], nets)
    +            subs = [s['id'] for s in
    +                    self.admin_client.list_subnets(shared=state)['subnets']]
    +            assertion(sub['id'], subs)
    
  • neutron/tests/api/clients.py +7 −8 modified
    @@ -15,13 +15,14 @@
     
     from oslo_log import log as logging
     
    +from tempest_lib.services.identity.v2.token_client import TokenClient
    +from tempest_lib.services.identity.v3.token_client import V3TokenClient
    +
     from neutron.tests.tempest.common import cred_provider
     from neutron.tests.tempest import config
     from neutron.tests.tempest import manager
     from neutron.tests.tempest.services.identity.v2.json.identity_client import \
         IdentityClientJSON
    -from neutron.tests.tempest.services.identity.v2.json.token_client import \
    -     TokenClientJSON
     from neutron.tests.tempest.services.identity.v3.json.credentials_client \
          import CredentialsClientJSON
     from neutron.tests.tempest.services.identity.v3.json.endpoints_client import \
    @@ -34,8 +35,6 @@
          RegionClientJSON
     from neutron.tests.tempest.services.identity.v3.json.service_client import \
         ServiceClientJSON
    -from neutron.tests.tempest.services.identity.v3.json.token_client import \
    -     V3TokenClientJSON
     from neutron.tests.tempest.services.network.json.network_client import \
          NetworkClientJSON
     
    @@ -99,11 +98,11 @@ def _set_identity_clients(self):
             self.credentials_client = CredentialsClientJSON(self.auth_provider,
                                                             **params)
             # Token clients do not use the catalog. They only need default_params.
    -        self.token_client = TokenClientJSON(CONF.identity.uri,
    -                                            **self.default_params)
    +        self.token_client = TokenClient(CONF.identity.uri,
    +                                        **self.default_params)
             if CONF.identity_feature_enabled.api_v3:
    -            self.token_v3_client = V3TokenClientJSON(CONF.identity.uri_v3,
    -                                                     **self.default_params)
    +            self.token_v3_client = V3TokenClient(CONF.identity.uri_v3,
    +                                                 **self.default_params)
     
     
     class AdminManager(Manager):
    
  • neutron/tests/api/test_address_scopes.py +6 −0 modified
    @@ -26,6 +26,12 @@
     
     class AddressScopeTestBase(base.BaseNetworkTest):
     
    +    @classmethod
    +    def skip_checks(cls):
    +        super(AddressScopeTestBase, cls).skip_checks()
    +        msg = "Address-Scope extension not enabled."
    +        raise cls.skipException(msg)
    +
         @classmethod
         def resource_setup(cls):
             super(AddressScopeTestBase, cls).resource_setup()
    
  • neutron/tests/api/test_security_groups_negative.py +1 −0 modified
    @@ -148,6 +148,7 @@ def test_create_security_group_rule_with_invalid_ports(self):
     
             # Create rule for icmp protocol with invalid ports
             states = [(1, 256, 'Invalid value for ICMP code'),
    +                  (-1, 25, 'Invalid value'),
                       (None, 6, 'ICMP type (port-range-min) is missing'),
                       (300, 1, 'Invalid value for ICMP type')]
             for pmin, pmax, msg in states:
    
  • neutron/tests/api/test_subnetpools_negative.py +10 −0 modified
    @@ -144,6 +144,7 @@ def test_create_subnet_different_pools_same_network(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('9589e332-638e-476e-81bd-013d964aa3cb')
         def test_create_subnetpool_associate_invalid_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             subnetpool_data = copy.deepcopy(self._subnetpool_data)
             subnetpool_data['subnetpool']['address_scope_id'] = 'foo-addr-scope'
             self.assertRaises(lib_exc.BadRequest, self.client.create_subnetpool,
    @@ -152,6 +153,7 @@ def test_create_subnetpool_associate_invalid_address_scope(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('3b6c5942-485d-4964-a560-55608af020b5')
         def test_create_subnetpool_associate_non_exist_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             subnetpool_data = copy.deepcopy(self._subnetpool_data)
             non_exist_address_scope_id = str(uuid.uuid4())
             subnetpool_data['subnetpool']['address_scope_id'] = (
    @@ -162,6 +164,7 @@ def test_create_subnetpool_associate_non_exist_address_scope(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('2dfb4269-8657-485a-a053-b022e911456e')
         def test_create_subnetpool_associate_address_scope_prefix_intersect(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'))
             addr_scope_id = address_scope['id']
    @@ -178,6 +181,7 @@ def test_create_subnetpool_associate_address_scope_prefix_intersect(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('83a19a13-5384-42e2-b579-43fc69c80914')
         def test_create_sp_associate_address_scope_multiple_prefix_intersect(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'))
             addr_scope_id = address_scope['id']
    @@ -198,6 +202,7 @@ def test_create_sp_associate_address_scope_multiple_prefix_intersect(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('f06d8e7b-908b-4e94-b570-8156be6a4bf1')
         def test_create_subnetpool_associate_address_scope_of_other_owner(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'), is_admin=True)
             address_scope_id = address_scope['id']
    @@ -209,6 +214,7 @@ def test_create_subnetpool_associate_address_scope_of_other_owner(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('3396ec6c-cb80-4ebe-b897-84e904580bdf')
         def test_tenant_create_subnetpool_associate_shared_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'), is_admin=True,
                 shared=True)
    @@ -221,6 +227,7 @@ def test_tenant_create_subnetpool_associate_shared_address_scope(self):
         @test.attr(type='smoke')
         @test.idempotent_id('6d3d9ad5-32d4-4d63-aa00-8c62f73e2881')
         def test_update_subnetpool_associate_address_scope_of_other_owner(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'), is_admin=True)
             address_scope_id = address_scope['id']
    @@ -261,6 +268,7 @@ def _test_update_subnetpool_prefix_intersect_helper(
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('96006292-7214-40e0-a471-153fb76e6b31')
         def test_update_subnetpool_prefix_intersect(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             pool_1_prefix = [u'20.0.0.0/18']
             pool_2_prefix = [u'20.10.0.0/24']
             pool_1_updated_prefix = [u'20.0.0.0/12']
    @@ -270,6 +278,7 @@ def test_update_subnetpool_prefix_intersect(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('4d3f8a79-c530-4e59-9acf-6c05968adbfe')
         def test_update_subnetpool_multiple_prefix_intersect(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             pool_1_prefixes = [u'20.0.0.0/18', u'30.0.0.0/18']
             pool_2_prefixes = [u'20.10.0.0/24', u'40.0.0.0/18', '50.0.0.0/18']
             pool_1_updated_prefixes = [u'20.0.0.0/18', u'30.0.0.0/18',
    @@ -280,6 +289,7 @@ def test_update_subnetpool_multiple_prefix_intersect(self):
         @test.attr(type=['negative', 'smoke'])
         @test.idempotent_id('7438e49e-1351-45d8-937b-892059fb97f5')
         def test_tenant_update_sp_prefix_associated_with_shared_addr_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'), is_admin=True,
                 shared=True)
    
  • neutron/tests/api/test_subnetpools.py +4 −0 modified
    @@ -242,6 +242,7 @@ def test_create_subnet_from_pool_with_quota(self):
         @test.attr(type='smoke')
         @test.idempotent_id('49b44c64-1619-4b29-b527-ffc3c3115dc4')
         def test_create_subnetpool_associate_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'))
             name, pool_id = self._create_subnetpool(
    @@ -254,6 +255,7 @@ def test_create_subnetpool_associate_address_scope(self):
         @test.attr(type='smoke')
         @test.idempotent_id('910b6393-db24-4f6f-87dc-b36892ad6c8c')
         def test_update_subnetpool_associate_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'))
             name, pool_id = self._create_subnetpool(self.client)
    @@ -270,6 +272,7 @@ def test_update_subnetpool_associate_address_scope(self):
         @test.attr(type='smoke')
         @test.idempotent_id('18302e80-46a3-4563-82ac-ccd1dd57f652')
         def test_update_subnetpool_associate_another_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'))
             another_address_scope = self.create_address_scope(
    @@ -292,6 +295,7 @@ def test_update_subnetpool_associate_another_address_scope(self):
         @test.attr(type='smoke')
         @test.idempotent_id('f8970048-e41b-42d6-934b-a1297b07706a')
         def test_update_subnetpool_disassociate_address_scope(self):
    +        self.skipTest("until extension address-scope is re-enabled")
             address_scope = self.create_address_scope(
                 name=data_utils.rand_name('smoke-address-scope'))
             name, pool_id = self._create_subnetpool(
    
  • neutron/tests/base.py +5 −0 modified
    @@ -103,6 +103,11 @@ def get_test_timeout(default=0):
         return int(os.environ.get('OS_TEST_TIMEOUT', 0))
     
     
    +def sanitize_log_path(path):
    +    # Sanitize the string so that its log path is shell friendly
    +    return path.replace(' ', '-').replace('(', '_').replace(')', '_')
    +
    +
     class AttributeDict(dict):
     
         """
    
  • neutron/tests/common/net_helpers.py +17 −9 modified
    @@ -252,7 +252,6 @@ def child_is_running():
     
     
     class NetcatTester(object):
    -    TESTING_STRING = 'foo'
         TCP = n_const.PROTO_NAME_TCP
         UDP = n_const.PROTO_NAME_UDP
     
    @@ -314,7 +313,7 @@ def is_established(self):
     
         def establish_connection(self):
             if self._client_process:
    -            raise RuntimeError('%(proto)s connection to $(ip_addr)s is already'
    +            raise RuntimeError('%(proto)s connection to %(ip_addr)s is already'
                                    ' established' %
                                    {'proto': self.protocol,
                                     'ip_addr': self.address})
    @@ -325,21 +324,30 @@ def establish_connection(self):
                 self.client_namespace,
                 address=self.address)
             if self.protocol == self.UDP:
    -            # Create an entry in conntrack table for UDP packets
    -            self.client_process.writeline(self.TESTING_STRING)
    +            # Create an ASSURED entry in conntrack table for UDP packets,
    +            # that requires 3-way communication
    +            # 1st transmission creates UNREPLIED
    +            # 2nd transmission removes UNREPLIED
    +            # 3rd transmission creates ASSURED
    +            data = 'foo'
    +            self.client_process.writeline(data)
    +            self.server_process.read_stdout(READ_TIMEOUT)
    +            self.server_process.writeline(data)
    +            self.client_process.read_stdout(READ_TIMEOUT)
    +            self.client_process.writeline(data)
    +            self.server_process.read_stdout(READ_TIMEOUT)
     
         def test_connectivity(self, respawn=False):
    -        stop_required = (respawn and self._client_process and
    -                         self._client_process.poll() is not None)
    -        if stop_required:
    +        testing_string = uuidutils.generate_uuid()
    +        if respawn:
                 self.stop_processes()
     
    -        self.client_process.writeline(self.TESTING_STRING)
    +        self.client_process.writeline(testing_string)
             message = self.server_process.read_stdout(READ_TIMEOUT).strip()
             self.server_process.writeline(message)
             message = self.client_process.read_stdout(READ_TIMEOUT).strip()
     
    -        return message == self.TESTING_STRING
    +        return message == testing_string
     
         def _spawn_nc_in_namespace(self, namespace, address, listen=False):
             cmd = ['nc', address, self.dst_port]
    
  • neutron/tests/contrib/functional-testing.filters +1 −3 modified
    @@ -9,9 +9,7 @@ ping_filter: CommandFilter, ping, root
     ping6_filter: CommandFilter, ping6, root
     
     # enable curl from namespace
    -curl_filter: CommandFilter, curl, root
    -tee_filter: CommandFilter, tee, root
    -tee_kill: KillFilter, root, tee, -9
    +curl_filter: RegExpFilter, /usr/bin/curl, root, curl, --max-time, \d+, -D-, http://[0-9a-z:./-]+
     nc_filter: CommandFilter, nc, root
     # netcat has different binaries depending on linux distribution
     nc_kill: KillFilter, root, nc, -9
    
  • neutron/tests/contrib/post_test_hook.sh +16 −7 modified
    @@ -8,6 +8,19 @@ SCRIPTS_DIR="/usr/os-testr-env/bin/"
     
     venv=${1:-"dsvm-functional"}
     
    +function generate_test_logs {
    +    local path="$1"
    +    # Compress all $path/*.txt files and move the directories holding those
    +    # files to /opt/stack/logs. Files with .log suffix have their
    +    # suffix changed to .txt (so browsers will know to open the compressed
    +    # files and not download them).
    +    if [ -d "$path" ]
    +    then
    +        sudo find $path -iname "*.log" -type f -exec mv {} {}.txt \; -exec gzip -9 {}.txt \;
    +        sudo mv $path/* /opt/stack/logs/
    +    fi
    +}
    +
     function generate_testr_results {
         # Give job user rights to access tox logs
         sudo -H -u $owner chmod o+rw .
    @@ -20,13 +33,9 @@ function generate_testr_results {
             sudo mv ./*.gz /opt/stack/logs/
         fi
     
    -    # Compress all /tmp/fullstack-*/*.txt files and move the directories
    -    # holding those files to /opt/stack/logs. Files with .log suffix have their
    -    # suffix changed to .txt (so browsers will know to open the compressed
    -    # files and not download them).
    -    if [ "$venv" == "dsvm-fullstack" ] && [ -d /tmp/fullstack-logs/ ]; then
    -        sudo find /tmp/fullstack-logs -iname "*.log" -type f -exec mv {} {}.txt \; -exec gzip -9 {}.txt \;
    -        sudo mv /tmp/fullstack-logs/* /opt/stack/logs/
    +    if [ "$venv" == "dsvm-functional" ] || [ "$venv" == "dsvm-fullstack" ]
    +    then
    +        generate_test_logs "/tmp/${venv}-logs"
         fi
     }
     
    
  • neutron/tests/etc/policy.json +3 −0 modified
    @@ -56,7 +56,9 @@
         "update_network:router:external": "rule:admin_only",
         "delete_network": "rule:admin_or_owner",
     
    +    "network_device": "field:port:device_owner=~^network:",
         "create_port": "",
    +    "create_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    @@ -71,6 +73,7 @@
         "get_port:binding:host_id": "rule:admin_only",
         "get_port:binding:profile": "rule:admin_only",
         "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    +    "update_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
         "update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
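The `network_device` rule above is the core of the CVE-2015-5240 fix: `field:port:device_owner=~^network:` matches any device_owner beginning with `network:`, and the new `create_port:device_owner` and `update_port:device_owner` rules restrict setting such values to admins, network owners, or advanced-service contexts. A plain `re` illustration of what the regex gates (the policy engine itself is not modelled here):

```python
import re

# The same anchored pattern the policy rule uses.
NETWORK_DEVICE_RE = re.compile(r'^network:')

def is_network_device_owner(device_owner):
    # True for the reserved "network:*" owners that were exempt from
    # IP anti-spoofing rules, which is what the attack exploited.
    return NETWORK_DEVICE_RE.match(device_owner) is not None
```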
    
  • neutron/tests/fullstack/resources/client.py +49 −6 modified
    @@ -77,15 +77,58 @@ def create_subnet(self, tenant_id, network_id,
     
             return self._create_resource(resource_type, spec)
     
    -    def create_port(self, tenant_id, network_id, hostname):
    -        return self._create_resource(
    -            'port',
    -            {'network_id': network_id,
    -             'tenant_id': tenant_id,
    -             'binding:host_id': hostname})
    +    def create_port(self, tenant_id, network_id, hostname, qos_policy_id=None):
    +        spec = {
    +            'network_id': network_id,
    +            'tenant_id': tenant_id,
    +            'binding:host_id': hostname,
    +        }
    +        if qos_policy_id:
    +            spec['qos_policy_id'] = qos_policy_id
    +        return self._create_resource('port', spec)
     
         def add_router_interface(self, router_id, subnet_id):
             body = {'subnet_id': subnet_id}
             self.client.add_interface_router(router=router_id, body=body)
             self.addCleanup(_safe_method(self.client.remove_interface_router),
                             router=router_id, body=body)
    +
    +    def create_qos_policy(self, tenant_id, name, description, shared):
    +        policy = self.client.create_qos_policy(
    +            body={'policy': {'name': name,
    +                             'description': description,
    +                             'shared': shared,
    +                             'tenant_id': tenant_id}})
    +
    +        def detach_and_delete_policy():
    +            qos_policy_id = policy['policy']['id']
    +            ports_with_policy = self.client.list_ports(
    +                qos_policy_id=qos_policy_id)['ports']
    +            for port in ports_with_policy:
    +                self.client.update_port(
    +                    port['id'],
    +                    body={'port': {'qos_policy_id': None}})
    +            self.client.delete_qos_policy(qos_policy_id)
    +
    +        # NOTE: We'll need to add support for detaching from network once
    +        # create_network() supports qos_policy_id.
    +        self.addCleanup(_safe_method(detach_and_delete_policy))
    +
    +        return policy['policy']
    +
    +    def create_bandwidth_limit_rule(self, tenant_id, qos_policy_id, limit=None,
    +                                    burst=None):
    +        rule = {'tenant_id': tenant_id}
    +        if limit:
    +            rule['max_kbps'] = limit
    +        if burst:
    +            rule['max_burst_kbps'] = burst
    +        rule = self.client.create_bandwidth_limit_rule(
    +            policy=qos_policy_id,
    +            body={'bandwidth_limit_rule': rule})
    +
    +        self.addCleanup(_safe_method(self.client.delete_bandwidth_limit_rule),
    +                        rule['bandwidth_limit_rule']['id'],
    +                        qos_policy_id)
    +
    +        return rule['bandwidth_limit_rule']
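The helpers above register their own teardown via `addCleanup` and a `_safe_method` wrapper, so a test never leaks a QoS policy even if the resource was already removed by the test body. A minimal standalone sketch of that pattern, with a hypothetical `FakeBackend` standing in for the neutron client (both class names are ours, not neutron's):

```python
class FakeBackend:
    """Hypothetical stand-in for the neutron client used by the tests."""

    def __init__(self):
        self.policies = {}
        self._next_id = 0

    def create_qos_policy(self, name):
        self._next_id += 1
        policy = {'id': self._next_id, 'name': name}
        self.policies[policy['id']] = policy
        return policy

    def delete_qos_policy(self, policy_id):
        # Raises KeyError if the policy is already gone.
        del self.policies[policy_id]


class SafeClient:
    """Registers a teardown for every resource it creates (LIFO order)."""

    def __init__(self, client):
        self.client = client
        self._cleanups = []

    @staticmethod
    def _safe(func):
        # Like _safe_method: ignore "already deleted" errors on teardown.
        def wrapper():
            try:
                func()
            except KeyError:
                pass
        return wrapper

    def create_qos_policy(self, name):
        policy = self.client.create_qos_policy(name)
        self._cleanups.append(self._safe(
            lambda: self.client.delete_qos_policy(policy['id'])))
        return policy

    def cleanup(self):
        while self._cleanups:
            self._cleanups.pop()()  # unwind most recent resource first
```

`fixtures`' `addCleanup` performs the same LIFO unwinding at test exit; the sketch just makes that stack explicit.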
    
  • neutron/tests/fullstack/resources/config.py+46 7 modified
    @@ -103,6 +103,10 @@ def __init__(self, env_desc, host_desc, temp_dir,
             super(NeutronConfigFixture, self).__init__(
                 env_desc, host_desc, temp_dir, base_filename='neutron.conf')
     
    +        service_plugins = ['router']
    +        if env_desc.qos:
    +            service_plugins.append('qos')
    +
             self.config.update({
                 'DEFAULT': {
                     'host': self._generate_host(),
    @@ -112,8 +116,7 @@ def __init__(self, env_desc, host_desc, temp_dir,
                     'api_paste_config': self._generate_api_paste(),
                     'policy_file': self._generate_policy_json(),
                     'core_plugin': 'neutron.plugins.ml2.plugin.Ml2Plugin',
    -                'service_plugins': ('neutron.services.l3_router.'
    -                                    'l3_router_plugin.L3RouterPlugin'),
    +                'service_plugins': ','.join(service_plugins),
                     'auth_strategy': 'noauth',
                     'verbose': 'True',
                     'debug': 'True',
    @@ -159,10 +162,14 @@ def __init__(self, env_desc, host_desc, temp_dir, tenant_network_types):
             super(ML2ConfigFixture, self).__init__(
                 env_desc, host_desc, temp_dir, base_filename='ml2_conf.ini')
     
    +        mechanism_drivers = 'openvswitch'
    +        if self.env_desc.l2_pop:
    +            mechanism_drivers += ',l2population'
    +
             self.config.update({
                 'ml2': {
                     'tenant_network_types': tenant_network_types,
    -                'mechanism_drivers': 'openvswitch',
    +                'mechanism_drivers': mechanism_drivers,
                 },
                 'ml2_type_vlan': {
                     'network_vlan_ranges': 'physnet1:1000:2999',
    @@ -175,39 +182,71 @@ def __init__(self, env_desc, host_desc, temp_dir, tenant_network_types):
                 },
             })
     
    +        if env_desc.qos:
    +            self.config['ml2']['extension_drivers'] = 'qos'
    +
     
     class OVSConfigFixture(ConfigFixture):
     
    -    def __init__(self, env_desc, host_desc, temp_dir):
    +    def __init__(self, env_desc, host_desc, temp_dir, local_ip):
             super(OVSConfigFixture, self).__init__(
                 env_desc, host_desc, temp_dir,
                 base_filename='openvswitch_agent.ini')
     
    +        self.tunneling_enabled = self.env_desc.tunneling_enabled
             self.config.update({
                 'ovs': {
    -                'enable_tunneling': 'False',
    -                'local_ip': '127.0.0.1',
    -                'bridge_mappings': self._generate_bridge_mappings(),
    +                'enable_tunneling': str(self.tunneling_enabled),
    +                'local_ip': local_ip,
                     'integration_bridge': self._generate_integration_bridge(),
                 },
                 'securitygroup': {
                     'firewall_driver': ('neutron.agent.linux.iptables_firewall.'
                                         'OVSHybridIptablesFirewallDriver'),
    +            },
    +            'agent': {
    +                'l2_population': str(self.env_desc.l2_pop),
                 }
             })
     
    +        if self.tunneling_enabled:
    +            self.config['agent'].update({
    +                'tunnel_types': self.env_desc.network_type})
    +            self.config['ovs'].update({
    +                'tunnel_bridge': self._generate_tunnel_bridge(),
    +                'int_peer_patch_port': self._generate_int_peer(),
    +                'tun_peer_patch_port': self._generate_tun_peer()})
    +        else:
    +            self.config['ovs']['bridge_mappings'] = (
    +                self._generate_bridge_mappings())
    +
    +        if env_desc.qos:
    +            self.config['agent']['extensions'] = 'qos'
    +
         def _generate_bridge_mappings(self):
             return 'physnet1:%s' % base.get_rand_device_name(prefix='br-eth')
     
         def _generate_integration_bridge(self):
             return base.get_rand_device_name(prefix='br-int')
     
    +    def _generate_tunnel_bridge(self):
    +        return base.get_rand_device_name(prefix='br-tun')
    +
    +    def _generate_int_peer(self):
    +        return base.get_rand_device_name(prefix='patch-tun')
    +
    +    def _generate_tun_peer(self):
    +        return base.get_rand_device_name(prefix='patch-int')
    +
         def get_br_int_name(self):
             return self.config.ovs.integration_bridge
     
         def get_br_phys_name(self):
             return self.config.ovs.bridge_mappings.split(':')[1]
     
    +    def get_br_tun_name(self):
    +        return self.config.ovs.tunnel_bridge
    +
     
     class L3ConfigFixture(ConfigFixture):
     
    
  • neutron/tests/fullstack/resources/environment.py+44 7 modified
    @@ -12,12 +12,16 @@
     #    License for the specific language governing permissions and limitations
     #    under the License.
     
    +import random
    +
     import fixtures
    +import netaddr
     from neutronclient.common import exceptions as nc_exc
     from oslo_config import cfg
     from oslo_log import log as logging
     
     from neutron.agent.linux import utils
    +from neutron.common import utils as common_utils
     from neutron.tests.common import net_helpers
     from neutron.tests.fullstack.resources import config
     from neutron.tests.fullstack.resources import process
    @@ -30,7 +34,14 @@ class EnvironmentDescription(object):
     
         Does the setup, as a whole, support tunneling? How about l2pop?
         """
    -    pass
    +    def __init__(self, network_type='vxlan', l2_pop=True, qos=False):
    +        self.network_type = network_type
    +        self.l2_pop = l2_pop
    +        self.qos = qos
    +
    +    @property
    +    def tunneling_enabled(self):
    +        return self.network_type in ('vxlan', 'gre')
     
     
     class HostDescription(object):
    @@ -65,19 +76,28 @@ def __init__(self, env_desc, host_desc,
             self.host_desc = host_desc
             self.test_name = test_name
             self.neutron_config = neutron_config
    +        # Use reserved class E addresses
    +        self.local_ip = self.get_random_ip('240.0.0.1', '255.255.255.254')
             self.central_data_bridge = central_data_bridge
             self.central_external_bridge = central_external_bridge
             self.agents = {}
     
         def _setUp(self):
             agent_cfg_fixture = config.OVSConfigFixture(
    -            self.env_desc, self.host_desc, self.neutron_config.temp_dir)
    +            self.env_desc, self.host_desc, self.neutron_config.temp_dir,
    +            self.local_ip)
             self.useFixture(agent_cfg_fixture)
     
    -        br_phys = self.useFixture(
    -            net_helpers.OVSBridgeFixture(
    -                agent_cfg_fixture.get_br_phys_name())).bridge
    -        self.connect_to_internal_network_via_vlans(br_phys)
    +        if self.env_desc.tunneling_enabled:
    +            self.useFixture(
    +                net_helpers.OVSBridgeFixture(
    +                    agent_cfg_fixture.get_br_tun_name())).bridge
    +            self.connect_to_internal_network_via_tunneling()
    +        else:
    +            br_phys = self.useFixture(
    +                net_helpers.OVSBridgeFixture(
    +                    agent_cfg_fixture.get_br_phys_name())).bridge
    +            self.connect_to_internal_network_via_vlans(br_phys)
     
             self.ovs_agent = self.useFixture(
                 process.OVSAgentFixture(
    @@ -101,6 +121,17 @@ def _setUp(self):
                         self.neutron_config,
                         l3_agent_cfg_fixture))
     
    +    def connect_to_internal_network_via_tunneling(self):
    +        veth_1, veth_2 = self.useFixture(
    +            net_helpers.VethFixture()).ports
    +
    +        # NOTE: This sets an IP address on the host's root namespace
    +        # which is cleaned up when the device is deleted.
    +        veth_1.addr.add(common_utils.ip_to_cidr(self.local_ip, 32))
    +
    +        veth_1.link.set_up()
    +        veth_2.link.set_up()
    +
         def connect_to_internal_network_via_vlans(self, host_data_bridge):
              # If using VLANs for segmentation, the provider bridge needs to
              # be connected to a centralized, shared bridge.
    @@ -111,6 +142,11 @@ def connect_to_external_network(self, host_external_bridge):
             net_helpers.create_patch_ports(
                 self.central_external_bridge, host_external_bridge)
     
    +    @staticmethod
    +    def get_random_ip(low, high):
    +        parent_range = netaddr.IPRange(low, high)
    +        return str(random.choice(parent_range))
    +
         @property
         def hostname(self):
             return self.neutron_config.config.DEFAULT.host
    @@ -186,7 +222,8 @@ def _setUp(self):
     
             plugin_cfg_fixture = self.useFixture(
                 config.ML2ConfigFixture(
    -                self.env_desc, None, self.temp_dir, 'vlan'))
    +                self.env_desc, None, self.temp_dir,
    +                self.env_desc.network_type))
             neutron_cfg_fixture = self.useFixture(
                 config.NeutronConfigFixture(
                     self.env_desc, None, self.temp_dir,
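The `get_random_ip` helper above draws each host's tunnel endpoint from the reserved class E block (240.0.0.0/4), which is guaranteed never to collide with real traffic, using `netaddr`. The same uniform draw can be done with only the standard library (a sketch, not neutron's implementation):

```python
import ipaddress
import random


def get_random_ip(low, high):
    """Return a uniformly random IPv4 address in [low, high], inclusive."""
    lo = int(ipaddress.IPv4Address(low))
    hi = int(ipaddress.IPv4Address(high))
    return str(ipaddress.IPv4Address(random.randint(lo, hi)))
```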
    
  • neutron/tests/fullstack/resources/machine.py+8 5 modified
    @@ -20,21 +20,24 @@
     
     
     class FakeFullstackMachine(machine_fixtures.FakeMachineBase):
    -    def __init__(self, host, network_id, tenant_id, safe_client):
    +    def __init__(self, host, network_id, tenant_id, safe_client,
    +                 neutron_port=None):
             super(FakeFullstackMachine, self).__init__()
             self.bridge = host.ovs_agent.br_int
             self.host_binding = host.hostname
             self.tenant_id = tenant_id
             self.network_id = network_id
             self.safe_client = safe_client
    +        self.neutron_port = neutron_port
     
         def _setUp(self):
             super(FakeFullstackMachine, self)._setUp()
     
    -        self.neutron_port = self.safe_client.create_port(
    -            network_id=self.network_id,
    -            tenant_id=self.tenant_id,
    -            hostname=self.host_binding)
    +        if not self.neutron_port:
    +            self.neutron_port = self.safe_client.create_port(
    +                network_id=self.network_id,
    +                tenant_id=self.tenant_id,
    +                hostname=self.host_binding)
             self.neutron_port_id = self.neutron_port['id']
             mac_address = self.neutron_port['mac_address']
     
    
  • neutron/tests/fullstack/resources/process.py+5 3 modified
    @@ -29,8 +29,8 @@
     
     LOG = logging.getLogger(__name__)
     
    -# This should correspond the directory from which infra retrieves log files
    -DEFAULT_LOG_DIR = '/tmp/fullstack-logs/'
    +# This is the directory from which infra fetches log files for fullstack tests
    +DEFAULT_LOG_DIR = '/tmp/dsvm-fullstack-logs/'
     
     
     class ProcessFixture(fixtures.Fixture):
    @@ -47,7 +47,9 @@ def _setUp(self):
             self.addCleanup(self.stop)
     
         def start(self):
    -        log_dir = os.path.join(DEFAULT_LOG_DIR, self.test_name)
    +        test_name = base.sanitize_log_path(self.test_name)
    +
    +        log_dir = os.path.join(DEFAULT_LOG_DIR, test_name)
             common_utils.ensure_dir(log_dir)
     
             timestamp = datetime.datetime.now().strftime("%Y-%m-%d--%H-%M-%S-%f")
    
  • neutron/tests/fullstack/test_connectivity.py+18 2 modified
    @@ -12,20 +12,36 @@
     #    License for the specific language governing permissions and limitations
     #    under the License.
     
    +import testscenarios
    +
     from oslo_utils import uuidutils
     
     from neutron.tests.fullstack import base
     from neutron.tests.fullstack.resources import environment
     from neutron.tests.fullstack.resources import machine
     
     
    +load_tests = testscenarios.load_tests_apply_scenarios
    +
    +
     class TestConnectivitySameNetwork(base.BaseFullStackTestCase):
     
    +    scenarios = [
    +        ('VXLAN', {'network_type': 'vxlan',
    +                   'l2_pop': False}),
    +        ('GRE and l2pop', {'network_type': 'gre',
    +                           'l2_pop': True}),
    +        ('VLANs', {'network_type': 'vlan',
    +                   'l2_pop': False})]
    +
         def setUp(self):
             host_descriptions = [
                 environment.HostDescription() for _ in range(2)]
    -        env = environment.Environment(environment.EnvironmentDescription(),
    -                                      host_descriptions)
    +        env = environment.Environment(
    +            environment.EnvironmentDescription(
    +                network_type=self.network_type,
    +                l2_pop=self.l2_pop),
    +            host_descriptions)
             super(TestConnectivitySameNetwork, self).setUp(env)
     
         def test_connectivity(self):
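`testscenarios.load_tests_apply_scenarios` multiplies the test case by each entry in `scenarios`, producing one variant per (name, attributes) pair with the attributes set on the test instance. A rough stdlib-only sketch of that expansion (the `apply_scenarios` helper is ours, not the library's API):

```python
SCENARIOS = [
    ('VXLAN', {'network_type': 'vxlan', 'l2_pop': False}),
    ('GRE and l2pop', {'network_type': 'gre', 'l2_pop': True}),
    ('VLANs', {'network_type': 'vlan', 'l2_pop': False}),
]


class TestConnectivitySameNetwork:
    network_type = None
    l2_pop = None


def apply_scenarios(base, scenarios):
    """Build one subclass per scenario, with its attributes baked in."""
    variants = []
    for name, attrs in scenarios:
        cls_name = '%s_%s' % (base.__name__, name.replace(' ', '_'))
        variants.append(type(cls_name, (base,), dict(attrs)))
    return variants


variants = apply_scenarios(TestConnectivitySameNetwork, SCENARIOS)
```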
    
  • neutron/tests/fullstack/test_l3_agent.py+8 4 modified
    @@ -28,8 +28,10 @@ class TestLegacyL3Agent(base.BaseFullStackTestCase):
     
         def setUp(self):
             host_descriptions = [environment.HostDescription(l3_agent=True)]
    -        env = environment.Environment(environment.EnvironmentDescription(),
    -                                      host_descriptions)
    +        env = environment.Environment(
    +            environment.EnvironmentDescription(
    +                network_type='vlan', l2_pop=False),
    +            host_descriptions)
             super(TestLegacyL3Agent, self).setUp(env)
     
         def _get_namespace(self, router_id):
    @@ -59,8 +61,10 @@ class TestHAL3Agent(base.BaseFullStackTestCase):
         def setUp(self):
             host_descriptions = [
                 environment.HostDescription(l3_agent=True) for _ in range(2)]
    -        env = environment.Environment(environment.EnvironmentDescription(),
    -                                      host_descriptions)
    +        env = environment.Environment(
    +            environment.EnvironmentDescription(
    +                network_type='vxlan', l2_pop=True),
    +            host_descriptions)
             super(TestHAL3Agent, self).setUp(env)
     
         def _is_ha_router_active_on_one_agent(self, router_id):
    
  • neutron/tests/fullstack/test_qos.py+119 0 added
    @@ -0,0 +1,119 @@
    +# Copyright 2015 Red Hat, Inc.
    +#
    +#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    +#    not use this file except in compliance with the License. You may obtain
    +#    a copy of the License at
    +#
    +#         http://www.apache.org/licenses/LICENSE-2.0
    +#
    +#    Unless required by applicable law or agreed to in writing, software
    +#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    +#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    +#    License for the specific language governing permissions and limitations
    +#    under the License.
    +
    +from oslo_utils import uuidutils
    +
    +from neutron.agent.linux import utils
    +from neutron.services.qos import qos_consts
    +from neutron.tests.fullstack import base
    +from neutron.tests.fullstack.resources import environment
    +from neutron.tests.fullstack.resources import machine
    +
    +
    +BANDWIDTH_LIMIT = 500
    +BANDWIDTH_BURST = 100
    +
    +
    +def _wait_for_rule_applied(vm, limit, burst):
    +    utils.wait_until_true(
    +        lambda: vm.bridge.get_egress_bw_limit_for_port(
    +            vm.port.name) == (limit, burst))
    +
    +
    +def _wait_for_rule_removed(vm):
     +    # No values are reported when the port has no qos policy
    +    _wait_for_rule_applied(vm, None, None)
    +
    +
    +class TestQoSWithOvsAgent(base.BaseFullStackTestCase):
    +
    +    def setUp(self):
    +        host_desc = [environment.HostDescription(l3_agent=False)]
    +        env_desc = environment.EnvironmentDescription(qos=True)
    +        env = environment.Environment(env_desc, host_desc)
    +        super(TestQoSWithOvsAgent, self).setUp(env)
    +
    +    def _create_qos_policy(self):
    +        return self.safe_client.create_qos_policy(
    +            self.tenant_id, 'fs_policy', 'Fullstack testing policy',
    +            shared='False')
    +
    +    def _prepare_vm_with_qos_policy(self, limit, burst):
    +        qos_policy = self._create_qos_policy()
    +        qos_policy_id = qos_policy['id']
    +
    +        rule = self.safe_client.create_bandwidth_limit_rule(
    +            self.tenant_id, qos_policy_id, limit, burst)
    +        # Make it consistent with GET reply
    +        qos_policy['rules'].append(rule)
    +        rule['type'] = qos_consts.RULE_TYPE_BANDWIDTH_LIMIT
    +        rule['qos_policy_id'] = qos_policy_id
    +
    +        port = self.safe_client.create_port(
    +            self.tenant_id, self.network['id'],
    +            self.environment.hosts[0].hostname,
    +            qos_policy_id)
    +
    +        vm = self.useFixture(
    +            machine.FakeFullstackMachine(
    +                self.environment.hosts[0],
    +                self.network['id'],
    +                self.tenant_id,
    +                self.safe_client,
    +                neutron_port=port))
    +
    +        return vm, qos_policy
    +
    +    def test_qos_policy_rule_lifecycle(self):
    +        new_limit = BANDWIDTH_LIMIT + 100
    +        new_burst = BANDWIDTH_BURST + 50
    +
    +        self.tenant_id = uuidutils.generate_uuid()
    +        self.network = self.safe_client.create_network(self.tenant_id,
    +                                                       'network-test')
    +        self.subnet = self.safe_client.create_subnet(
    +            self.tenant_id, self.network['id'],
    +            cidr='10.0.0.0/24',
    +            gateway_ip='10.0.0.1',
    +            name='subnet-test',
    +            enable_dhcp=False)
    +
    +        # Create port with qos policy attached
    +        vm, qos_policy = self._prepare_vm_with_qos_policy(BANDWIDTH_LIMIT,
    +                                                          BANDWIDTH_BURST)
    +        _wait_for_rule_applied(vm, BANDWIDTH_LIMIT, BANDWIDTH_BURST)
    +        qos_policy_id = qos_policy['id']
    +        rule = qos_policy['rules'][0]
    +
    +        # Remove rule from qos policy
    +        self.client.delete_bandwidth_limit_rule(rule['id'], qos_policy_id)
    +        _wait_for_rule_removed(vm)
    +
    +        # Create new rule
    +        new_rule = self.safe_client.create_bandwidth_limit_rule(
    +            self.tenant_id, qos_policy_id, new_limit, new_burst)
    +        _wait_for_rule_applied(vm, new_limit, new_burst)
    +
     +        # Update the rule back to the original bandwidth values
    +        self.client.update_bandwidth_limit_rule(
    +            new_rule['id'], qos_policy_id,
    +            body={'bandwidth_limit_rule': {'max_kbps': BANDWIDTH_LIMIT,
    +                                           'max_burst_kbps': BANDWIDTH_BURST}})
    +        _wait_for_rule_applied(vm, BANDWIDTH_LIMIT, BANDWIDTH_BURST)
    +
    +        # Remove qos policy from port
    +        self.client.update_port(
    +            vm.neutron_port['id'],
    +            body={'port': {'qos_policy_id': None}})
    +        _wait_for_rule_removed(vm)
    
  • neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py+3 0 modified
    @@ -31,11 +31,13 @@
     TEST_POLICY_ID2 = "46ebaec0-0570-43ac-82f6-60d2b03168c5"
     TEST_BW_LIMIT_RULE_1 = rule.QosBandwidthLimitRule(
             context=None,
    +        qos_policy_id=TEST_POLICY_ID1,
             id="5f126d84-551a-4dcf-bb01-0e9c0df0c793",
             max_kbps=1000,
             max_burst_kbps=10)
     TEST_BW_LIMIT_RULE_2 = rule.QosBandwidthLimitRule(
             context=None,
    +        qos_policy_id=TEST_POLICY_ID2,
             id="fa9128d9-44af-49b2-99bb-96548378ad42",
             max_kbps=900,
             max_burst_kbps=9)
    @@ -80,6 +82,7 @@ def _create_test_port_dict(self, policy_id=None):
             port_dict = super(OVSAgentQoSExtensionTestFramework,
                               self)._create_test_port_dict()
             port_dict['qos_policy_id'] = policy_id
    +        port_dict['network_qos_policy_id'] = None
             return port_dict
     
         def _get_device_details(self, port, network):
    
  • neutron/tests/functional/agent/linux/test_async_process.py+8 1 modified
    @@ -15,6 +15,7 @@
     import eventlet
     
     from neutron.agent.linux import async_process
    +from neutron.agent.linux import utils
     from neutron.tests import base
     
     
    @@ -67,5 +68,11 @@ def test_async_process_respawns(self):
     
             # Ensure that the same output is read twice
             self._check_stdout(proc)
    -        proc._kill_process(proc.pid)
    +        pid = proc.pid
    +        utils.execute(['kill', '-9', pid])
    +        utils.wait_until_true(
    +            lambda: proc.is_active() and pid != proc.pid,
    +            timeout=5,
    +            sleep=0.01,
    +            exception=RuntimeError(_("Async process didn't respawn")))
             self._check_stdout(proc)
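The `utils.wait_until_true` call above polls a predicate until it holds or a deadline passes, replacing the old direct kill with an assertion that the process actually respawned under a new pid. Its core behavior is roughly the following (a sketch; neutron's version is eventlet-based):

```python
import time


def wait_until_true(predicate, timeout=5, sleep=0.01, exception=None):
    """Poll predicate until it returns True; raise on timeout."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() > deadline:
            raise exception or RuntimeError('Timed out waiting for condition')
        time.sleep(sleep)
```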
    
  • neutron/tests/functional/agent/linux/test_ebtables_driver.py+0 127 removed
    @@ -1,127 +0,0 @@
    -# Copyright (c) 2015 OpenStack Foundation.
    -# All Rights Reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from neutron.agent.linux import bridge_lib
    -from neutron.agent.linux import ebtables_driver
    -from neutron.tests.common import machine_fixtures
    -from neutron.tests.common import net_helpers
    -from neutron.tests.functional import base
    -
    -
    -NO_FILTER_APPLY = (
    -    "*filter\n"
    -    ":INPUT ACCEPT\n"
    -    ":FORWARD ACCEPT\n"
    -    ":OUTPUT ACCEPT\n"
    -    ":neutron-nwfilter-OUTPUT ACCEPT\n"
    -    ":neutron-nwfilter-INPUT ACCEPT\n"
    -    ":neutron-nwfilter-FORWARD ACCEPT\n"
    -    ":neutron-nwfilter-spoofing-fallb ACCEPT\n"
    -    "[0:0] -A INPUT -j neutron-nwfilter-INPUT\n"
    -    "[0:0] -A FORWARD -j neutron-nwfilter-FORWARD\n"
    -    "[2:140] -A OUTPUT -j neutron-nwfilter-OUTPUT\n"
    -    "[0:0] -A neutron-nwfilter-spoofing-fallb -j DROP\n"
    -    "COMMIT")
    -
    -FILTER_APPLY_TEMPLATE = (
    -    "*filter\n"
    -    ":INPUT ACCEPT\n"
    -    ":FORWARD ACCEPT\n"
    -    ":OUTPUT ACCEPT\n"
    -    ":neutron-nwfilter-OUTPUT ACCEPT\n"
    -    ":neutron-nwfilter-isome-port-id ACCEPT\n"
    -    ":neutron-nwfilter-i-arp-some-por ACCEPT\n"
    -    ":neutron-nwfilter-i-ip-some-port ACCEPT\n"
    -    ":neutron-nwfilter-spoofing-fallb ACCEPT\n"
    -    ":neutron-nwfilter-INPUT ACCEPT\n"
    -    ":neutron-nwfilter-FORWARD ACCEPT\n"
    -    "[0:0] -A neutron-nwfilter-OUTPUT -j neutron-nwfilter-isome-port-id\n"
    -    "[0:0] -A INPUT -j neutron-nwfilter-INPUT\n"
    -    "[2:140] -A OUTPUT -j neutron-nwfilter-OUTPUT\n"
    -    "[0:0] -A FORWARD -j neutron-nwfilter-FORWARD\n"
    -    "[0:0] -A neutron-nwfilter-spoofing-fallb -j DROP\n"
    -    "[0:0] -A neutron-nwfilter-i-arp-some-por "
    -    "-p arp --arp-opcode 2 --arp-mac-src %(mac_addr)s "
    -    "--arp-ip-src %(ip_addr)s -j RETURN\n"
    -    "[0:0] -A neutron-nwfilter-i-arp-some-por -p ARP --arp-op Request "
    -    "-j ACCEPT\n"
    -    "[0:0] -A neutron-nwfilter-i-arp-some-por "
    -    "-j neutron-nwfilter-spoofing-fallb\n"
    -    "[0:0] -A neutron-nwfilter-isome-port-id "
    -    "-p arp -j neutron-nwfilter-i-arp-some-por\n"
    -    "[0:0] -A neutron-nwfilter-i-ip-some-port "
    -    "-s %(mac_addr)s -p IPv4 --ip-source %(ip_addr)s -j RETURN\n"
    -    "[0:0] -A neutron-nwfilter-i-ip-some-port "
    -    "-j neutron-nwfilter-spoofing-fallb\n"
    -    "[0:0] -A neutron-nwfilter-isome-port-id "
    -    "-p IPv4 -j neutron-nwfilter-i-ip-some-port\n"
    -    "COMMIT")
    -
    -
    -class EbtablesLowLevelTestCase(base.BaseSudoTestCase):
    -
    -    def setUp(self):
    -        super(EbtablesLowLevelTestCase, self).setUp()
    -
    -        bridge = self.useFixture(net_helpers.VethBridgeFixture()).bridge
    -        self.source, self.destination = self.useFixture(
    -            machine_fixtures.PeerMachines(bridge)).machines
    -
    -        # Extract MAC and IP address of one of my interfaces
    -        self.mac = self.source.port.link.address
    -        self.addr = self.source.ip
    -
    -        # Pick one of the namespaces and setup a bridge for the local ethernet
    -        # interface there, because ebtables only works on bridged interfaces.
    -        dev_mybridge = bridge_lib.BridgeDevice.addbr(
    -            'mybridge', self.source.namespace)
    -        dev_mybridge.addif(self.source.port.name)
    -
     -        # Take the IP address off one of the interfaces and apply it to
     -        # the bridge interface instead.
    -        self.source.port.addr.delete(self.source.ip_cidr)
    -        dev_mybridge.link.set_up()
    -        dev_mybridge.addr.add(self.source.ip_cidr)
    -
    -    def _test_basic_port_filter_wrong_mac(self):
    -        # Setup filter with wrong IP/MAC address pair. Basic filters only allow
    -        # packets with specified address combinations, thus all packets will
    -        # be dropped.
    -        mac_ip_pair = dict(mac_addr="11:11:11:22:22:22", ip_addr=self.addr)
    -        filter_apply = FILTER_APPLY_TEMPLATE % mac_ip_pair
    -        ebtables_driver.ebtables_restore(filter_apply,
    -                                         self.source.execute)
    -        self.source.assert_no_ping(self.destination.ip)
    -
    -        # Assure that ping will work once we unfilter the instance
    -        ebtables_driver.ebtables_restore(NO_FILTER_APPLY,
    -                                         self.source.execute)
    -        self.source.assert_ping(self.destination.ip)
    -
    -    def _test_basic_port_filter_correct_mac(self):
    -        # Use the correct IP/MAC address pair for this one.
    -        mac_ip_pair = dict(mac_addr=self.mac, ip_addr=self.addr)
    -
    -        filter_apply = FILTER_APPLY_TEMPLATE % mac_ip_pair
    -        ebtables_driver.ebtables_restore(filter_apply,
    -                                         self.source.execute)
    -
    -        self.source.assert_ping(self.destination.ip)
    -
    -    def test_ebtables_filtering(self):
    -        # Cannot parallelize those tests. Therefore need to call them
    -        # in order from a single function.
    -        self._test_basic_port_filter_wrong_mac()
    -        self._test_basic_port_filter_correct_mac()
    
  • neutron/tests/functional/agent/linux/test_ip_lib.py+0 1 modified
    @@ -42,7 +42,6 @@ def setUp(self):
             self._configure()
     
         def _configure(self):
    -        config.setup_logging()
             config.register_interface_driver_opts_helper(cfg.CONF)
             cfg.CONF.set_override(
                 'interface_driver',
    
  • neutron/tests/functional/agent/linux/test_iptables.py+5 0 modified
    @@ -88,6 +88,8 @@ def _test_with_nc(self, fw_manager, direction, port, protocol):
             self.assertTrue(netcat.test_connectivity(),
                             'Failed connectivity check before applying a filter '
                             'with %s' % filter_params)
    +        # REVISIT(jlibosva): Make sure we have ASSURED conntrack entry for
    +        #                    given connection
             self.filter_add_rule(
                 fw_manager, self.server.ip, direction, protocol, port)
             with testtools.ExpectedException(
    @@ -97,6 +99,9 @@ def _test_with_nc(self, fw_manager, direction, port, protocol):
                 netcat.test_connectivity()
             self.filter_remove_rule(
                 fw_manager, self.server.ip, direction, protocol, port)
     +        # With TCP, packets get through once the firewall rule is
     +        # removed, so we would read stale data from the socket; with UDP
     +        # the process died, so we respawn the processes for clean sockets
             self.assertTrue(netcat.test_connectivity(True),
                             'Failed connectivity check after removing a filter '
                             'with %s' % filter_params)
    
  • neutron/tests/functional/agent/linux/test_keepalived.py+14 4 modified
    @@ -14,6 +14,7 @@
     #    under the License.
     
     from oslo_config import cfg
    +from oslo_log import log as logging
     
     from neutron.agent.linux import external_process
     from neutron.agent.linux import keepalived
    @@ -22,6 +23,9 @@
     from neutron.tests.unit.agent.linux import test_keepalived
     
     
    +LOG = logging.getLogger(__name__)
    +
    +
     class KeepalivedManagerTestCase(base.BaseTestCase,
                                     test_keepalived.KeepalivedConfBaseMixin):
     
    @@ -52,12 +56,18 @@ def test_keepalived_spawn(self):
         def test_keepalived_respawns(self):
             self.manager.spawn()
             process = self.manager.get_process()
    -        self.assertTrue(process.active)
    -
    -        process.disable(sig='15')
    -
    +        pid = process.pid
             utils.wait_until_true(
                 lambda: process.active,
                 timeout=5,
                 sleep=0.01,
    +            exception=RuntimeError(_("Keepalived didn't spawn")))
    +
    +        # force process crash, and see that when it comes back
    +        # it's indeed a different process
    +        utils.execute(['kill', '-9', pid], run_as_root=True)
    +        utils.wait_until_true(
    +            lambda: process.active and pid != process.pid,
    +            timeout=5,
    +            sleep=0.01,
                 exception=RuntimeError(_("Keepalived didn't respawn")))
    
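The respawn check above leans on neutron's `utils.wait_until_true` helper (poll a predicate with a timeout, raising a caller-supplied exception on failure). A minimal standalone sketch of that polling pattern — this is an illustrative reimplementation, not neutron's actual code:

```python
import time

def wait_until_true(predicate, timeout=5, sleep=0.01, exception=None):
    """Poll predicate until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() > deadline:
            # Surface the caller-supplied exception so failures carry a
            # meaningful message, as in the keepalived test above.
            raise exception if exception is not None else RuntimeError(
                'condition not met within %s seconds' % timeout)
        time.sleep(sleep)
```

This is why the test can kill keepalived with `kill -9` and simply wait for `process.active and pid != process.pid`: the helper hides the retry loop.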
  • neutron/tests/functional/agent/linux/test_linuxbridge_arp_protect.py+9 1 modified
    @@ -36,7 +36,8 @@ def setUp(self):
                 machine_fixtures.PeerMachines(bridge, amount=3)).machines
     
         def _add_arp_protection(self, machine, addresses, extra_port_dict=None):
    -        port_dict = {'fixed_ips': [{'ip_address': a} for a in addresses]}
    +        port_dict = {'fixed_ips': [{'ip_address': a} for a in addresses],
    +                     'device_owner': 'nobody'}
             if extra_port_dict:
                 port_dict.update(extra_port_dict)
             name = net_helpers.VethFixture.get_peer_name(machine.port.name)
    @@ -88,6 +89,13 @@ def test_arp_protection_port_security_disabled(self):
                                      {'port_security_enabled': False})
             arping(self.observer.namespace, self.source.ip)
     
    +    def test_arp_protection_network_owner(self):
    +        self._add_arp_protection(self.source, ['1.1.1.1'])
    +        no_arping(self.observer.namespace, self.source.ip)
    +        self._add_arp_protection(self.source, ['1.1.1.1'],
    +                                 {'device_owner': 'network:router_gateway'})
    +        arping(self.observer.namespace, self.source.ip)
    +
         def test_arp_protection_dead_reference_removal(self):
             self._add_arp_protection(self.source, ['1.1.1.1'])
             self._add_arp_protection(self.destination, ['2.2.2.2'])
    
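`test_arp_protection_network_owner` above exercises the behavior at the heart of CVE-2015-5240: ports whose `device_owner` starts with `network:` (routers, DHCP agents) are exempt from ARP/IP anti-spoofing, which is why the fix restricts who may set that prefix. A minimal sketch of the exemption check, with a hypothetical helper name:

```python
NETWORK_OWNER_PREFIX = 'network:'

def port_needs_arp_protection(port):
    # Infrastructure ports legitimately answer ARP for addresses that
    # are not among their fixed IPs (e.g. a router gateway), so they
    # are exempt; every other port gets spoofing filters installed.
    return not port.get('device_owner', '').startswith(NETWORK_OWNER_PREFIX)
```

The vulnerability was that any authenticated tenant could flip a port into the exempt class by setting `device_owner` to a `network:`-prefixed value before security group rules were applied.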
  • neutron/tests/functional/agent/linux/test_process_monitor.py+1 0 modified
    @@ -96,6 +96,7 @@ def all_children_active():
                 exception=RuntimeError('Not all children respawned.'))
     
         def cleanup_spawned_children(self):
    +        self._process_monitor.stop()
             for pm in self._child_processes:
                 pm.disable()
     
    
  • neutron/tests/functional/agent/test_l2_lb_agent.py+12 3 modified
    @@ -35,13 +35,22 @@ def setUp(self):
         def test_validate_interface_mappings(self):
             mappings = {'physnet1': 'int1', 'physnet2': 'int2'}
             with testtools.ExpectedException(SystemExit):
    -            lba.LinuxBridgeManager(mappings)
    +            lba.LinuxBridgeManager({}, mappings)
             self.manage_device(
                 self.generate_device_details()._replace(namespace=None,
                                                         name='int1'))
             with testtools.ExpectedException(SystemExit):
    -            lba.LinuxBridgeManager(mappings)
    +            lba.LinuxBridgeManager({}, mappings)
             self.manage_device(
                 self.generate_device_details()._replace(namespace=None,
                                                         name='int2'))
    -        lba.LinuxBridgeManager(mappings)
    +        lba.LinuxBridgeManager({}, mappings)
    +
    +    def test_validate_bridge_mappings(self):
    +        mappings = {'physnet1': 'br-eth1'}
    +        with testtools.ExpectedException(SystemExit):
    +            lba.LinuxBridgeManager(mappings, {})
    +        self.manage_device(
    +            self.generate_device_details()._replace(namespace=None,
    +                                                    name='br-eth1'))
    +        lba.LinuxBridgeManager(mappings, {})
    
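The tests above expect `LinuxBridgeManager` to `SystemExit` while any mapped bridge or interface is missing from the host, and to start cleanly once all devices exist. A sketch of that validation under the assumption that a `device_exists` callable is supplied (both names here are hypothetical):

```python
import sys

def validate_device_mappings(mappings, device_exists):
    # Abort startup if any physical network maps to a device that is
    # not present on the host, mirroring the SystemExit the functional
    # tests above expect.
    for physnet, device in mappings.items():
        if not device_exists(device):
            sys.exit('Device %s for physical network %s does not exist'
                     % (device, physnet))
```

Failing fast at startup is preferable here: a silently missing bridge would only surface later as unexplained loss of connectivity on that physical network.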
  • neutron/tests/functional/agent/test_l2_ovs_agent.py+11 2 modified
    @@ -67,6 +67,15 @@ def test_resync_devices_set_up_after_exception(self):
                 trigger_resync=True)
             self.wait_until_ports_state(self.ports, up=True)
     
    +    def test_reprocess_port_when_ovs_restarts(self):
    +        self.setup_agent_and_ports(
    +            port_dicts=self.create_test_ports())
    +        self.wait_until_ports_state(self.ports, up=True)
    +        self.agent.check_ovs_status.return_value = constants.OVS_RESTARTED
    +        # OVS restarted, the agent should reprocess all the ports
    +        self.agent.plugin_rpc.update_device_list.reset_mock()
    +        self.wait_until_ports_state(self.ports, up=True)
    +
         def test_port_vlan_tags(self):
             self.setup_agent_and_ports(
                 port_dicts=self.create_test_ports(),
    @@ -91,7 +100,7 @@ def test_assert_pings_during_br_int_setup_not_lost(self):
                                        create_tunnels=False)
             self.wait_until_ports_state(self.ports, up=True)
             ips = [port['fixed_ips'][0]['ip_address'] for port in self.ports]
    -        with net_helpers.async_ping(self.namespace, ips) as running:
    -            while running():
    +        with net_helpers.async_ping(self.namespace, ips) as done:
    +            while not done():
                     self.agent.setup_integration_br()
                     time.sleep(0.25)
    
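The last hunk above fixes an inverted loop condition: `while running():` exited as soon as the ping started, whereas `while not done():` keeps disrupting the bridge until the pings have finished. The underlying pattern — run a task in the background and expose a `done()` predicate to the caller — can be sketched like this (a simplified stand-in for `net_helpers.async_ping`, not its real implementation):

```python
import threading
import time
from contextlib import contextmanager

@contextmanager
def async_task(target):
    # Run target in a background thread and yield a done() predicate so
    # the caller can keep disrupting things until the task completes.
    done_ev = threading.Event()
    def run():
        try:
            target()
        finally:
            done_ev.set()
    worker = threading.Thread(target=run)
    worker.start()
    try:
        yield done_ev.is_set
    finally:
        worker.join()
```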
  • neutron/tests/functional/agent/test_l3_agent.py+106 10 modified
    @@ -16,6 +16,7 @@
     import copy
     import functools
     import os.path
    +import time
     
     import mock
     import netaddr
    @@ -56,6 +57,7 @@
     _uuid = uuidutils.generate_uuid
     
     METADATA_REQUEST_TIMEOUT = 60
    +METADATA_REQUEST_SLEEP = 5
     
     
     def get_ovs_bridge(br_name):
    @@ -81,8 +83,6 @@ def _get_config_opts(self):
         def _configure_agent(self, host):
             conf = self._get_config_opts()
             l3_agent_main.register_opts(conf)
    -        cfg.CONF.set_override('debug', False)
    -        agent_config.setup_logging()
             conf.set_override(
                 'interface_driver',
                 'neutron.agent.linux.interface.OVSInterfaceDriver')
    @@ -914,6 +914,28 @@ def _create_metadata_fake_server(self, status):
                          self.agent.conf.metadata_proxy_socket,
                          workers=0, backlog=4096, mode=self.SOCKET_MODE)
     
    +    def _query_metadata_proxy(self, machine):
    +        url = 'http://%(host)s:%(port)s' % {'host': dhcp.METADATA_DEFAULT_IP,
    +                                            'port': dhcp.METADATA_PORT}
    +        cmd = 'curl', '--max-time', METADATA_REQUEST_TIMEOUT, '-D-', url
    +        i = 0
    +        CONNECTION_REFUSED_TIMEOUT = METADATA_REQUEST_TIMEOUT // 2
    +        while i <= CONNECTION_REFUSED_TIMEOUT:
    +            try:
    +                raw_headers = machine.execute(cmd)
    +                break
    +            except RuntimeError as e:
    +                if 'Connection refused' in str(e):
    +                    time.sleep(METADATA_REQUEST_SLEEP)
    +                    i += METADATA_REQUEST_SLEEP
    +                else:
    +                    self.fail('metadata proxy unreachable '
    +                              'on %s before timeout' % url)
    +
    +        if i > CONNECTION_REFUSED_TIMEOUT:
    +            self.fail(
    +                'Timed out waiting for metadata proxy to become available')
    +        return raw_headers.splitlines()[0]
    +
         def test_access_to_metadata_proxy(self):
             """Test access to the l3-agent metadata proxy.
     
    @@ -945,16 +967,9 @@ def test_access_to_metadata_proxy(self):
                     router_ip_cidr.partition('/')[0]))
     
             # Query metadata proxy
    -        url = 'http://%(host)s:%(port)s' % {'host': dhcp.METADATA_DEFAULT_IP,
    -                                            'port': dhcp.METADATA_PORT}
    -        cmd = 'curl', '--max-time', METADATA_REQUEST_TIMEOUT, '-D-', url
    -        try:
    -            raw_headers = machine.execute(cmd)
    -        except RuntimeError:
    -            self.fail('metadata proxy unreachable on %s before timeout' % url)
    +        firstline = self._query_metadata_proxy(machine)
     
             # Check status code
    -        firstline = raw_headers.splitlines()[0]
             self.assertIn(str(webob.exc.HTTPOk.code), firstline.split())
     
     
    @@ -1388,3 +1403,84 @@ def test_dvr_router_fip_late_binding(self):
             self.assertTrue(self._namespace_exists(router1.ns_name))
             self.assertTrue(self._namespace_exists(fip_ns))
             self._assert_snat_namespace_does_not_exist(router1)
    +
    +    def _assert_snat_namespace_exists(self, router):
    +        namespace = dvr_snat_ns.SnatNamespace.get_snat_ns_name(
    +            router.router_id)
    +        self.assertTrue(self._namespace_exists(namespace))
    +
    +    def _get_dvr_snat_namespace_device_status(
    +        self, router, internal_dev_name=None):
    +        """Function returns the internal and external device status."""
    +        snat_ns = dvr_snat_ns.SnatNamespace.get_snat_ns_name(
    +            router.router_id)
    +        external_port = router.get_ex_gw_port()
    +        external_device_name = router.get_external_device_name(
    +            external_port['id'])
    +        qg_device_created_successfully = ip_lib.device_exists(
    +            external_device_name, namespace=snat_ns)
    +        sg_device_created_successfully = ip_lib.device_exists(
    +            internal_dev_name, namespace=snat_ns)
    +        return qg_device_created_successfully, sg_device_created_successfully
    +
    +    def test_dvr_router_snat_namespace_with_interface_remove(self):
    +        """Test to validate the snat namespace with interface remove.
    +
    +        This test validates the snat namespace for all the external
    +        and internal devices. It also validates if the internal
    +        device corresponding to the router interface is removed
    +        when the router interface is deleted.
    +        """
    +        self.agent.conf.agent_mode = 'dvr_snat'
    +        router_info = self.generate_dvr_router_info()
    +        snat_internal_port = router_info[l3_constants.SNAT_ROUTER_INTF_KEY]
    +        router1 = self.manage_router(self.agent, router_info)
    +        csnat_internal_port = (
    +            router1.router[l3_constants.SNAT_ROUTER_INTF_KEY])
    +        # Now save the internal device name to verify later
    +        internal_device_name = router1._get_snat_int_device_name(
    +            csnat_internal_port[0]['id'])
    +        self._assert_snat_namespace_exists(router1)
    +        qg_device, sg_device = self._get_dvr_snat_namespace_device_status(
    +            router1, internal_dev_name=internal_device_name)
    +        self.assertTrue(qg_device)
    +        self.assertTrue(sg_device)
    +        self.assertEqual(router1.snat_ports, snat_internal_port)
    +        # Now let us not pass INTERFACE_KEY, to emulate
    +        # that the interface has been removed.
    +        router1.router[l3_constants.INTERFACE_KEY] = []
    +        # Now let us not pass the SNAT_ROUTER_INTF_KEY, to emulate
    +        # that the server did not send it, since the interface has been
    +        # removed.
    +        router1.router[l3_constants.SNAT_ROUTER_INTF_KEY] = []
    +        self.agent._process_updated_router(router1.router)
    +        router_updated = self.agent.router_info[router_info['id']]
    +        self._assert_snat_namespace_exists(router_updated)
    +        qg_device, sg_device = self._get_dvr_snat_namespace_device_status(
    +            router_updated, internal_dev_name=internal_device_name)
    +        self.assertFalse(sg_device)
    +        self.assertTrue(qg_device)
    +
    +    def test_dvr_router_calls_delete_agent_gateway_if_last_fip(self):
    +        """Test to validate delete fip if it is last fip managed by agent."""
    +        self.agent.conf.agent_mode = 'dvr_snat'
    +        router_info = self.generate_dvr_router_info(enable_snat=True)
    +        router1 = self.manage_router(self.agent, router_info)
    +        floating_agent_gw_port = (
    +            router1.router[l3_constants.FLOATINGIP_AGENT_INTF_KEY])
    +        self.assertTrue(floating_agent_gw_port)
    +        fip_ns = router1.fip_ns.get_name()
    +        router1.fip_ns.agent_gw_port = floating_agent_gw_port
    +        self.assertTrue(self._namespace_exists(router1.ns_name))
    +        self.assertTrue(self._namespace_exists(fip_ns))
    +        self._assert_dvr_floating_ips(router1)
    +        self._assert_dvr_snat_gateway(router1)
    +        router1.router[l3_constants.FLOATINGIP_KEY] = []
    +        rpc_mock = mock.patch.object(
    +            self.agent.plugin_rpc, 'delete_agent_gateway_port').start()
    +        self.agent._process_updated_router(router1.router)
    +        self.assertTrue(rpc_mock.called)
    +        rpc_mock.assert_called_once_with(
    +            self.agent.context,
    +            floating_agent_gw_port[0]['network_id'])
    +        self.assertFalse(self._namespace_exists(fip_ns))
    
  • neutron/tests/functional/agent/test_ovs_flows.py+81 11 modified
    @@ -14,13 +14,15 @@
     #    under the License.
     
     import eventlet
    +import fixtures
     import mock
     
     from oslo_config import cfg
     from oslo_utils import importutils
     
     from neutron.agent.linux import ip_lib
     from neutron.cmd.sanity import checks
    +from neutron.common import constants as n_const
     from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
     from neutron.plugins.ml2.drivers.openvswitch.agent \
         import ovs_neutron_agent as ovsagt
    @@ -47,18 +49,34 @@ def setUp(self):
             self.br_int = None
             self.init_done = False
             self.init_done_ev = eventlet.event.Event()
    -        self._main_thread = eventlet.spawn(self._kick_main)
             self.addCleanup(self._kill_main)
    +        retry_count = 3
    +        while True:
    +            cfg.CONF.set_override('of_listen_port',
    +                                  net_helpers.get_free_namespace_port(
    +                                      n_const.PROTO_NAME_TCP),
    +                                  group='OVS')
    +            self.of_interface_mod.init_config()
    +            self._main_thread = eventlet.spawn(self._kick_main)
     
    -        # Wait for _kick_main -> of_interface main -> _agent_main
    -        # NOTE(yamamoto): This complexity came from how "native" of_interface
    -        # runs its openflow controller.  "native" of_interface's main routine
    -        # blocks while running the embedded openflow controller.  In that case,
    -        # the agent rpc_loop runs in another thread.  However, for FT we need
    -        # to run setUp() and test_xxx() in the same thread.  So I made this
    -        # run of_interface's main in a separate thread instead.
    -        while not self.init_done:
    -            self.init_done_ev.wait()
    +            # Wait for _kick_main -> of_interface main -> _agent_main
    +            # NOTE(yamamoto): This complexity came from how "native"
    +            # of_interface runs its openflow controller.  "native"
    +            # of_interface's main routine blocks while running the
    +            # embedded openflow controller.  In that case, the agent
    +            # rpc_loop runs in another thread.  However, for FT we
    +            # need to run setUp() and test_xxx() in the same thread.
    +            # So I made this run of_interface's main in a separate
    +            # thread instead.
    +            try:
    +                while not self.init_done:
    +                    self.init_done_ev.wait()
    +                break
    +            except fixtures.TimeoutException:
    +                self._kill_main()
    +            retry_count -= 1
    +            if retry_count < 0:
    +                raise Exception('port allocation failed')
     
         def _kick_main(self):
             with mock.patch.object(ovsagt, 'main', self._agent_main):
    @@ -87,6 +105,11 @@ class _OVSAgentOFCtlTestBase(_OVSAgentTestBase):
                         'openflow.ovs_ofctl.main')
     
     
    +class _OVSAgentNativeTestBase(_OVSAgentTestBase):
    +    _MAIN_MODULE = ('neutron.plugins.ml2.drivers.openvswitch.agent.'
    +                    'openflow.native.main')
    +
    +
     class _ARPSpoofTestCase(object):
         def setUp(self):
             # NOTE(kevinbenton): it would be way cooler to use scapy for
    @@ -139,6 +162,21 @@ def test_arp_spoof_blocks_response(self):
             self.dst_p.addr.add('%s/24' % self.dst_addr)
             net_helpers.assert_no_ping(self.src_namespace, self.dst_addr, count=2)
     
    +    def test_arp_spoof_blocks_icmpv6_neigh_advt(self):
    +        self.src_addr = '2000::1'
    +        self.dst_addr = '2000::2'
    +        # this will prevent the destination from responding (i.e., icmpv6
    +        # neighbour advertisement) to the icmpv6 neighbour solicitation
    +        # request for its own address (2000::2) as spoofing rules added
    +        # below only allow '2000::3'.
    +        self._setup_arp_spoof_for_port(self.dst_p.name, ['2000::3'])
    +        self.src_p.addr.add('%s/64' % self.src_addr)
    +        self.dst_p.addr.add('%s/64' % self.dst_addr)
    +        # make sure the IPv6 addresses are ready before pinging
    +        self.src_p.addr.wait_until_address_ready(self.src_addr)
    +        self.dst_p.addr.wait_until_address_ready(self.dst_addr)
    +        net_helpers.assert_no_ping(self.src_namespace, self.dst_addr, count=2)
    +
         def test_arp_spoof_blocks_request(self):
             # this will prevent the source from sending an ARP
             # request with its own address
    @@ -161,6 +199,18 @@ def test_arp_spoof_allowed_address_pairs(self):
             self.dst_p.addr.add('%s/24' % self.dst_addr)
             net_helpers.assert_ping(self.src_namespace, self.dst_addr, count=2)
     
    +    def test_arp_spoof_icmpv6_neigh_advt_allowed_address_pairs(self):
    +        self.src_addr = '2000::1'
    +        self.dst_addr = '2000::2'
    +        self._setup_arp_spoof_for_port(self.dst_p.name, ['2000::3',
    +                                                         self.dst_addr])
    +        self.src_p.addr.add('%s/64' % self.src_addr)
    +        self.dst_p.addr.add('%s/64' % self.dst_addr)
    +        # make sure the IPv6 addresses are ready before pinging
    +        self.src_p.addr.wait_until_address_ready(self.src_addr)
    +        self.dst_p.addr.wait_until_address_ready(self.dst_addr)
    +        net_helpers.assert_ping(self.src_namespace, self.dst_addr, count=2)
    +
         def test_arp_spoof_allowed_address_pairs_0cidr(self):
             self._setup_arp_spoof_for_port(self.dst_p.name, ['9.9.9.9/0',
                                                              '1.2.3.4'])
    @@ -178,12 +228,24 @@ def test_arp_spoof_disable_port_security(self):
             self.dst_p.addr.add('%s/24' % self.dst_addr)
             net_helpers.assert_ping(self.src_namespace, self.dst_addr, count=2)
     
    -    def _setup_arp_spoof_for_port(self, port, addrs, psec=True):
    +    def test_arp_spoof_disable_network_port(self):
    +        # block first and then disable port security to make sure old rules
    +        # are cleared
    +        self._setup_arp_spoof_for_port(self.dst_p.name, ['192.168.0.3'])
    +        self._setup_arp_spoof_for_port(self.dst_p.name, ['192.168.0.3'],
    +                                       device_owner='network:router_gateway')
    +        self.src_p.addr.add('%s/24' % self.src_addr)
    +        self.dst_p.addr.add('%s/24' % self.dst_addr)
    +        net_helpers.assert_ping(self.src_namespace, self.dst_addr, count=2)
    +
    +    def _setup_arp_spoof_for_port(self, port, addrs, psec=True,
    +                                  device_owner='nobody'):
             vif = next(
                 vif for vif in self.br.get_vif_ports() if vif.port_name == port)
             ip_addr = addrs.pop()
             details = {'port_security_enabled': psec,
                        'fixed_ips': [{'ip_address': ip_addr}],
    +                   'device_owner': device_owner,
                        'allowed_address_pairs': [
                             dict(ip_address=ip) for ip in addrs]}
             ovsagt.OVSNeutronAgent.setup_arp_spoofing_protection(
    @@ -194,6 +256,10 @@ class ARPSpoofOFCtlTestCase(_ARPSpoofTestCase, _OVSAgentOFCtlTestBase):
         pass
     
     
    +class ARPSpoofNativeTestCase(_ARPSpoofTestCase, _OVSAgentNativeTestBase):
    +    pass
    +
    +
     class _CanaryTableTestCase(object):
         def test_canary_table(self):
             self.br_int.delete_flows()
    @@ -206,3 +272,7 @@ def test_canary_table(self):
     
     class CanaryTableOFCtlTestCase(_CanaryTableTestCase, _OVSAgentOFCtlTestBase):
         pass
    +
    +
    +class CanaryTableNativeTestCase(_CanaryTableTestCase, _OVSAgentNativeTestBase):
    +    pass
    
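The `setUp` change above wraps agent startup in a bounded retry: each attempt allocates a fresh free listen port, and after `retry_count` timeouts the test fails with `'port allocation failed'` rather than looping forever. A generic sketch of that bounded retry-with-fresh-resource pattern (names are hypothetical):

```python
def start_with_retries(allocate_port, start, retries=3):
    # Each attempt gets a freshly allocated port, since a timeout may
    # mean the port was grabbed by another process in the meantime; if
    # start() keeps timing out we eventually give up.
    for _ in range(retries + 1):
        port = allocate_port()
        try:
            return start(port)
        except TimeoutError:
            continue
    raise RuntimeError('port allocation failed')
```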
  • neutron/tests/functional/agent/test_ovs_lib.py+0 10 modified
    @@ -274,16 +274,6 @@ def test_delete_ports(self):
             self.br.delete_ports(all_ports=True)
             self.assertEqual(len(self.br.get_port_name_list()), 0)
     
    -    def test_reset_bridge(self):
    -        self.create_ovs_port()
    -        self.br.reset_bridge()
    -        self.assertEqual(len(self.br.get_port_name_list()), 0)
    -        self._assert_br_fail_mode([])
    -
    -    def test_reset_bridge_secure_mode(self):
    -        self.br.reset_bridge(secure_mode=True)
    -        self._assert_br_fail_mode(ovs_lib.FAILMODE_SECURE)
    -
         def test_set_controller_connection_mode(self):
             controllers = ['tcp:192.0.2.0:6633']
             self._set_controllers_connection_mode(controllers)
    
  • neutron/tests/functional/base.py+12 1 modified
    @@ -19,11 +19,15 @@
     
     from neutron.agent.common import config
     from neutron.agent.linux import utils
    +from neutron.common import utils as common_utils
     from neutron.tests import base
     from neutron.tests.common import base as common_base
     
     SUDO_CMD = 'sudo -n'
     
    +# This is the directory from which infra fetches log files for functional tests
    +DEFAULT_LOG_DIR = '/tmp/dsvm-functional-logs/'
    +
     
     class BaseSudoTestCase(base.BaseTestCase):
         """
    @@ -48,10 +52,17 @@ class BaseSudoTestCase(base.BaseTestCase):
     
         def setUp(self):
             super(BaseSudoTestCase, self).setUp()
    -
             if not base.bool_from_env('OS_SUDO_TESTING'):
                 self.skipTest('Testing with sudo is not enabled')
     
    +        # Have each test log into its own log file
    +        cfg.CONF.set_override('debug', True)
    +        common_utils.ensure_dir(DEFAULT_LOG_DIR)
    +        log_file = base.sanitize_log_path(
    +            os.path.join(DEFAULT_LOG_DIR, "%s.log" % self.id()))
    +        cfg.CONF.set_override('log_file', log_file)
    +        config.setup_logging()
    +
             config.register_root_helper(cfg.CONF)
             self.config(group='AGENT',
                         root_helper=os.environ.get('OS_ROOTWRAP_CMD', SUDO_CMD))
    
  • neutron/tests/functional/db/test_migrations.py+69 9 modified
    @@ -20,6 +20,7 @@
     import alembic.autogenerate
     import alembic.migration
     from alembic import script as alembic_script
    +from contextlib import contextmanager
     import mock
     from oslo_config import cfg
     from oslo_config import fixture as config_fixture
    @@ -28,6 +29,7 @@
     import sqlalchemy
     from sqlalchemy import event
     
    +import neutron.db.migration as migration_help
     from neutron.db.migration.alembic_migrations import external
     from neutron.db.migration import cli as migration
     from neutron.db.migration.models import head as head_models
    @@ -106,8 +108,6 @@ class _TestModelsMigrations(test_migrations.ModelsMigrationsSync):
         def setUp(self):
             patch = mock.patch.dict('sys.modules', {
                 'heleosapi': mock.MagicMock(),
    -            'midonetclient': mock.MagicMock(),
    -            'midonetclient.neutron': mock.MagicMock(),
             })
             patch.start()
             self.addCleanup(patch.stop)
    @@ -207,6 +207,14 @@ def remove_unrelated_errors(self, element):
     
     class TestModelsMigrationsMysql(_TestModelsMigrations,
                                     base.MySQLTestCase):
    +    @contextmanager
    +    def _listener(self, engine, listener_func):
    +        try:
    +            event.listen(engine, 'before_execute', listener_func)
    +            yield
    +        finally:
    +            event.remove(engine, 'before_execute',
    +                         listener_func)
     
         # There is no use to run this against both dialects, so add this test just
         # for MySQL tests
    @@ -222,20 +230,72 @@ def block_external_tables(conn, clauseelement, multiparams, params):
                               "migration.")
     
                 if hasattr(clauseelement, 'element'):
    -                if (clauseelement.element.name in external.TABLES or
    +                element = clauseelement.element
    +                if (element.name in external.TABLES or
                             (hasattr(clauseelement, 'table') and
    -                         clauseelement.element.table.name in external.TABLES)):
    +                            element.table.name in external.TABLES)):
    +                    # Table 'nsxv_vdr_dhcp_bindings' was created in liberty,
    +                    # before NSXV has moved to separate repo.
    +                    if ((isinstance(clauseelement,
    +                                    sqlalchemy.sql.ddl.CreateTable) and
    +                            element.name == 'nsxv_vdr_dhcp_bindings')):
    +                        return
                         self.fail("External table referenced by neutron core "
                                   "migration.")
     
             engine = self.get_engine()
             cfg.CONF.set_override('connection', engine.url, group='database')
    -        migration.do_alembic_command(self.alembic_config, 'upgrade', 'kilo')
    +        with engine.begin() as connection:
    +            self.alembic_config.attributes['connection'] = connection
    +            migration.do_alembic_command(self.alembic_config, 'upgrade',
    +                                         'kilo')
    +
    +            with self._listener(engine,
    +                                block_external_tables):
    +                migration.do_alembic_command(self.alembic_config, 'upgrade',
    +                                             'heads')
    +
    +    def test_branches(self):
    +
    +        def check_expand_branch(conn, clauseelement, multiparams, params):
    +            if isinstance(clauseelement, migration_help.DROP_OPERATIONS):
    +                self.fail("Migration from expand branch contains drop command")
    +
    +        def check_contract_branch(conn, clauseelement, multiparams, params):
    +            if isinstance(clauseelement, migration_help.CREATION_OPERATIONS):
    +                # Skip tables that were created by mistake in contract branch
    +                if hasattr(clauseelement, 'element'):
    +                    element = clauseelement.element
    +                    if any([
    +                        isinstance(element, sqlalchemy.Table) and
    +                        element.name in ['ml2_geneve_allocations',
    +                                         'ml2_geneve_endpoints'],
    +                        isinstance(element, sqlalchemy.ForeignKeyConstraint)
    +                        and
    +                        element.table.name == 'flavorserviceprofilebindings',
    +                        isinstance(element, sqlalchemy.Index) and
    +                        element.table.name == 'ml2_geneve_allocations'
    +                    ]):
    +                        return
    +                self.fail("Migration from contract branch contains create "
    +                          "command")
     
    -        event.listen(engine, 'before_execute', block_external_tables)
    -        migration.do_alembic_command(self.alembic_config, 'upgrade', 'heads')
    -
    -        event.remove(engine, 'before_execute', block_external_tables)
    +        engine = self.get_engine()
    +        cfg.CONF.set_override('connection', engine.url, group='database')
    +        with engine.begin() as connection:
    +            self.alembic_config.attributes['connection'] = connection
    +            migration.do_alembic_command(self.alembic_config, 'upgrade',
    +                                         'kilo')
    +
    +            with self._listener(engine, check_expand_branch):
    +                migration.do_alembic_command(
    +                    self.alembic_config, 'upgrade',
    +                    '%s@head' % migration.EXPAND_BRANCH)
    +
    +            with self._listener(engine, check_contract_branch):
    +                migration.do_alembic_command(
    +                    self.alembic_config, 'upgrade',
    +                    '%s@head' % migration.CONTRACT_BRANCH)
     
     
     class TestModelsMigrationsPsql(_TestModelsMigrations,
    
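The `_listener` helper above wraps SQLAlchemy's `event.listen`/`event.remove` pair in a context manager so the hook is removed even when an assertion inside the block fails; previously a failing migration check could leak the listener into later tests. The same register/yield/always-remove pattern, shown library-agnostically against a plain list of listeners:

```python
from contextlib import contextmanager

@contextmanager
def temporary_listener(listeners, fn):
    # Register fn for the duration of the block and always unregister
    # it, even if the body raises, so one test's hook cannot observe
    # statements executed by the next.
    listeners.append(fn)
    try:
        yield
    finally:
        listeners.remove(fn)
```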
  • neutron/tests/functional/sanity/test_sanity.py+3 0 modified
    @@ -65,6 +65,9 @@ def test_arp_responder_runs(self):
         def test_arp_header_match_runs(self):
             checks.arp_header_match_supported()
     
    +    def test_icmpv6_header_match_runs(self):
    +        checks.icmpv6_header_match_supported()
    +
         def test_vf_management_runs(self):
             checks.vf_management_supported()
     
    
  • neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py+55 3 modified
    @@ -112,8 +112,9 @@ def test_remove_router_interface_by_port_leaves_snat_intact(self):
             self._test_remove_router_interface_leaves_snat_intact(
                 by_subnet=False)
     
    -    def setup_create_agent_gw_port_for_network(self):
    -        network = self._make_network(self.fmt, '', True)
    +    def setup_create_agent_gw_port_for_network(self, network=None):
    +        if not network:
    +            network = self._make_network(self.fmt, '', True)
             network_id = network['network']['id']
             port = self.core_plugin.create_port(
                 self.context,
    @@ -141,7 +142,7 @@ def test_delete_agent_gw_port_for_network(self):
             network_id, port = (
                 self.setup_create_agent_gw_port_for_network())
     
    -        self.l3_plugin._delete_floatingip_agent_gateway_port(
    +        self.l3_plugin.delete_floatingip_agent_gateway_port(
                 self.context, "", network_id)
             self.assertIsNone(
                 self.l3_plugin._get_agent_gw_ports_exist_for_network(
    @@ -168,3 +169,54 @@ def test_get_router_ids(self):
             self._create_router()
             self.assertEqual(
                 2, len(self.l3_plugin._get_router_ids(self.context)))
    +
    +    def test_agent_gw_port_delete_when_last_gateway_for_ext_net_removed(self):
    +        kwargs = {'arg_list': (external_net.EXTERNAL,),
    +                  external_net.EXTERNAL: True}
    +        net1 = self._make_network(self.fmt, 'net1', True)
    +        net2 = self._make_network(self.fmt, 'net2', True)
    +        subnet1 = self._make_subnet(
    +            self.fmt, net1, '10.1.0.1', '10.1.0.0/24', enable_dhcp=True)
    +        subnet2 = self._make_subnet(
    +            self.fmt, net2, '10.1.0.1', '10.1.0.0/24', enable_dhcp=True)
    +        ext_net = self._make_network(self.fmt, 'ext_net', True, **kwargs)
    +        self._make_subnet(
    +            self.fmt, ext_net, '20.0.0.1', '20.0.0.0/24', enable_dhcp=True)
    +        # Create first router and add an interface
    +        router1 = self._create_router()
    +        ext_net_id = ext_net['network']['id']
    +        self.l3_plugin.add_router_interface(
    +            self.context, router1['id'],
    +            {'subnet_id': subnet1['subnet']['id']})
    +        # Set gateway to first router
    +        self.l3_plugin._update_router_gw_info(
    +            self.context, router1['id'],
    +            {'network_id': ext_net_id})
    +        # Create second router and add an interface
    +        router2 = self._create_router()
    +        self.l3_plugin.add_router_interface(
    +            self.context, router2['id'],
    +            {'subnet_id': subnet2['subnet']['id']})
    +        # Set gateway to second router
    +        self.l3_plugin._update_router_gw_info(
    +            self.context, router2['id'],
    +            {'network_id': ext_net_id})
    +        # Create an agent gateway port for the external network
    +        net_id, agent_gw_port = (
    +            self.setup_create_agent_gw_port_for_network(network=ext_net))
    +        # Check for agent gateway ports
    +        self.assertIsNotNone(
    +            self.l3_plugin._get_agent_gw_ports_exist_for_network(
    +                self.context, ext_net_id, "", self.l3_agent['id']))
    +        self.l3_plugin._update_router_gw_info(
    +            self.context, router1['id'], {})
    +        # Check for agent gateway port after deleting one of the gw
    +        self.assertIsNotNone(
    +            self.l3_plugin._get_agent_gw_ports_exist_for_network(
    +                self.context, ext_net_id, "", self.l3_agent['id']))
    +        self.l3_plugin._update_router_gw_info(
    +            self.context, router2['id'], {})
    +        # Check for agent gateway port after deleting last gw
    +        self.assertIsNone(
    +            self.l3_plugin._get_agent_gw_ports_exist_for_network(
    +                self.context, ext_net_id, "", self.l3_agent['id']))
    
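The DVR test above checks an ordering invariant: the per-agent floating-IP gateway port for an external network must survive while any router still gateways onto that network, and be removed only when the last router gateway is cleared. A minimal pure-Python sketch of that reference-counting behavior (the class and method names here are illustrative, not Neutron's actual API):

```python
class GatewayPortTracker:
    """Illustrative stand-in: tracks which routers gateway onto each
    external network and keeps a simulated per-agent gateway port
    alive until the last router gateway is removed."""

    def __init__(self):
        self._gw_routers = {}        # ext_net_id -> set of router ids
        self.agent_gw_ports = set()  # ext nets that have an agent gw port

    def set_gateway(self, router_id, ext_net_id):
        self._gw_routers.setdefault(ext_net_id, set()).add(router_id)
        self.agent_gw_ports.add(ext_net_id)

    def clear_gateway(self, router_id, ext_net_id):
        routers = self._gw_routers.get(ext_net_id, set())
        routers.discard(router_id)
        if not routers:
            # Last router gateway on this network: drop the agent gw port.
            self.agent_gw_ports.discard(ext_net_id)


tracker = GatewayPortTracker()
tracker.set_gateway('router1', 'ext-net')
tracker.set_gateway('router2', 'ext-net')
tracker.clear_gateway('router1', 'ext-net')  # port must still exist
tracker.clear_gateway('router2', 'ext-net')  # last gateway gone: port removed
```

This mirrors the test flow: two routers gateway onto `ext_net`, the port persists after the first `_update_router_gw_info(..., {})`, and disappears after the second.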
  • neutron/tests/functional/test_server.py +46 −1 modified
    @@ -27,6 +27,7 @@
     from neutron.agent.linux import utils
     from neutron import service
     from neutron.tests import base
    +from neutron import worker
     from neutron import wsgi
     
     
    @@ -104,9 +105,15 @@ def _start_server(self, callback, workers):
         def _get_workers(self):
             """Get the list of processes in which WSGI server is running."""
     
    +        def safe_ppid(proc):
    +            try:
    +                return proc.ppid
    +            except psutil.NoSuchProcess:
    +                return None
    +
             if self.workers > 0:
                 return [proc.pid for proc in psutil.process_iter()
    -                    if proc.ppid == self.service_pid]
    +                    if safe_ppid(proc) == self.service_pid]
             else:
                 return [proc.pid for proc in psutil.process_iter()
                         if proc.pid == self.service_pid]
    @@ -245,3 +252,41 @@ def _serve_rpc(self, workers=0):
         def test_restart_rpc_on_sighup_multiple_workers(self):
             self._test_restart_service_on_sighup(service=self._serve_rpc,
                                                  workers=2)
    +
    +
    +class TestPluginWorker(TestNeutronServer):
    +    """Ensure that a plugin returning Workers spawns workers"""
    +
    +    def setUp(self):
    +        super(TestPluginWorker, self).setUp()
    +        self.setup_coreplugin(TARGET_PLUGIN)
    +        self._plugin_patcher = mock.patch(TARGET_PLUGIN, autospec=True)
    +        self.plugin = self._plugin_patcher.start()
    +
    +    def _start_plugin(self, workers=0):
    +        with mock.patch('neutron.manager.NeutronManager.get_plugin') as gp:
    +            gp.return_value = self.plugin
    +            launchers = service.start_plugin_workers()
    +            for launcher in launchers:
    +                launcher.wait()
    +
    +    def test_start(self):
    +        class FakeWorker(worker.NeutronWorker):
    +            def start(self):
    +                pass
    +
    +            def wait(self):
    +                pass
    +
    +            def stop(self):
    +                pass
    +
    +            def reset(self):
    +                pass
    +
    +        # Make both ABC happy and ensure 'self' is correct
    +        FakeWorker.reset = self._fake_reset
    +        workers = [FakeWorker()]
    +        self.plugin.return_value.get_workers.return_value = workers
    +        self._test_restart_service_on_sighup(service=self._start_plugin,
    +                                             workers=len(workers))
    
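The `safe_ppid` helper above guards a classic enumeration race: a process returned by `psutil.process_iter()` can exit before its attributes are read, at which point psutil raises `NoSuchProcess`, and an unguarded list comprehension would abort the whole scan. A self-contained sketch of the pattern using stub classes (the stubs are hypothetical stand-ins for psutil objects; note also that in the old psutil API targeted here `ppid` was a property, whereas modern psutil exposes it as the method `ppid()`):

```python
class NoSuchProcess(Exception):
    """Stand-in for psutil.NoSuchProcess (stub for illustration only)."""


class FakeProc:
    """Minimal stand-in for a psutil process handle."""

    def __init__(self, pid, ppid, alive=True):
        self.pid = pid
        self._ppid = ppid
        self._alive = alive

    @property
    def ppid(self):
        # A real process can exit between enumeration and attribute
        # access; psutil signals that with NoSuchProcess.
        if not self._alive:
            raise NoSuchProcess(self.pid)
        return self._ppid


def safe_ppid(proc):
    try:
        return proc.ppid
    except NoSuchProcess:
        return None


service_pid = 100
procs = [FakeProc(1, 0), FakeProc(101, 100), FakeProc(102, 100, alive=False)]
# The exited process is skipped instead of crashing the whole listing.
workers = [p.pid for p in procs if safe_ppid(p) == service_pid]
```

Here only the live child (`pid` 101) survives the filter; the vanished one yields `None` from `safe_ppid` and is silently excluded, which is exactly what the patched `_get_workers` needs.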
  • neutron/tests/tempest/auth.py +2 −3 modified
    @@ -23,9 +23,8 @@
     from oslo_log import log as logging
     import six
     
    -from neutron.tests.tempest.services.identity.v2.json import token_client as json_v2id
    -from neutron.tests.tempest.services.identity.v3.json import token_client as json_v3id
    -
    +from tempest_lib.services.identity.v2 import token_client as json_v2id
    +from tempest_lib.services.identity.v3 import token_client as json_v3id
     
     LOG = logging.getLogger(__name__)
     
    
  • neutron/tests/tempest/exceptions.py +1 −1 modified
    @@ -64,7 +64,7 @@ class InvalidServiceTag(TempestException):
     
     
     class InvalidIdentityVersion(TempestException):
    -    message = "Invalid version %(identity_version) of the identity service"
    +    message = "Invalid version %(identity_version)s of the identity service"
     
     
     class TimeoutException(TempestException):
    
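The one-character fix above matters because in printf-style formatting a mapping key must be followed by a conversion type. Without the trailing `s`, Python parses the next characters of the literal (`" o"`) as a space flag plus an octal `%o` conversion, which rejects a string argument instead of rendering the message. A quick demonstration:

```python
params = {'identity_version': 'v3'}

broken = "Invalid version %(identity_version) of the identity service"
fixed = "Invalid version %(identity_version)s of the identity service"

# With the trailing 's' the template interpolates cleanly.
rendered = fixed % params

# Without it, ' o' is read as a space-flagged octal conversion,
# so formatting the exception message itself raises an error.
try:
    broken % params
    broken_failed = False
except (TypeError, ValueError):
    broken_failed = True
```

So the pre-patch `InvalidIdentityVersion` would blow up while trying to build its own error message, masking the real failure.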
  • neutron/tests/tempest/services/identity/v2/json/token_client.py +0 −110 removed
    @@ -1,110 +0,0 @@
    -# Copyright 2015 NEC Corporation.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from oslo_serialization import jsonutils as json
    -from tempest_lib.common import rest_client
    -from tempest_lib import exceptions as lib_exc
    -
    -from neutron.tests.tempest.common import service_client
    -from neutron.tests.tempest import exceptions
    -
    -
    -class TokenClientJSON(rest_client.RestClient):
    -
    -    def __init__(self, auth_url, disable_ssl_certificate_validation=None,
    -                 ca_certs=None, trace_requests=None):
    -        dscv = disable_ssl_certificate_validation
    -        super(TokenClientJSON, self).__init__(
    -            None, None, None, disable_ssl_certificate_validation=dscv,
    -            ca_certs=ca_certs, trace_requests=trace_requests)
    -
    -        # Normalize URI to ensure /tokens is in it.
    -        if 'tokens' not in auth_url:
    -            auth_url = auth_url.rstrip('/') + '/tokens'
    -
    -        self.auth_url = auth_url
    -
    -    def auth(self, user, password, tenant=None):
    -        creds = {
    -            'auth': {
    -                'passwordCredentials': {
    -                    'username': user,
    -                    'password': password,
    -                },
    -            }
    -        }
    -
    -        if tenant:
    -            creds['auth']['tenantName'] = tenant
    -
    -        body = json.dumps(creds)
    -        resp, body = self.post(self.auth_url, body=body)
    -        self.expected_success(200, resp.status)
    -
    -        return service_client.ResponseBody(resp, body['access'])
    -
    -    def auth_token(self, token_id, tenant=None):
    -        creds = {
    -            'auth': {
    -                'token': {
    -                    'id': token_id,
    -                },
    -            }
    -        }
    -
    -        if tenant:
    -            creds['auth']['tenantName'] = tenant
    -
    -        body = json.dumps(creds)
    -        resp, body = self.post(self.auth_url, body=body)
    -        self.expected_success(200, resp.status)
    -
    -        return service_client.ResponseBody(resp, body['access'])
    -
    -    def request(self, method, url, extra_headers=False, headers=None,
    -                body=None):
    -        """A simple HTTP request interface."""
    -        if headers is None:
    -            headers = self.get_headers(accept_type="json")
    -        elif extra_headers:
    -            try:
    -                headers.update(self.get_headers(accept_type="json"))
    -            except (ValueError, TypeError):
    -                headers = self.get_headers(accept_type="json")
    -
    -        resp, resp_body = self.raw_request(url, method,
    -                                           headers=headers, body=body)
    -        self._log_request(method, url, resp)
    -
    -        if resp.status in [401, 403]:
    -            resp_body = json.loads(resp_body)
    -            raise lib_exc.Unauthorized(resp_body['error']['message'])
    -        elif resp.status not in [200, 201]:
    -            raise exceptions.IdentityError(
    -                'Unexpected status code {0}'.format(resp.status))
    -
    -        if isinstance(resp_body, str):
    -            resp_body = json.loads(resp_body)
    -        return resp, resp_body
    -
    -    def get_token(self, user, password, tenant, auth_data=False):
    -        """
    -        Returns (token id, token data) for supplied credentials
    -        """
    -        body = self.auth(user, password, tenant)
    -
    -        if auth_data:
    -            return body['token']['id'], body
    -        else:
    -            return body['token']['id']
    
  • neutron/tests/tempest/services/identity/v3/json/token_client.py +0 −172 removed
    @@ -1,172 +0,0 @@
    -# Copyright 2015 NEC Corporation.  All rights reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from oslo_serialization import jsonutils as json
    -from tempest_lib.common import rest_client
    -from tempest_lib import exceptions as lib_exc
    -
    -from neutron.tests.tempest.common import service_client
    -from neutron.tests.tempest import exceptions
    -
    -
    -class V3TokenClientJSON(rest_client.RestClient):
    -
    -    def __init__(self, auth_url, disable_ssl_certificate_validation=None,
    -                 ca_certs=None, trace_requests=None):
    -        dscv = disable_ssl_certificate_validation
    -        super(V3TokenClientJSON, self).__init__(
    -            None, None, None, disable_ssl_certificate_validation=dscv,
    -            ca_certs=ca_certs, trace_requests=trace_requests)
    -        if not auth_url:
    -            raise exceptions.InvalidConfiguration('you must specify a v3 uri '
    -                                                  'if using the v3 identity '
    -                                                  'api')
    -        if 'auth/tokens' not in auth_url:
    -            auth_url = auth_url.rstrip('/') + '/auth/tokens'
    -
    -        self.auth_url = auth_url
    -
    -    def auth(self, user_id=None, username=None, password=None, project_id=None,
    -             project_name=None, user_domain_id=None, user_domain_name=None,
    -             project_domain_id=None, project_domain_name=None, domain_id=None,
    -             domain_name=None, token=None):
    -        """
    -        :param user_id: user id
    -        :param username: user name
    -        :param user_domain_id: the user domain id
    -        :param user_domain_name: the user domain name
    -        :param project_domain_id: the project domain id
    -        :param project_domain_name: the project domain name
    -        :param domain_id: a domain id to scope to
    -        :param domain_name: a domain name to scope to
    -        :param project_id: a project id to scope to
    -        :param project_name: a project name to scope to
    -        :param token: a token to re-scope.
    -
    -        Accepts different combinations of credentials.
    -        Sample sample valid combinations:
    -        - token
    -        - token, project_name, project_domain_id
    -        - user_id, password
    -        - username, password, user_domain_id
    -        - username, password, project_name, user_domain_id, project_domain_id
    -        Validation is left to the server side.
    -        """
    -        creds = {
    -            'auth': {
    -                'identity': {
    -                    'methods': [],
    -                }
    -            }
    -        }
    -        id_obj = creds['auth']['identity']
    -        if token:
    -            id_obj['methods'].append('token')
    -            id_obj['token'] = {
    -                'id': token
    -            }
    -
    -        if (user_id or username) and password:
    -            id_obj['methods'].append('password')
    -            id_obj['password'] = {
    -                'user': {
    -                    'password': password,
    -                }
    -            }
    -            if user_id:
    -                id_obj['password']['user']['id'] = user_id
    -            else:
    -                id_obj['password']['user']['name'] = username
    -
    -            _domain = None
    -            if user_domain_id is not None:
    -                _domain = dict(id=user_domain_id)
    -            elif user_domain_name is not None:
    -                _domain = dict(name=user_domain_name)
    -            if _domain:
    -                id_obj['password']['user']['domain'] = _domain
    -
    -        if (project_id or project_name):
    -            _project = dict()
    -
    -            if project_id:
    -                _project['id'] = project_id
    -            elif project_name:
    -                _project['name'] = project_name
    -
    -                if project_domain_id is not None:
    -                    _project['domain'] = {'id': project_domain_id}
    -                elif project_domain_name is not None:
    -                    _project['domain'] = {'name': project_domain_name}
    -
    -            creds['auth']['scope'] = dict(project=_project)
    -        elif domain_id:
    -            creds['auth']['scope'] = dict(domain={'id': domain_id})
    -        elif domain_name:
    -            creds['auth']['scope'] = dict(domain={'name': domain_name})
    -
    -        body = json.dumps(creds)
    -        resp, body = self.post(self.auth_url, body=body)
    -        self.expected_success(201, resp.status)
    -        return service_client.ResponseBody(resp, body)
    -
    -    def request(self, method, url, extra_headers=False, headers=None,
    -                body=None):
    -        """A simple HTTP request interface."""
    -        if headers is None:
    -            # Always accept 'json', for xml token client too.
    -            # Because XML response is not easily
    -            # converted to the corresponding JSON one
    -            headers = self.get_headers(accept_type="json")
    -        elif extra_headers:
    -            try:
    -                headers.update(self.get_headers(accept_type="json"))
    -            except (ValueError, TypeError):
    -                headers = self.get_headers(accept_type="json")
    -
    -        resp, resp_body = self.raw_request(url, method,
    -                                           headers=headers, body=body)
    -        self._log_request(method, url, resp)
    -
    -        if resp.status in [401, 403]:
    -            resp_body = json.loads(resp_body)
    -            raise lib_exc.Unauthorized(resp_body['error']['message'])
    -        elif resp.status not in [200, 201, 204]:
    -            raise exceptions.IdentityError(
    -                'Unexpected status code {0}'.format(resp.status))
    -
    -        return resp, json.loads(resp_body)
    -
    -    def get_token(self, **kwargs):
    -        """
    -        Returns (token id, token data) for supplied credentials
    -        """
    -
    -        auth_data = kwargs.pop('auth_data', False)
    -
    -        if not (kwargs.get('user_domain_id') or
    -                kwargs.get('user_domain_name')):
    -            kwargs['user_domain_name'] = 'Default'
    -
    -        if not (kwargs.get('project_domain_id') or
    -                kwargs.get('project_domain_name')):
    -            kwargs['project_domain_name'] = 'Default'
    -
    -        body = self.auth(**kwargs)
    -
    -        token = body.response.get('x-subject-token')
    -        if auth_data:
    -            return token, body['token']
    -        else:
    -            return token
    
  • neutron/tests/unit/agent/common/test_ovs_lib.py +34 −0 modified
    @@ -400,6 +400,40 @@ def test_add_vxlan_fragmented_tunnel_port(self):
     
             tools.verify_mock_calls(self.execute, expected_calls_and_values)
     
    +    def test_add_vxlan_csum_tunnel_port(self):
    +        pname = "tap99"
    +        local_ip = "1.1.1.1"
    +        remote_ip = "9.9.9.9"
    +        ofport = 6
    +        vxlan_udp_port = "9999"
    +        dont_fragment = True
    +        tunnel_csum = True
    +        command = ["--may-exist", "add-port", self.BR_NAME, pname]
    +        command.extend(["--", "set", "Interface", pname])
    +        command.extend(["type=" + constants.TYPE_VXLAN,
    +                        "options:dst_port=" + vxlan_udp_port,
    +                        "options:df_default=true",
    +                        "options:remote_ip=" + remote_ip,
    +                        "options:local_ip=" + local_ip,
    +                        "options:in_key=flow",
    +                        "options:out_key=flow",
    +                        "options:csum=true"])
    +        # Each element is a tuple of (expected mock call, return_value)
    +        expected_calls_and_values = [
    +            (self._vsctl_mock(*command), None),
    +            (self._vsctl_mock("--columns=ofport", "list", "Interface", pname),
    +             self._encode_ovs_json(['ofport'], [[ofport]])),
    +        ]
    +        tools.setup_mock_calls(self.execute, expected_calls_and_values)
    +
    +        self.assertEqual(
    +            self.br.add_tunnel_port(pname, remote_ip, local_ip,
    +                                    constants.TYPE_VXLAN, vxlan_udp_port,
    +                                    dont_fragment, tunnel_csum),
    +            ofport)
    +
    +        tools.verify_mock_calls(self.execute, expected_calls_and_values)
    +
         def _test_get_vif_ports(self, is_xen=False):
             pname = "tap99"
             ofport = 6
    
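The expected command list in the new test shows how tunnel options are serialized into a single `ovs-vsctl` invocation, with `options:csum=true` appended only when checksum offload is requested. A standalone sketch of that argument assembly (the helper name and signature are illustrative, not the actual `OVSBridge.add_tunnel_port` implementation):

```python
def vxlan_tunnel_cmd(bridge, port, remote_ip, local_ip, udp_port,
                     dont_fragment=True, tunnel_csum=False):
    """Build the ovs-vsctl argument list asserted in the test above."""
    cmd = ["--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port,
           "type=vxlan",
           "options:dst_port=" + udp_port]
    if dont_fragment:
        # df_default=true asks OVS to set the Don't Fragment bit.
        cmd.append("options:df_default=true")
    cmd += ["options:remote_ip=" + remote_ip,
            "options:local_ip=" + local_ip,
            "options:in_key=flow",
            "options:out_key=flow"]
    if tunnel_csum:
        # Optional UDP checksum on the outer VXLAN header.
        cmd.append("options:csum=true")
    return cmd


cmd = vxlan_tunnel_cmd("br-tun", "tap99", "9.9.9.9", "1.1.1.1", "9999",
                       dont_fragment=True, tunnel_csum=True)
```

With `tunnel_csum=True` this reproduces the option ordering the mock expects, ending in `options:csum=true`; with the default `False`, the csum option is simply omitted.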
  • neutron/tests/unit/agent/dhcp/test_agent.py +21 −5 modified
    @@ -215,7 +215,7 @@
     class TestDhcpAgent(base.BaseTestCase):
         def setUp(self):
             super(TestDhcpAgent, self).setUp()
    -        entry.register_options()
    +        entry.register_options(cfg.CONF)
             cfg.CONF.set_override('interface_driver',
                                   'neutron.agent.linux.interface.NullDriver')
             # disable setting up periodic state reporting
    @@ -384,7 +384,7 @@ def test_sync_state_waitall(self):
                 self._test_sync_state_helper(known_net_ids, active_net_ids)
                 w.assert_called_once_with()
     
    -    def test_sync_state_plugin_error(self):
    +    def test_sync_state_for_all_networks_plugin_error(self):
             with mock.patch(DHCP_PLUGIN) as plug:
                 mock_plugin = mock.Mock()
                 mock_plugin.get_active_networks_info.side_effect = Exception
    @@ -399,6 +399,22 @@ def test_sync_state_plugin_error(self):
                         self.assertTrue(log.called)
                         self.assertTrue(schedule_resync.called)
     
    +    def test_sync_state_for_one_network_plugin_error(self):
    +        with mock.patch(DHCP_PLUGIN) as plug:
    +            mock_plugin = mock.Mock()
    +            exc = Exception()
    +            mock_plugin.get_active_networks_info.side_effect = exc
    +            plug.return_value = mock_plugin
    +
    +            with mock.patch.object(dhcp_agent.LOG, 'exception') as log:
    +                dhcp = dhcp_agent.DhcpAgent(HOSTNAME)
    +                with mock.patch.object(dhcp,
    +                                       'schedule_resync') as schedule_resync:
    +                    dhcp.sync_state(['foo_network'])
    +
    +                    self.assertTrue(log.called)
    +                    schedule_resync.assert_called_with(exc, 'foo_network')
    +
         def test_periodic_resync(self):
             dhcp = dhcp_agent.DhcpAgent(HOSTNAME)
             with mock.patch.object(dhcp_agent.eventlet, 'spawn') as spawn:
    @@ -541,7 +557,7 @@ def setUp(self):
             config.register_interface_driver_opts_helper(cfg.CONF)
             cfg.CONF.set_override('interface_driver',
                                   'neutron.agent.linux.interface.NullDriver')
    -        entry.register_options()  # register all dhcp cfg options
    +        entry.register_options(cfg.CONF)  # register all dhcp cfg options
     
             self.plugin_p = mock.patch(DHCP_PLUGIN)
             plugin_cls = self.plugin_p.start()
    @@ -978,8 +994,7 @@ def test_port_delete_end_unknown_port(self):
     class TestDhcpPluginApiProxy(base.BaseTestCase):
         def _test_dhcp_api(self, method, **kwargs):
             ctxt = context.get_admin_context()
    -        proxy = dhcp_agent.DhcpPluginApi('foo', ctxt, None)
    -        proxy.host = 'foo'
    +        proxy = dhcp_agent.DhcpPluginApi('foo', ctxt, None, host='foo')
     
             with mock.patch.object(proxy.client, 'call') as rpc_mock,\
                     mock.patch.object(proxy.client, 'prepare') as prepare_mock:
    @@ -1193,6 +1208,7 @@ def setUp(self):
             self.mock_driver = mock.MagicMock()
             self.mock_driver.DEV_NAME_LEN = (
                 interface.LinuxInterfaceDriver.DEV_NAME_LEN)
    +        self.mock_driver.use_gateway_ips = False
             self.mock_iproute = mock.MagicMock()
             driver_cls.return_value = self.mock_driver
             iproute_cls.return_value = self.mock_iproute
    
  • neutron/tests/unit/agent/l2/extensions/test_qos.py +174 −42 modified
    @@ -21,12 +21,95 @@
     from neutron.api.rpc.callbacks import events
     from neutron.api.rpc.callbacks import resources
     from neutron.api.rpc.handlers import resources_rpc
    +from neutron.common import exceptions
     from neutron import context
    +from neutron.objects.qos import policy
    +from neutron.objects.qos import rule
     from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
    +from neutron.services.qos import qos_consts
     from neutron.tests import base
     
     
    -TEST_POLICY = object()
    +TEST_POLICY = policy.QosPolicy(context=None,
    +                               name='test1', id='fake_policy_id')
    +TEST_POLICY2 = policy.QosPolicy(context=None,
    +                                name='test2', id='fake_policy_id_2')
    +
    +TEST_PORT = {'port_id': 'test_port_id',
    +             'qos_policy_id': TEST_POLICY.id}
    +
    +TEST_PORT2 = {'port_id': 'test_port_id_2',
    +             'qos_policy_id': TEST_POLICY2.id}
    +
    +
    +class FakeDriver(qos.QosAgentDriver):
    +
    +    SUPPORTED_RULES = {qos_consts.RULE_TYPE_BANDWIDTH_LIMIT}
    +
    +    def __init__(self):
    +        super(FakeDriver, self).__init__()
    +        self.create_bandwidth_limit = mock.Mock()
    +        self.update_bandwidth_limit = mock.Mock()
    +        self.delete_bandwidth_limit = mock.Mock()
    +
    +    def initialize(self):
    +        pass
    +
    +
    +class QosFakeRule(rule.QosRule):
    +
    +    rule_type = 'fake_type'
    +
    +
    +class QosAgentDriverTestCase(base.BaseTestCase):
    +
    +    def setUp(self):
    +        super(QosAgentDriverTestCase, self).setUp()
    +        self.driver = FakeDriver()
    +        self.policy = TEST_POLICY
    +        self.rule = (
    +            rule.QosBandwidthLimitRule(context=None, id='fake_rule_id',
    +                                       qos_policy_id=self.policy.id,
    +                                       max_kbps=100, max_burst_kbps=200))
    +        self.policy.rules = [self.rule]
    +        self.port = {'qos_policy_id': None, 'network_qos_policy_id': None,
    +                     'device_owner': 'random-device-owner'}
    +
    +        self.fake_rule = QosFakeRule(context=None, id='really_fake_rule_id',
    +                                     qos_policy_id=self.policy.id)
    +
    +    def test_create(self):
    +        self.driver.create(self.port, self.policy)
    +        self.driver.create_bandwidth_limit.assert_called_with(
    +            self.port, self.rule)
    +
    +    def test_update(self):
    +        self.driver.update(self.port, self.policy)
    +        self.driver.update_bandwidth_limit.assert_called_with(
    +            self.port, self.rule)
    +
    +    def test_delete(self):
    +        self.driver.delete(self.port, self.policy)
    +        self.driver.delete_bandwidth_limit.assert_called_with(self.port)
    +
    +    def test_delete_no_policy(self):
    +        self.driver.delete(self.port, qos_policy=None)
    +        self.driver.delete_bandwidth_limit.assert_called_with(self.port)
    +
    +    def test__iterate_rules_with_unknown_rule_type(self):
    +        self.policy.rules.append(self.fake_rule)
    +        rules = list(self.driver._iterate_rules(self.policy.rules))
    +        self.assertEqual(1, len(rules))
    +        self.assertIsInstance(rules[0], rule.QosBandwidthLimitRule)
    +
    +    def test__handle_update_create_rules_checks_should_apply_to_port(self):
    +        self.rule.should_apply_to_port = mock.Mock(return_value=False)
    +        self.driver.create(self.port, self.policy)
    +        self.assertFalse(self.driver.create_bandwidth_limit.called)
    +
    +        self.rule.should_apply_to_port = mock.Mock(return_value=True)
    +        self.driver.create(self.port, self.policy)
    +        self.assertTrue(self.driver.create_bandwidth_limit.called)
     
     
     class QosExtensionBaseTestCase(base.BaseTestCase):
    @@ -55,9 +138,9 @@ def setUp(self):
                 self.qos_ext.resource_rpc, 'pull',
                 return_value=TEST_POLICY).start()
     
    -    def _create_test_port_dict(self):
    +    def _create_test_port_dict(self, qos_policy_id=None):
             return {'port_id': uuidutils.generate_uuid(),
    -                'qos_policy_id': uuidutils.generate_uuid()}
    +                'qos_policy_id': qos_policy_id or TEST_POLICY.id}
     
         def test_handle_port_with_no_policy(self):
             port = self._create_test_port_dict()
    @@ -76,8 +159,10 @@ def test_handle_unknown_port(self):
             self.qos_ext.qos_driver.create.assert_called_once_with(
                 port, TEST_POLICY)
             self.assertEqual(port,
    -            self.qos_ext.qos_policy_ports[qos_policy_id][port_id])
    -        self.assertTrue(port_id in self.qos_ext.known_ports)
    +            self.qos_ext.policy_map.qos_policy_ports[qos_policy_id][port_id])
    +        self.assertIn(port_id, self.qos_ext.policy_map.port_policies)
    +        self.assertEqual(TEST_POLICY,
    +            self.qos_ext.policy_map.known_policies[qos_policy_id])
     
         def test_handle_known_port(self):
             port_obj1 = self._create_test_port_dict()
    @@ -96,24 +181,20 @@ def test_handle_known_port_change_policy_id(self):
             self.pull_mock.assert_called_once_with(
                  self.context, resources.QOS_POLICY,
                  port['qos_policy_id'])
    -        #TODO(QoS): handle qos_driver.update call check when
    -        #           we do that
     
         def test_delete_known_port(self):
             port = self._create_test_port_dict()
    -        port_id = port['port_id']
             self.qos_ext.handle_port(self.context, port)
             self.qos_ext.qos_driver.reset_mock()
             self.qos_ext.delete_port(self.context, port)
    -        self.qos_ext.qos_driver.delete.assert_called_with(port, None)
    -        self.assertNotIn(port_id, self.qos_ext.known_ports)
    +        self.qos_ext.qos_driver.delete.assert_called_with(port)
    +        self.assertIsNone(self.qos_ext.policy_map.get_port_policy(port))
     
         def test_delete_unknown_port(self):
             port = self._create_test_port_dict()
    -        port_id = port['port_id']
             self.qos_ext.delete_port(self.context, port)
             self.assertFalse(self.qos_ext.qos_driver.delete.called)
    -        self.assertNotIn(port_id, self.qos_ext.known_ports)
    +        self.assertIsNone(self.qos_ext.policy_map.get_port_policy(port))
     
         def test__handle_notification_ignores_all_event_types_except_updated(self):
             with mock.patch.object(
    @@ -127,47 +208,41 @@ def test__handle_notification_passes_update_events(self):
             with mock.patch.object(
                 self.qos_ext, '_process_update_policy') as update_mock:
     
    -            policy = mock.Mock()
    -            self.qos_ext._handle_notification(policy, events.UPDATED)
    -            update_mock.assert_called_with(policy)
    +            policy_obj = mock.Mock()
    +            self.qos_ext._handle_notification(policy_obj, events.UPDATED)
    +            update_mock.assert_called_with(policy_obj)
     
         def test__process_update_policy(self):
    -        port1 = self._create_test_port_dict()
    -        port2 = self._create_test_port_dict()
    -        self.qos_ext.qos_policy_ports = {
    -            port1['qos_policy_id']: {port1['port_id']: port1},
    -            port2['qos_policy_id']: {port2['port_id']: port2},
    -        }
    -        policy = mock.Mock()
    -        policy.id = port1['qos_policy_id']
    -        self.qos_ext._process_update_policy(policy)
    -        self.qos_ext.qos_driver.update.assert_called_with(port1, policy)
    +        port1 = self._create_test_port_dict(qos_policy_id=TEST_POLICY.id)
    +        port2 = self._create_test_port_dict(qos_policy_id=TEST_POLICY2.id)
    +        self.qos_ext.policy_map.set_port_policy(port1, TEST_POLICY)
    +        self.qos_ext.policy_map.set_port_policy(port2, TEST_POLICY2)
    +
    +        policy_obj = mock.Mock()
    +        policy_obj.id = port1['qos_policy_id']
    +        self.qos_ext._process_update_policy(policy_obj)
    +        self.qos_ext.qos_driver.update.assert_called_with(port1, policy_obj)
     
             self.qos_ext.qos_driver.update.reset_mock()
    -        policy.id = port2['qos_policy_id']
    -        self.qos_ext._process_update_policy(policy)
    -        self.qos_ext.qos_driver.update.assert_called_with(port2, policy)
    +        policy_obj.id = port2['qos_policy_id']
    +        self.qos_ext._process_update_policy(policy_obj)
    +        self.qos_ext.qos_driver.update.assert_called_with(port2, policy_obj)
     
         def test__process_reset_port(self):
    -        port1 = self._create_test_port_dict()
    -        port2 = self._create_test_port_dict()
    -        port1_id = port1['port_id']
    -        port2_id = port2['port_id']
    -        self.qos_ext.qos_policy_ports = {
    -            port1['qos_policy_id']: {port1_id: port1},
    -            port2['qos_policy_id']: {port2_id: port2},
    -        }
    -        self.qos_ext.known_ports = {port1_id, port2_id}
    +        port1 = self._create_test_port_dict(qos_policy_id=TEST_POLICY.id)
    +        port2 = self._create_test_port_dict(qos_policy_id=TEST_POLICY2.id)
    +        self.qos_ext.policy_map.set_port_policy(port1, TEST_POLICY)
    +        self.qos_ext.policy_map.set_port_policy(port2, TEST_POLICY2)
     
             self.qos_ext._process_reset_port(port1)
    -        self.qos_ext.qos_driver.delete.assert_called_with(port1, None)
    -        self.assertNotIn(port1_id, self.qos_ext.known_ports)
    -        self.assertIn(port2_id, self.qos_ext.known_ports)
    +        self.qos_ext.qos_driver.delete.assert_called_with(port1)
    +        self.assertIsNone(self.qos_ext.policy_map.get_port_policy(port1))
    +        self.assertIsNotNone(self.qos_ext.policy_map.get_port_policy(port2))
     
             self.qos_ext.qos_driver.delete.reset_mock()
             self.qos_ext._process_reset_port(port2)
    -        self.qos_ext.qos_driver.delete.assert_called_with(port2, None)
    -        self.assertNotIn(port2_id, self.qos_ext.known_ports)
    +        self.qos_ext.qos_driver.delete.assert_called_with(port2)
    +        self.assertIsNone(self.qos_ext.policy_map.get_port_policy(port2))
     
     
     class QosExtensionInitializeTestCase(QosExtensionBaseTestCase):
    @@ -185,3 +260,60 @@ def test_initialize_subscribed_to_rpc(self, rpc_mock, subscribe_mock):
                  for resource_type in self.qos_ext.SUPPORTED_RESOURCES]
             )
             subscribe_mock.assert_called_with(mock.ANY, resources.QOS_POLICY)
    +
    +
    +class PortPolicyMapTestCase(base.BaseTestCase):
    +
    +    def setUp(self):
    +        super(PortPolicyMapTestCase, self).setUp()
    +        self.policy_map = qos.PortPolicyMap()
    +
    +    def test_update_policy(self):
    +        self.policy_map.update_policy(TEST_POLICY)
    +        self.assertEqual(TEST_POLICY,
    +                         self.policy_map.known_policies[TEST_POLICY.id])
    +
    +    def _set_ports(self):
    +        self.policy_map.set_port_policy(TEST_PORT, TEST_POLICY)
    +        self.policy_map.set_port_policy(TEST_PORT2, TEST_POLICY2)
    +
    +    def test_set_port_policy(self):
    +        self._set_ports()
    +        self.assertEqual(TEST_POLICY,
    +                         self.policy_map.known_policies[TEST_POLICY.id])
    +        self.assertIn(TEST_PORT['port_id'],
    +                      self.policy_map.qos_policy_ports[TEST_POLICY.id])
    +
    +    def test_get_port_policy(self):
    +        self._set_ports()
    +        self.assertEqual(TEST_POLICY,
    +                         self.policy_map.get_port_policy(TEST_PORT))
    +        self.assertEqual(TEST_POLICY2,
    +                         self.policy_map.get_port_policy(TEST_PORT2))
    +
    +    def test_get_ports(self):
    +        self._set_ports()
    +        self.assertEqual([TEST_PORT],
    +                         list(self.policy_map.get_ports(TEST_POLICY)))
    +
    +        self.assertEqual([TEST_PORT2],
    +                         list(self.policy_map.get_ports(TEST_POLICY2)))
    +
    +    def test_clean_by_port(self):
    +        self._set_ports()
    +        self.policy_map.clean_by_port(TEST_PORT)
    +        self.assertNotIn(TEST_POLICY.id, self.policy_map.known_policies)
    +        self.assertNotIn(TEST_PORT['port_id'], self.policy_map.port_policies)
    +        self.assertIn(TEST_POLICY2.id, self.policy_map.known_policies)
    +
    +    def test_clean_by_port_raises_exception_for_unknown_port(self):
    +        self.assertRaises(exceptions.PortNotFound,
    +                          self.policy_map.clean_by_port, TEST_PORT)
    +
    +    def test_has_policy_changed(self):
    +        self._set_ports()
    +        self.assertTrue(
    +            self.policy_map.has_policy_changed(TEST_PORT, 'a_new_policy_id'))
    +
    +        self.assertFalse(
    +            self.policy_map.has_policy_changed(TEST_PORT, TEST_POLICY.id))
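The new `PortPolicyMapTestCase` assertions above pin down the contract of the `PortPolicyMap` that replaces the old ad-hoc `qos_policy_ports`/`known_ports` dicts. A minimal sketch consistent with those assertions (names and semantics inferred from the tests, not copied from Neutron; the real `exceptions.PortNotFound` takes keyword arguments, simplified here):

```python
# Minimal sketch of the PortPolicyMap the new tests exercise; inferred
# from the assertions, not the actual Neutron implementation.
import collections


class PortNotFound(Exception):
    """Stand-in for neutron.common.exceptions.PortNotFound."""


class PortPolicyMap(object):
    def __init__(self):
        # policy_id -> {port_id: port dict}
        self.qos_policy_ports = collections.defaultdict(dict)
        # port_id -> policy_id
        self.port_policies = {}
        # policy_id -> policy object
        self.known_policies = {}

    def update_policy(self, policy):
        self.known_policies[policy.id] = policy

    def set_port_policy(self, port, policy):
        self.update_policy(policy)
        self.port_policies[port['port_id']] = policy.id
        self.qos_policy_ports[policy.id][port['port_id']] = port

    def get_port_policy(self, port):
        policy_id = self.port_policies.get(port['port_id'])
        return self.known_policies.get(policy_id)

    def get_ports(self, policy):
        return self.qos_policy_ports[policy.id].values()

    def has_policy_changed(self, port, policy_id):
        return self.port_policies.get(port['port_id']) != policy_id

    def clean_by_port(self, port):
        port_id = port['port_id']
        if port_id not in self.port_policies:
            raise PortNotFound(port_id)
        policy_id = self.port_policies.pop(port_id)
        del self.qos_policy_ports[policy_id][port_id]
        # Drop the policy entirely once no port references it, matching
        # test_clean_by_port's assertNotIn on known_policies.
        if not self.qos_policy_ports[policy_id]:
            del self.qos_policy_ports[policy_id]
            del self.known_policies[policy_id]
```

This also explains why `_process_reset_port` now calls `qos_driver.delete(port)` without the second `None` argument: the map, not the caller, tracks which policy a port had.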
    
  • neutron/tests/unit/agent/l3/test_agent.py+3 3 modified
    @@ -2236,9 +2236,9 @@ def pd_notifier(context, prefix_update):
                 self.pd_update.append(prefix_update)
                 for intf in intfs:
                     for subnet in intf['subnets']:
    -                    if subnet['id'] == prefix_update.keys()[0]:
    +                    if subnet['id'] in prefix_update:
                             # Update the prefix
    -                        subnet['cidr'] = prefix_update.values()[0]
    +                        subnet['cidr'] = prefix_update[subnet['id']]
     
             # Process the router for removed interfaces
             agent.pd.notifier = pd_notifier
    @@ -2266,7 +2266,7 @@ def _pd_assert_dibbler_calls(self, expected, actual):
             external_process call is followed with either an enable() or disable()
             '''
     
    -        num_ext_calls = len(expected) / 2
    +        num_ext_calls = len(expected) // 2
             expected_ext_calls = []
             actual_ext_calls = []
             expected_action_calls = []
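Both hunks in `test_agent.py` are Python 3 compatibility fixes: `dict.keys()` returns a non-indexable view object on Python 3, and `/` performs true division. A short sketch of each pitfall:

```python
# Why the two test_agent.py changes are needed on Python 3.
prefix_update = {'subnet-1': '2001:db8::/64'}

# Python 2 allowed prefix_update.keys()[0]; on Python 3 dict views do not
# support indexing, so membership testing plus key lookup is used instead
# (and works on both interpreters).
subnet_id = 'subnet-1'
cidr = None
if subnet_id in prefix_update:
    cidr = prefix_update[subnet_id]
assert cidr == '2001:db8::/64'

# len(expected) / 2 yields a float on Python 3; "//" keeps the integer
# count that the call-pairing logic below the hunk relies on.
expected = ['enable', 'call', 'disable', 'call']
assert len(expected) // 2 == 2
assert isinstance(len(expected) / 2, float)
```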
    
  • neutron/tests/unit/agent/l3/test_dvr_local_router.py+17 16 modified
    @@ -145,27 +145,27 @@ def setUp(self):
                               'interface_driver': self.mock_driver}
     
         def _create_router(self, router=None, **kwargs):
    -        agent_conf = mock.Mock()
    +        agent = l3_agent.L3NATAgent(HOSTNAME, self.conf)
             self.router_id = _uuid()
             if not router:
                 router = mock.MagicMock()
    -        return dvr_router.DvrLocalRouter(mock.sentinel.agent,
    -                                    mock.sentinel.myhost,
    +        return dvr_router.DvrLocalRouter(agent,
    +                                    HOSTNAME,
                                         self.router_id,
                                         router,
    -                                    agent_conf,
    -                                    mock.sentinel.interface_driver,
    +                                    self.conf,
    +                                    mock.Mock(),
                                         **kwargs)
     
         def test_get_floating_ips_dvr(self):
             router = mock.MagicMock()
    -        router.get.return_value = [{'host': mock.sentinel.myhost},
    +        router.get.return_value = [{'host': HOSTNAME},
                                        {'host': mock.sentinel.otherhost}]
             ri = self._create_router(router)
     
             fips = ri.get_floating_ips()
     
    -        self.assertEqual([{'host': mock.sentinel.myhost}], fips)
    +        self.assertEqual([{'host': HOSTNAME}], fips)
     
         @mock.patch.object(ip_lib, 'send_ip_addr_adv_notif')
         @mock.patch.object(ip_lib, 'IPDevice')
    @@ -242,15 +242,16 @@ def test_floating_ip_removed_dist(self, mIPRule, mIPDevice, mIPWrapper):
             ri.rtr_fip_subnet = lla.LinkLocalAddressPair('15.1.2.3/32')
             _, fip_to_rtr = ri.rtr_fip_subnet.get_pair()
             fip_ns = ri.fip_ns
    -
    -        ri.floating_ip_removed_dist(fip_cidr)
    -
    -        self.assertTrue(fip_ns.destroyed)
    -        mIPWrapper().del_veth.assert_called_once_with(
    -            fip_ns.get_int_device_name(router['id']))
    -        mIPDevice().route.delete_gateway.assert_called_once_with(
    -            str(fip_to_rtr.ip), table=16)
    -        fip_ns.unsubscribe.assert_called_once_with(ri.router_id)
    +        with mock.patch.object(self.plugin_api,
    +                               'delete_agent_gateway_port') as del_fip_gw:
    +            ri.floating_ip_removed_dist(fip_cidr)
    +            self.assertTrue(del_fip_gw.called)
    +            self.assertTrue(fip_ns.destroyed)
    +            mIPWrapper().del_veth.assert_called_once_with(
    +                fip_ns.get_int_device_name(router['id']))
    +            mIPDevice().route.delete_gateway.assert_called_once_with(
    +                str(fip_to_rtr.ip), table=16)
    +            fip_ns.unsubscribe.assert_called_once_with(ri.router_id)
     
         def _test_add_floating_ip(self, ri, fip, is_failure):
             ri._add_fip_addr_to_device = mock.Mock(return_value=is_failure)
    
  • neutron/tests/unit/agent/l3/test_legacy_router.py+9 0 modified
    @@ -48,6 +48,15 @@ def test_remove_floating_ip(self):
     
             device.delete_addr_and_conntrack_state.assert_called_once_with(cidr)
     
    +    def test_remove_external_gateway_ip(self):
    +        ri = self._create_router(mock.MagicMock())
    +        device = mock.Mock()
    +        cidr = '172.16.0.0/24'
    +
    +        ri.remove_external_gateway_ip(device, cidr)
    +
    +        device.delete_addr_and_conntrack_state.assert_called_once_with(cidr)
    +
     
     @mock.patch.object(ip_lib, 'send_ip_addr_adv_notif')
     class TestAddFloatingIpWithMockGarp(BasicRouterTestCaseFramework):
    
  • neutron/tests/unit/agent/linux/test_dhcp.py+38 8 modified
    @@ -510,10 +510,17 @@ def __init__(self, domain='openstacklocal'):
     
     
     class FakeDeviceManagerNetwork(object):
    -    id = 'cccccccc-cccc-cccc-cccc-cccccccccccc'
    -    subnets = [FakeV4Subnet(), FakeV6SubnetDHCPStateful()]
    -    ports = [FakePort1(), FakeV6Port(), FakeDualPort(), FakeRouterPort()]
    -    namespace = 'qdhcp-ns'
    +    # Use instance rather than class attributes here, so that we get
    +    # an independent set of ports each time FakeDeviceManagerNetwork()
    +    # is used.
    +    def __init__(self):
    +        self.id = 'cccccccc-cccc-cccc-cccc-cccccccccccc'
    +        self.subnets = [FakeV4Subnet(), FakeV6SubnetDHCPStateful()]
    +        self.ports = [FakePort1(),
    +                      FakeV6Port(),
    +                      FakeDualPort(),
    +                      FakeRouterPort()]
    +        self.namespace = 'qdhcp-ns'
     
     
     class FakeDualNetworkReserved(object):
    @@ -1904,7 +1911,17 @@ def test_setup(self, load_interface_driver, ip_lib):
             """Test new and existing cases of DeviceManager's DHCP port setup
             logic.
             """
    +        self._test_setup(load_interface_driver, ip_lib, False)
    +
    +    @mock.patch('neutron.agent.linux.dhcp.ip_lib')
    +    @mock.patch('neutron.agent.linux.dhcp.common_utils.load_interface_driver')
    +    def test_setup_gateway_ips(self, load_interface_driver, ip_lib):
    +        """Test new and existing cases of DeviceManager's DHCP port setup
    +        logic.
    +        """
    +        self._test_setup(load_interface_driver, ip_lib, True)
     
    +    def _test_setup(self, load_interface_driver, ip_lib, use_gateway_ips):
             # Create DeviceManager.
             self.conf.register_opt(cfg.BoolOpt('enable_isolated_metadata',
                                                default=False))
    @@ -1930,6 +1947,7 @@ def mock_create(dict):
     
             plugin.create_dhcp_port.side_effect = mock_create
             mgr.driver.get_device_name.return_value = 'ns-XXX'
    +        mgr.driver.use_gateway_ips = use_gateway_ips
             ip_lib.ensure_device_is_ready.return_value = True
             mgr.setup(network)
             plugin.create_dhcp_port.assert_called_with(mock.ANY)
    @@ -1938,8 +1956,13 @@ def mock_create(dict):
                                                   mock.ANY,
                                                   namespace='qdhcp-ns')
             cidrs = set(mgr.driver.init_l3.call_args[0][1])
    -        self.assertEqual(cidrs, set(['unique-IP-address/24',
    -                                     'unique-IP-address/64']))
    +        if use_gateway_ips:
    +            self.assertEqual(cidrs, set(['%s/%s' % (s.gateway_ip,
    +                                                    s.cidr.split('/')[1])
    +                                         for s in network.subnets]))
    +        else:
    +            self.assertEqual(cidrs, set(['unique-IP-address/24',
    +                                         'unique-IP-address/64']))
     
             # Now call setup again.  This time we go through the existing
             # port code path, and the driver's init_l3 method is called
    @@ -1951,8 +1974,13 @@ def mock_create(dict):
                                                   mock.ANY,
                                                   namespace='qdhcp-ns')
             cidrs = set(mgr.driver.init_l3.call_args[0][1])
    -        self.assertEqual(cidrs, set(['unique-IP-address/24',
    -                                     'unique-IP-address/64']))
    +        if use_gateway_ips:
    +            self.assertEqual(cidrs, set(['%s/%s' % (s.gateway_ip,
    +                                                    s.cidr.split('/')[1])
    +                                         for s in network.subnets]))
    +        else:
    +            self.assertEqual(cidrs, set(['unique-IP-address/24',
    +                                         'unique-IP-address/64']))
             self.assertFalse(plugin.create_dhcp_port.called)
     
         @mock.patch('neutron.agent.linux.dhcp.ip_lib')
    @@ -1982,6 +2010,7 @@ def mock_update(port_id, dict):
     
             plugin.update_dhcp_port.side_effect = mock_update
             mgr.driver.get_device_name.return_value = 'ns-XXX'
    +        mgr.driver.use_gateway_ips = False
             ip_lib.ensure_device_is_ready.return_value = True
             mgr.setup(network)
             plugin.update_dhcp_port.assert_called_with(reserved_port.id, mock.ANY)
    @@ -2021,6 +2050,7 @@ def mock_update(port_id, dict):
     
             plugin.update_dhcp_port.side_effect = mock_update
             mgr.driver.get_device_name.return_value = 'ns-XXX'
    +        mgr.driver.use_gateway_ips = False
             ip_lib.ensure_device_is_ready.return_value = True
             mgr.setup(network)
             plugin.update_dhcp_port.assert_called_with(reserved_port_2.id,
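The `FakeDeviceManagerNetwork` change at the top of this file converts class attributes to instance attributes, for the reason its new comment states: a mutable object defined at class level is shared by every instance, so one test mutating `ports` would leak into the next. A standalone illustration of the pitfall:

```python
# Sketch of the class-attribute pitfall the FakeDeviceManagerNetwork
# change avoids: a class-level list is one object shared by all instances.
class SharedPorts(object):
    ports = []          # single list, shared across every instance


class OwnPorts(object):
    def __init__(self):
        self.ports = []  # fresh list per instance


a, b = SharedPorts(), SharedPorts()
a.ports.append('port-1')
assert b.ports == ['port-1']   # mutation leaks between "independent" fakes

c, d = OwnPorts(), OwnPorts()
c.ports.append('port-1')
assert d.ports == []           # each instance now gets its own state
```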
    
  • neutron/tests/unit/agent/linux/test_ebtables_driver.py+0 191 removed
    @@ -1,191 +0,0 @@
    -# Copyright (c) 2015 OpenStack Foundation.
    -# All Rights Reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -#
    -
    -import mock
    -
    -from oslo_config import cfg
    -
    -from neutron.agent.linux import ebtables_driver as eb
    -from neutron.cmd.sanity.checks import ebtables_supported
    -from neutron.tests import base
    -
    -
    -TABLES_NAMES = ['filter', 'nat', 'broute']
    -
    -CONF = cfg.CONF
    -
    -
    -class EbtablesDriverLowLevelInputTestCase(base.BaseTestCase):
    -
    -    def test_match_rule_line(self):
    -        self.assertEqual((None, None), eb._match_rule_line(None, "foo"))
    -
    -        rule_line = "[0:1] foobar blah bar"
    -        self.assertEqual(('mytab', [('mytab', ['foobar', 'blah', 'bar'])]),
    -                         eb._match_rule_line("mytab", rule_line))
    -
    -        rule_line = "[2:3] foobar -A BAR -j BLAH"
    -        self.assertEqual(
    -            ('mytab',
    -            [('mytab', ['foobar', '-A', 'BAR', '-j', 'BLAH']),
    -             ('mytab', ['foobar', '-C', 'BAR', '2', '3', '-j', 'BLAH'])]),
    -            eb._match_rule_line("mytab", rule_line))
    -
    -    def test_match_chain_name(self):
    -        self.assertEqual((None, None), eb._match_chain_name(None, None, "foo"))
    -
    -        rule_line = ":neutron-nwfilter-OUTPUT ACCEPT"
    -        tables = {"mytab": []}
    -        self.assertEqual(
    -            ('mytab',
    -             ('mytab', ['-N', 'neutron-nwfilter-OUTPUT', '-P', 'ACCEPT'])),
    -            eb._match_chain_name("mytab", tables, rule_line))
    -
    -        rule_line = ":neutron-nwfilter-OUTPUT ACCEPT"
    -        tables = {"mytab": ['neutron-nwfilter-OUTPUT']}
    -        self.assertEqual(
    -            ('mytab',
    -             ('mytab', ['-P', 'neutron-nwfilter-OUTPUT', 'ACCEPT'])),
    -            eb._match_chain_name("mytab", tables, rule_line))
    -
    -    def test_match_table_name(self):
    -        self.assertEqual((None, None), eb._match_table_name(None, "foo"))
    -
    -        rule_line = "*filter"
    -        self.assertEqual(('filter', ('filter', ['--atomic-init'])),
    -                         eb._match_table_name("mytab", rule_line))
    -
    -    def test_commit_statement(self):
    -        self.assertEqual(None, eb._match_commit_statement(None, "foo"))
    -
    -        rule_line = "COMMIT"
    -        self.assertEqual(('mytab', ['--atomic-commit']),
    -                         eb._match_commit_statement("mytab", rule_line))
    -
    -    def test_ebtables_input_parse_comment(self):
    -        # Comments and empty lines are stripped, nothing should be left.
    -        test_input = ("# Here is a comment\n"
    -                      "\n"
    -                      "# We just had an empty line.\n")
    -        res = eb._process_ebtables_input(test_input)
    -        self.assertEqual(list(), res)
    -
    -    def test_ebtables_input_parse_start(self):
    -        # Starting
    -        test_input = "*filter"
    -        res = eb._process_ebtables_input(test_input)
    -        self.assertEqual([('filter', ['--atomic-init'])], res)
    -
    -    def test_ebtables_input_parse_commit(self):
    -        # COMMIT without first starting a table should result in nothing,
    -        test_input = "COMMIT"
    -        res = eb._process_ebtables_input(test_input)
    -        self.assertEqual(list(), res)
    -
    -        test_input = "*filter\nCOMMIT"
    -        res = eb._process_ebtables_input(test_input)
    -        self.assertEqual([('filter', ['--atomic-init']),
    -                          ('filter', ['--atomic-commit'])],
    -                         res)
    -
    -    def test_ebtables_input_parse_rule(self):
    -        test_input = "*filter\n[0:0] -A INPUT -j neutron-nwfilter-INPUT"
    -        res = eb._process_ebtables_input(test_input)
    -        self.assertEqual([('filter', ['--atomic-init']),
    -                          ('filter',
    -                           ['-A', 'INPUT', '-j', 'neutron-nwfilter-INPUT'])],
    -                         res)
    -
    -    def test_ebtables_input_parse_chain(self):
    -        test_input = "*filter\n:foobar ACCEPT"
    -        res = eb._process_ebtables_input(test_input)
    -        self.assertEqual([('filter', ['--atomic-init']),
    -                          ('filter', ['-N', 'foobar', '-P', 'ACCEPT'])],
    -                         res)
    -
    -    def test_ebtables_input_parse_all_together(self):
    -        test_input = \
    -            ("*filter\n"
    -             ":INPUT ACCEPT\n"
    -             ":FORWARD ACCEPT\n"
    -             ":OUTPUT ACCEPT\n"
    -             ":neutron-nwfilter-spoofing-fallb ACCEPT\n"
    -             ":neutron-nwfilter-OUTPUT ACCEPT\n"
    -             ":neutron-nwfilter-INPUT ACCEPT\n"
    -             ":neutron-nwfilter-FORWARD ACCEPT\n"
    -             "[0:0] -A INPUT -j neutron-nwfilter-INPUT\n"
    -             "[0:0] -A OUTPUT -j neutron-nwfilter-OUTPUT\n"
    -             "[0:0] -A FORWARD -j neutron-nwfilter-FORWARD\n"
    -             "[0:0] -A neutron-nwfilter-spoofing-fallb -j DROP\n"
    -             "COMMIT")
    -        observed_res = eb._process_ebtables_input(test_input)
    -        TNAME = 'filter'
    -        expected_res = [
    -            (TNAME, ['--atomic-init']),
    -            (TNAME, ['-P', 'INPUT', 'ACCEPT']),
    -            (TNAME, ['-P', 'FORWARD', 'ACCEPT']),
    -            (TNAME, ['-P', 'OUTPUT', 'ACCEPT']),
    -            (TNAME, ['-N', 'neutron-nwfilter-spoofing-fallb', '-P', 'ACCEPT']),
    -            (TNAME, ['-N', 'neutron-nwfilter-OUTPUT', '-P', 'ACCEPT']),
    -            (TNAME, ['-N', 'neutron-nwfilter-INPUT', '-P', 'ACCEPT']),
    -            (TNAME, ['-N', 'neutron-nwfilter-FORWARD', '-P', 'ACCEPT']),
    -            (TNAME, ['-A', 'INPUT', '-j', 'neutron-nwfilter-INPUT']),
    -            (TNAME, ['-A', 'OUTPUT', '-j', 'neutron-nwfilter-OUTPUT']),
    -            (TNAME, ['-A', 'FORWARD', '-j', 'neutron-nwfilter-FORWARD']),
    -            (TNAME, ['-A', 'neutron-nwfilter-spoofing-fallb', '-j', 'DROP']),
    -            (TNAME, ['--atomic-commit'])]
    -
    -        self.assertEqual(expected_res, observed_res)
    -
    -
    -class EbtablesDriverLowLevelOutputTestCase(base.BaseTestCase):
    -
    -    def test_ebtables_save_and_restore(self):
    -        test_output = ('Bridge table: filter\n'
    -                       'Bridge chain: INPUT, entries: 1, policy: ACCEPT\n'
    -                       '-j CONTINUE , pcnt = 0 -- bcnt = 0\n'
    -                       'Bridge chain: FORWARD, entries: 1, policy: ACCEPT\n'
    -                       '-j CONTINUE , pcnt = 0 -- bcnt = 1\n'
    -                       'Bridge chain: OUTPUT, entries: 1, policy: ACCEPT\n'
    -                       '-j CONTINUE , pcnt = 1 -- bcnt = 1').split('\n')
    -
    -        observed_res = eb._process_ebtables_output(test_output)
    -        expected_res = ['*filter',
    -                        ':INPUT ACCEPT',
    -                        ':FORWARD ACCEPT',
    -                        ':OUTPUT ACCEPT',
    -                        '[0:0] -A INPUT -j CONTINUE',
    -                        '[0:1] -A FORWARD -j CONTINUE',
    -                        '[1:1] -A OUTPUT -j CONTINUE',
    -                        'COMMIT']
    -        self.assertEqual(expected_res, observed_res)
    -
    -
    -class EbtablesDriverTestCase(base.BaseTestCase):
    -
    -    def setUp(self):
    -        super(EbtablesDriverTestCase, self).setUp()
    -        self.root_helper = 'sudo'
    -        self.ebtables_path = CONF.ebtables_path
    -        self.execute_p = mock.patch('neutron.agent.linux.utils.execute')
    -        self.execute = self.execute_p.start()
    -
    -    def test_ebtables_sanity_check(self):
    -        self.assertTrue(ebtables_supported())
    -        self.execute.assert_has_calls([mock.call(['ebtables', '--version'])])
    -
    -        self.execute.side_effect = RuntimeError
    -        self.assertFalse(ebtables_supported())
    
  • neutron/tests/unit/agent/linux/test_ebtables_manager.py+0 160 removed
    @@ -1,160 +0,0 @@
    -# Copyright (c) 2015 OpenStack Foundation.
    -# All Rights Reserved.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -#
    -
    -import mock
    -
    -from neutron.agent.linux import ebtables_manager as em
    -
    -from neutron.tests import base
    -
    -LONG_NAME = "1234567890" * 3
    -
    -
    -class EbtablesManagerBaseTestCase(base.BaseTestCase):
    -    def setUp(self):
    -        super(EbtablesManagerBaseTestCase, self).setUp()
    -        mock.patch.object(em, "binary_name", return_value="binary").start()
    -
    -
    -class EbtablesChainNameTestCase(EbtablesManagerBaseTestCase):
    -
    -    def test_get_prefix_chain(self):
    -        # Fake the binary name to a known value for this test.
    -        # Testing prefix chain name
    -        self.assertEqual(em._get_prefix_chain(), "binary")
    -        self.assertEqual(em._get_prefix_chain("some-name"),
    -                         "some-name")
    -        self.assertEqual(em._get_prefix_chain(LONG_NAME),
    -                         LONG_NAME[:em.MAX_LEN_PREFIX_CHAIN])
    -
    -    def test_get_chain_name(self):
    -        # Testing full chain name
    -        prefix_chain = "some-other-name"
    -        self.assertEqual(em.get_chain_name(chain_name="foobar",
    -                                           prefix_chain=prefix_chain),
    -                         "foobar")
    -
    -        is_name = em.get_chain_name(chain_name=LONG_NAME,
    -                                    wrap=True,
    -                                    prefix_chain=prefix_chain)
    -        should_name = (LONG_NAME[:em.MAX_CHAIN_LEN_EBTABLES -
    -                                 len(prefix_chain) - 1])
    -        self.assertEqual(is_name, should_name)
    -        self.assertEqual(em.get_chain_name(chain_name=LONG_NAME,
    -                                           wrap=False,
    -                                           prefix_chain=prefix_chain),
    -                         LONG_NAME)
    -        should_name = LONG_NAME[:-len("bar")]
    -        self.assertEqual(em.get_chain_name(chain_name=LONG_NAME,
    -                                           wrap=True,
    -                                           prefix_chain="bar"),
    -                         should_name)
    -        self.assertEqual(em.get_chain_name(chain_name=LONG_NAME,
    -                                           wrap=False,
    -                                           prefix_chain="bar"),
    -                         LONG_NAME)
    -
    -
    -class EbtablesRuleTestCase(EbtablesManagerBaseTestCase):
    -
    -    def test_basic_ops(self):
    -        r1 = em.EbtablesRule("chain-name", "some-rule", wrap=True, top=False,
    -                             prefix_chain="foobar")
    -        r2 = em.EbtablesRule("chain-name", "some-rule", wrap=True, top=False,
    -                             prefix_chain="foobar")
    -        r3 = em.EbtablesRule("chain-name", "some-rule", wrap=True, top=True,
    -                             prefix_chain="foobar")
    -        self.assertEqual(r1, r2)
    -        self.assertNotEqual(r1, r3)
    -
    -        self.assertEqual("-A foobar-chain-name some-rule", str(r1))
    -
    -
    -class EbtablesTableTestCase(EbtablesManagerBaseTestCase):
    -
    -    def setUp(self):
    -        super(EbtablesTableTestCase, self).setUp()
    -        self.et = em.EbtablesTable()
    -
    -    def test_add_chain(self):
    -        # Wrapped and un-wrapped chains are maintained separately, thus same
    -        # name is possible.
    -        self.et.add_chain("bar" + LONG_NAME, wrap=False)
    -        self.et.add_chain("baz" + LONG_NAME, wrap=False)
    -        self.et.add_chain("baz" + LONG_NAME)
    -        self.et.add_chain("foo" + LONG_NAME)
    -
    -        self.assertEqual(set(['baz123456789012345678901',
    -                              'foo123456789012345678901']),
    -                         self.et._select_chain_set(wrap=True))
    -        self.assertEqual(set(['bar1234567890123456789012345678',
    -                              'baz1234567890123456789012345678']),
    -                         self.et._select_chain_set(wrap=False))
    -
    -    def test_add_remove_rule(self):
    -        # Adding some rules to a chain
    -        self.et.add_chain("foobar")
    -        self.et.add_rule("foobar", "some rule text")
    -        self.assertEqual("-A binary-foobar some rule text",
    -                         str(self.et.rules[0]))
    -        self.assertEqual(1, len(self.et.rules))
    -
    -        self.et.add_rule("foobar", "another rule")
    -        self.assertEqual(2, len(self.et.rules))
    -        self.assertEqual("-A binary-foobar some rule text",
    -                         str(self.et.rules[0]))
    -        self.assertEqual("-A binary-foobar another rule",
    -                         str(self.et.rules[1]))
    -
    -        # Removing one of the rules, testing the state of the remaining rule
    -        # list.
    -        self.et.remove_rule("foobar", "some rule text")
    -        self.assertEqual(1, len(self.et.rules))
    -        self.assertEqual("-A binary-foobar another rule",
    -                         str(self.et.rules[0]))
    -
    -        # Testing emptying of a chain
    -        self.et.add_rule("foobar", "yet another rule")
    -        self.assertEqual(2, len(self.et.rules))
    -        self.et.empty_chain("foobar")
    -        self.assertEqual(0, len(self.et.rules))
    -
    -    def test_remove_chain(self):
    -        self.et.add_chain("foobar")
    -        self.et.add_rule("foobar", "some rule text")
    -        self.et.add_rule("foobar", "yet another rule")
    -        self.et.ensure_remove_chain("foobar")
    -        self.assertEqual(0, len(self.et.rules))
    -        self.assertEqual(0, len(self.et.chains))
    -
    -        # Testing the 'cascading' remove: If rules of chain A point to chain B
    -        # and chain B is removed then those rules of chain A also need to be
    -        # removed.
    -        self.et.add_chain("chain-A")
    -        self.et.add_rule("chain-A", "some rule text")
    -        self.et.add_chain("chain-B")
    -        self.et.add_rule("chain-B", "another rule")
    -        # Now add the rule to chain-A with chain-B as jump target
    -        self.et.add_rule("chain-A", "jumpyjump -j binary-chain-B")
    -        self.assertEqual(2, len(self.et.chains))
    -        self.assertEqual(3, len(self.et.rules))
    -        # Remove chain-B, making the jump rule in chain-A invalid. This should
    -        # trigger the cascading deletion of the rules.
    -        self.et.ensure_remove_chain("chain-B")
    -        self.assertEqual(1, len(self.et.chains))
    -        self.assertEqual(1, len(self.et.rules))
    -        self.assertEqual("-A binary-chain-A some rule text",
    -                         str(self.et.rules[0]))
    
  • neutron/tests/unit/agent/linux/test_interface.py+3 2 modified
    @@ -104,7 +104,8 @@ def test_init_router_port_delete_onlink_routes(self):
             addresses = [dict(scope='global',
                               dynamic=False, cidr='172.16.77.240/24')]
             self.ip_dev().addr.list = mock.Mock(return_value=addresses)
    -        self.ip_dev().route.list_onlink_routes.return_value = ['172.20.0.0/24']
    +        self.ip_dev().route.list_onlink_routes.return_value = [
    +            {'cidr': '172.20.0.0/24'}]
     
             bc = BaseChild(self.conf)
             ns = '12345678-1234-5678-90ab-ba0987654321'
    @@ -218,7 +219,7 @@ def test_init_router_port_with_ipv6_delete_onlink_routes(self):
                               dynamic=False, cidr='2001:db8:a::123/64')]
             route = '2001:db8:a::/64'
             self.ip_dev().addr.list = mock.Mock(return_value=addresses)
    -        self.ip_dev().route.list_onlink_routes.return_value = [route]
    +        self.ip_dev().route.list_onlink_routes.return_value = [{'cidr': route}]
     
             bc = BaseChild(self.conf)
             ns = '12345678-1234-5678-90ab-ba0987654321'
    
  • neutron/tests/unit/agent/linux/test_ip_lib.py +113 -10 modified
    @@ -562,7 +562,9 @@ def _test_add_rule(self, ip, table, priority):
             self.rule_cmd.add(ip, table=table, priority=priority)
             self._assert_sudo([ip_version], (['show']))
             self._assert_sudo([ip_version], ('add', 'from', ip,
    -                                         'priority', priority, 'table', table))
    +                                         'priority', str(priority),
    +                                         'table', str(table),
    +                                         'type', 'unicast'))
     
         def _test_add_rule_exists(self, ip, table, priority, output):
             self.parent._as_root.return_value = output
    @@ -574,8 +576,8 @@ def _test_delete_rule(self, ip, table, priority):
             ip_version = netaddr.IPNetwork(ip).version
             self.rule_cmd.delete(ip, table=table, priority=priority)
             self._assert_sudo([ip_version],
    -                          ('del', 'priority', priority,
    -                           'table', table))
    +                          ('del', 'priority', str(priority),
    +                           'table', str(table), 'type', 'unicast'))
     
         def test__parse_line(self):
             def test(ip_version, line, expected):
    @@ -585,13 +587,49 @@ def test(ip_version, line, expected):
             test(4, "4030201:\tfrom 1.2.3.4/24 lookup 10203040",
                  {'from': '1.2.3.4/24',
                   'table': '10203040',
    +              'type': 'unicast',
                   'priority': '4030201'})
             test(6, "1024:    from all iif qg-c43b1928-48 lookup noscope",
                  {'priority': '1024',
                   'from': '::/0',
    +              'type': 'unicast',
                   'iif': 'qg-c43b1928-48',
                   'table': 'noscope'})
     
    +    def test__make_canonical_all_v4(self):
    +        actual = self.rule_cmd._make_canonical(4, {'from': 'all'})
    +        self.assertEqual({'from': '0.0.0.0/0', 'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_all_v6(self):
    +        actual = self.rule_cmd._make_canonical(6, {'from': 'all'})
    +        self.assertEqual({'from': '::/0', 'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_lookup(self):
    +        actual = self.rule_cmd._make_canonical(6, {'lookup': 'table'})
    +        self.assertEqual({'table': 'table', 'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_iif(self):
    +        actual = self.rule_cmd._make_canonical(6, {'iif': 'iface_name'})
    +        self.assertEqual({'iif': 'iface_name', 'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_fwmark(self):
    +        actual = self.rule_cmd._make_canonical(6, {'fwmark': '0x400'})
    +        self.assertEqual({'fwmark': '0x400/0xffffffff',
    +                          'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_fwmark_with_mask(self):
    +        actual = self.rule_cmd._make_canonical(6, {'fwmark': '0x400/0x00ff'})
    +        self.assertEqual({'fwmark': '0x400/0xff', 'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_fwmark_integer(self):
    +        actual = self.rule_cmd._make_canonical(6, {'fwmark': 0x400})
    +        self.assertEqual({'fwmark': '0x400/0xffffffff',
    +                          'type': 'unicast'}, actual)
    +
    +    def test__make_canonical_fwmark_iterable(self):
    +        actual = self.rule_cmd._make_canonical(6, {'fwmark': (0x400, 0xffff)})
    +        self.assertEqual({'fwmark': '0x400/0xffff', 'type': 'unicast'}, actual)
    +
         def test_add_rule_v4(self):
             self._test_add_rule('192.168.45.100', 2, 100)
     
    @@ -913,6 +951,20 @@ def test_add_route(self):
                                'dev', self.parent.name,
                                'table', self.table))
     
    +    def test_add_route_no_via(self):
    +        self.route_cmd.add_route(self.cidr, table=self.table)
    +        self._assert_sudo([self.ip_version],
    +                          ('replace', self.cidr,
    +                           'dev', self.parent.name,
    +                           'table', self.table))
    +
    +    def test_add_route_with_scope(self):
    +        self.route_cmd.add_route(self.cidr, scope='link')
    +        self._assert_sudo([self.ip_version],
    +                          ('replace', self.cidr,
    +                           'dev', self.parent.name,
    +                           'scope', 'link'))
    +
         def test_delete_route(self):
             self.route_cmd.delete_route(self.cidr, self.ip, self.table)
             self._assert_sudo([self.ip_version],
    @@ -921,32 +973,67 @@ def test_delete_route(self):
                                'dev', self.parent.name,
                                'table', self.table))
     
    +    def test_delete_route_no_via(self):
    +        self.route_cmd.delete_route(self.cidr, table=self.table)
    +        self._assert_sudo([self.ip_version],
    +                          ('del', self.cidr,
    +                           'dev', self.parent.name,
    +                           'table', self.table))
    +
    +    def test_delete_route_with_scope(self):
    +        self.route_cmd.delete_route(self.cidr, scope='link')
    +        self._assert_sudo([self.ip_version],
    +                          ('del', self.cidr,
    +                           'dev', self.parent.name,
    +                           'scope', 'link'))
    +
    +    def test_list_routes(self):
    +        self.parent._run.return_value = (
    +            "default via 172.124.4.1 dev eth0 metric 100\n"
    +            "10.0.0.0/22 dev eth0 scope link\n"
    +            "172.24.4.0/24 dev eth0 proto kernel src 172.24.4.2\n")
    +        routes = self.route_cmd.table(self.table).list_routes(self.ip_version)
    +        self.assertEqual([{'cidr': '0.0.0.0/0',
    +                           'dev': 'eth0',
    +                           'metric': '100',
    +                           'table': 14,
    +                           'via': '172.124.4.1'},
    +                          {'cidr': '10.0.0.0/22',
    +                           'dev': 'eth0',
    +                           'scope': 'link',
    +                           'table': 14},
    +                          {'cidr': '172.24.4.0/24',
    +                           'dev': 'eth0',
    +                           'proto': 'kernel',
    +                           'src': '172.24.4.2',
    +                           'table': 14}], routes)
    +
         def test_list_onlink_routes_subtable(self):
             self.parent._run.return_value = (
                 "10.0.0.0/22\n"
                 "172.24.4.0/24 proto kernel src 172.24.4.2\n")
             routes = self.route_cmd.table(self.table).list_onlink_routes(
                 self.ip_version)
    -        self.assertEqual(['10.0.0.0/22'], routes)
    +        self.assertEqual(['10.0.0.0/22'], [r['cidr'] for r in routes])
             self._assert_call([self.ip_version],
    -                          ('list', 'dev', self.parent.name, 'scope', 'link',
    -                           'table', self.table))
    +                          ('list', 'dev', self.parent.name,
    +                           'table', self.table, 'scope', 'link'))
     
         def test_add_onlink_route_subtable(self):
             self.route_cmd.table(self.table).add_onlink_route(self.cidr)
             self._assert_sudo([self.ip_version],
                               ('replace', self.cidr,
                                'dev', self.parent.name,
    -                           'scope', 'link',
    -                           'table', self.table))
    +                           'table', self.table,
    +                           'scope', 'link'))
     
         def test_delete_onlink_route_subtable(self):
             self.route_cmd.table(self.table).delete_onlink_route(self.cidr)
             self._assert_sudo([self.ip_version],
                               ('del', self.cidr,
                                'dev', self.parent.name,
    -                           'scope', 'link',
    -                           'table', self.table))
    +                           'table', self.table,
    +                           'scope', 'link'))
     
     
     class TestIPv6IpRouteCommand(TestIpRouteCommand):
    @@ -974,6 +1061,22 @@ def setUp(self):
                                 {'gateway': '2001:470:9:1224:4508:b885:5fb:740b',
                                  'metric': 1024}}]
     
    +    def test_list_routes(self):
    +        self.parent._run.return_value = (
    +            "default via 2001:db8::1 dev eth0 metric 100\n"
    +            "2001:db8::/64 dev eth0 proto kernel src 2001:db8::2\n")
    +        routes = self.route_cmd.table(self.table).list_routes(self.ip_version)
    +        self.assertEqual([{'cidr': '::/0',
    +                           'dev': 'eth0',
    +                           'metric': '100',
    +                           'table': 14,
    +                           'via': '2001:db8::1'},
    +                          {'cidr': '2001:db8::/64',
    +                           'dev': 'eth0',
    +                           'proto': 'kernel',
    +                           'src': '2001:db8::2',
    +                           'table': 14}], routes)
    +
     
     class TestIPRoute(TestIpRouteCommand):
         """Leverage existing tests for IpRouteCommand for IPRoute
    
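The new `test_list_routes` cases above assert that `ip route show` output is parsed into per-route dicts keyed by `cidr`, `dev`, `via`, and so on. A rough standalone parser illustrating the expected shape (IPv4 only; the real code also normalizes IPv6 `default` and other fields):

```python
def parse_routes(output, table):
    """Parse `ip route show` output into dicts (simplified, IPv4 only)."""
    routes = []
    for line in output.strip().splitlines():
        parts = line.split()
        # The first token is the destination; 'default' means 0.0.0.0/0.
        cidr = '0.0.0.0/0' if parts[0] == 'default' else parts[0]
        route = {'cidr': cidr, 'table': table}
        # The remaining tokens come in key/value pairs:
        # via, dev, metric, scope, proto, src, ...
        route.update(zip(parts[1::2], parts[2::2]))
        routes.append(route)
    return routes
```

Feeding it the sample output from the test yields the same dicts the assertions expect.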
  • neutron/tests/unit/agent/linux/test_iptables_firewall.py +36 -37 modified
    @@ -124,11 +124,11 @@ def test_prepare_port_filter_with_no_sg(self):
                                         comment=ic.SG_TO_VM_SG),
                      mock.call.add_rule(
                          'ifake_dev',
    -                     '-m state --state INVALID -j DROP',
    +                     '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
    -                     '-m state --state RELATED,ESTABLISHED -j RETURN',
    +                     '-m state --state INVALID -j DROP',
                          comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
    @@ -165,13 +165,13 @@ def test_prepare_port_filter_with_no_sg(self):
                          'ofake_dev',
                          '-p udp -m udp --sport 67 --dport 68 -j DROP',
                          comment=None),
    -                 mock.call.add_rule(
    -                     'ofake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
    +                 mock.call.add_rule(
    +                     'ofake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
                          '-j $sg-fallback',
    @@ -981,10 +981,6 @@ def _test_prepare_port_filter(self,
                                            '-p icmpv6 --icmpv6-type %s -j RETURN' %
                                            icmp6_type, comment=None))
             calls += [
    -            mock.call.add_rule(
    -                'ifake_dev',
    -                '-m state --state INVALID -j DROP', comment=None
    -            ),
                 mock.call.add_rule(
                     'ifake_dev',
                     '-m state --state RELATED,ESTABLISHED -j RETURN',
    @@ -995,7 +991,10 @@ def _test_prepare_port_filter(self,
             if ingress_expected_call:
                 calls.append(ingress_expected_call)
     
    -        calls += [mock.call.add_rule('ifake_dev',
    +        calls += [mock.call.add_rule(
    +                      'ifake_dev',
    +                      '-m state --state INVALID -j DROP', comment=None),
    +                  mock.call.add_rule('ifake_dev',
                                          '-j $sg-fallback', comment=None),
                       mock.call.add_chain('ofake_dev'),
                       mock.call.add_rule('FORWARD',
    @@ -1034,9 +1033,6 @@ def _test_prepare_port_filter(self,
                     comment=None))
     
             calls += [
    -            mock.call.add_rule(
    -                'ofake_dev',
    -                '-m state --state INVALID -j DROP', comment=None),
                 mock.call.add_rule(
                     'ofake_dev',
                     '-m state --state RELATED,ESTABLISHED -j RETURN',
    @@ -1046,7 +1042,10 @@ def _test_prepare_port_filter(self,
             if egress_expected_call:
                 calls.append(egress_expected_call)
     
    -        calls += [mock.call.add_rule('ofake_dev',
    +        calls += [mock.call.add_rule(
    +                      'ofake_dev',
    +                      '-m state --state INVALID -j DROP', comment=None),
    +                  mock.call.add_rule('ofake_dev',
                                          '-j $sg-fallback', comment=None),
                       mock.call.add_rule('sg-chain', '-j ACCEPT')]
     
    @@ -1150,15 +1149,15 @@ def test_update_delete_port_filter(self):
                          '-m physdev --physdev-out tapfake_dev '
                          '--physdev-is-bridged -j $ifake_dev',
                          comment=ic.SG_TO_VM_SG),
    -                 mock.call.add_rule(
    -                     'ifake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
                      mock.call.add_rule('ifake_dev', '-j RETURN',
                                         comment=None),
    +                 mock.call.add_rule(
    +                     'ifake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
                          '-j $sg-fallback', comment=None),
    @@ -1197,13 +1196,13 @@ def test_update_delete_port_filter(self):
                          'ofake_dev',
                          '-p udp -m udp --sport 67 --dport 68 -j DROP',
                          comment=None),
    -                 mock.call.add_rule(
    -                     'ofake_dev', '-m state --state INVALID -j DROP',
    -                     comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
    +                 mock.call.add_rule(
    +                     'ofake_dev', '-m state --state INVALID -j DROP',
    +                     comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
                          '-j $sg-fallback', comment=None),
    @@ -1224,13 +1223,13 @@ def test_update_delete_port_filter(self):
                          '-m physdev --physdev-out tapfake_dev '
                          '--physdev-is-bridged -j $ifake_dev',
                          comment=ic.SG_TO_VM_SG),
    -                 mock.call.add_rule(
    -                     'ifake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
    +                 mock.call.add_rule(
    +                     'ifake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
                          '-j $sg-fallback', comment=None),
    @@ -1269,15 +1268,15 @@ def test_update_delete_port_filter(self):
                          'ofake_dev',
                          '-p udp -m udp --sport 67 --dport 68 -j DROP',
                          comment=None),
    -                 mock.call.add_rule(
    -                     'ofake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
                      mock.call.add_rule('ofake_dev', '-j RETURN',
                                         comment=None),
    +                 mock.call.add_rule(
    +                     'ofake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule('ofake_dev',
                                         '-j $sg-fallback',
                                         comment=None),
    @@ -1398,13 +1397,13 @@ def test_ip_spoofing_filter_with_multiple_ips(self):
                                         '--physdev-is-bridged '
                                         '-j $ifake_dev',
                                         comment=ic.SG_TO_VM_SG),
    -                 mock.call.add_rule(
    -                     'ifake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
    +                 mock.call.add_rule(
    +                     'ifake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule('ifake_dev',
                                         '-j $sg-fallback', comment=None),
                      mock.call.add_chain('ofake_dev'),
    @@ -1444,13 +1443,13 @@ def test_ip_spoofing_filter_with_multiple_ips(self):
                          'ofake_dev',
                          '-p udp -m udp --sport 67 --dport 68 -j DROP',
                          comment=None),
    -                 mock.call.add_rule(
    -                     'ofake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
    +                 mock.call.add_rule(
    +                     'ofake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule('ofake_dev',
                                         '-j $sg-fallback', comment=None),
                      mock.call.add_rule('sg-chain', '-j ACCEPT')]
    @@ -1478,13 +1477,13 @@ def test_ip_spoofing_no_fixed_ips(self):
                                         '--physdev-is-bridged '
                                         '-j $ifake_dev',
                                         comment=ic.SG_TO_VM_SG),
    -                 mock.call.add_rule(
    -                     'ifake_dev',
    -                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule(
                          'ifake_dev',
                          '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
    +                 mock.call.add_rule(
    +                     'ifake_dev',
    +                     '-m state --state INVALID -j DROP', comment=None),
                      mock.call.add_rule('ifake_dev', '-j $sg-fallback',
                                         comment=None),
                      mock.call.add_chain('ofake_dev'),
    @@ -1520,11 +1519,11 @@ def test_ip_spoofing_no_fixed_ips(self):
                          comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
    -                     '-m state --state INVALID -j DROP',
    +                     '-m state --state RELATED,ESTABLISHED -j RETURN',
                          comment=None),
                      mock.call.add_rule(
                          'ofake_dev',
    -                     '-m state --state RELATED,ESTABLISHED -j RETURN',
    +                     '-m state --state INVALID -j DROP',
                          comment=None),
                      mock.call.add_rule('ofake_dev', '-j $sg-fallback',
                                         comment=None),
    
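The reordering throughout this file moves the `RELATED,ESTABLISHED -j RETURN` conntrack shortcut ahead of `INVALID -j DROP`, which now sits just before the fallback jump. A toy first-match evaluator makes the resulting per-port chain order concrete (simplified single-state packets, not real iptables semantics):

```python
def first_match(chain, pkt_state):
    """Return the action of the first rule matching pkt_state.

    A rule is (state, action); state None matches any packet,
    mimicking iptables first-match traversal.
    """
    for state, action in chain:
        if state is None or state == pkt_state:
            return action

# Per-port chain order after this change (simplified): conntrack
# shortcut first, security-group rules next (stand-in: NEW accept),
# INVALID dropped only when nothing else matched, then fallback.
port_chain = [
    ('ESTABLISHED', 'RETURN'),
    ('NEW', 'RETURN'),
    ('INVALID', 'DROP'),
    (None, 'sg-fallback'),
]
```

Established traffic now returns before any state check can drop it, while unmatched invalid packets are still dropped ahead of the fallback chain.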
  • neutron/tests/unit/agent/linux/test_keepalived.py +8 -0 modified
    @@ -321,6 +321,14 @@ def test_vip_with_scope(self):
             self.assertEqual('fe80::3e97:eff:fe26:3bfa/64 dev eth1 scope link',
                              vip.build_config())
     
    +    def test_add_vip_returns_exception_on_duplicate_ip(self):
    +        instance = keepalived.KeepalivedInstance('MASTER', 'eth0', 1,
    +                                                 ['169.254.192.0/18'])
    +        instance.add_vip('192.168.222.1/32', 'eth11', None)
    +        self.assertRaises(keepalived.VIPDuplicateAddressException,
    +                          instance.add_vip, '192.168.222.1/32', 'eth12',
    +                          'link')
    +
     
     class KeepalivedVirtualRouteTestCase(base.BaseTestCase):
         def test_virtual_route_with_dev(self):
    
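The new keepalived test expects `add_vip` to raise `VIPDuplicateAddressException` when the same CIDR is added twice, even on a different interface and scope. A minimal sketch of that guard (hypothetical class, not the actual `KeepalivedInstance`):

```python
class VIPDuplicateAddressException(Exception):
    pass


class InstanceSketch:
    """Tracks VIPs and rejects duplicate addresses (illustrative only)."""

    def __init__(self):
        self.vips = []  # (ip_cidr, interface, scope) tuples

    def add_vip(self, ip_cidr, interface, scope):
        # The duplicate check keys on the address alone: the same CIDR
        # on a different interface or scope is still rejected.
        if any(existing[0] == ip_cidr for existing in self.vips):
            raise VIPDuplicateAddressException(ip_cidr)
        self.vips.append((ip_cidr, interface, scope))
```

The first `add_vip('192.168.222.1/32', 'eth11', None)` succeeds; repeating the address on `eth12` raises.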
  • neutron/tests/unit/agent/linux/test_utils.py +17 -0 modified
    @@ -18,6 +18,8 @@
     import six
     import testtools
     
    +import oslo_i18n
    +
     from neutron.agent.linux import utils
     from neutron.tests import base
     
    @@ -100,6 +102,21 @@ def test_return_code_log_debug(self):
                 utils.execute(['ls'])
                 self.assertTrue(log.debug.called)
     
    +    def test_return_code_log_error_change_locale(self):
    +        ja_output = 'std_out in Japanese'
    +        ja_error = 'std_err in Japanese'
    +        ja_message_out = oslo_i18n._message.Message(ja_output)
    +        ja_message_err = oslo_i18n._message.Message(ja_error)
    +        ja_translate_out = oslo_i18n._translate.translate(ja_message_out, 'ja')
    +        ja_translate_err = oslo_i18n._translate.translate(ja_message_err, 'ja')
    +        self.mock_popen.return_value = (ja_translate_out, ja_translate_err)
    +        self.process.return_value.returncode = 1
    +
    +        with mock.patch.object(utils, 'LOG') as log:
    +            utils.execute(['ls'], check_exit_code=False)
    +            self.assertIn(ja_translate_out, str(log.error.call_args_list))
    +            self.assertIn(ja_translate_err, str(log.error.call_args_list))
    +
         def test_return_code_raise_runtime_do_not_log_fail_as_error(self):
             self.mock_popen.return_value = ('', '')
             self.process.return_value.returncode = 1
    
  • neutron/tests/unit/agent/test_securitygroups_rpc.py +34 -33 modified
    @@ -1772,13 +1772,13 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-i_port1
    -[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_port1 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 \
     -j RETURN
     [0:0] -A %(bn)s-i_port1 -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_port1 -m set --match-set NIPv4security_group1 src -j \
     RETURN
    +[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -1792,9 +1792,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-s_port1
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_port1 -j RETURN
    +[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -1824,11 +1824,11 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-i_port1
    -[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_port1 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 \
     -j RETURN
     [0:0] -A %(bn)s-i_port1 -p tcp -m tcp --dport 22 -j RETURN
    +[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -1842,9 +1842,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-s_port1
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_port1 -j RETURN
    +[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -1875,12 +1875,12 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-i_port1
    -[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_port1 -s 10.0.0.2/32 -p udp -m udp --sport 67 --dport 68 \
     -j RETURN
     [0:0] -A %(bn)s-i_port1 -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_port1 -s 10.0.0.4/32 -j RETURN
    +[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -1894,9 +1894,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-s_port1
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_port1 -j RETURN
    +[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -1931,13 +1931,13 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port1)s
    -[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -m set --match-set NIPv4security_group1 src -j \
     RETURN
    +[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -1951,21 +1951,21 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-s_%(port1)s
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port2)s
    -[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -m set --match-set NIPv4security_group1 src -j \
     RETURN
    +[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -1979,9 +1979,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-s_%(port2)s
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -2014,14 +2014,14 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port1)s
    -[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -m set --match-set NIPv4security_group1 src -j \
     RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p icmp -j RETURN
    +[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2035,22 +2035,22 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-s_%(port1)s
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port2)s
    -[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -m set --match-set NIPv4security_group1 src -j \
     RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p icmp -j RETURN
    +[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2064,9 +2064,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-s_%(port2)s
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -2099,12 +2099,12 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port1)s
    -[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s %(ip2)s -j RETURN
    +[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2118,20 +2118,20 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-s_%(port1)s
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port2)s
    -[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s %(ip1)s -j RETURN
    +[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2145,9 +2145,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-s_%(port2)s
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -2180,11 +2180,11 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port1)s
    -[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p tcp -m tcp --dport 22 -j RETURN
    +[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     """ % IPTABLES_ARG
     IPTABLES_FILTER_2_2 += """[0:0] -A %(bn)s-i_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port1)s \
    @@ -2199,15 +2199,14 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-s_%(port1)s
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port2)s
    -[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
    @@ -2216,7 +2215,9 @@ def test_security_groups_member_not_updated(self):
     IPTABLES_FILTER_2_2 += ("[0:0] -A %(bn)s-i_%(port2)s -s %(ip1)s "
                             "-j RETURN\n"
                             % IPTABLES_ARG)
    -IPTABLES_FILTER_2_2 += """[0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
    +IPTABLES_FILTER_2_2 += """[0:0] -A %(bn)s-i_%(port2)s -m state --state \
    +INVALID -j DROP
    +[0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
    @@ -2229,9 +2230,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-s_%(port2)s
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -2264,13 +2265,13 @@ def test_security_groups_member_not_updated(self):
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port1)s
    -[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -s %(ip2)s -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p icmp -j RETURN
    +[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2284,21 +2285,21 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-s_%(port1)s
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
     [0:0] -A %(bn)s-sg-chain %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-i_%(port2)s
    -[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s 10.0.0.2/32 -p udp -m udp --sport 67 \
     --dport 68 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p tcp -m tcp --dport 22 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -s %(ip1)s -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p icmp -j RETURN
    +[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2312,9 +2313,9 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 68 --dport 67 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-s_%(port2)s
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 67 --dport 68 -j DROP
    -[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -j RETURN
    +[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -2371,8 +2372,8 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-i_port1 -p icmpv6 --icmpv6-type 132 -j RETURN
     [0:0] -A %(bn)s-i_port1 -p icmpv6 --icmpv6-type 135 -j RETURN
     [0:0] -A %(bn)s-i_port1 -p icmpv6 --icmpv6-type 136 -j RETURN
    -[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -m state --state RELATED,ESTABLISHED -j RETURN
    +[0:0] -A %(bn)s-i_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_port1 \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2384,8 +2385,8 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_port1 -p icmpv6 -j RETURN
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 546 --dport 547 -j RETURN
     [0:0] -A %(bn)s-o_port1 -p udp -m udp --sport 547 --dport 546 -j DROP
    -[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -m state --state RELATED,ESTABLISHED -j RETURN
    +[0:0] -A %(bn)s-o_port1 -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_port1 -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    @@ -2424,8 +2425,8 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-i_%(port1)s -p icmpv6 --icmpv6-type 132 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p icmpv6 --icmpv6-type 135 -j RETURN
     [0:0] -A %(bn)s-i_%(port1)s -p icmpv6 --icmpv6-type 136 -j RETURN
    -[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
    +[0:0] -A %(bn)s-i_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port1)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2437,8 +2438,8 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port1)s -p icmpv6 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 546 --dport 547 -j RETURN
     [0:0] -A %(bn)s-o_%(port1)s -p udp -m udp --sport 547 --dport 546 -j DROP
    -[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -m state --state RELATED,ESTABLISHED -j RETURN
    +[0:0] -A %(bn)s-o_%(port1)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port1)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-INGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2449,8 +2450,8 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-i_%(port2)s -p icmpv6 --icmpv6-type 132 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p icmpv6 --icmpv6-type 135 -j RETURN
     [0:0] -A %(bn)s-i_%(port2)s -p icmpv6 --icmpv6-type 136 -j RETURN
    -[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
    +[0:0] -A %(bn)s-i_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-i_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-FORWARD %(physdev_mod)s --physdev-EGRESS tap_%(port2)s \
     %(physdev_is_bridged)s -j %(bn)s-sg-chain
    @@ -2462,8 +2463,8 @@ def test_security_groups_member_not_updated(self):
     [0:0] -A %(bn)s-o_%(port2)s -p icmpv6 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 546 --dport 547 -j RETURN
     [0:0] -A %(bn)s-o_%(port2)s -p udp -m udp --sport 547 --dport 546 -j DROP
    -[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -m state --state RELATED,ESTABLISHED -j RETURN
    +[0:0] -A %(bn)s-o_%(port2)s -m state --state INVALID -j DROP
     [0:0] -A %(bn)s-o_%(port2)s -j %(bn)s-sg-fallback
     [0:0] -A %(bn)s-sg-chain -j ACCEPT
     COMMIT
    
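Taken together, the hunks above all make one change: the `-m state --state INVALID -j DROP` rule moves from the top of each per-port chain to just before the `sg-fallback` jump, so it is evaluated only after the RELATED,ESTABLISHED shortcut and the explicit per-rule RETURNs. A minimal sketch of that reordering (rule strings are illustrative; only the relative order matters, and the helper name is not Neutron's):

```python
# Illustrative per-port ingress chains; each entry is one iptables rule.
OLD_CHAIN = [
    "-m state --state INVALID -j DROP",
    "-m state --state RELATED,ESTABLISHED -j RETURN",
    "-p tcp -m tcp --dport 22 -j RETURN",
    "-j sg-fallback",
]

# After the patch the INVALID-state drop sits immediately before the
# fallback jump, i.e. it only fires for traffic no earlier rule accepted.
NEW_CHAIN = [
    "-m state --state RELATED,ESTABLISHED -j RETURN",
    "-p tcp -m tcp --dport 22 -j RETURN",
    "-m state --state INVALID -j DROP",
    "-j sg-fallback",
]


def invalid_drop_index(chain):
    """Position of the INVALID-state DROP rule within a chain."""
    return next(i for i, rule in enumerate(chain)
                if "INVALID" in rule and rule.endswith("DROP"))
```

Since iptables evaluates a chain top to bottom, moving the rule changes which packets it can ever match without changing the rule itself.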
  • neutron/tests/unit/common/test_utils.py+3 0 modified
    @@ -562,6 +562,9 @@ def _test_is_dvr_serviced(self, device_owner, expected):
         def test_is_dvr_serviced_with_lb_port(self):
             self._test_is_dvr_serviced(constants.DEVICE_OWNER_LOADBALANCER, True)
     
    +    def test_is_dvr_serviced_with_lbv2_port(self):
    +        self._test_is_dvr_serviced(constants.DEVICE_OWNER_LOADBALANCERV2, True)
    +
         def test_is_dvr_serviced_with_dhcp_port(self):
             self._test_is_dvr_serviced(constants.DEVICE_OWNER_DHCP, True)
     
    
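The single test added above asserts that ports owned by `DEVICE_OWNER_LOADBALANCERV2` are treated as DVR-serviced, alongside the existing loadbalancer and DHCP cases. A rough sketch of that style of device-owner check (constant values and the membership logic are assumptions written for illustration, not quoted from Neutron):

```python
# Hypothetical device_owner constants, mirroring the naming style seen
# in the tests above; the exact strings are assumptions.
DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"
DEVICE_OWNER_DHCP = "network:dhcp"
DEVICE_OWNER_COMPUTE_PREFIX = "compute:"

_DVR_SERVICED_OWNERS = (
    DEVICE_OWNER_LOADBALANCER,
    DEVICE_OWNER_LOADBALANCERV2,
    DEVICE_OWNER_DHCP,
)


def is_dvr_serviced(device_owner):
    """True if a port with this owner should be wired by a DVR router."""
    return (device_owner.startswith(DEVICE_OWNER_COMPUTE_PREFIX)
            or device_owner in _DVR_SERVICED_OWNERS)
```

The patch's effect is simply to extend the serviced set with the LBaaS v2 owner so its ports get the same DVR wiring as v1 loadbalancer ports.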
  • neutron/tests/unit/db/quota/test_api.py+32 67 modified
    @@ -34,16 +34,14 @@ def _create_reservation(self, resource_deltas,
             return quota_api.create_reservation(
                 self.context, tenant_id, resource_deltas, expiration)
     
    -    def _create_quota_usage(self, resource, used, reserved, tenant_id=None):
    +    def _create_quota_usage(self, resource, used, tenant_id=None):
             tenant_id = tenant_id or self.tenant_id
             return quota_api.set_quota_usage(
    -            self.context, resource, tenant_id,
    -            in_use=used, reserved=reserved)
    +            self.context, resource, tenant_id, in_use=used)
     
         def _verify_quota_usage(self, usage_info,
                                 expected_resource=None,
                                 expected_used=None,
    -                            expected_reserved=None,
                                 expected_dirty=None):
             self.assertEqual(self.tenant_id, usage_info.tenant_id)
             if expected_resource:
    @@ -52,57 +50,42 @@ def _verify_quota_usage(self, usage_info,
                     self.assertEqual(expected_dirty, usage_info.dirty)
             if expected_used is not None:
                 self.assertEqual(expected_used, usage_info.used)
    -        if expected_reserved is not None:
    -            self.assertEqual(expected_reserved, usage_info.reserved)
    -        if expected_used is not None and expected_reserved is not None:
    -            self.assertEqual(expected_used + expected_reserved,
    -                             usage_info.total)
     
         def setUp(self):
             super(TestQuotaDbApi, self).setUp()
             self._set_context()
     
         def test_create_quota_usage(self):
    -        usage_info = self._create_quota_usage('goals', 26, 10)
    +        usage_info = self._create_quota_usage('goals', 26)
             self._verify_quota_usage(usage_info,
                                      expected_resource='goals',
    -                                 expected_used=26,
    -                                 expected_reserved=10)
    +                                 expected_used=26)
     
         def test_update_quota_usage(self):
    -        self._create_quota_usage('goals', 26, 10)
    +        self._create_quota_usage('goals', 26)
             # Higuain scores a double
             usage_info_1 = quota_api.set_quota_usage(
                 self.context, 'goals', self.tenant_id,
                 in_use=28)
             self._verify_quota_usage(usage_info_1,
    -                                 expected_used=28,
    -                                 expected_reserved=10)
    +                                 expected_used=28)
             usage_info_2 = quota_api.set_quota_usage(
                 self.context, 'goals', self.tenant_id,
    -            reserved=8)
    +            in_use=24)
             self._verify_quota_usage(usage_info_2,
    -                                 expected_used=28,
    -                                 expected_reserved=8)
    +                                 expected_used=24)
     
         def test_update_quota_usage_with_deltas(self):
    -        self._create_quota_usage('goals', 26, 10)
    +        self._create_quota_usage('goals', 26)
             # Higuain scores a double
             usage_info_1 = quota_api.set_quota_usage(
                 self.context, 'goals', self.tenant_id,
                 in_use=2, delta=True)
             self._verify_quota_usage(usage_info_1,
    -                                 expected_used=28,
    -                                 expected_reserved=10)
    -        usage_info_2 = quota_api.set_quota_usage(
    -            self.context, 'goals', self.tenant_id,
    -            reserved=-2, delta=True)
    -        self._verify_quota_usage(usage_info_2,
    -                                 expected_used=28,
    -                                 expected_reserved=8)
    +                                 expected_used=28)
     
         def test_set_quota_usage_dirty(self):
    -        self._create_quota_usage('goals', 26, 10)
    +        self._create_quota_usage('goals', 26)
             # Higuain needs a shower after the match
             self.assertEqual(1, quota_api.set_quota_usage_dirty(
                 self.context, 'goals', self.tenant_id))
    @@ -123,9 +106,9 @@ def test_set_dirty_non_existing_quota_usage(self):
                 self.context, 'meh', self.tenant_id))
     
         def test_set_resources_quota_usage_dirty(self):
    -        self._create_quota_usage('goals', 26, 10)
    -        self._create_quota_usage('assists', 11, 5)
    -        self._create_quota_usage('bookings', 3, 1)
    +        self._create_quota_usage('goals', 26)
    +        self._create_quota_usage('assists', 11)
    +        self._create_quota_usage('bookings', 3)
             self.assertEqual(2, quota_api.set_resources_quota_usage_dirty(
                 self.context, ['goals', 'bookings'], self.tenant_id))
             usage_info_goals = quota_api.get_quota_usage_by_resource_and_tenant(
    @@ -139,9 +122,9 @@ def test_set_resources_quota_usage_dirty(self):
             self._verify_quota_usage(usage_info_bookings, expected_dirty=True)
     
         def test_set_resources_quota_usage_dirty_with_empty_list(self):
    -        self._create_quota_usage('goals', 26, 10)
    -        self._create_quota_usage('assists', 11, 5)
    -        self._create_quota_usage('bookings', 3, 1)
    +        self._create_quota_usage('goals', 26)
    +        self._create_quota_usage('assists', 11)
    +        self._create_quota_usage('bookings', 3)
             # Expect all the resources for the tenant to be set dirty
             self.assertEqual(3, quota_api.set_resources_quota_usage_dirty(
                 self.context, [], self.tenant_id))
    @@ -164,8 +147,8 @@ def test_set_resources_quota_usage_dirty_with_empty_list(self):
                                      expected_dirty=False)
     
         def _test_set_all_quota_usage_dirty(self, expected):
    -        self._create_quota_usage('goals', 26, 10)
    -        self._create_quota_usage('goals', 12, 6, tenant_id='Callejon')
    +        self._create_quota_usage('goals', 26)
    +        self._create_quota_usage('goals', 12, tenant_id='Callejon')
             self.assertEqual(expected, quota_api.set_all_quota_usage_dirty(
                 self.context, 'goals'))
     
    @@ -175,10 +158,10 @@ def test_set_all_quota_usage_dirty(self):
             self._test_set_all_quota_usage_dirty(expected=1)
     
         def test_get_quota_usage_by_tenant(self):
    -        self._create_quota_usage('goals', 26, 10)
    -        self._create_quota_usage('assists', 11, 5)
    +        self._create_quota_usage('goals', 26)
    +        self._create_quota_usage('assists', 11)
             # Create a resource for a different tenant
    -        self._create_quota_usage('mehs', 99, 99, tenant_id='buffon')
    +        self._create_quota_usage('mehs', 99, tenant_id='buffon')
             usage_infos = quota_api.get_quota_usage_by_tenant_id(
                 self.context, self.tenant_id)
     
    @@ -188,26 +171,24 @@ def test_get_quota_usage_by_tenant(self):
             self.assertIn('assists', resources)
     
         def test_get_quota_usage_by_resource(self):
    -        self._create_quota_usage('goals', 26, 10)
    -        self._create_quota_usage('assists', 11, 5)
    -        self._create_quota_usage('goals', 12, 6, tenant_id='Callejon')
    +        self._create_quota_usage('goals', 26)
    +        self._create_quota_usage('assists', 11)
    +        self._create_quota_usage('goals', 12, tenant_id='Callejon')
             usage_infos = quota_api.get_quota_usage_by_resource(
                 self.context, 'goals')
             # Only 1 result expected in tenant context
             self.assertEqual(1, len(usage_infos))
             self._verify_quota_usage(usage_infos[0],
                                      expected_resource='goals',
    -                                 expected_used=26,
    -                                 expected_reserved=10)
    +                                 expected_used=26)
     
         def test_get_quota_usage_by_tenant_and_resource(self):
    -        self._create_quota_usage('goals', 26, 10)
    +        self._create_quota_usage('goals', 26)
             usage_info = quota_api.get_quota_usage_by_resource_and_tenant(
                 self.context, 'goals', self.tenant_id)
             self._verify_quota_usage(usage_info,
                                      expected_resource='goals',
    -                                 expected_used=26,
    -                                 expected_reserved=10)
    +                                 expected_used=26)
     
         def test_get_non_existing_quota_usage_returns_none(self):
             self.assertIsNone(quota_api.get_quota_usage_by_resource_and_tenant(
    @@ -226,30 +207,14 @@ def test_create_reservation(self):
             self.assertEqual(self.tenant_id, resv.tenant_id)
             self._verify_reserved_resources(resources, resv.deltas)
     
    -    def test_create_reservation_with_expirtion(self):
    +    def test_create_reservation_with_expiration(self):
             resources = {'goals': 2, 'assists': 1}
             exp_date = datetime.datetime(2016, 3, 31, 14, 30)
             resv = self._create_reservation(resources, expiration=exp_date)
             self.assertEqual(self.tenant_id, resv.tenant_id)
             self.assertEqual(exp_date, resv.expiration)
             self._verify_reserved_resources(resources, resv.deltas)
     
    -    def _test_remove_reservation(self, set_dirty):
    -        resources = {'goals': 2, 'assists': 1}
    -        resv = self._create_reservation(resources)
    -        self.assertEqual(1, quota_api.remove_reservation(
    -            self.context, resv.reservation_id, set_dirty=set_dirty))
    -
    -    def test_remove_reservation(self):
    -        self._test_remove_reservation(False)
    -
    -    def test_remove_reservation_and_set_dirty(self):
    -        routine = 'neutron.db.quota.api.set_resources_quota_usage_dirty'
    -        with mock.patch(routine) as mock_routine:
    -            self._test_remove_reservation(False)
    -        mock_routine.assert_called_once_with(
    -            self.context, mock.ANY, self.tenant_id)
    -
         def test_remove_non_existent_reservation(self):
             self.assertIsNone(quota_api.remove_reservation(self.context, 'meh'))
     
    @@ -342,9 +307,9 @@ def _set_context(self):
                                            load_admin_roles=False)
     
         def test_get_quota_usage_by_resource(self):
    -        self._create_quota_usage('goals', 26, 10)
    -        self._create_quota_usage('assists', 11, 5)
    -        self._create_quota_usage('goals', 12, 6, tenant_id='Callejon')
    +        self._create_quota_usage('goals', 26)
    +        self._create_quota_usage('assists', 11)
    +        self._create_quota_usage('goals', 12, tenant_id='Callejon')
             usage_infos = quota_api.get_quota_usage_by_resource(
                 self.context, 'goals')
             # 2 results expected in admin context
    
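The quota-API hunks above drop the separate `reserved` counter from `set_quota_usage` and its tests: a usage record now tracks only `in_use`, which callers either set absolutely or adjust with `delta=True`. A toy model of the surviving semantics, under the assumption that `delta` simply adds to the stored value:

```python
# Toy model of the post-patch usage record: in_use only, no reserved.
class QuotaUsage:
    def __init__(self, resource, in_use=0):
        self.resource = resource
        self.in_use = in_use
        self.dirty = False


def set_quota_usage(usage, in_use, delta=False):
    """Set the counter absolutely, or add to it when delta=True."""
    usage.in_use = usage.in_use + in_use if delta else in_use
    return usage
```

This matches the rewritten tests: starting from 26 goals, a `delta=True` update of 2 yields 28, while a plain update to 24 replaces the value outright.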
  • neutron/tests/unit/db/test_allowedaddresspairs_db.py+12 1 modified
    @@ -13,6 +13,8 @@
     # See the License for the specific language governing permissions and
     # limitations under the License.
     
    +from oslo_config import cfg
    +from webob import exc as web_exc
     
     from neutron.api.v2 import attributes as attr
     from neutron.db import allowedaddresspairs_db as addr_pair_db
    @@ -23,7 +25,6 @@
     from neutron.extensions import securitygroup as secgroup
     from neutron import manager
     from neutron.tests.unit.db import test_db_base_plugin_v2
    -from oslo_config import cfg
     
     
     DB_PLUGIN_KLASS = ('neutron.tests.unit.db.test_allowedaddresspairs_db.'
    @@ -97,6 +98,16 @@ def setUp(self, plugin=None, ext_mgr=None):
     
     class TestAllowedAddressPairs(AllowedAddressPairDBTestCase):
     
    +    def test_create_port_allowed_address_pairs_bad_format(self):
    +        with self.network() as net:
    +            bad_values = [False, True, None, 1.1, 1]
    +            for value in bad_values:
    +                self._create_port(
    +                    self.fmt, net['network']['id'],
    +                    expected_res_status=web_exc.HTTPBadRequest.code,
    +                    arg_list=(addr_pair.ADDRESS_PAIRS,),
    +                    allowed_address_pairs=value)
    +
         def test_create_port_allowed_address_pairs(self):
             with self.network() as net:
                 address_pairs = [{'mac_address': '00:00:00:00:00:01',
    
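The new `test_create_port_allowed_address_pairs_bad_format` case above feeds scalar values (`False`, `True`, `None`, `1.1`, `1`) where a list of address pairs is required and expects an HTTP 400. A hedged sketch of that validation step (function name and error text are illustrative, not Neutron's attribute-validator API):

```python
def validate_allowed_address_pairs(value):
    """Return an error string for malformed input, or None if valid.

    Scalars such as booleans, None, and numbers -- the bad_values from
    the test above -- must fail before any per-pair validation runs.
    """
    if not isinstance(value, list):
        return "allowed_address_pairs must be a list, got %r" % (value,)
    for pair in value:
        if not isinstance(pair, dict):
            return "each address pair must be a dict, got %r" % (pair,)
    return None  # well-formed
```

Note that `isinstance(True, list)` is `False`, so the boolean cases are caught by the same type check as `None` and the numeric values; a plugin would map a non-None result to a 400 response.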
  • neutron/tests/unit/db/test_l3_dvr_db.py+124 156 modified
    @@ -198,7 +198,7 @@ def _test_prepare_direct_delete_dvr_internal_ports(self, port):
                                   self.ctx,
                                   port['id'])
     
    -    def test_prevent__delete_floatingip_agent_gateway_port(self):
    +    def test_prevent_delete_floatingip_agent_gateway_port(self):
             port = {
                 'id': 'my_port_id',
                 'fixed_ips': mock.ANY,
    @@ -239,30 +239,6 @@ def test_build_routers_list_with_gw_port_mismatch(self):
             routers = self.mixin._build_routers_list(self.ctx, routers, gw_ports)
             self.assertIsNone(routers[0].get('gw_port'))
     
    -    def test__clear_unused_fip_agent_gw_port(self):
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': _uuid(),
    -            'floating_network_id': _uuid()
    -        }
    -        with mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin,
    -                               '_get_floatingip') as gfips,\
    -                mock.patch.object(self.mixin, '_get_vm_port_hostid') as gvm,\
    -                mock.patch.object(
    -                    self.mixin,
    -                    '_check_fips_availability_on_host_ext_net') as cfips,\
    -                mock.patch.object(
    -                    self.mixin,
    -                    '_delete_floatingip_agent_gateway_port') as dfips:
    -            gfips.return_value = floatingip
    -            gvm.return_value = 'my-host'
    -            cfips.return_value = True
    -            self.mixin._clear_unused_fip_agent_gw_port(
    -                self.ctx, floatingip)
    -            self.assertTrue(dfips.called)
    -            self.assertTrue(cfips.called)
    -            self.assertTrue(gvm.called)
    -
         def setup_port_has_ipv6_address(self, port):
             with mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin,
                                    '_port_has_ipv6_address') as pv6:
    @@ -288,80 +264,101 @@ def test__port_has_ipv6_address_for_non_snat_ports(self):
             self.assertTrue(result)
             self.assertTrue(pv6.called)
     
    -    def test__delete_floatingip_agent_gateway_port(self):
    -        port = {
    +    def _helper_delete_floatingip_agent_gateway_port(self, port_host):
    +        ports = [{
                 'id': 'my_port_id',
                 'binding:host_id': 'foo_host',
                 'network_id': 'ext_network_id',
    -            'device_owner': l3_const.DEVICE_OWNER_AGENT_GW
    -        }
    -        with mock.patch.object(manager.NeutronManager, 'get_plugin') as gp,\
    -                mock.patch.object(self.mixin,
    -                                  '_get_vm_port_hostid') as vm_host:
    +            'device_owner': l3_const.DEVICE_OWNER_ROUTER_GW
    +        },
    +                {
    +            'id': 'my_new_port_id',
    +            'binding:host_id': 'my_foo_host',
    +            'network_id': 'ext_network_id',
    +            'device_owner': l3_const.DEVICE_OWNER_ROUTER_GW
    +        }]
    +        with mock.patch.object(manager.NeutronManager, 'get_plugin') as gp:
                 plugin = mock.Mock()
                 gp.return_value = plugin
    -            plugin.get_ports.return_value = [port]
    -            vm_host.return_value = 'foo_host'
    -            self.mixin._delete_floatingip_agent_gateway_port(
    -                self.ctx, 'foo_host', 'network_id')
    +            plugin.get_ports.return_value = ports
    +            self.mixin.delete_floatingip_agent_gateway_port(
    +                self.ctx, port_host, 'ext_network_id')
             plugin.get_ports.assert_called_with(self.ctx, filters={
    -            'network_id': ['network_id'],
    +            'network_id': ['ext_network_id'],
                 'device_owner': [l3_const.DEVICE_OWNER_AGENT_GW]})
    -        plugin.ipam.delete_port.assert_called_with(self.ctx, 'my_port_id')
    -
    -    def _delete_floatingip_test_setup(self, floatingip):
    -        fip_id = floatingip['id']
    -        with mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin,
    -                               '_get_floatingip') as gf,\
    -                mock.patch.object(self.mixin,
    -                                  '_clear_unused_fip_agent_gw_port') as vf,\
    -                mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin,
    -                                  'delete_floatingip'):
    -            gf.return_value = floatingip
    -            self.mixin.delete_floatingip(self.ctx, fip_id)
    -            return vf
    -
    -    def _disassociate_floatingip_setup(self, port_id=None, floatingip=None):
    -        with mock.patch.object(self.mixin, '_get_floatingip_on_port') as gf,\
    -                mock.patch.object(self.mixin,
    -                                  '_clear_unused_fip_agent_gw_port') as vf:
    -            gf.return_value = floatingip
    -            self.mixin.disassociate_floatingips(
    -                self.ctx, port_id, do_notify=False)
    -            return vf
    -
    -    def test_disassociate_floatingip_with_vm_port(self):
    -        port_id = '1234'
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': 1234,
    -            'floating_network_id': _uuid()
    -        }
    -        mock_disassociate_fip = self._disassociate_floatingip_setup(
    -            port_id=port_id, floatingip=floatingip)
    -        self.assertTrue(mock_disassociate_fip.called)
    -
    -    def test_disassociate_floatingip_with_no_vm_port(self):
    -        mock_disassociate_fip = self._disassociate_floatingip_setup()
    -        self.assertFalse(mock_disassociate_fip.called)
    -
    -    def test_delete_floatingip_without_internal_port(self):
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': None,
    -            'floating_network_id': _uuid()
    -        }
    -        mock_fip_clear = self._delete_floatingip_test_setup(floatingip)
    -        self.assertFalse(mock_fip_clear.call_count)
    -
    -    def test_delete_floatingip_with_internal_port(self):
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': _uuid(),
    -            'floating_network_id': _uuid()
    +        if port_host:
    +            plugin.ipam.delete_port.assert_called_once_with(
    +                self.ctx, 'my_port_id')
    +        else:
    +            plugin.ipam.delete_port.assert_called_with(
    +                self.ctx, 'my_new_port_id')
    +
    +    def test_delete_floatingip_agent_gateway_port_without_host_id(self):
    +        self._helper_delete_floatingip_agent_gateway_port(None)
    +
    +    def test_delete_floatingip_agent_gateway_port_with_host_id(self):
    +        self._helper_delete_floatingip_agent_gateway_port(
    +            'foo_host')
    +
    +    def _setup_delete_current_gw_port_deletes_fip_agent_gw_port(
    +        self, port=None):
    +        gw_port_db = {
    +            'id': 'my_gw_id',
    +            'network_id': 'ext_net_id',
    +            'device_owner': l3_const.DEVICE_OWNER_ROUTER_GW
             }
    -        mock_fip_clear = self._delete_floatingip_test_setup(floatingip)
    -        self.assertTrue(mock_fip_clear.called)
    +        router = mock.MagicMock()
    +        router.extra_attributes.distributed = True
    +        router['gw_port_id'] = gw_port_db['id']
    +        router.gw_port = gw_port_db
    +        with mock.patch.object(manager.NeutronManager, 'get_plugin') as gp,\
    +            mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin,
    +                              '_delete_current_gw_port'),\
    +            mock.patch.object(
    +                self.mixin,
    +                '_get_router') as grtr,\
    +            mock.patch.object(
    +                self.mixin,
    +                'delete_csnat_router_interface_ports') as del_csnat_port,\
    +            mock.patch.object(
    +                self.mixin,
    +                'delete_floatingip_agent_gateway_port') as del_agent_gw_port:
    +            plugin = mock.Mock()
    +            gp.return_value = plugin
    +            plugin.get_ports.return_value = port
    +            grtr.return_value = router
    +            self.mixin._delete_current_gw_port(
    +                self.ctx, router['id'], router, 'ext_network_id')
    +            return router, plugin, del_csnat_port, del_agent_gw_port
    +
    +    def test_delete_current_gw_port_deletes_fip_agent_gw_port(self):
    +        rtr, plugin, d_csnat_port, d_agent_gw_port = (
    +            self._setup_delete_current_gw_port_deletes_fip_agent_gw_port())
    +        self.assertTrue(d_csnat_port.called)
    +        self.assertTrue(d_agent_gw_port.called)
    +        d_csnat_port.assert_called_once_with(
    +            mock.ANY, rtr)
    +        d_agent_gw_port.assert_called_once_with(
    +            mock.ANY, None, 'ext_net_id')
    +
    +    def test_delete_current_gw_port_never_calls_delete_fip_agent_gw_port(self):
    +        port = [{
    +            'id': 'my_port_id',
    +            'network_id': 'ext_net_id',
    +            'device_owner': l3_const.DEVICE_OWNER_ROUTER_GW
    +        },
    +                {
    +            'id': 'my_new_port_id',
    +            'network_id': 'ext_net_id',
    +            'device_owner': l3_const.DEVICE_OWNER_ROUTER_GW
    +        }]
    +        rtr, plugin, d_csnat_port, d_agent_gw_port = (
    +            self._setup_delete_current_gw_port_deletes_fip_agent_gw_port(
    +                port=port))
    +        self.assertTrue(d_csnat_port.called)
    +        self.assertFalse(d_agent_gw_port.called)
    +        d_csnat_port.assert_called_once_with(
    +            mock.ANY, rtr)
     
         def _floatingip_on_port_test_setup(self, hostid):
             router = {'id': 'foo_router_id', 'distributed': True}
    @@ -412,28 +409,7 @@ def test_floatingip_on_port_with_host(self):
             self.assertIn('fip_interface',
                 router[l3_const.FLOATINGIP_AGENT_INTF_KEY])
     
    -    def test_delete_disassociated_floatingip_agent_port(self):
    -        fip = {
    -            'id': _uuid(),
    -            'port_id': None
    -        }
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': 1234,
    -            'router_id': 'foo_router_id'
    -        }
    -        router = {'id': 'foo_router_id', 'distributed': True}
    -        with mock.patch.object(self.mixin, 'get_router') as grtr,\
    -                mock.patch.object(self.mixin,
    -                                  '_clear_unused_fip_agent_gw_port') as vf,\
    -                mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin,
    -                                  '_update_fip_assoc'):
    -            grtr.return_value = router
    -            self.mixin._update_fip_assoc(
    -                self.ctx, fip, floatingip, mock.ANY)
    -            self.assertTrue(vf.called)
    -
    -    def _setup_test_create_delete_floatingip(
    +    def _setup_test_create_floatingip(
             self, fip, floatingip_db, router_db):
             port = {
                 'id': '1234',
    @@ -443,8 +419,6 @@ def _setup_test_create_delete_floatingip(
     
             with mock.patch.object(self.mixin, 'get_router') as grtr,\
                     mock.patch.object(self.mixin, '_get_vm_port_hostid') as vmp,\
    -                mock.patch.object(self.mixin,
    -                                  '_clear_unused_fip_agent_gw_port') as d_fip,\
                     mock.patch.object(
                         self.mixin,
                         'create_fip_agent_gw_port_if_not_exists') as c_fip,\
    @@ -454,7 +428,7 @@ def _setup_test_create_delete_floatingip(
                 vmp.return_value = 'my-host'
                 self.mixin._update_fip_assoc(
                     self.ctx, fip, floatingip_db, port)
    -            return d_fip, c_fip
    +            return c_fip
     
         def test_create_floatingip_agent_gw_port_with_dvr_router(self):
             floatingip = {
    @@ -466,11 +440,10 @@ def test_create_floatingip_agent_gw_port_with_dvr_router(self):
                 'id': _uuid(),
                 'port_id': _uuid()
             }
    -        delete_fip, create_fip = (
    -            self._setup_test_create_delete_floatingip(
    +        create_fip = (
    +            self._setup_test_create_floatingip(
                     fip, floatingip, router))
             self.assertTrue(create_fip.called)
    -        self.assertFalse(delete_fip.called)
     
         def test_create_floatingip_agent_gw_port_with_non_dvr_router(self):
             floatingip = {
    @@ -482,45 +455,10 @@ def test_create_floatingip_agent_gw_port_with_non_dvr_router(self):
                 'id': _uuid(),
                 'port_id': _uuid()
             }
    -        delete_fip, create_fip = (
    -            self._setup_test_create_delete_floatingip(
    -                fip, floatingip, router))
    -        self.assertFalse(create_fip.called)
    -        self.assertFalse(delete_fip.called)
    -
    -    def test_delete_floatingip_agent_gw_port_with_dvr_router(self):
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': 1234,
    -            'router_id': 'foo_router_id'
    -        }
    -        router = {'id': 'foo_router_id', 'distributed': True}
    -        fip = {
    -            'id': _uuid(),
    -            'port_id': None
    -        }
    -        delete_fip, create_fip = (
    -            self._setup_test_create_delete_floatingip(
    -                fip, floatingip, router))
    -        self.assertTrue(delete_fip.called)
    -        self.assertFalse(create_fip.called)
    -
    -    def test_delete_floatingip_agent_gw_port_with_non_dvr_router(self):
    -        floatingip = {
    -            'id': _uuid(),
    -            'fixed_port_id': 1234,
    -            'router_id': 'foo_router_id'
    -        }
    -        router = {'id': 'foo_router_id', 'distributed': False}
    -        fip = {
    -            'id': _uuid(),
    -            'port_id': None
    -        }
    -        delete_fip, create_fip = (
    -            self._setup_test_create_delete_floatingip(
    +        create_fip = (
    +            self._setup_test_create_floatingip(
                     fip, floatingip, router))
             self.assertFalse(create_fip.called)
    -        self.assertFalse(delete_fip.called)
     
         def test_remove_router_interface_delete_router_l3agent_binding(self):
             interface_info = {'subnet_id': '123'}
    @@ -660,3 +598,33 @@ def test_dvr_vmarp_table_update_with_service_port_deleted(self):
             action = 'del'
             device_owner = l3_const.DEVICE_OWNER_LOADBALANCER
             self._test_dvr_vmarp_table_update(device_owner, action)
    +
    +    def test_add_router_interface_csnat_ports_failure(self):
    +        router_dict = {'name': 'test_router', 'admin_state_up': True,
    +                       'distributed': True}
    +        router = self._create_router(router_dict)
    +        with self.network() as net_ext,\
    +                self.subnet() as subnet:
    +            ext_net_id = net_ext['network']['id']
    +            self.core_plugin.update_network(
    +                self.ctx, ext_net_id,
    +                {'network': {'router:external': True}})
    +            self.mixin.update_router(
    +                self.ctx, router['id'],
    +                {'router': {'external_gateway_info':
    +                            {'network_id': ext_net_id}}})
    +            with mock.patch.object(
    +                self.mixin, '_add_csnat_router_interface_port') as f:
    +                f.side_effect = RuntimeError()
    +                self.assertRaises(
    +                    RuntimeError,
    +                    self.mixin.add_router_interface,
    +                    self.ctx, router['id'],
    +                    {'subnet_id': subnet['subnet']['id']})
    +                filters = {
    +                    'device_id': [router['id']],
    +                }
    +                router_ports = self.core_plugin.get_ports(self.ctx, filters)
    +                self.assertEqual(1, len(router_ports))
    +                self.assertEqual(l3_const.DEVICE_OWNER_ROUTER_GW,
    +                                 router_ports[0]['device_owner'])
    
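The tests above exercise the reworked `delete_floatingip_agent_gateway_port`, which now accepts an optional host and otherwise removes every FIP agent gateway port on the external network. A minimal sketch of that filtering behavior, with illustrative names and port dicts (not Neutron's actual implementation):

```python
# Hypothetical model of the host-filtering the helper test verifies:
# with a host, only that host's agent gateway port is removed; without
# one, all agent gateway ports on the network are removed.

def delete_fip_agent_gw_ports(ports, ext_network_id, host_id=None):
    """Return the ids of agent gateway ports that would be deleted."""
    deleted = []
    for port in ports:
        if port['network_id'] != ext_network_id:
            continue
        if host_id is None or port['binding:host_id'] == host_id:
            deleted.append(port['id'])
    return deleted

# Sample ports mirroring the fixtures in the test above.
ports = [
    {'id': 'my_port_id', 'binding:host_id': 'foo_host',
     'network_id': 'ext_network_id'},
    {'id': 'my_new_port_id', 'binding:host_id': 'my_foo_host',
     'network_id': 'ext_network_id'},
]
```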
  • neutron/tests/unit/db/test_migration.py+114 11 modified
    @@ -16,13 +16,18 @@
     import copy
     import os
     import sys
    +import textwrap
     
    +from alembic.autogenerate import api as alembic_ag_api
     from alembic import config as alembic_config
    +from alembic.operations import ops as alembic_ops
     import fixtures
     import mock
     import pkg_resources
    +import sqlalchemy as sa
     
     from neutron.db import migration
    +from neutron.db.migration import autogen
     from neutron.db.migration import cli
     from neutron.tests import base
     from neutron.tests.unit import testlib_api
    @@ -177,17 +182,9 @@ def _test_database_sync_revision(self, separate_branches=True):
                                       return_value=separate_branches):
                 if separate_branches:
                     mock.patch('os.path.exists').start()
    -                expected_kwargs = [
    -                    {'message': 'message', 'sql': False, 'autogenerate': True,
    -                     'version_path':
    -                         cli._get_version_branch_path(config, branch),
    -                     'head': cli._get_branch_head(branch)}
    -                    for config in self.configs
    -                    for branch in cli.MIGRATION_BRANCHES]
    -            else:
    -                expected_kwargs = [{
    -                    'message': 'message', 'sql': False, 'autogenerate': True,
    -                }]
    +            expected_kwargs = [{
    +                'message': 'message', 'sql': False, 'autogenerate': True,
    +            }]
                 self._main_test_helper(
                     ['prog', 'revision', '--autogenerate', '-m', 'message'],
                     'revision',
    @@ -498,6 +495,112 @@ def test_validate_labels_walks_thru_all_revisions(
                 [mock.call(mock.ANY, revision) for revision in revisions]
             )
     
    +    @mock.patch.object(cli, '_use_separate_migration_branches')
    +    @mock.patch.object(cli, '_get_version_branch_path')
    +    def test_autogen_process_directives(
    +            self,
    +            get_version_branch_path,
    +            use_separate_migration_branches):
    +
    +        use_separate_migration_branches.return_value = True
    +        get_version_branch_path.side_effect = lambda cfg, branch: (
    +            "/foo/expand" if branch == 'expand' else "/foo/contract")
    +
    +        migration_script = alembic_ops.MigrationScript(
    +            'eced083f5df',
    +            # these directives will be split into separate
    +            # expand/contract scripts
    +            alembic_ops.UpgradeOps(
    +                ops=[
    +                    alembic_ops.CreateTableOp(
    +                        'organization',
    +                        [
    +                            sa.Column('id', sa.Integer(), primary_key=True),
    +                            sa.Column('name', sa.String(50), nullable=False)
    +                        ]
    +                    ),
    +                    alembic_ops.ModifyTableOps(
    +                        'user',
    +                        ops=[
    +                            alembic_ops.AddColumnOp(
    +                                'user',
    +                                sa.Column('organization_id', sa.Integer())
    +                            ),
    +                            alembic_ops.CreateForeignKeyOp(
    +                                'org_fk', 'user', 'organization',
    +                                ['organization_id'], ['id']
    +                            ),
    +                            alembic_ops.DropConstraintOp(
    +                                'user', 'uq_user_org'
    +                            ),
    +                            alembic_ops.DropColumnOp(
    +                                'user', 'organization_name'
    +                            )
    +                        ]
    +                    )
    +                ]
    +            ),
    +            # these will be discarded
    +            alembic_ops.DowngradeOps(
    +                ops=[
    +                    alembic_ops.AddColumnOp(
    +                        'user', sa.Column(
    +                            'organization_name', sa.String(50), nullable=True)
    +                    ),
    +                    alembic_ops.CreateUniqueConstraintOp(
    +                        'uq_user_org', 'user',
    +                        ['user_name', 'organization_name']
    +                    ),
    +                    alembic_ops.ModifyTableOps(
    +                        'user',
    +                        ops=[
    +                            alembic_ops.DropConstraintOp('org_fk', 'user'),
    +                            alembic_ops.DropColumnOp('user', 'organization_id')
    +                        ]
    +                    ),
    +                    alembic_ops.DropTableOp('organization')
    +                ]
    +            ),
    +            message='create the organization table and '
    +            'replace user.organization_name'
    +        )
    +
    +        directives = [migration_script]
    +        autogen.process_revision_directives(
    +            mock.Mock(), mock.Mock(), directives
    +        )
    +
    +        expand = directives[0]
    +        contract = directives[1]
    +        self.assertEqual("/foo/expand", expand.version_path)
    +        self.assertEqual("/foo/contract", contract.version_path)
    +        self.assertTrue(expand.downgrade_ops.is_empty())
    +        self.assertTrue(contract.downgrade_ops.is_empty())
    +
    +        self.assertEqual(
    +            textwrap.dedent("""\
    +            ### commands auto generated by Alembic - please adjust! ###
    +                op.create_table('organization',
    +                sa.Column('id', sa.Integer(), nullable=False),
    +                sa.Column('name', sa.String(length=50), nullable=False),
    +                sa.PrimaryKeyConstraint('id')
    +                )
    +                op.add_column('user', """
    +                """sa.Column('organization_id', sa.Integer(), nullable=True))
    +                op.create_foreign_key('org_fk', 'user', """
    +                """'organization', ['organization_id'], ['id'])
    +                ### end Alembic commands ###"""),
    +            alembic_ag_api.render_python_code(expand.upgrade_ops)
    +        )
    +        self.assertEqual(
    +            textwrap.dedent("""\
    +            ### commands auto generated by Alembic - please adjust! ###
    +                op.drop_constraint('user', 'uq_user_org', type_=None)
    +                op.drop_column('user', 'organization_name')
    +                ### end Alembic commands ###"""),
    +            alembic_ag_api.render_python_code(contract.upgrade_ops)
    +        )
    +
     
     class TestSafetyChecks(base.BaseTestCase):
     
    
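The `test_autogen_process_directives` test above expects autogenerated Alembic operations to be partitioned into an additive "expand" script and a destructive "contract" script, with downgrade ops discarded. A simplified, purely illustrative sketch of that partitioning idea (Neutron's real `autogen` module works on Alembic op objects, not name strings):

```python
# Illustrative split of schema operations into additive "expand" and
# destructive "contract" phases, mirroring what the test asserts about
# the generated scripts. Op names here are assumptions for the sketch.

ADDITIVE_OPS = {'create_table', 'add_column', 'create_foreign_key',
                'create_unique_constraint'}
DESTRUCTIVE_OPS = {'drop_table', 'drop_column', 'drop_constraint'}

def split_ops(ops):
    """Partition a flat list of (op_name, target) tuples."""
    expand, contract = [], []
    for name, target in ops:
        if name in ADDITIVE_OPS:
            expand.append((name, target))
        elif name in DESTRUCTIVE_OPS:
            contract.append((name, target))
        else:
            raise ValueError('unclassified op: %s' % name)
    return expand, contract
```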
  • neutron/tests/unit/extensions/test_flavors.py+4 1 modified
    @@ -224,7 +224,9 @@ def test_get_service_profiles(self):
             self.assertEqual(expected, res)
     
         def test_associate_service_profile_with_flavor(self):
    -        expected = {'service_profile': {'id': _uuid()}}
    +        tenant_id = uuidutils.generate_uuid()
    +        expected = {'service_profile': {'id': _uuid(),
    +                                        'tenant_id': tenant_id}}
             instance = self.plugin.return_value
             instance.create_flavor_service_profile.return_value = (
                 expected['service_profile'])
    @@ -306,6 +308,7 @@ def test_create_flavor(self):
             res = self.ctx.session.query(flavors_db.Flavor).all()
             self.assertEqual(1, len(res))
             self.assertEqual('GOLD', res[0]['name'])
    +        self.assertEqual(constants.LOADBALANCER, res[0]['service_type'])
     
         def test_update_flavor(self):
             fl, flavor = self._create_flavor()
    
  • neutron/tests/unit/extensions/test_l3.py+73 0 modified
    @@ -28,8 +28,10 @@
     from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
     from neutron.api.rpc.handlers import l3_rpc
     from neutron.api.v2 import attributes
    +from neutron.callbacks import events
     from neutron.callbacks import exceptions
     from neutron.callbacks import registry
    +from neutron.callbacks import resources
     from neutron.common import constants as l3_constants
     from neutron.common import exceptions as n_exc
     from neutron import context
    @@ -2494,6 +2496,41 @@ def test_create_router_gateway_fails(self):
             routers = plugin.get_routers(ctx)
             self.assertEqual(0, len(routers))
     
    +    def test_update_subnet_gateway_for_external_net(self):
    +        """Test to make sure notification to routers occurs when the gateway
    +            ip address of a subnet of the external network is changed.
    +        """
    +        plugin = manager.NeutronManager.get_service_plugins()[
    +            service_constants.L3_ROUTER_NAT]
    +        if not hasattr(plugin, 'l3_rpc_notifier'):
    +            self.skipTest("Plugin does not support l3_rpc_notifier")
    +        # make sure the callback is registered.
    +        registry.subscribe(
    +            l3_db._notify_subnet_gateway_ip_update, resources.SUBNET_GATEWAY,
    +            events.AFTER_UPDATE)
    +        with mock.patch.object(plugin.l3_rpc_notifier,
    +                               'routers_updated') as chk_method:
    +            with self.network() as network:
    +                allocation_pools = [{'start': '120.0.0.3',
    +                                     'end': '120.0.0.254'}]
    +                with self.subnet(network=network,
    +                                 gateway_ip='120.0.0.1',
    +                                 allocation_pools=allocation_pools,
    +                                 cidr='120.0.0.0/24') as subnet:
    +                    kwargs = {
    +                        'device_owner': l3_constants.DEVICE_OWNER_ROUTER_GW,
    +                        'device_id': 'fake_device'}
    +                    with self.port(subnet=subnet, **kwargs):
    +                        data = {'subnet': {'gateway_ip': '120.0.0.2'}}
    +                        req = self.new_update_request('subnets', data,
    +                                                      subnet['subnet']['id'])
    +                        res = self.deserialize(self.fmt,
    +                                               req.get_response(self.api))
    +                        self.assertEqual(res['subnet']['gateway_ip'],
    +                                         data['subnet']['gateway_ip'])
    +                        chk_method.assert_called_with(mock.ANY,
    +                                                      ['fake_device'], None)
    +
     
     class L3AgentDbTestCaseBase(L3NatTestCaseMixin):
     
    @@ -2518,6 +2555,42 @@ def test_l3_agent_routers_query_interfaces(self):
                     wanted_subnetid = p['port']['fixed_ips'][0]['subnet_id']
                     self.assertEqual(wanted_subnetid, subnet_id)
     
    +    def test_l3_agent_sync_interfaces(self):
    +        """Test L3 interfaces query return valid result"""
    +        with self.router() as router1, self.router() as router2:
    +            with self.port() as port1, self.port() as port2:
    +                self._router_interface_action('add',
    +                                              router1['router']['id'],
    +                                              None,
    +                                              port1['port']['id'])
    +                self._router_interface_action('add',
    +                                              router2['router']['id'],
    +                                              None,
    +                                              port2['port']['id'])
    +                admin_ctx = context.get_admin_context()
    +                router1_id = router1['router']['id']
    +                router2_id = router2['router']['id']
    +
    +                # Verify that passing in router1 returns only router1's
    +                # interface
    +                ifaces = self.plugin._get_sync_interfaces(admin_ctx,
    +                                                          [router1_id])
    +                self.assertEqual(1, len(ifaces))
    +                self.assertEqual(router1_id,
    +                                 ifaces[0]['device_id'])
    +
    +                # Verify that passing in router1 and router2 returns both
    +                # interfaces
    +                ifaces = self.plugin._get_sync_interfaces(admin_ctx,
    +                                                          [router1_id,
    +                                                           router2_id])
    +                self.assertEqual(2, len(ifaces))
    +                device_list = [i['device_id'] for i in ifaces]
    +                self.assertIn(router1_id, device_list)
    +                self.assertIn(router2_id, device_list)
    +
    +                # Verify that passing in no routers returns an empty list
    +                ifaces = self.plugin._get_sync_interfaces(admin_ctx, None)
    +                self.assertEqual(0, len(ifaces))
    +
         def test_l3_agent_routers_query_ignore_interfaces_with_moreThanOneIp(self):
             with self.router() as r:
                 with self.subnet(cidr='9.0.1.0/24') as subnet:
    
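The `test_l3_agent_sync_interfaces` test above checks that the interface query returns only ports belonging to the requested routers, and an empty list when no routers are given. A hedged sketch of that filtering, with hypothetical port dicts:

```python
# Simplified model of the _get_sync_interfaces filtering the test
# verifies; not Neutron's actual query, which hits the database.

def get_sync_interfaces(ports, router_ids):
    """Return interface ports whose device_id is in router_ids."""
    if not router_ids:
        return []
    wanted = set(router_ids)
    return [p for p in ports if p['device_id'] in wanted]

# Two routers, one interface port each, as in the test.
sample_ports = [
    {'id': 'port1', 'device_id': 'router1'},
    {'id': 'port2', 'device_id': 'router2'},
]
```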
  • neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_db_api.py+9 9 modified
    @@ -65,26 +65,26 @@ def _create_pools(self, pools):
                 db_pools.append(db_pool)
             return db_pools
     
    -    def _validate_ips(self, pool, db_pool):
    -        self.assertEqual(pool[0], db_pool.first_ip)
    -        self.assertEqual(pool[1], db_pool.last_ip)
    +    def _validate_ips(self, pools, db_pool):
    +        self.assertTrue(
    +            any(pool == (db_pool.first_ip, db_pool.last_ip) for pool in pools))
     
         def test_create_pool(self):
             db_pools = self._create_pools([self.single_pool])
     
             ipam_pool = self.ctx.session.query(db_models.IpamAllocationPool).\
                 filter_by(ipam_subnet_id=self.ipam_subnet_id).first()
    -        self._validate_ips(self.single_pool, ipam_pool)
    +        self._validate_ips([self.single_pool], ipam_pool)
     
             range = self.ctx.session.query(db_models.IpamAvailabilityRange).\
                 filter_by(allocation_pool_id=db_pools[0].id).first()
    -        self._validate_ips(self.single_pool, range)
    +        self._validate_ips([self.single_pool], range)
     
         def _test_get_first_range(self, locking):
             self._create_pools(self.multi_pool)
             range = self.subnet_manager.get_first_range(self.ctx.session,
                                                         locking=locking)
    -        self._validate_ips(self.multi_pool[0], range)
    +        self._validate_ips(self.multi_pool, range)
     
         def test_get_first_range(self):
             self._test_get_first_range(False)
    @@ -110,20 +110,20 @@ def test_list_ranges_by_allocation_pool(self):
                 db_pools[0].id).all()
             self.assertEqual(1, len(db_ranges))
             self.assertEqual(db_models.IpamAvailabilityRange, type(db_ranges[0]))
    -        self._validate_ips(self.single_pool, db_ranges[0])
    +        self._validate_ips([self.single_pool], db_ranges[0])
     
         def test_create_range(self):
             self._create_pools([self.single_pool])
             pool = self.ctx.session.query(db_models.IpamAllocationPool).\
                 filter_by(ipam_subnet_id=self.ipam_subnet_id).first()
    -        self._validate_ips(self.single_pool, pool)
    +        self._validate_ips([self.single_pool], pool)
             allocation_pool_id = pool.id
     
             # delete the range
             db_range = self.subnet_manager.list_ranges_by_allocation_pool(
                 self.ctx.session,
                 pool.id).first()
    -        self._validate_ips(self.single_pool, db_range)
    +        self._validate_ips([self.single_pool], db_range)
             self.ctx.session.delete(db_range)
     
             # create a new range
    
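The `_validate_ips` change above relaxes the assertion from "this range equals this specific pool" to "this range matches any configured pool", which is needed once multiple pools exist. The core check reduces to:

```python
# Sketch of the relaxed membership check used by the updated
# _validate_ips; pools are (first_ip, last_ip) tuples.

def range_in_pools(pools, first_ip, last_ip):
    return any(pool == (first_ip, last_ip) for pool in pools)
```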
  • neutron/tests/unit/objects/qos/test_rule.py+44 0 modified
    @@ -10,12 +10,56 @@
     #    License for the specific language governing permissions and limitations
     #    under the License.
     
    +from neutron.common import constants
     from neutron.objects.qos import policy
     from neutron.objects.qos import rule
     from neutron.services.qos import qos_consts
    +from neutron.tests import base as neutron_test_base
     from neutron.tests.unit.objects import test_base
     from neutron.tests.unit import testlib_api
     
    +POLICY_ID_A = 'policy-id-a'
    +POLICY_ID_B = 'policy-id-b'
    +DEVICE_OWNER_COMPUTE = 'compute:None'
    +
    +
    +class QosRuleObjectTestCase(neutron_test_base.BaseTestCase):
    +
    +    def _test_should_apply_to_port(self, rule_policy_id, port_policy_id,
    +                                   device_owner, expected_result):
    +        test_rule = rule.QosRule(qos_policy_id=rule_policy_id)
    +        port = {qos_consts.QOS_POLICY_ID: port_policy_id,
    +                'device_owner': device_owner}
    +        self.assertEqual(expected_result, test_rule.should_apply_to_port(port))
    +
    +    def test_should_apply_to_port_with_network_port_and_net_policy(self):
    +        self._test_should_apply_to_port(
    +            rule_policy_id=POLICY_ID_B,
    +            port_policy_id=POLICY_ID_A,
    +            device_owner=constants.DEVICE_OWNER_ROUTER_INTF,
    +            expected_result=False)
    +
    +    def test_should_apply_to_port_with_network_port_and_port_policy(self):
    +        self._test_should_apply_to_port(
    +            rule_policy_id=POLICY_ID_A,
    +            port_policy_id=POLICY_ID_A,
    +            device_owner=constants.DEVICE_OWNER_ROUTER_INTF,
    +            expected_result=True)
    +
    +    def test_should_apply_to_port_with_compute_port_and_net_policy(self):
    +        self._test_should_apply_to_port(
    +            rule_policy_id=POLICY_ID_B,
    +            port_policy_id=POLICY_ID_A,
    +            device_owner=DEVICE_OWNER_COMPUTE,
    +            expected_result=True)
    +
    +    def test_should_apply_to_port_with_compute_port_and_port_policy(self):
    +        self._test_should_apply_to_port(
    +            rule_policy_id=POLICY_ID_A,
    +            port_policy_id=POLICY_ID_A,
    +            device_owner=DEVICE_OWNER_COMPUTE,
    +            expected_result=True)
    +
     
     class QosBandwidthLimitRuleObjectTestCase(test_base.BaseObjectIfaceTestCase):
     
    
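The four QoS rule test cases above pin down a dispatch behavior: a rule is skipped for a network-owned port (such as a router interface) unless its policy is bound directly to that port, while compute ports receive the rule either way. A minimal sketch of the decision logic those tests encode — the standalone function signature here is an illustrative assumption, not Neutron's actual `QosRule.should_apply_to_port` method:

```python
# Sketch of the dispatch rule the four test cases above encode.
# Assumption: a port is "network-owned" when its device_owner starts
# with the "network:" prefix (e.g. 'network:router_interface').

NETWORK_OWNER_PREFIX = 'network:'


def should_apply_to_port(rule_policy_id, port):
    """Return True if a rule from policy `rule_policy_id` applies to `port`.

    Network-owned ports only honor a QoS policy attached directly to the
    port itself; compute (and other non-network) ports honor rules coming
    from either a port-level or a network-level policy.
    """
    if port['device_owner'].startswith(NETWORK_OWNER_PREFIX):
        return rule_policy_id == port['qos_policy_id']
    return True


# Mirrors test_should_apply_to_port_with_network_port_and_net_policy:
# a network-level policy's rule does not apply to a router port.
router_port = {'qos_policy_id': 'policy-id-a',
               'device_owner': 'network:router_interface'}
should_apply_to_port('policy-id-b', router_port)  # False
```

The sketch reproduces exactly the expected-result matrix of the test class: `False` only for the network-port/network-policy combination, `True` for the other three.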
  • neutron/tests/unit/objects/test_base.py+5 2 modified
    @@ -20,6 +20,7 @@
     from oslo_versionedobjects import fields as obj_fields
     
     from neutron.common import exceptions as n_exc
    +from neutron.common import utils as common_utils
     from neutron import context
     from neutron.db import api as db_api
     from neutron.objects import base
    @@ -184,8 +185,10 @@ def test_get_objects_invalid_fields(self):
         def _validate_objects(self, expected, observed):
             self.assertTrue(all(self._is_test_class(obj) for obj in observed))
             self.assertEqual(
    -            sorted(expected),
    -            sorted(get_obj_db_fields(obj) for obj in observed))
    +            sorted(expected,
    +                   key=common_utils.safe_sort_key),
    +            sorted([get_obj_db_fields(obj) for obj in observed],
    +                   key=common_utils.safe_sort_key))
     
         def _check_equal(self, obj, db_obj):
             self.assertEqual(
    
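The `safe_sort_key` change above addresses a Python 3 pitfall: `expected` and `observed` are lists of per-object field dicts, and on Python 3 `sorted()` raises `TypeError` when it has to compare two dicts, since dicts are unorderable. A hedged sketch of the kind of key function such a helper can supply — an illustrative stand-in, not Neutron's actual `neutron.common.utils.safe_sort_key` implementation:

```python
# Illustrative stand-in for a "safe" sort key: map unorderable mappings
# to a canonically ordered tuple-of-items list so that lists of dicts
# sort deterministically on Python 3.


def safe_sort_key(value):
    """Return a sortable proxy for `value`; dicts become sorted item lists."""
    if isinstance(value, dict):
        # Sorted (key, value) pairs compare element-wise, giving a
        # stable, deterministic ordering for same-shaped dicts.
        return sorted(value.items())
    return value


rows = [{'name': 'b', 'id': 2}, {'name': 'a', 'id': 1}]
ordered = sorted(rows, key=safe_sort_key)
# ('id', 1) < ('id', 2), so the second dict sorts first.
```

Note the sketch assumes a homogeneous list (all dicts, or none); mixing dicts with non-dict values would still be unorderable, which is acceptable for the test assertion above where both sides hold the same field-dict shape.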
  • neutron/tests/unit/plugins/cisco/__init__.py+0 0 removed
  • neutron/tests/unit/plugins/cisco/n1kv/fake_client.py+0 126 removed
    @@ -1,126 +0,0 @@
    -# Copyright 2014 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from neutron.plugins.cisco.common import cisco_exceptions as c_exc
    -from neutron.plugins.cisco.n1kv import n1kv_client
    -
    -_resource_metadata = {'port': ['id', 'macAddress', 'ipAddress', 'subnetId'],
    -                      'vmnetwork': ['name', 'networkSegmentId',
    -                                    'networkSegment', 'portProfile',
    -                                    'portProfileId', 'tenantId',
    -                                    'portId', 'macAddress',
    -                                    'ipAddress', 'subnetId'],
    -                      'subnet': ['addressRangeStart', 'addressRangeEnd',
    -                                 'ipAddressSubnet', 'description', 'gateway',
    -                                 'dhcp', 'dnsServersList', 'networkAddress',
    -                                 'netSegmentName', 'id', 'tenantId']}
    -
    -
    -class TestClient(n1kv_client.Client):
    -
    -    def __init__(self, **kwargs):
    -        self.broken = False
    -        self.inject_params = False
    -        self.total_profiles = 2
    -        super(TestClient, self).__init__()
    -
    -    def _get_total_profiles(self):
    -        return self.total_profiles
    -
    -    def _do_request(self, method, action, body=None, headers=None):
    -        if self.broken:
    -            raise c_exc.VSMError(reason='VSM:Internal Server Error')
    -        if self.inject_params and body:
    -            body['invalidKey'] = 'catchMeIfYouCan'
    -        if method == 'POST':
    -            return _validate_resource(action, body)
    -        elif method == 'GET':
    -            if 'virtual-port-profile' in action:
    -                return _policy_profile_generator(
    -                    self._get_total_profiles())
    -            else:
    -                raise c_exc.VSMError(reason='VSM:Internal Server Error')
    -
    -
    -class TestClientInvalidRequest(TestClient):
    -
    -    def __init__(self, **kwargs):
    -        super(TestClientInvalidRequest, self).__init__()
    -        self.inject_params = True
    -
    -
    -class TestClientInvalidResponse(TestClient):
    -
    -    def __init__(self, **kwargs):
    -        super(TestClientInvalidResponse, self).__init__()
    -        self.broken = True
    -
    -
    -def _validate_resource(action, body=None):
    -    if body:
    -        body_set = set(body.keys())
    -    else:
    -        return
    -    if 'vm-network' in action and 'port' not in action:
    -        vmnetwork_set = set(_resource_metadata['vmnetwork'])
    -        if body_set - vmnetwork_set:
    -            raise c_exc.VSMError(reason='Invalid Request')
    -    elif 'port' in action:
    -        port_set = set(_resource_metadata['port'])
    -        if body_set - port_set:
    -            raise c_exc.VSMError(reason='Invalid Request')
    -    elif 'subnet' in action:
    -        subnet_set = set(_resource_metadata['subnet'])
    -        if body_set - subnet_set:
    -            raise c_exc.VSMError(reason='Invalid Request')
    -    else:
    -        return
    -
    -
    -def _policy_profile_generator(total_profiles):
    -    """
    -    Generate policy profile response and return a dictionary.
    -
    -    :param total_profiles: integer representing total number of profiles to
    -                           return
    -    """
    -    profiles = {}
    -    for num in range(1, total_profiles + 1):
    -        name = "pp-%s" % num
    -        profile_id = "00000000-0000-0000-0000-00000000000%s" % num
    -        profiles[name] = {"properties": {"name": name, "id": profile_id}}
    -    return profiles
    -
    -
    -def _policy_profile_generator_xml(total_profiles):
    -    """
    -    Generate policy profile response in XML format.
    -
    -    :param total_profiles: integer representing total number of profiles to
    -                           return
    -    """
    -    xml = ["""<?xml version="1.0" encoding="utf-8"?>
    -           <set name="virtual_port_profile_set">"""]
    -    template = (
    -        '<instance name="%(num)d"'
    -        ' url="/api/n1k/virtual-port-profile/%(num)s">'
    -        '<properties>'
    -        '<id>00000000-0000-0000-0000-00000000000%(num)s</id>'
    -        '<name>pp-%(num)s</name>'
    -        '</properties>'
    -        '</instance>'
    -    )
    -    xml.extend(template % {'num': n} for n in range(1, total_profiles + 1))
    -    xml.append("</set>")
    -    return ''.join(xml)
    
  • neutron/tests/unit/plugins/cisco/n1kv/__init__.py+0 0 removed
  • neutron/tests/unit/plugins/cisco/n1kv/test_n1kv_db.py+0 881 removed
    @@ -1,881 +0,0 @@
    -# Copyright 2013 Cisco Systems, Inc.
    -#
    -#    Licensed under the Apache License, Version 2.0 (the "License"); you may
    -#    not use this file except in compliance with the License. You may obtain
    -#    a copy of the License at
    -#
    -#         http://www.apache.org/licenses/LICENSE-2.0
    -#
    -#    Unless required by applicable law or agreed to in writing, software
    -#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    -#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    -#    License for the specific language governing permissions and limitations
    -#    under the License.
    -
    -from six import moves
    -from sqlalchemy.orm import exc as s_exc
    -from testtools import matchers
    -
    -from neutron.common import exceptions as n_exc
    -from neutron import context
    -from neutron.db import api as db
    -from neutron.db import common_db_mixin
    -from neutron.plugins.cisco.common import cisco_constants as c_const
    -from neutron.plugins.cisco.common import cisco_exceptions as c_exc
    -from neutron.plugins.cisco.db import n1kv_db_v2
    -from neutron.plugins.cisco.db import n1kv_models_v2
    -from neutron.tests.unit.db import test_db_base_plugin_v2 as test_plugin
    -from neutron.tests.unit import testlib_api
    -
    -
    -PHYS_NET = 'physnet1'
    -PHYS_NET_2 = 'physnet2'
    -VLAN_MIN = 10
    -VLAN_MAX = 19
    -VXLAN_MIN = 5000
    -VXLAN_MAX = 5009
    -SEGMENT_RANGE = '200-220'
    -SEGMENT_RANGE_MIN_OVERLAP = '210-230'
    -SEGMENT_RANGE_MAX_OVERLAP = '190-209'
    -SEGMENT_RANGE_OVERLAP = '190-230'
    -TEST_NETWORK_ID = 'abcdefghijklmnopqrstuvwxyz'
    -TEST_NETWORK_ID2 = 'abcdefghijklmnopqrstuvwxy2'
    -TEST_NETWORK_ID3 = 'abcdefghijklmnopqrstuvwxy3'
    -TEST_NETWORK_PROFILE = {'name': 'test_profile',
    -                        'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                        'physical_network': 'physnet1',
    -                        'segment_range': '10-19'}
    -TEST_NETWORK_PROFILE_2 = {'name': 'test_profile_2',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'physnet1',
    -                          'segment_range': SEGMENT_RANGE}
    -TEST_NETWORK_PROFILE_VXLAN = {'name': 'test_profile',
    -                              'segment_type': c_const.NETWORK_TYPE_OVERLAY,
    -                              'sub_type': c_const.NETWORK_SUBTYPE_NATIVE_VXLAN,
    -                              'segment_range': '5000-5009',
    -                              'multicast_ip_range': '239.0.0.70-239.0.0.80'}
    -TEST_POLICY_PROFILE = {'id': '4a417990-76fb-11e2-bcfd-0800200c9a66',
    -                       'name': 'test_policy_profile'}
    -TEST_NETWORK_PROFILE_MULTI_SEGMENT = {'name': 'test_profile',
    -                                      'segment_type':
    -                                      c_const.NETWORK_TYPE_MULTI_SEGMENT}
    -TEST_NETWORK_PROFILE_VLAN_TRUNK = {'name': 'test_profile',
    -                                   'segment_type': c_const.NETWORK_TYPE_TRUNK,
    -                                   'sub_type': c_const.NETWORK_TYPE_VLAN}
    -TEST_NETWORK_PROFILE_VXLAN_TRUNK = {'name': 'test_profile',
    -                                    'segment_type': c_const.NETWORK_TYPE_TRUNK,
    -                                    'sub_type': c_const.NETWORK_TYPE_OVERLAY}
    -
    -
    -def _create_test_network_profile_if_not_there(session,
    -                                              profile=TEST_NETWORK_PROFILE):
    -    try:
    -        _profile = session.query(n1kv_models_v2.NetworkProfile).filter_by(
    -            name=profile['name']).one()
    -    except s_exc.NoResultFound:
    -        _profile = n1kv_db_v2.create_network_profile(session, profile)
    -    return _profile
    -
    -
    -def _create_test_policy_profile_if_not_there(session,
    -                                             profile=TEST_POLICY_PROFILE):
    -    try:
    -        _profile = session.query(n1kv_models_v2.PolicyProfile).filter_by(
    -            name=profile['name']).one()
    -    except s_exc.NoResultFound:
    -        _profile = n1kv_db_v2.create_policy_profile(profile)
    -    return _profile
    -
    -
    -class VlanAllocationsTest(testlib_api.SqlTestCase):
    -
    -    def setUp(self):
    -        super(VlanAllocationsTest, self).setUp()
    -        self.session = db.get_session()
    -        self.net_p = _create_test_network_profile_if_not_there(self.session)
    -        n1kv_db_v2.sync_vlan_allocations(self.session, self.net_p)
    -
    -    def test_sync_vlan_allocations_outside_segment_range(self):
    -        self.assertRaises(c_exc.VlanIDNotFound,
    -                          n1kv_db_v2.get_vlan_allocation,
    -                          self.session,
    -                          PHYS_NET,
    -                          VLAN_MIN - 1)
    -        self.assertRaises(c_exc.VlanIDNotFound,
    -                          n1kv_db_v2.get_vlan_allocation,
    -                          self.session,
    -                          PHYS_NET,
    -                          VLAN_MAX + 1)
    -        self.assertRaises(c_exc.VlanIDNotFound,
    -                          n1kv_db_v2.get_vlan_allocation,
    -                          self.session,
    -                          PHYS_NET_2,
    -                          VLAN_MIN + 20)
    -        self.assertRaises(c_exc.VlanIDNotFound,
    -                          n1kv_db_v2.get_vlan_allocation,
    -                          self.session,
    -                          PHYS_NET_2,
    -                          VLAN_MIN + 20)
    -        self.assertRaises(c_exc.VlanIDNotFound,
    -                          n1kv_db_v2.get_vlan_allocation,
    -                          self.session,
    -                          PHYS_NET_2,
    -                          VLAN_MAX + 20)
    -
    -    def test_sync_vlan_allocations_unallocated_vlans(self):
    -        self.assertFalse(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                        PHYS_NET,
    -                                                        VLAN_MIN).allocated)
    -        self.assertFalse(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                        PHYS_NET,
    -                                                        VLAN_MIN + 1).
    -                         allocated)
    -        self.assertFalse(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                        PHYS_NET,
    -                                                        VLAN_MAX - 1).
    -                         allocated)
    -        self.assertFalse(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                        PHYS_NET,
    -                                                        VLAN_MAX).allocated)
    -
    -    def test_vlan_pool(self):
    -        vlan_ids = set()
    -        for x in moves.range(VLAN_MIN, VLAN_MAX + 1):
    -            (physical_network, seg_type,
    -             vlan_id, m_ip) = n1kv_db_v2.reserve_vlan(self.session, self.net_p)
    -            self.assertEqual(physical_network, PHYS_NET)
    -            self.assertThat(vlan_id, matchers.GreaterThan(VLAN_MIN - 1))
    -            self.assertThat(vlan_id, matchers.LessThan(VLAN_MAX + 1))
    -            vlan_ids.add(vlan_id)
    -
    -        self.assertRaises(n_exc.NoNetworkAvailable,
    -                          n1kv_db_v2.reserve_vlan,
    -                          self.session,
    -                          self.net_p)
    -
    -        n1kv_db_v2.release_vlan(self.session, PHYS_NET, vlan_ids.pop())
    -        physical_network, seg_type, vlan_id, m_ip = (n1kv_db_v2.reserve_vlan(
    -                                                     self.session, self.net_p))
    -        self.assertEqual(physical_network, PHYS_NET)
    -        self.assertThat(vlan_id, matchers.GreaterThan(VLAN_MIN - 1))
    -        self.assertThat(vlan_id, matchers.LessThan(VLAN_MAX + 1))
    -        vlan_ids.add(vlan_id)
    -
    -        for vlan_id in vlan_ids:
    -            n1kv_db_v2.release_vlan(self.session, PHYS_NET, vlan_id)
    -
    -    def test_specific_vlan_inside_pool(self):
    -        vlan_id = VLAN_MIN + 5
    -        self.assertFalse(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                        PHYS_NET,
    -                                                        vlan_id).allocated)
    -        n1kv_db_v2.reserve_specific_vlan(self.session, PHYS_NET, vlan_id)
    -        self.assertTrue(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                       PHYS_NET,
    -                                                       vlan_id).allocated)
    -
    -        self.assertRaises(n_exc.VlanIdInUse,
    -                          n1kv_db_v2.reserve_specific_vlan,
    -                          self.session,
    -                          PHYS_NET,
    -                          vlan_id)
    -
    -        n1kv_db_v2.release_vlan(self.session, PHYS_NET, vlan_id)
    -        self.assertFalse(n1kv_db_v2.get_vlan_allocation(self.session,
    -                                                        PHYS_NET,
    -                                                        vlan_id).allocated)
    -
    -    def test_specific_vlan_outside_pool(self):
    -        vlan_id = VLAN_MAX + 5
    -        self.assertRaises(c_exc.VlanIDNotFound,
    -                          n1kv_db_v2.get_vlan_allocation,
    -                          self.session,
    -                          PHYS_NET,
    -                          vlan_id)
    -        self.assertRaises(c_exc.VlanIDOutsidePool,
    -                          n1kv_db_v2.reserve_specific_vlan,
    -                          self.session,
    -                          PHYS_NET,
    -                          vlan_id)
    -
    -
    -class VxlanAllocationsTest(testlib_api.SqlTestCase,
    -                           n1kv_db_v2.NetworkProfile_db_mixin):
    -
    -    def setUp(self):
    -        super(VxlanAllocationsTest, self).setUp()
    -        self.session = db.get_session()
    -        self.net_p = _create_test_network_profile_if_not_there(
    -            self.session, TEST_NETWORK_PROFILE_VXLAN)
    -        n1kv_db_v2.sync_vxlan_allocations(self.session, self.net_p)
    -
    -    def test_sync_vxlan_allocations_outside_segment_range(self):
    -        self.assertRaises(c_exc.VxlanIDNotFound,
    -                          n1kv_db_v2.get_vxlan_allocation,
    -                          self.session,
    -                          VXLAN_MIN - 1)
    -        self.assertRaises(c_exc.VxlanIDNotFound,
    -                          n1kv_db_v2.get_vxlan_allocation,
    -                          self.session,
    -                          VXLAN_MAX + 1)
    -
    -    def test_sync_vxlan_allocations_unallocated_vxlans(self):
    -        self.assertFalse(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                         VXLAN_MIN).allocated)
    -        self.assertFalse(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                         VXLAN_MIN + 1).
    -                         allocated)
    -        self.assertFalse(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                         VXLAN_MAX - 1).
    -                         allocated)
    -        self.assertFalse(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                         VXLAN_MAX).allocated)
    -
    -    def test_vxlan_pool(self):
    -        vxlan_ids = set()
    -        for x in moves.range(VXLAN_MIN, VXLAN_MAX + 1):
    -            vxlan = n1kv_db_v2.reserve_vxlan(self.session, self.net_p)
    -            vxlan_id = vxlan[2]
    -            self.assertThat(vxlan_id, matchers.GreaterThan(VXLAN_MIN - 1))
    -            self.assertThat(vxlan_id, matchers.LessThan(VXLAN_MAX + 1))
    -            vxlan_ids.add(vxlan_id)
    -
    -        self.assertRaises(n_exc.NoNetworkAvailable,
    -                          n1kv_db_v2.reserve_vxlan,
    -                          self.session,
    -                          self.net_p)
    -        n1kv_db_v2.release_vxlan(self.session, vxlan_ids.pop())
    -        vxlan = n1kv_db_v2.reserve_vxlan(self.session, self.net_p)
    -        vxlan_id = vxlan[2]
    -        self.assertThat(vxlan_id, matchers.GreaterThan(VXLAN_MIN - 1))
    -        self.assertThat(vxlan_id, matchers.LessThan(VXLAN_MAX + 1))
    -        vxlan_ids.add(vxlan_id)
    -
    -        for vxlan_id in vxlan_ids:
    -            n1kv_db_v2.release_vxlan(self.session, vxlan_id)
    -        n1kv_db_v2.delete_network_profile(self.session, self.net_p.id)
    -
    -    def test_specific_vxlan_inside_pool(self):
    -        vxlan_id = VXLAN_MIN + 5
    -        self.assertFalse(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                         vxlan_id).allocated)
    -        n1kv_db_v2.reserve_specific_vxlan(self.session, vxlan_id)
    -        self.assertTrue(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                        vxlan_id).allocated)
    -
    -        self.assertRaises(c_exc.VxlanIDInUse,
    -                          n1kv_db_v2.reserve_specific_vxlan,
    -                          self.session,
    -                          vxlan_id)
    -
    -        n1kv_db_v2.release_vxlan(self.session, vxlan_id)
    -        self.assertFalse(n1kv_db_v2.get_vxlan_allocation(self.session,
    -                                                         vxlan_id).allocated)
    -
    -    def test_specific_vxlan_outside_pool(self):
    -        vxlan_id = VXLAN_MAX + 5
    -        self.assertRaises(c_exc.VxlanIDNotFound,
    -                          n1kv_db_v2.get_vxlan_allocation,
    -                          self.session,
    -                          vxlan_id)
    -        self.assertRaises(c_exc.VxlanIDOutsidePool,
    -                          n1kv_db_v2.reserve_specific_vxlan,
    -                          self.session,
    -                          vxlan_id)
    -
    -
    -class NetworkBindingsTest(test_plugin.NeutronDbPluginV2TestCase):
    -
    -    def setUp(self):
    -        super(NetworkBindingsTest, self).setUp()
    -        self.session = db.get_session()
    -
    -    def test_add_network_binding(self):
    -        with self.network() as network:
    -            TEST_NETWORK_ID = network['network']['id']
    -
    -            self.assertRaises(c_exc.NetworkBindingNotFound,
    -                              n1kv_db_v2.get_network_binding,
    -                              self.session,
    -                              TEST_NETWORK_ID)
    -
    -            p = _create_test_network_profile_if_not_there(self.session)
    -            n1kv_db_v2.add_network_binding(
    -                self.session, TEST_NETWORK_ID, c_const.NETWORK_TYPE_VLAN,
    -                PHYS_NET, 1234, '0.0.0.0', p.id, None)
    -            binding = n1kv_db_v2.get_network_binding(
    -                self.session, TEST_NETWORK_ID)
    -            self.assertIsNotNone(binding)
    -            self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -            self.assertEqual(binding.network_type, c_const.NETWORK_TYPE_VLAN)
    -            self.assertEqual(binding.physical_network, PHYS_NET)
    -            self.assertEqual(binding.segmentation_id, 1234)
    -
    -    def test_create_multi_segment_network(self):
    -        with self.network() as network:
    -            TEST_NETWORK_ID = network['network']['id']
    -
    -            self.assertRaises(c_exc.NetworkBindingNotFound,
    -                              n1kv_db_v2.get_network_binding,
    -                              self.session,
    -                              TEST_NETWORK_ID)
    -
    -            p = _create_test_network_profile_if_not_there(
    -                self.session,
    -                TEST_NETWORK_PROFILE_MULTI_SEGMENT)
    -            n1kv_db_v2.add_network_binding(
    -                self.session, TEST_NETWORK_ID,
    -                c_const.NETWORK_TYPE_MULTI_SEGMENT,
    -                None, 0, '0.0.0.0', p.id, None)
    -            binding = n1kv_db_v2.get_network_binding(
    -                self.session, TEST_NETWORK_ID)
    -            self.assertIsNotNone(binding)
    -            self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -            self.assertEqual(binding.network_type,
    -                             c_const.NETWORK_TYPE_MULTI_SEGMENT)
    -            self.assertIsNone(binding.physical_network)
    -            self.assertEqual(binding.segmentation_id, 0)
    -
    -    def test_add_multi_segment_binding(self):
    -        with self.network() as network:
    -            TEST_NETWORK_ID = network['network']['id']
    -
    -            self.assertRaises(c_exc.NetworkBindingNotFound,
    -                              n1kv_db_v2.get_network_binding,
    -                              self.session,
    -                              TEST_NETWORK_ID)
    -
    -            p = _create_test_network_profile_if_not_there(
    -                self.session,
    -                TEST_NETWORK_PROFILE_MULTI_SEGMENT)
    -            n1kv_db_v2.add_network_binding(
    -                self.session, TEST_NETWORK_ID,
    -                c_const.NETWORK_TYPE_MULTI_SEGMENT,
    -                None, 0, '0.0.0.0', p.id,
    -                [(TEST_NETWORK_ID2, TEST_NETWORK_ID3)])
    -            binding = n1kv_db_v2.get_network_binding(
    -                self.session, TEST_NETWORK_ID)
    -            self.assertIsNotNone(binding)
    -            self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -            self.assertEqual(binding.network_type,
    -                             c_const.NETWORK_TYPE_MULTI_SEGMENT)
    -            self.assertIsNone(binding.physical_network)
    -            self.assertEqual(binding.segmentation_id, 0)
    -            ms_binding = (n1kv_db_v2.get_multi_segment_network_binding(
    -                          self.session, TEST_NETWORK_ID,
    -                          (TEST_NETWORK_ID2, TEST_NETWORK_ID3)))
    -            self.assertIsNotNone(ms_binding)
    -            self.assertEqual(ms_binding.multi_segment_id, TEST_NETWORK_ID)
    -            self.assertEqual(ms_binding.segment1_id, TEST_NETWORK_ID2)
    -            self.assertEqual(ms_binding.segment2_id, TEST_NETWORK_ID3)
    -            ms_members = (n1kv_db_v2.get_multi_segment_members(
    -                          self.session, TEST_NETWORK_ID))
    -            self.assertEqual(ms_members,
    -                             [(TEST_NETWORK_ID2, TEST_NETWORK_ID3)])
    -            self.assertTrue(n1kv_db_v2.is_multi_segment_member(
    -                            self.session, TEST_NETWORK_ID2))
    -            self.assertTrue(n1kv_db_v2.is_multi_segment_member(
    -                            self.session, TEST_NETWORK_ID3))
    -            n1kv_db_v2.del_multi_segment_binding(
    -                self.session, TEST_NETWORK_ID,
    -                [(TEST_NETWORK_ID2, TEST_NETWORK_ID3)])
    -            ms_members = (n1kv_db_v2.get_multi_segment_members(
    -                          self.session, TEST_NETWORK_ID))
    -            self.assertEqual(ms_members, [])
    -
    -    def test_create_vlan_trunk_network(self):
    -        with self.network() as network:
    -            TEST_NETWORK_ID = network['network']['id']
    -
    -            self.assertRaises(c_exc.NetworkBindingNotFound,
    -                              n1kv_db_v2.get_network_binding,
    -                              self.session,
    -                              TEST_NETWORK_ID)
    -
    -            p = _create_test_network_profile_if_not_there(
    -                self.session,
    -                TEST_NETWORK_PROFILE_VLAN_TRUNK)
    -            n1kv_db_v2.add_network_binding(
    -                self.session, TEST_NETWORK_ID, c_const.NETWORK_TYPE_TRUNK,
    -                None, 0, '0.0.0.0', p.id, None)
    -            binding = n1kv_db_v2.get_network_binding(
    -                self.session, TEST_NETWORK_ID)
    -            self.assertIsNotNone(binding)
    -            self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -            self.assertEqual(binding.network_type, c_const.NETWORK_TYPE_TRUNK)
    -            self.assertIsNone(binding.physical_network)
    -            self.assertEqual(binding.segmentation_id, 0)
    -
    -    def test_create_vxlan_trunk_network(self):
    -        with self.network() as network:
    -            TEST_NETWORK_ID = network['network']['id']
    -
    -            self.assertRaises(c_exc.NetworkBindingNotFound,
    -                              n1kv_db_v2.get_network_binding,
    -                              self.session,
    -                              TEST_NETWORK_ID)
    -
    -            p = _create_test_network_profile_if_not_there(
    -                self.session,
    -                TEST_NETWORK_PROFILE_VXLAN_TRUNK)
    -            n1kv_db_v2.add_network_binding(
    -                self.session, TEST_NETWORK_ID, c_const.NETWORK_TYPE_TRUNK,
    -                None, 0, '0.0.0.0', p.id, None)
    -            binding = n1kv_db_v2.get_network_binding(
    -                self.session, TEST_NETWORK_ID)
    -            self.assertIsNotNone(binding)
    -            self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -            self.assertEqual(binding.network_type, c_const.NETWORK_TYPE_TRUNK)
    -            self.assertIsNone(binding.physical_network)
    -            self.assertEqual(binding.segmentation_id, 0)
    -
    -    def test_add_vlan_trunk_binding(self):
    -        with self.network() as network1:
    -            with self.network() as network2:
    -                TEST_NETWORK_ID = network1['network']['id']
    -
    -                self.assertRaises(c_exc.NetworkBindingNotFound,
    -                                  n1kv_db_v2.get_network_binding,
    -                                  self.session,
    -                                  TEST_NETWORK_ID)
    -                TEST_NETWORK_ID2 = network2['network']['id']
    -                self.assertRaises(c_exc.NetworkBindingNotFound,
    -                                  n1kv_db_v2.get_network_binding,
    -                                  self.session,
    -                                  TEST_NETWORK_ID2)
    -                p_v = _create_test_network_profile_if_not_there(self.session)
    -                n1kv_db_v2.add_network_binding(
    -                    self.session, TEST_NETWORK_ID2, c_const.NETWORK_TYPE_VLAN,
    -                    PHYS_NET, 1234, '0.0.0.0', p_v.id, None)
    -                p = _create_test_network_profile_if_not_there(
    -                    self.session,
    -                    TEST_NETWORK_PROFILE_VLAN_TRUNK)
    -                n1kv_db_v2.add_network_binding(
    -                    self.session, TEST_NETWORK_ID, c_const.NETWORK_TYPE_TRUNK,
    -                    None, 0, '0.0.0.0', p.id, [(TEST_NETWORK_ID2, 0)])
    -                binding = n1kv_db_v2.get_network_binding(
    -                    self.session, TEST_NETWORK_ID)
    -                self.assertIsNotNone(binding)
    -                self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -                self.assertEqual(binding.network_type,
    -                                 c_const.NETWORK_TYPE_TRUNK)
    -                self.assertEqual(binding.physical_network, PHYS_NET)
    -                self.assertEqual(binding.segmentation_id, 0)
    -                t_binding = (n1kv_db_v2.get_trunk_network_binding(
    -                             self.session, TEST_NETWORK_ID,
    -                             (TEST_NETWORK_ID2, 0)))
    -                self.assertIsNotNone(t_binding)
    -                self.assertEqual(t_binding.trunk_segment_id, TEST_NETWORK_ID)
    -                self.assertEqual(t_binding.segment_id, TEST_NETWORK_ID2)
    -                self.assertEqual(t_binding.dot1qtag, '0')
    -                t_members = (n1kv_db_v2.get_trunk_members(
    -                    self.session, TEST_NETWORK_ID))
    -                self.assertEqual(t_members,
    -                                 [(TEST_NETWORK_ID2, '0')])
    -                self.assertTrue(n1kv_db_v2.is_trunk_member(
    -                                self.session, TEST_NETWORK_ID2))
    -                n1kv_db_v2.del_trunk_segment_binding(
    -                    self.session, TEST_NETWORK_ID,
    -                    [(TEST_NETWORK_ID2, '0')])
    -                t_members = (n1kv_db_v2.get_multi_segment_members(
    -                    self.session, TEST_NETWORK_ID))
    -                self.assertEqual(t_members, [])
    -
    -    def test_add_vxlan_trunk_binding(self):
    -        with self.network() as network1:
    -            with self.network() as network2:
    -                TEST_NETWORK_ID = network1['network']['id']
    -
    -                self.assertRaises(c_exc.NetworkBindingNotFound,
    -                                  n1kv_db_v2.get_network_binding,
    -                                  self.session,
    -                                  TEST_NETWORK_ID)
    -                TEST_NETWORK_ID2 = network2['network']['id']
    -                self.assertRaises(c_exc.NetworkBindingNotFound,
    -                                  n1kv_db_v2.get_network_binding,
    -                                  self.session,
    -                                  TEST_NETWORK_ID2)
    -                p_v = _create_test_network_profile_if_not_there(
    -                    self.session, TEST_NETWORK_PROFILE_VXLAN_TRUNK)
    -                n1kv_db_v2.add_network_binding(
    -                    self.session, TEST_NETWORK_ID2,
    -                    c_const.NETWORK_TYPE_OVERLAY,
    -                    None, 5100, '224.10.10.10', p_v.id, None)
    -                p = _create_test_network_profile_if_not_there(
    -                    self.session,
    -                    TEST_NETWORK_PROFILE_VXLAN_TRUNK)
    -                n1kv_db_v2.add_network_binding(
    -                    self.session, TEST_NETWORK_ID, c_const.NETWORK_TYPE_TRUNK,
    -                    None, 0, '0.0.0.0', p.id,
    -                    [(TEST_NETWORK_ID2, 5)])
    -                binding = n1kv_db_v2.get_network_binding(
    -                    self.session, TEST_NETWORK_ID)
    -                self.assertIsNotNone(binding)
    -                self.assertEqual(binding.network_id, TEST_NETWORK_ID)
    -                self.assertEqual(binding.network_type,
    -                                 c_const.NETWORK_TYPE_TRUNK)
    -                self.assertIsNone(binding.physical_network)
    -                self.assertEqual(binding.segmentation_id, 0)
    -                t_binding = (n1kv_db_v2.get_trunk_network_binding(
    -                             self.session, TEST_NETWORK_ID,
    -                             (TEST_NETWORK_ID2, '5')))
    -                self.assertIsNotNone(t_binding)
    -                self.assertEqual(t_binding.trunk_segment_id, TEST_NETWORK_ID)
    -                self.assertEqual(t_binding.segment_id, TEST_NETWORK_ID2)
    -                self.assertEqual(t_binding.dot1qtag, '5')
    -                t_members = (n1kv_db_v2.get_trunk_members(
    -                    self.session, TEST_NETWORK_ID))
    -                self.assertEqual(t_members,
    -                                 [(TEST_NETWORK_ID2, '5')])
    -                self.assertTrue(n1kv_db_v2.is_trunk_member(
    -                    self.session, TEST_NETWORK_ID2))
    -                n1kv_db_v2.del_trunk_segment_binding(
    -                    self.session, TEST_NETWORK_ID,
    -                    [(TEST_NETWORK_ID2, '5')])
    -                t_members = (n1kv_db_v2.get_multi_segment_members(
    -                    self.session, TEST_NETWORK_ID))
    -                self.assertEqual(t_members, [])
    -
    -
    -class NetworkProfileTests(testlib_api.SqlTestCase,
    -                          n1kv_db_v2.NetworkProfile_db_mixin):
    -
    -    def setUp(self):
    -        super(NetworkProfileTests, self).setUp()
    -        self.session = db.get_session()
    -
    -    def test_create_network_profile(self):
    -        _db_profile = n1kv_db_v2.create_network_profile(self.session,
    -                                                        TEST_NETWORK_PROFILE)
    -        self.assertIsNotNone(_db_profile)
    -        db_profile = (self.session.query(n1kv_models_v2.NetworkProfile).
    -                      filter_by(name=TEST_NETWORK_PROFILE['name']).one())
    -        self.assertIsNotNone(db_profile)
    -        self.assertEqual(_db_profile.id, db_profile.id)
    -        self.assertEqual(_db_profile.name, db_profile.name)
    -        self.assertEqual(_db_profile.segment_type, db_profile.segment_type)
    -        self.assertEqual(_db_profile.segment_range, db_profile.segment_range)
    -        self.assertEqual(_db_profile.multicast_ip_index,
    -                         db_profile.multicast_ip_index)
    -        self.assertEqual(_db_profile.multicast_ip_range,
    -                         db_profile.multicast_ip_range)
    -        n1kv_db_v2.delete_network_profile(self.session, _db_profile.id)
    -
    -    def test_create_multi_segment_network_profile(self):
    -        _db_profile = (n1kv_db_v2.create_network_profile(
    -                       self.session, TEST_NETWORK_PROFILE_MULTI_SEGMENT))
    -        self.assertIsNotNone(_db_profile)
    -        db_profile = (
    -            self.session.query(
    -                n1kv_models_v2.NetworkProfile).filter_by(
    -                    name=TEST_NETWORK_PROFILE_MULTI_SEGMENT['name'])
    -            .one())
    -        self.assertIsNotNone(db_profile)
    -        self.assertEqual(_db_profile.id, db_profile.id)
    -        self.assertEqual(_db_profile.name, db_profile.name)
    -        self.assertEqual(_db_profile.segment_type, db_profile.segment_type)
    -        self.assertEqual(_db_profile.segment_range, db_profile.segment_range)
    -        self.assertEqual(_db_profile.multicast_ip_index,
    -                         db_profile.multicast_ip_index)
    -        self.assertEqual(_db_profile.multicast_ip_range,
    -                         db_profile.multicast_ip_range)
    -        n1kv_db_v2.delete_network_profile(self.session, _db_profile.id)
    -
    -    def test_create_vlan_trunk_network_profile(self):
    -        _db_profile = (n1kv_db_v2.create_network_profile(
    -                       self.session, TEST_NETWORK_PROFILE_VLAN_TRUNK))
    -        self.assertIsNotNone(_db_profile)
    -        db_profile = (self.session.query(n1kv_models_v2.NetworkProfile).
    -                      filter_by(name=TEST_NETWORK_PROFILE_VLAN_TRUNK['name']).
    -                      one())
    -        self.assertIsNotNone(db_profile)
    -        self.assertEqual(_db_profile.id, db_profile.id)
    -        self.assertEqual(_db_profile.name, db_profile.name)
    -        self.assertEqual(_db_profile.segment_type, db_profile.segment_type)
    -        self.assertEqual(_db_profile.segment_range, db_profile.segment_range)
    -        self.assertEqual(_db_profile.multicast_ip_index,
    -                         db_profile.multicast_ip_index)
    -        self.assertEqual(_db_profile.multicast_ip_range,
    -                         db_profile.multicast_ip_range)
    -        self.assertEqual(_db_profile.sub_type, db_profile.sub_type)
    -        n1kv_db_v2.delete_network_profile(self.session, _db_profile.id)
    -
    -    def test_create_vxlan_trunk_network_profile(self):
    -        _db_profile = (n1kv_db_v2.create_network_profile(
    -                       self.session, TEST_NETWORK_PROFILE_VXLAN_TRUNK))
    -        self.assertIsNotNone(_db_profile)
    -        db_profile = (self.session.query(n1kv_models_v2.NetworkProfile).
    -                      filter_by(name=TEST_NETWORK_PROFILE_VXLAN_TRUNK['name']).
    -                      one())
    -        self.assertIsNotNone(db_profile)
    -        self.assertEqual(_db_profile.id, db_profile.id)
    -        self.assertEqual(_db_profile.name, db_profile.name)
    -        self.assertEqual(_db_profile.segment_type, db_profile.segment_type)
    -        self.assertEqual(_db_profile.segment_range, db_profile.segment_range)
    -        self.assertEqual(_db_profile.multicast_ip_index,
    -                         db_profile.multicast_ip_index)
    -        self.assertEqual(_db_profile.multicast_ip_range,
    -                         db_profile.multicast_ip_range)
    -        self.assertEqual(_db_profile.sub_type, db_profile.sub_type)
    -        n1kv_db_v2.delete_network_profile(self.session, _db_profile.id)
    -
    -    def test_create_network_profile_overlap(self):
    -        _db_profile = n1kv_db_v2.create_network_profile(self.session,
    -                                                        TEST_NETWORK_PROFILE_2)
    -        ctx = context.get_admin_context()
    -        TEST_NETWORK_PROFILE_2['name'] = 'net-profile-min-overlap'
    -        TEST_NETWORK_PROFILE_2['segment_range'] = SEGMENT_RANGE_MIN_OVERLAP
    -        test_net_profile = {'network_profile': TEST_NETWORK_PROFILE_2}
    -        self.assertRaises(n_exc.InvalidInput,
    -                          self.create_network_profile,
    -                          ctx,
    -                          test_net_profile)
    -
    -        TEST_NETWORK_PROFILE_2['name'] = 'net-profile-max-overlap'
    -        TEST_NETWORK_PROFILE_2['segment_range'] = SEGMENT_RANGE_MAX_OVERLAP
    -        test_net_profile = {'network_profile': TEST_NETWORK_PROFILE_2}
    -        self.assertRaises(n_exc.InvalidInput,
    -                          self.create_network_profile,
    -                          ctx,
    -                          test_net_profile)
    -
    -        TEST_NETWORK_PROFILE_2['name'] = 'net-profile-overlap'
    -        TEST_NETWORK_PROFILE_2['segment_range'] = SEGMENT_RANGE_OVERLAP
    -        test_net_profile = {'network_profile': TEST_NETWORK_PROFILE_2}
    -        self.assertRaises(n_exc.InvalidInput,
    -                          self.create_network_profile,
    -                          ctx,
    -                          test_net_profile)
    -        n1kv_db_v2.delete_network_profile(self.session, _db_profile.id)
    -
    -    def test_delete_network_profile(self):
    -        try:
    -            profile = (self.session.query(n1kv_models_v2.NetworkProfile).
    -                       filter_by(name=TEST_NETWORK_PROFILE['name']).one())
    -        except s_exc.NoResultFound:
    -            profile = n1kv_db_v2.create_network_profile(self.session,
    -                                                        TEST_NETWORK_PROFILE)
    -
    -        n1kv_db_v2.delete_network_profile(self.session, profile.id)
    -        try:
    -            self.session.query(n1kv_models_v2.NetworkProfile).filter_by(
    -                name=TEST_NETWORK_PROFILE['name']).one()
    -        except s_exc.NoResultFound:
    -            pass
    -        else:
    -            self.fail("Network Profile (%s) was not deleted" %
    -                      TEST_NETWORK_PROFILE['name'])
    -
    -    def test_update_network_profile(self):
    -        TEST_PROFILE_1 = {'name': 'test_profile_1'}
    -        profile = _create_test_network_profile_if_not_there(self.session)
    -        updated_profile = n1kv_db_v2.update_network_profile(self.session,
    -                                                            profile.id,
    -                                                            TEST_PROFILE_1)
    -        self.assertEqual(updated_profile.name, TEST_PROFILE_1['name'])
    -        n1kv_db_v2.delete_network_profile(self.session, profile.id)
    -
    -    def test_get_network_profile(self):
    -        profile = n1kv_db_v2.create_network_profile(self.session,
    -                                                    TEST_NETWORK_PROFILE)
    -        got_profile = n1kv_db_v2.get_network_profile(self.session, profile.id)
    -        self.assertEqual(profile.id, got_profile.id)
    -        self.assertEqual(profile.name, got_profile.name)
    -        n1kv_db_v2.delete_network_profile(self.session, profile.id)
    -
    -    def test_get_network_profiles(self):
    -        test_profiles = [{'name': 'test_profile1',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '200-210'},
    -                         {'name': 'test_profile2',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '211-220'},
    -                         {'name': 'test_profile3',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '221-230'},
    -                         {'name': 'test_profile4',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '231-240'},
    -                         {'name': 'test_profile5',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '241-250'},
    -                         {'name': 'test_profile6',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '251-260'},
    -                         {'name': 'test_profile7',
    -                          'segment_type': c_const.NETWORK_TYPE_VLAN,
    -                          'physical_network': 'phys1',
    -                          'segment_range': '261-270'}]
    -        [n1kv_db_v2.create_network_profile(self.session, p)
    -         for p in test_profiles]
    -        # TODO(abhraut): Fix this test to work with real tenant_td
    -        profiles = n1kv_db_v2._get_network_profiles(db_session=self.session)
    -        self.assertEqual(len(test_profiles), len(list(profiles)))
    -
    -
    -class PolicyProfileTests(testlib_api.SqlTestCase):
    -
    -    def setUp(self):
    -        super(PolicyProfileTests, self).setUp()
    -        self.session = db.get_session()
    -
    -    def test_create_policy_profile(self):
    -        _db_profile = n1kv_db_v2.create_policy_profile(TEST_POLICY_PROFILE)
    -        self.assertIsNotNone(_db_profile)
    -        db_profile = (self.session.query(n1kv_models_v2.PolicyProfile).
    -                      filter_by(name=TEST_POLICY_PROFILE['name']).one)()
    -        self.assertIsNotNone(db_profile)
    -        self.assertTrue(_db_profile.id == db_profile.id)
    -        self.assertTrue(_db_profile.name == db_profile.name)
    -
    -    def test_delete_policy_profile(self):
    -        profile = _create_test_policy_profile_if_not_there(self.session)
    -        n1kv_db_v2.delete_policy_profile(profile.id)
    -        try:
    -            self.session.query(n1kv_models_v2.PolicyProfile).filter_by(
    -                name=TEST_POLICY_PROFILE['name']).one()
    -        except s_exc.NoResultFound:
    -            pass
    -        else:
    -            self.fail("Policy Profile (%s) was not deleted" %
    -                      TEST_POLICY_PROFILE['name'])
    -
    -    def test_update_policy_profile(self):
    -        TEST_PROFILE_1 = {'name': 'test_profile_1'}
    -        profile = _create_test_policy_profile_if_not_there(self.session)
    -        updated_profile = n1kv_db_v2.update_policy_profile(self.session,
    -                                                           profile.id,
    -                                                           TEST_PROFILE_1)
    -        self.assertEqual(updated_profile.name, TEST_PROFILE_1['name'])
    -
    -    def test_get_policy_profile(self):
    -        profile = _create_test_policy_profile_if_not_there(self.session)
    -        got_profile = n1kv_db_v2.get_policy_profile(self.session, profile.id)
    -        self.assertEqual(profile.id, got_profile.id)
    -        self.assertEqual(profile.name, got_profile.name)
    -
    -
    -class ProfileBindingTests(testlib_api.SqlTestCase,
    -                          n1kv_db_v2.NetworkProfile_db_mixin,
    -                          common_db_mixin.CommonDbMixin):
    -
    -    def setUp(self):
    -        super(ProfileBindingTests, self).setUp()
    -        self.session = db.get_session()
    -
    -    def _create_test_binding_if_not_there(self, tenant_id, profile_id,
    -                                          profile_type):
    -        try:
    -            _binding = (self.session.query(n1kv_models_v2.ProfileBinding).
    -                        filter_by(profile_type=profile_type,
    -                                  tenant_id=tenant_id,
    -                                  profile_id=profile_id).one())
    -        except s_exc.NoResultFound:
    -            _binding = n1kv_db_v2.create_profile_binding(self.session,
    -                                                         tenant_id,
    -                                                         profile_id,
    -                                                         profile_type)
    -        return _binding
    -
    -    def test_create_profile_binding(self):
    -        test_tenant_id = "d434dd90-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_id = "dd7b9741-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_type = "network"
    -        n1kv_db_v2.create_profile_binding(self.session,
    -                                          test_tenant_id,
    -                                          test_profile_id,
    -                                          test_profile_type)
    -        try:
    -            self.session.query(n1kv_models_v2.ProfileBinding).filter_by(
    -                profile_type=test_profile_type,
    -                tenant_id=test_tenant_id,
    -                profile_id=test_profile_id).one()
    -        except s_exc.MultipleResultsFound:
    -            self.fail("Bindings must be unique")
    -        except s_exc.NoResultFound:
    -            self.fail("Could not create Profile Binding")
    -
    -    def test_update_profile_binding(self):
    -        test_tenant_id = "d434dd90-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_id = "dd7b9741-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_type = "network"
    -        n1kv_db_v2.create_profile_binding(self.session,
    -                                          test_tenant_id,
    -                                          test_profile_id,
    -                                          test_profile_type)
    -        new_tenants = ['d434dd90-76ec-11e2-bcfd-0800200c9a67',
    -                       'd434dd90-76ec-11e2-bcfd-0800200c9a68',
    -                       'd434dd90-76ec-11e2-bcfd-0800200c9a69']
    -        n1kv_db_v2.update_profile_binding(self.session,
    -                                          test_profile_id,
    -                                          new_tenants,
    -                                          test_profile_type)
    -
    -        result = self.session.query(n1kv_models_v2.ProfileBinding).filter_by(
    -            profile_type=test_profile_type,
    -            profile_id=test_profile_id).all()
    -        self.assertEqual(3, len(result))
    -
    -    def test_get_profile_binding(self):
    -        test_tenant_id = "d434dd90-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_id = "dd7b9741-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_type = "network"
    -        self._create_test_binding_if_not_there(test_tenant_id,
    -                                               test_profile_id,
    -                                               test_profile_type)
    -        binding = n1kv_db_v2.get_profile_binding(self.session,
    -                                                 test_tenant_id,
    -                                                 test_profile_id)
    -        self.assertEqual(binding.tenant_id, test_tenant_id)
    -        self.assertEqual(binding.profile_id, test_profile_id)
    -        self.assertEqual(binding.profile_type, test_profile_type)
    -
    -    def test_get_profile_binding_not_found(self):
    -        self.assertRaises(
    -            c_exc.ProfileTenantBindingNotFound,
    -            n1kv_db_v2.get_profile_binding, self.session, "123", "456")
    -
    -    def test_delete_profile_binding(self):
    -        test_tenant_id = "d434dd90-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_id = "dd7b9741-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_type = "network"
    -        self._create_test_binding_if_not_there(test_tenant_id,
    -                                               test_profile_id,
    -                                               test_profile_type)
    -        n1kv_db_v2.delete_profile_binding(self.session,
    -                                          test_tenant_id,
    -                                          test_profile_id)
    -        q = (self.session.query(n1kv_models_v2.ProfileBinding).filter_by(
    -             profile_type=test_profile_type,
    -             tenant_id=test_tenant_id,
    -             profile_id=test_profile_id))
    -        self.assertFalse(q.count())
    -
    -    def test_default_tenant_replace(self):
    -        ctx = context.get_admin_context()
    -        ctx.tenant_id = "d434dd90-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_id = "AAAAAAAA-76ec-11e2-bcfd-0800200c9a66"
    -        test_profile_type = "policy"
    -        n1kv_db_v2.create_profile_binding(self.session,
    -                                          c_const.TENANT_ID_NOT_SET,
    -                                          test_profile_id,
    -                                          test_profile_type)
    -        network_profile = {"network_profile": TEST_NETWORK_PROFILE}
    -        self.create_network_profile(ctx, network_profile)
    -        binding = n1kv_db_v2.get_profile_binding(self.session,
    -                                                 ctx.tenant_id,
    -                                                 test_profile_id)
    -        self.assertRaises(
    -            c_exc.ProfileTenantBindingNotFound,
    -            n1kv_db_v2.get_profile_binding,
    -            self.session,
    -            c_const.TENANT_ID_NOT_SET,
    -            test_profile_id)
    -        self.assertNotEqual(binding.tenant_id,
    -                            c_const.TENANT_ID_NOT_SET)
    
  • neutron/tests/unit/plugins/cisco/n1kv/test_n1kv_plugin.py +0 −0 removed
  • neutron/tests/unit/plugins/cisco/test_network_db.py +0 −309 removed
  • neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py +110 −15 modified
  • neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/test_qos_driver.py +13 −5 modified
  • neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_eswitch_manager.py +59 −23 modified
  • neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_pci_lib.py +12 −1 modified
  • neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_sriov_nic_agent.py +74 −13 modified
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/extension_drivers/test_qos_driver.py +9 −3 modified
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/fake_oflib.py +158 −0 added
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/__init__.py +0 −0 renamed
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge_test_base.py +257 −0 added
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_int.py +403 −0 added
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_phys.py +147 −0 added
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/native/test_br_tun.py +484 −0 added
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/test_br_int.py +26 −0 modified
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/test_br_tun.py +2 −2 modified
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/ovs_test_base.py +22 −0 modified
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py +178 −34 modified
  • neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_tunnel.py +17 −3 modified
  • neutron/tests/unit/plugins/ml2/test_plugin.py +93 −1 modified
  • neutron/tests/unit/plugins/ml2/test_rpc.py +3 −3 modified
  • neutron/tests/unit/quota/test_resource.py +9 −15 modified
  • neutron/tests/unit/quota/test_resource_registry.py +3 −3 modified
  • neutron/tests/unit/scheduler/test_l3_agent_scheduler.py +225 −6 modified
  • neutron/tests/unit/services/metering/agents/test_metering_agent.py +43 −0 modified
  • neutron/tests/unit/test_policy.py +16 −0 modified
  • neutron/tests/unit/test_wsgi.py +11 −3 modified
  • neutron/worker.py +40 −0 added
  • neutron/wsgi.py +3 −1 modified
  • requirements.txt +4 −3 modified
  • setup.cfg +1 −17 modified
  • .testr.conf +1 −1 modified
    @@ -1,4 +1,4 @@
     [DEFAULT]
    -test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./neutron/tests/unit} $LISTOPT $IDOPTION
    +test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./neutron/tests/unit} $LISTOPT $IDOPTION | cat
     test_id_option=--load-list $IDFILE
     test_list_option=--list
    
  • tools/check_unit_test_structure.sh +0 −1 modified
  • tools/deploy_rootwrap.sh +2 −0 modified
  • tox.ini +0 −162 modified
767cea23de44

Stop device_owner from being set to 'network:*'

https://github.com/openstack/neutron · Kevin Benton · Aug 26, 2015 · via ghsa
5 files changed · +26 −1
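The commit title captures the whole fix: Neutron treats ports whose `device_owner` begins with `network:` as infrastructure ports, which (per the advisory description) are exempt from IP anti-spoofing, so the patch must keep unprivileged users from claiming that prefix. A minimal sketch of that intent — the function name and boolean parameter are illustrative, not Neutron internals:

```python
import re

# Ports whose device_owner starts with "network:" (DHCP, router
# interfaces, ...) bypass anti-spoofing; the fix restricts who may
# set such a value on a port.
NETWORK_DEVICE_RE = re.compile(r'^network:')

def may_set_device_owner(device_owner, is_privileged):
    """Mirror the new policy rule: a 'network:'-prefixed device_owner
    may only be set by an admin, the network owner, or an advanced
    service; anything else remains unrestricted."""
    if NETWORK_DEVICE_RE.match(device_owner):
        return is_privileged
    return True

# A regular tenant can no longer claim an infrastructure owner:
assert may_set_device_owner('network:dhcp', False) is False
assert may_set_device_owner('compute:nova', False) is True
```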
  • etc/policy.json+3 0 modified
    @@ -46,7 +46,9 @@
         "update_network:router:external": "rule:admin_only",
         "delete_network": "rule:admin_or_owner",
     
    +    "network_device": "field:port:device_owner=~^network:",
         "create_port": "",
    +    "create_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    @@ -61,6 +63,7 @@
         "get_port:binding:host_id": "rule:admin_only",
         "get_port:binding:profile": "rule:admin_only",
         "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    +    "update_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
         "update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    
  • neutron/api/v2/attributes.py+1 1 modified
    @@ -766,7 +766,7 @@ def convert_to_list(data):
                           'is_visible': True},
             'device_owner': {'allow_post': True, 'allow_put': True,
                              'validate': {'type:string': DEVICE_OWNER_MAX_LEN},
    -                         'default': '',
    +                         'default': '', 'enforce_policy': True,
                              'is_visible': True},
             'tenant_id': {'allow_post': True, 'allow_put': False,
                           'validate': {'type:string': TENANT_ID_MAX_LEN},
    
  • neutron/policy.py+3 0 modified
    @@ -335,6 +335,7 @@ def __init__(self, kind, match):
     
             self.field = field
             self.value = conv_func(value)
    +        self.regex = re.compile(value[1:]) if value.startswith('~') else None
     
         def __call__(self, target_dict, cred_dict, enforcer):
             target_value = target_dict.get(self.field)
    @@ -344,6 +345,8 @@ def __call__(self, target_dict, cred_dict, enforcer):
                           "%(target_dict)s",
                           {'field': self.field, 'target_dict': target_dict})
                 return False
    +        if self.regex:
    +            return bool(self.regex.match(target_value))
             return target_value == self.value
     
     
    
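The neutron/policy.py hunk above is what makes the `=~^network:` syntax in policy.json work: when a field check's match value starts with `~`, the remainder is compiled as a regular expression and matched against the target field instead of compared for equality. A condensed, standalone sketch of that check (stripped of Neutron's surrounding plumbing):

```python
import re

class FieldCheck:
    """Minimal sketch of the patched field check: a match value
    prefixed with '~' is treated as a regex (as in the hunk above);
    otherwise an exact string comparison is used."""

    def __init__(self, field, value):
        self.field = field
        self.value = value
        # The patch: compile the remainder as a regex when prefixed with '~'.
        self.regex = re.compile(value[1:]) if value.startswith('~') else None

    def __call__(self, target_dict):
        target_value = target_dict.get(self.field)
        if target_value is None:
            return False  # field absent from the request target
        if self.regex:
            return bool(self.regex.match(target_value))
        return target_value == self.value

check = FieldCheck('device_owner', '~^network:')
assert check({'device_owner': 'network:dhcp'}) is True
assert check({'device_owner': 'my_network:test'}) is False
```

Because `re.match` anchors only at the start of the string, the leading `^` in the policy rule is belt-and-braces; the substring `network:` elsewhere in a value does not trigger the rule.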
  • neutron/tests/etc/policy.json+3 0 modified
    @@ -46,7 +46,9 @@
         "update_network:router:external": "rule:admin_only",
         "delete_network": "rule:admin_or_owner",
     
    +    "network_device": "field:port:device_owner=~^network:",
         "create_port": "",
    +    "create_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    @@ -61,6 +63,7 @@
         "get_port:binding:host_id": "rule:admin_only",
         "get_port:binding:profile": "rule:admin_only",
         "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    +    "update_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
         "update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    
  • neutron/tests/unit/test_policy.py+16 0 modified
    @@ -232,6 +232,7 @@ def setUp(self):
                 "regular_user": "role:user",
                 "shared": "field:networks:shared=True",
                 "external": "field:networks:router:external=True",
    +            "network_device": "field:port:device_owner=~^network:",
                 "default": '@',
     
                 "create_network": "rule:admin_or_owner",
    @@ -243,6 +244,7 @@ def setUp(self):
                 "create_subnet": "rule:admin_or_network_owner",
                 "create_port:mac": "rule:admin_or_network_owner or "
                                    "rule:context_is_advsvc",
    +            "create_port:device_owner": "not rule:network_device",
                 "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
                 "get_port": "rule:admin_or_owner or rule:context_is_advsvc",
                 "delete_port": "rule:admin_or_owner or rule:context_is_advsvc",
    @@ -312,6 +314,20 @@ def test_nonadmin_write_on_shared_fails(self):
             self._test_nonadmin_action_on_attr('create', 'shared', True,
                                                common_policy.PolicyNotAuthorized)
     
    +    def test_create_port_device_owner_regex(self):
    +        blocked_values = ('network:', 'network:abdef', 'network:dhcp',
    +                          'network:router_interface')
    +        for val in blocked_values:
    +            self._test_advsvc_action_on_attr(
    +                'create', 'port', 'device_owner', val,
    +                common_policy.PolicyNotAuthorized
    +            )
    +        ok_values = ('network', 'networks', 'my_network:test', 'my_network:')
    +        for val in ok_values:
    +            self._test_advsvc_action_on_attr(
    +                'create', 'port', 'device_owner', val
    +            )
    +
         def test_advsvc_get_network_works(self):
             self._test_advsvc_action_on_attr('get', 'network', 'shared', False)
     
    
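The unit test in the hunk above pins down exactly which owners the rule catches. Replaying its value lists against the rule's anchored pattern `^network:` shows the boundary: the colon is required, and the prefix must be at the start of the string.

```python
import re

# The pattern from the "network_device" policy rule in the patch.
network_device = re.compile(r'^network:')

# Value lists taken from the test_create_port_device_owner_regex hunk.
blocked = ('network:', 'network:abdef', 'network:dhcp',
           'network:router_interface')
allowed = ('network', 'networks', 'my_network:test', 'my_network:')

assert all(network_device.match(v) for v in blocked)
assert not any(network_device.match(v) for v in allowed)
```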
bbca973986fd

Stop device_owner from being set to 'network:*'

https://github.com/openstack/neutronKevin BentonAug 26, 2015via ghsa
5 files changed · +26 −1
  • etc/policy.json · +3 −0 · modified
    @@ -56,7 +56,9 @@
         "update_network:router:external": "rule:admin_only",
         "delete_network": "rule:admin_or_owner",
     
    +    "network_device": "field:port:device_owner=~^network:",
         "create_port": "",
    +    "create_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    @@ -71,6 +73,7 @@
         "get_port:binding:host_id": "rule:admin_only",
         "get_port:binding:profile": "rule:admin_only",
         "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    +    "update_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
         "update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    
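The policy rule added above reads: setting `device_owner` on port creation is allowed only if the value does *not* match the `network_device` regex, or the requester is the admin/network owner, or the request comes from an advanced-service context. A minimal sketch of that boolean logic (hypothetical function name, not the actual oslo.policy engine):

```python
import re

# Regex from the "network_device" rule: matches owners reserved for
# Neutron-internal devices (DHCP ports, router interfaces, ...).
NETWORK_DEVICE_RE = re.compile(r"^network:")

def may_set_device_owner(device_owner, is_admin_or_network_owner=False,
                         is_advsvc=False):
    """Sketch of 'not rule:network_device or rule:admin_or_network_owner
    or rule:context_is_advsvc' for create_port:device_owner."""
    not_network_device = not bool(NETWORK_DEVICE_RE.match(device_owner))
    return not_network_device or is_admin_or_network_owner or is_advsvc
```

With this rule in place, a regular tenant can still set ordinary owners such as `compute:nova`, but any attempt to claim a `network:`-prefixed owner requires elevated privileges, closing the spoofing-bypass window described in this CVE.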
  • neutron/api/v2/attributes.py · +1 −1 · modified
    @@ -731,7 +731,7 @@ def convert_to_list(data):
                           'is_visible': True},
             'device_owner': {'allow_post': True, 'allow_put': True,
                              'validate': {'type:string': DEVICE_OWNER_MAX_LEN},
    -                         'default': '',
    +                         'default': '', 'enforce_policy': True,
                              'is_visible': True},
             'tenant_id': {'allow_post': True, 'allow_put': False,
                           'validate': {'type:string': TENANT_ID_MAX_LEN},
    
  • neutron/policy.py · +3 −0 · modified
    @@ -290,6 +290,7 @@ def __init__(self, kind, match):
     
             self.field = field
             self.value = conv_func(value)
    +        self.regex = re.compile(value[1:]) if value.startswith('~') else None
     
         def __call__(self, target_dict, cred_dict, enforcer):
             target_value = target_dict.get(self.field)
    @@ -299,6 +300,8 @@ def __call__(self, target_dict, cred_dict, enforcer):
                           "%(target_dict)s",
                           {'field': self.field, 'target_dict': target_dict})
                 return False
    +        if self.regex:
    +            return bool(self.regex.match(target_value))
             return target_value == self.value
     
     
    
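The `neutron/policy.py` change extends the field check so that a rule value beginning with `~` is compiled as a regular expression instead of being compared as an exact-match literal; this is what lets `field:port:device_owner=~^network:` match any `network:`-prefixed owner. A self-contained approximation of that behavior (class name is hypothetical, modeled on the diff above):

```python
import re

class FieldCheckSketch:
    """Approximates Neutron's field check for rules of the form
    'field:<resource>:<attr>=<value>', where a value starting with
    '~' is treated as a regex per the patch above."""

    def __init__(self, field, value):
        self.field = field
        self.value = value
        # From the patch: compile value[1:] as a regex when it starts with '~'
        self.regex = re.compile(value[1:]) if value.startswith('~') else None

    def __call__(self, target_dict):
        target_value = target_dict.get(self.field)
        if target_value is None:
            # Mirrors the patched code path: missing field fails the check
            return False
        if self.regex:
            return bool(self.regex.match(target_value))
        return target_value == self.value
```

For example, `FieldCheckSketch('device_owner', '~^network:')` returns `True` for `{'device_owner': 'network:dhcp'}` and `False` for `{'device_owner': 'compute:nova'}`, while a value without the leading `~` still behaves as an exact string comparison.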
  • neutron/tests/etc/policy.json · +3 −0 · modified
    @@ -56,7 +56,9 @@
         "update_network:router:external": "rule:admin_only",
         "delete_network": "rule:admin_or_owner",
     
    +    "network_device": "field:port:device_owner=~^network:",
         "create_port": "",
    +    "create_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    @@ -71,6 +73,7 @@
         "get_port:binding:host_id": "rule:admin_only",
         "get_port:binding:profile": "rule:admin_only",
         "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    +    "update_port:device_owner": "not rule:network_device or rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
         "update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
         "update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
    
  • neutron/tests/unit/test_policy.py · +16 −0 · modified
    @@ -241,6 +241,7 @@ def _set_rules(self, **kwargs):
                 "regular_user": "role:user",
                 "shared": "field:networks:shared=True",
                 "external": "field:networks:router:external=True",
    +            "network_device": "field:port:device_owner=~^network:",
                 "default": '@',
     
                 "create_network": "rule:admin_or_owner",
    @@ -252,6 +253,7 @@ def _set_rules(self, **kwargs):
                 "create_subnet": "rule:admin_or_network_owner",
                 "create_port:mac": "rule:admin_or_network_owner or "
                                    "rule:context_is_advsvc",
    +            "create_port:device_owner": "not rule:network_device",
                 "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
                 "get_port": "rule:admin_or_owner or rule:context_is_advsvc",
                 "delete_port": "rule:admin_or_owner or rule:context_is_advsvc",
    @@ -330,6 +332,20 @@ def test_nonadmin_write_on_shared_fails(self):
             self._test_nonadmin_action_on_attr('create', 'shared', True,
                                                oslo_policy.PolicyNotAuthorized)
     
    +    def test_create_port_device_owner_regex(self):
    +        blocked_values = ('network:', 'network:abdef', 'network:dhcp',
    +                          'network:router_interface')
    +        for val in blocked_values:
    +            self._test_advsvc_action_on_attr(
    +                'create', 'port', 'device_owner', val,
    +                oslo_policy.PolicyNotAuthorized
    +            )
    +        ok_values = ('network', 'networks', 'my_network:test', 'my_network:')
    +        for val in ok_values:
    +            self._test_advsvc_action_on_attr(
    +                'create', 'port', 'device_owner', val
    +            )
    +
         def test_advsvc_get_network_works(self):
             self._test_advsvc_action_on_attr('get', 'network', 'shared', False)
     
    
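The blocked and allowed values exercised by `test_create_port_device_owner_regex` correspond exactly to whether the anchored pattern `^network:` matches at the start of the string. A quick standalone check of that correspondence (pattern assumed from the `network_device` rule):

```python
import re

pattern = re.compile(r"^network:")

# Values the test expects to be rejected: all begin with 'network:'
blocked = ('network:', 'network:abdef', 'network:dhcp',
           'network:router_interface')
# Values the test expects to pass: no 'network:' prefix at position 0
ok = ('network', 'networks', 'my_network:test', 'my_network:')

assert all(pattern.match(v) for v in blocked)
assert not any(pattern.match(v) for v in ok)
```

Note that `re.match` anchors at the start of the string, so `my_network:test` is allowed even though it contains `network:` as a substring, and the bare `network` (no colon) is allowed as well.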
